Google's CEO Vibe Codes in His Free Time. What's Your Excuse?
Plus my rant on how pessimists are right but optimists end up rich 💰
Earlier this week, I watched something that stopped me cold.
Sundar Pichai, CEO of a $2 trillion company, casually mentioned he'd been "vibe coding with Replit" in his spare time.
Not reviewing quarterly reports. Not sitting in boardrooms.
"I was Vibe Coding with Replit a few weeks ago. I mean the power of what you're going to be able to create on the web; we haven't given that power to developers in 25 years" — Sundar Pichai
Here's the guy who oversees Google's entire tech stack—Android, Chrome, Google Cloud, AI development—and he's personally tinkering with AI code generation tools.
Sundar doesn't vibe code because he's bored. He does it because he sees where development is heading.
But here's where it gets interesting. While Sundar's building the future with AI coding agents, there's a whole crowd of people sitting on the sidelines, pointing at security vulnerabilities like they're holding a smoking gun.
"The apps are not secure enough," they say. "The risks are too high," they warn. "Wait until AI is perfect," they advise.
And they're not wrong about the risks. They're just wrong about what matters.
The mistake I see everywhere
I spend my days surrounded by some of the world's most talented founders. And there's a pattern I see over and over again:
The people who wait for perfect conditions get lapped by the people who take action, even with imperfect conditions and imperfect tools.
AI coding agents (aka vibe coding tools) are not the first technological advance to be called "uncomfortably risky and disruptive."
Remember when "serious" financial advisors said retail investors had no business managing their own portfolios? They pointed to real risks—lack of education, emotional trading, market manipulation.
Then Robinhood came along in 2013 and democratized stock trading. Were there problems? Absolutely. The GameStop drama in 2021 highlighted the challenges and risks of handing the power of retail trading to Tom, Dick and Harry.
But guess what happened? The problems got solved. Regulations improved. Education increased. And today retail investors account for roughly 20% of US equity trading volume.
Today, Robinhood has 25.2 million funded customers, 3.2 million paid Gold subscribers, and a $55.75 billion market cap.
For what it's worth, I'm a paid Robinhood user too, and I'm definitely not an expert. But Robinhood made it possible for people like me to participate in the markets.
From a startup dismissed as "too risky for average people" to one of the most valuable fintech companies in the world.
Historical examples
This isn't the first time we've been here. Every major technology democratization follows the same playbook:
Desktop publishing (1980s): Professional typesetters warned about the "ransom note effect"—terrible design from untrained users. They were right. Early desktop publishing looked awful. But software got better, templates emerged, and suddenly anyone could create professional-looking documents.
Digital photography (1990s): Professional photographers insisted film had "soul" that digital lacked. They worried about image quality and artistic integrity. They were right—early digital cameras were terrible. Until they weren't. Now smartphone cameras rival professional equipment.
3D printing (2000s): Industrial engineers controlled expensive equipment requiring specialized expertise. Early consumer printers produced inferior results and had serious safety concerns. They were right about the limitations. Until materials science advanced, software became user-friendly, and costs plummeted.
Online banking (1990s): Traditional bankers emphasized face-to-face service necessity and genuine security vulnerabilities. They were right about early risks—identity theft and privacy concerns were real. But encryption improved, fraud detection advanced, and now we bank on our phones.
Financial APIs (2010s): Do you remember Plaid? The unified banking API startup. They faced massive resistance from traditional banks who viewed third-party data access as fundamentally insecure. Early concerns about screen scraping, credential storage, and unauthorized access were legitimate—major banks like Chase threatened to block fintech apps entirely, and PNC actually did block Plaid access citing security risks.
Critics worried about giving login credentials to third parties, data breaches at aggregator companies, and lack of regulatory oversight. These concerns proved valid in the short term: security vulnerabilities and privacy issues were real problems that needed solving.
But the infrastructure evolved. API standards emerged, OAuth protocols replaced credential sharing, and regulatory frameworks like Section 1033 legitimized open banking. Today, Plaid connects over 12,000 financial institutions and powers more than 7,000 financial apps including Venmo, Chime, and Robinhood. What started as "dangerous third-party access" became the foundation of modern fintech.
See the pattern?
The security argument that's missing the point
Let me be clear: AI vibe coding tools are still immature, and the security risks are real. Account impersonation, injection attacks, data exposure and leaks: these aren't imaginary problems. And AI coding agents like Bolt.new, Lovable, Replit and Cursor face similar vulnerabilities when they generate code without proper security guardrails.
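To make "injection attacks" concrete, here's a minimal sketch (my own illustration, not code from any of these tools) of the classic SQL injection pattern that hastily generated code can fall into, next to the parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Anti-pattern: user input interpolated straight into the SQL string.
    # A crafted input like "' OR '1'='1" turns this query into
    # "... WHERE name = '' OR '1'='1'", which matches every row.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()
```

Whether the code comes from Cursor, Bolt.new, or your own fingers, this is exactly the kind of gotcha a quick audit (human or AI) should catch.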
But here's what the security-first crowd is missing: AI is already solving this faster than humans can identify the problems.
While we're debating whether AI-generated code is "secure enough," the same AI systems are posting 95% bug detection rates. GitHub's Copilot Autofix is cutting vulnerability remediation time from 90 minutes to 28 minutes. Google's Big Sleep project just discovered the first AI-identified vulnerability in SQLite.
The timeline isn't mysterious:
Today: AI tools catch 95% of bugs with one-click fixes
2025-2027: 50% of security operations run on AI with human oversight
2027-2030: Most security vulnerabilities get detected and patched automatically
By the time security concerns are "fully resolved," the learning curve advantages will belong to people who started building with Cursor and Bolt.new today.
The dot-com lesson nobody talks about
Everyone remembers the dot-com crash. Pets.com became a punchline. Webvan burned through billions.
The pessimists were right about the bubble. Speculation was rampant. Business models were garbage. The crash was inevitable.
But here's what they don't teach in business school: The optimists who survived made generational wealth.
Amazon went from $107 to $7 during the crash. Jeff Bezos watched most of his paper wealth evaporate. The pessimists felt vindicated.
Amazon's worth $1.7 trillion today.
The pessimists were right about the short-term pain. The optimists who took action captured the long-term gain.
Even the failures mattered. Those "worthless" dot-com companies built telecommunications infrastructure that made broadband internet cheap and accessible. Their bankruptcy became everyone else's foundation.
Why pessimists are right but optimists get rich
There's a quote I think about often:
"The pessimist is usually right, but it's the optimist who changes the world."
Pessimists have an important job. They identify real risks. They prevent disasters. They keep us grounded.
But they don't build the future.
Venture capital operates on this principle. 75% of VC-backed startups fail to return capital. The pessimists could point to any individual investment and probably be right about why it won't work.
But the 5-7% that succeed follow a power-law distribution: a few massive wins compensate for many small losses.
The same logic applies to individual builders. Even if your specific AI coding project fails, you're learning the tools that will matter. You're building relationships with platforms like Bolt.new and Lovable that will evolve. You're creating infrastructure others will build on.
The Sundar signal you can't ignore
When the CEO of Google personally uses AI coding tools and compares their impact to 25 years of web development evolution, that's not casual commentary.
That's a signal.
When someone who oversees the world's most sophisticated technology infrastructure chooses AI-powered development for personal projects, you pay attention.
This isn't theoretical validation from conference speakers. This is hands-on endorsement from someone who could use any development tool on Earth.
What I'm doing about it
Here's my commitment: I'm building at least one app/product each month in my spare time using AI coding agents like Cursor, Bolt.new, or Lovable.
I already shipped FileForge (using Replit) and AgencyHunt (using Bolt), and had a ton of fun and learning along the way.
I don’t think vibe coding tools are perfect. But I think this future is inevitable.
I don’t think the security concerns are invalid. But waiting for perfect security is a luxury I can’t afford: AI is moving at lightning speed, and I don’t want to miss one of the greatest technology shifts in human history.
I'm implementing defense-in-depth approaches that combine platform security with additional safeguards. I'm building sustainable business models alongside innovation.
Before big launches, though, I’ll ask my technical friends (real developers) to audit my output for any obvious security gotchas, and I’ll use Claude and AI testing tools like Momentic to test as thoroughly as I can.
Most importantly, I'm adopting a portfolio mindset. Even if this specific project fails, the knowledge, relationships, and infrastructure I develop will compound into future opportunities.
Your moment of choice
Right now, you have a choice that will define your next five years:
Option 1: Wait for security concerns to be fully resolved. Wait for perfect documentation. Wait for enterprise-grade everything. Wait until the risks are minimized and the path is clear.
Option 2: Start building with AI coding agents while implementing appropriate safeguards to the best of your ability. Learn from rapidly evolving platforms like Replit, Cursor and Bolt.new. Position yourself to benefit from technological maturation.
The pessimists will cheer you for choosing Option 1. They'll point to every vulnerability, every limitation, every reason to wait.
And they'll be right about the risks.
But the optimists choosing Option 2 will be the ones with compound experience when the infrastructure matures. They'll understand the platforms that matter. They'll have relationships with tools that evolve. They'll capture the value and upside.
Here’s the bat signal: 🦇
It’s time to step off the sidelines. It’s time to build. See you all in the arena :)
What’s your take on this? What are some other considerations I might have missed? What else can I help answer? Leave a comment or hit reply :)