I'm reading Gene Kim and Steve Yegge's book "Vibe Coding":
When we say vibe coding, we mean that you have AI write your code—you’re no longer typing in code by hand (like a photographer going into a darkroom to manually develop their film). Although the most visible and glamorous part is code generation, AI helps with the whole software life cycle. AI becomes your partner in brainstorming architecture, researching solutions, implementing features, crafting tests, and hardening security. Vibe coding happens whenever you’re directing rather than typing, allowing AI to shoulder the implementation while you focus on vision and verification.
I've "vibe-coded" a few medium-small things, but they were enough to impress friends who were techie but not in the loop.
Still, I'm more or less persuaded that the future of dev will likely be Yegge's kitchen model:
You’re the head (or executive) chef of the kitchen, and AI represents the army of chefs who help bring your vision to life. AI serves as your sous chef (your second in command) who understands your intentions, handles intricate preparations, and executes complex techniques with precision under your guidance. But AI is also your army of station chefs and cooks, specialists who help handle various technical details.
But it's not clear to me how individual devs should ramp up to that style of production if their current day job isn't there yet.
So some thoughts after reading the first section of the book:
- The book spends a page talking about how complex web app construction has gotten, and how many things enterprise developers need to know - "package managers (npm, Yarn), bundlers (webpack, Rollup), transpilers (Babel), task runners (gulp, Grunt), testing frameworks, CSS preprocessors, build toolchains, deployment pipelines" - and then says "Because of the DevOps philosophy of 'you build it, you run it,' you also need to learn Docker, Kubernetes, AWS, and infrastructure-as-code tools like Terraform, not to mention a whole host of AWS, GCP, or Azure services." That's definitely been true in my professional life, but I'm still shocked at how little of it I need for my side projects (and I laugh at how "Server Side Rendering" was the new hotness a few years ago, when older, simpler stacks had never stopped doing it). So when I try to move into a fuller vibe-coding model, I have to decide if my monolithic, buildless, evergreen stack is still the best bet (and frankly, AI understands it really well) or if I should use the shift as an excuse to add some buzzword bingo to my resume.
- As a side note, it's still startling how bad LLMs can be when they don't have access to the right subtools. I was feeling lazy (or experimental, at least) and asked both Claude and ChatGPT to do a simple task: "here's a big hero splash image for a porchfest site with an embedded date; please keep the image the same but update the date." ChatGPT choked on it, Claude got it to a barely acceptable level, and I realized I should have just sucked it up and done it by hand.
- There's such a cyberpunk feel to this moment, or at least the near future. A little hard to explain if you're not familiar with the genre, but like the decision to use a "corpo" LLM vs maybe a grungy LLM you have on your own hardware... or just everyone running around with their own bespoke software. It smells like mirrorshades.
- The book has a chapter of cautionary tales, examples of LLMs going way off the rails, sometimes causing damage. I ran into one of those cases of LLM shortcutting when I vibe-coded a CSV-to-address-label PDF app - Claude was all too happy to keep using hardcoded test data for the actual print pages even after I'd had it switch to live data for the setup and config pages. (I only had one set of data I cared about to test, so it took me a bit to notice.)
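To make that shortcutting concrete, here's a minimal sketch of the bug pattern (a hypothetical reconstruction, not the actual app's code - `load_labels`, `setup_preview`, and `render_print_pages` are invented names): one code path gets switched to live data while the other silently keeps its hardcoded sample.

```python
import csv
import io

# Leftover test data the assistant hardcoded early on (hypothetical):
SAMPLE_ROWS = [{"name": "Jane Doe", "street": "123 Main St"}]

def load_labels(csv_text):
    """Parse the uploaded CSV into label rows (the 'live data' path)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def setup_preview(csv_text):
    # This path was correctly updated to use live data...
    return load_labels(csv_text)

def render_print_pages(csv_text):
    # ...but this one still returns the hardcoded sample. With only one
    # real dataset to test against, the mismatch is easy to overlook.
    return SAMPLE_ROWS  # BUG: should be load_labels(csv_text)
```

The insidious part is that when your one test CSV looks a lot like the sample data, both paths render something plausible.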
- One thing the book is a little light on, frankly, is costs. This stuff threatens to create a real divide between the haves and the have-nots. I have a couple of buddies (both formerly involved in guitar bands, interestingly) for whom tech was the path out of poverty. That path seems harder if the people who can afford $20-$200/month in AI helpers are the only ones who really learn to leverage them at scale.
- And of course there's the other lingering fear for developers... we have to hope that the people we report to now, the PMs and POs, don't find out we're not necessary - that running AI at scale will be enough of a skill that we still have value to add.
- There can be such a religious fervor to both the AI-believers and the AI-rejectors. A while back I grabbed this passage from Jim Holt's "When Einstein Walked with Gödel": "[Richard Rorty] also liked to cite Nietzsche’s observation that truth is a surrogate for God. Asking of someone, “Does he love the truth?” Rorty said, is like asking, “Is he saved?”"

