In 1982, Atari ran a movie-trailer-style commercial for the game "Yars' Revenge"... specifically, a fanciful "making of" that had the programmer issuing all kinds of absurdly high-level voice commands and gestures to construct an original game.
Of course, that commercial hit differently from 1982 through 2022 (when accessible LLMs arrived) than it does now - it's still a fantasy, but a much less crazy one.
And we've had a year or two of extreme hype. Were LLMs - FINALLY - the fulfillment of the dream (one that started with COBOL, believe it or not) that "*now* non-programmers can make a program from plain text"?
Well, sort of.
In some places the wheels have fallen off the hype bus. It's tougher to "vibe code" your way to a complete system than was promised. When you use a complex "do it all" system like Claude, at least without detailed BDUF (big design up front)/waterfall, regressions show up all the time, and if you're not throwing tons of money at it, you hit token/usage ceilings frustratingly quickly.
I'm fascinated by the different modes of collaborating with AI to program. I think I've seen three:
A. Basic chat, a la ChatGPT, where you CAN copy and paste a file or block in, but it's mostly working out of its sandbox
B. Do-it-all systems like Claude... still chat-based, but geared more toward taking a lot of instructions at the outset and doing it all for you
C. Context-sensitive, in-editor solutions like CoPilot in VS Code
I've had a lot of luck with style A, and use style C for work.
I've been writing a bespoke todo webapp with Claude in style B, and the results have been mixed. I repeatedly hit usage limits on the $17/mo plan (charged annually), and I'm not sure whether that represents Anthropic being more upfront about the costs, or whether I'm just asking Claude to do too much, to keep too much in its cyberhead.
That's the decisive question right now - when the top-down approach goes badly, are you just not doing enough design in your prompts? Should you be throwing more money at it? Or would it be better to take a more "pair programmer" approach?
I think I need to start paying for some CoPilot... letting the human drive the context probably gets better results from a "dumber" AI than having an LLM do it all.
As a sci-fi-loving kid I said, "I think the sweet spot will be computers working in tandem with humans, over computers or humans alone." That seems (optimistically!) weirdly prescient at this moment.

