Tuesday, May 2, 2023

thoughts on the future of programming in an AI-heavy world

Going down a rabbit hole of figuring out how the increasing presence of AI will impact the career scene for me and my peers... The article "Don't worry, ChatGPT will NOT replace all programmers" starts with a summary of a more alarmist piece it is responding to.

It's easy to suspect we're at the crook of a hockey-stick curve for NLP tech, but there's absolutely a chance we're near the top of an "S" curve instead (much like how we are approaching hard quantum-ish limits in chip manufacturing, and Moore's Law no longer seems to be in effect). Anecdotally, I shelled out for the GPT-4 flavor of ChatGPT, and it seems prone to the same kinds of problems 3.5 has - at some point, the fact that a language model doesn't really model the problem or explore its implications limits what it can do.

So far we have climbed two steps, with products that are widely available to any programmer:

Step 1: ChatGPT can write decent code snippets and serve as a replacement for Stack Overflow (a relief, given that Google seems like a less reliable resource for these kinds of tech questions than it used to be). You will usually need to adjust, verify, and test the code it gives you, but it's very good at writing variants and amalgamations of boilerplate - and at remembering the nooks and crannies of a language or library on behalf of the programmer.
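For instance, a request along the lines of "average the scores per team from this CSV" tends to come back in usable shape. A hypothetical example of the sort of snippet it produces (the file name and column names here are made up):

    import csv
    from collections import defaultdict

    # Average "score" per "team" from a CSV with a header row.
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open("scores.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["team"]] += float(row["score"])
            counts[row["team"]] += 1

    for team in sorted(totals):
        print(f"{team}: {totals[team] / counts[team]:.2f}")

Nothing fancy - but it's exactly the kind of remembering-the-stdlib chore that's nice to offload.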

Step 2 (or maybe, 1.5): Copilot is more integrated into editors such as VS Code. I have dabbled with this and don't love it - when combined with the editor's other IntelliSense-style autocompletion features, the interactions can get a bit messy, and while there were a few moments where the cleverness impressed me, when it chokes it can be very hard to pick your way out of the mess (for example, it was notably worse at autocompleting includes than the older style of autocomplete was).

So I'm now wondering if anyone is working on a product for the next step, where an AI examines your entire code base and can write files directly.
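Purely speculating about what that next step's outer loop might look like - every name in this sketch (gather_context, propose_patch, the model call the latter would wrap) is hypothetical, and a real product would surely be smarter about chunking and review:

    from pathlib import Path

    def gather_context(repo_root: str, max_bytes: int = 100_000) -> str:
        """Concatenate source files until a size budget is hit."""
        chunks, used = [], 0
        for path in sorted(Path(repo_root).rglob("*.py")):
            text = path.read_text(errors="ignore")
            if used + len(text) > max_bytes:
                break
            chunks.append(f"# file: {path}\n{text}")
            used += len(text)
        return "\n\n".join(chunks)

    def propose_patch(context: str, task: str) -> dict[str, str]:
        """Stand-in for a model call returning {relative path: new file contents}."""
        raise NotImplementedError("wire up a model here")

    def apply_patch(repo_root: str, patch: dict[str, str]) -> None:
        for rel_path, contents in patch.items():
            target = Path(repo_root) / rel_path
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(contents)  # a real tool should stage this for human review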

Until (or unless) they get the hallucinations under control, there will be a need for a human mediator. (The need for human oversight may be lessened if they start to set up semi-adversarial AIs to test what the core AI has made and is proposing to put into place.) But right now these AIs are just as confident when they are falling off the edge of their knowledge as when they are smack dab in the middle of it, and the industry doesn't have an embodied knowledge of what types of subtle mistakes AIs tend to make - especially on the security front - so companies might be more vulnerable to attacks by folks who understand what the AI was "thinking" better than you do...
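To make that semi-adversarial idea concrete, a minimal sketch - assuming pytest is available, and assuming a hypothetical generate_tests callable that asks a second model to write tests designed to break the candidate code:

    import subprocess
    import tempfile
    from pathlib import Path

    def adversarial_check(candidate_code: str, generate_tests) -> bool:
        """Pit a second model's generated tests against the first model's code."""
        with tempfile.TemporaryDirectory() as tmp:
            Path(tmp, "candidate.py").write_text(candidate_code)
            Path(tmp, "test_candidate.py").write_text(generate_tests(candidate_code))
            result = subprocess.run(["python", "-m", "pytest", "-q", tmp],
                                    capture_output=True, text=True)
        return result.returncode == 0  # only accept code that survives the attack

It wouldn't catch everything - the tests are only as adversarial as the second model is clever - but it shifts the human's job from line-by-line review toward auditing the process.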



