Monday, November 6, 2023

the possibilities and limits of ChatGPT for MVP prototypes

"If a thing is worth doing, it is worth doing badly."  --G.K. Chesterton (patron saint of MVP prototypes)

TubaChristmas Map 2023: 

Last week I took an afternoon to make an MVP prototype of a map and listings page for TubaChristmas (a collection of local annual events that gather up low brass musicians to play holiday music). As a tuba player and programmer, I would love to help improve the process of site registration and management (which is said to be stuck in a fax-machine era), and I thought a quick and dirty prototype would open people's eyes to what could be done.

The first crucial step was to scrape the data from the TubaChristmas listings page: (click and enjoy the 1998 www vibe!) Since that page was built from a template, it wasn't too hard to cook up a DOM-to-JSON converter in the browser, but the data was still notably messy - promotional descriptors included in the city name, for example.
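The converter itself was throwaway browser-console code, but the general idea can be sketched like this - to be clear, the selectors, the field order, and the cleanCity helper here are all assumptions for illustration; the real template dictates the details:

```javascript
// Strip promotional descriptors from a scraped city name,
// e.g. "SPRINGFIELD (31st year!)" -> "SPRINGFIELD".
function cleanCity(raw) {
  return raw.replace(/\(.*?\)/g, '').replace(/[!*]+/g, '').trim();
}

// Walk the template-generated rows and turn them into plain objects.
// Assumed selectors and cell layout - adjust to the actual page markup.
function scrapeListings(doc) {
  return Array.from(doc.querySelectorAll('table tr')).map(row => {
    const cells = Array.from(row.querySelectorAll('td'))
      .map(td => td.textContent.trim());
    return {
      city: cleanCity(cells[0] || ''),
      date: cells[1] || '',
      venue: cells[2] || '',
    };
  });
}

// In the browser console, something like:
// copy(JSON.stringify(scrapeListings(document), null, 2));
```

Even with a pass like that, by-hand cleanup was still needed - the page's free-form text fields don't follow a strict schema.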

After cleaning the data by hand, I had ChatGPT write code to update the JSON structure with lat/lng via the Google Maps Geocoding API. (I had an API key handy from my work on Porchfests.)
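The geocoding pass boils down to looping over the listings and hitting the Geocoding API's JSON endpoint. A rough sketch, assuming listing objects with city/state fields and a placeholder key (the field names and placeholder are mine, not the actual code):

```javascript
const API_KEY = 'YOUR_KEY_HERE'; // hypothetical placeholder

// Build a Geocoding API request URL for a city/state pair.
function geocodeUrl(city, state) {
  const address = encodeURIComponent(`${city}, ${state}`);
  return `https://maps.googleapis.com/maps/api/geocode/json?address=${address}&key=${API_KEY}`;
}

// Annotate each listing in place with lat/lng from the first geocode result.
async function addLatLng(listings) {
  for (const ev of listings) {
    const res = await fetch(geocodeUrl(ev.city, ev.state));
    const data = await res.json();
    if (data.status === 'OK') {
      const { lat, lng } = data.results[0].geometry.location;
      ev.lat = lat;
      ev.lng = lng;
    }
  }
  return listings;
}
```

Doing this as a one-time batch step (and saving the enriched JSON) keeps the map page itself free of API-key handling.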

For the page itself, I decided to give ChatGPT a try at making a Leaflet-based solution. The process started off promisingly (first prompt: "using JS and HTML (maybe a vanilla js library) I'd like to: show a map of the US with clickable icons on certain cities. what's the easiest way to do that", then iterating) but got bogged down - each cycle of refinement tended to create more regressions, and when I finally gave up and took over the code myself, the internal structure was extremely "Jr Developer". I did a minimal amount of restructuring and hacking to get to the filtering/search with map highlighting I wanted to prove out and called it a day.
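For flavor, here's roughly the shape of that Leaflet setup - a sketch, not my actual code: the data shape, the #map and #search element ids, and the opacity-based highlighting are all assumptions:

```javascript
// Case-insensitive city match for the search box.
function matchesQuery(ev, query) {
  return ev.city.toLowerCase().includes(query.trim().toLowerCase());
}

// Browser-side setup; expects Leaflet's CSS/JS to be loaded on the page.
function initMap(events) {
  const map = L.map('map').setView([39.8, -98.6], 4); // rough US center
  L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
    attribution: '&copy; OpenStreetMap contributors',
  }).addTo(map);

  // One clickable marker per event, popup showing the city.
  const markers = events.map(ev =>
    L.marker([ev.lat, ev.lng]).addTo(map).bindPopup(ev.city)
  );

  // Highlight matches by dimming everything else.
  document.getElementById('search').addEventListener('input', e => {
    markers.forEach((m, i) => {
      m.setOpacity(matchesQuery(events[i], e.target.value) ? 1.0 : 0.3);
    });
  });
}

// Only meaningful in a browser with the Leaflet global available.
if (typeof window !== 'undefined' && typeof L !== 'undefined') {
  // initMap(eventsWithLatLng); // hypothetical data from the geocoding step
}
```

The whole thing fits comfortably in one HTML file plus the JSON blob, which is about the right weight for an MVP prototype.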

I guess I was heartened by encountering ChatGPT's limits (and this was the paid "GPT-4" version) - on the one hand, maybe it's just pointing out my own prompt-engineering deficiencies, but researchers have suggested that LLMs can't really create beyond their training data - and so I think the future is collaboration and a refinement of what it means to be a coder.

I do think at this point, if you are a programmer who isn't at least using an LLM assistant as a sort of sophisticated "StackOverflow" - providing customized answers for the API details that aren't worth keeping in your own head - you are at risk of falling behind. (My preference is using ChatGPT as a standalone consultant, vs. the Copilot hyper-autosuggest model integrated into the editor.) There are absolutely time and effort efficiencies to be had at all levels of work.

Also a reminder: I am on a job hunt, so if your team could use a 10+ year veteran UI/UX engineer who is comfortable coding React/TypeScript/Node but also aware of how automation can help team efficiency - where to trust the machine and when not to - let me know! Available for work near Boston, hybrid, or full-remote. 
