Wednesday, May 29, 2024

o'reilly on building with LLMs

Good article on building with LLMs. Ironically, I think ChatGPT or similar is going to be a help for some people unfamiliar with the terms of art used in the article ("n-shot prompting", "Retrieval-Augmented Generation (RAG)", etc.).

Heh, I remember when O'Reilly was the first line of defense for coding (I especially loved their "Cookbook" and "Phrasebook" formats.) It shifted to sites like experts-exchange, and then Google + Stackoverflow, and now ChatGPT.


Monday, May 27, 2024

backlog review...

 AI for Web Devs - In this blog post, we start bootstrapping a web development project using Qwik and get things ready to incorporate AI tooling from OpenAI.

Nice intro to WebComponents - I do love the advent calendar format...
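
Mostly as a note to self, here's a minimal vanilla custom element of the sort those intros walk through (the element name and attribute are made-up examples, not from the linked article):

// register a tiny custom element - the browser upgrades any <hello-card> tags it finds
class HelloCard extends HTMLElement {
  connectedCallback() {
    // runs when the element is attached to the document
    const name = this.getAttribute('name') || 'world';
    this.innerHTML = `<p>Hello, ${name}!</p>`;
  }
}
customElements.define('hello-card', HelloCard);

// usage in markup: <hello-card name="Kirk"></hello-card>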

https://tiiny.host/web-hosting-free-sites/ or https://glitch.com/ for static hosting of HTML?

WildAgile: just enough process

 It's probably meant half tongue-in-cheek but WildAgile is a minimalist approach to scrum, one that attempts to somewhat codify (or possibly "legitimize") the process many companies kind of land on - a pragmatic (and sprintless) approach.

Saturday, May 25, 2024

on being dense

Smart article on UI Density - I'm a sucker for anything that cites Edward Tufte. 

Its summary quote is:

UI density is the value a user gets from the interface divided by the time and space the interface occupies.

I think that's good at advancing the art, but it might be falling into the trap of only considering things that can be readily quantified. A UI will live or die based on the mental headspace it takes to understand it. A compact interface can be off-putting because it requires too much external knowledge to grok, and so just appears overwhelming.

lifehacks to stay afloat

 I've been floundering with email for a few weeks now.


This morning I came up with an idea to try and gate things a little better: no tumblr (my favorite way of keeping up with the memes) til I'm at just about Inbox Zero. We'll see how it goes.


But one of the things that trips me up is these newsletters - Frontend Focus, JavaScript Weekly, ui.dev's Bytes. Like on the one hand I'm grateful for these; 5 or 6 years ago I was really feeling out of the loop about where frontend tech was heading - React sort of crept up on me - and now I feel much better informed.
But I'm still disheartened by, and a bit suspicious of, the extreme-complexity flavor-of-the-month stuff. I think this video puts it kind of well: 

Tuesday, May 21, 2024

just trust us (i.e. our Bot)

 
Google is pivoting to REALLY prefer AI summaries of information over traditional web links and snippets. This article is a workaround that (at least for now) makes it easier to stick to web results.

But this trend is so infuriating! Google made its mark with a brilliant "BackRub" algorithm: banking on the idea that a source is more useful and trustworthy based on how many OTHER sites refer to it. That's not Truth, but it's a good first-order approximation.

This throws that out the window. Google is putting all its eggs in the basket of "people just want a simple answer" (as Nolan Bushnell puts it, "A simple answer that is clear and precise will always have more power in the world than a complex one that is true") and replacing its history of "trust us, we'll show you whom you can trust!" with a simplistic "just trust us (i.e., our Bot)".

And sure, knowing whom to trust online has always been as much an art as a science - people have to develop their own nose (starting with their own preconceptions) using the content and (for better or worse) the design and presentation of a site. (And while not infallible, I think it shows the wisdom of Wikipedia's approach - insist on citations, and let knowledgeable parties slug it out. Of course conservatives suspect it has a slant - but progressives tend to side with Colbert's "reality has a well-known liberal bias".) But the "blurry JPEG" that is AI causes information to lose its flavor, all piped through the same Siri- or Alexa- or Google-Assistant-friendly stream of words.

And when the AI is wrong - boy there are some anecdotes out there. The one about what to do about a Rattlesnake bite is a killer. Possibly literally. It's almost enough to make one hope for some huge punitive lawsuits.

There is a weird "Idiot Ouroboros" aspect to Google's pivot away from connecting people to other knowledge sources - the AI gets its knowledge base from what was gleaned from the web. Now the incentive for building up a reputable parcel on the wider information landscape fades away, and eventually the whole web starts to look like those dark corners of social media where spambots endlessly pitch their wares to fake-user-account bots.

Saturday, May 18, 2024

draw a line anywhere on a page!

Two quick points:
They released GPT-4o. The o officially stands for "omni", though based on a few random tech problems I threw at it I'm thinking it stands for "oh, not as good as 4" - I think they tuned it to be fast and to take in more types of input in a fluid way, but for my main use (programmer's lil' helper) the web version gives weaker results.

Anyway, 4 had better suggestions for my problem: I wanted to draw arbitrary lines on a page (in my case I'm revamping my "poll editor", and I realized it would be great if the element used to set up an individual poll entry could point to the appropriate place on the preview).

Enter LeaderLine! To rip off that page:

I love it - if your page is dynamically reformatting you might have to call "line.position()" to reset things, and there were a few cases (like when the page was distorted by a weirdly resized textarea) where it wasn't 100% perfect. But I think it nails most use cases (rough usage sketch below, after the list) and meets my criteria for using a library vs writing bespoke code:
1. it pulls its weight by solving a relatively difficult problem
2. it has a reasonably simple API
3. it works with vanilla.js without a build system
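
Here's roughly how I'm using it - the element ids are placeholders from my poll editor, not anything official from the LeaderLine docs:

// LeaderLine is loaded from a plain script tag, no build step needed
const line = new LeaderLine(
  document.querySelector('#poll-entry-3'),   // the editor row for one poll entry
  document.querySelector('#preview-bar-3')   // the matching spot in the preview
);

// when the layout reflows (window resize, textarea drag, etc.)
// the line needs to be told to recompute its endpoints:
window.addEventListener('resize', () => line.position());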


Friday, May 17, 2024

BASIC and this one weird trick with line numbers

Happy Birthday BASIC - it just had its 60th birthday.

Sometimes I say Perl taught me 5 big things, after taking up C in college: maps, first-class strings (vs C character arrays), duck typing, regular expressions, and not having to micromanage memory. But really, BASIC was my first exposure to 3 of those things!

Famously Dijkstra said

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.

Pretty rough talk! Though he may have been talking about a particularly primitive flavor of it.

Line numbers are probably the most-reviled thing - certainly they don't encourage modular thinking the way named subroutines do, but as my friend Jeremy "SpindleyQ" Penner pointed out, they are at least a good way of getting new (and young!) programmers to understand the step-by-step thinking that is critical to writing programs. (I remember seeing a listing of Amiga BASIC and being blown away that it had no line numbers! It also reminds me that 8-bit computers had weird flavors of "full screen editing" - line numbers were a way of managing the program listing's structure in an age before text editors were always available.)

One trick you heard while learning BASIC in the 80s: number by 10s, so that you can easily insert code before or after a specific line. Weirdly this came in handy when I was creating my 50th Birthday comic book - it started as a bunch of standalone thoughts, which I turned into comic panels (and later grouped into 4 sections). I had a rough idea of the ordering, so I started each file name with a 3-digit multiple of ten ("010-BEGINS-TO-BAND.png", "020-FIXED-MINDSET.png", etc.), so that the filesystem could easily show me the ordered list. (I'm not sure if writing in all caps was another nod to the old 8-bit computer days.)

Wednesday, May 15, 2024

uh-oh is chatgpt slipping

ChatGPT has proven itself a valuable programming partner, but with the new version, when I asked it to make a React component it started:

import React, { useState } from 'recat'; import styled from 'styled-components';

"recat"? That does not bode well...

Wednesday, May 1, 2024

remember to not require authentication for OPTIONS requests on your authenticated endpoints!

Recently I was on a project where we started using "bearer tokens" (like bearer bonds: if you have possession of the token, you're trusted).

But we were getting CORS errors that read like cross-origin issues? And the request with the bearer token would go ahead and work when exported as a cURL command...

Cut to the chase: the browser was sending a preflight OPTIONS request to sniff around for the "Access-Control-Allow-Origin" header, but that endpoint had accidentally been set to require the token. That's not the way OPTIONS is supposed to work - the browser doesn't include the token on the preflight - so the 401 rejection of the OPTIONS request meant the browser would never try the actual GET.

A little frustrating. It's like "c'mon browser - live a little, send the request, let the server figure out if it doesn't want to talk to you" -- sort of like giving a pep talk when you're playing wingman for your single friend at a bar.

coors = mediocre beer, CORS = annoying security gotchas
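
For future me, a minimal sketch of the fix, assuming an Express-style server (our actual stack was different, but the shape is the same): answer OPTIONS preflights before any token check.

const express = require('express');
const app = express();

// CORS headers on every response, including preflights
app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'https://app.example.com'); // placeholder origin
  res.set('Access-Control-Allow-Headers', 'Authorization, Content-Type');
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  // the browser never sends the bearer token on the preflight,
  // so let OPTIONS through before any auth check
  if (req.method === 'OPTIONS') return res.sendStatus(204);
  next();
});

// only now require the bearer token
app.use((req, res, next) => {
  const auth = req.get('Authorization') || '';
  if (!auth.startsWith('Bearer ')) return res.sendStatus(401);
  next();
});

app.get('/api/stuff', (req, res) => res.json({ ok: true })); // placeholder endpoint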

chatgpt as lossy compression

My favorite sci-fi author Ted Chiang wrote "ChatGPT Is a Blurry JPEG of the Web":
Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she's read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn't an indicator of genuine learning, so ChatGPT's inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we're dealing with sequences of words, lossy compression looks smarter than lossless compression.
Admittedly this was written last year but I think he underestimates the usefulness of ChatGPT in applying knowledge to a particular case at hand:
This analogy makes even more sense when we remember that a common technique used by lossy compression algorithms is interpolation--that is, estimating what's missing by looking at what's on either side of the gap. When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it's prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in "lexical space" and generating the text that would occupy the location between them. ("When in the Course of human events, it becomes necessary for one to separate his garments from their mates, in order to maintain the cleanliness and order thereof. . . .") ChatGPT is so good at this form of interpolation that people find it entertaining: they've discovered a "blur" tool for paragraphs instead of photos, and are having a blast playing with it.
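Just to make his pixel analogy concrete, here's a toy sketch (mine, not from the article) of reconstructing a lost value by averaging its surviving neighbors:

// a row of grayscale pixel values, with null where data was lost in compression
function fillMissing(row) {
  return row.map((value, i) => {
    if (value !== null) return value;
    // average whichever neighbors survived
    const neighbors = [row[i - 1], row[i + 1]].filter((v) => v != null);
    return neighbors.reduce((sum, v) => sum + v, 0) / neighbors.length;
  });
}

console.log(fillMissing([120, null, 130])); // -> [120, 125, 130]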
I'm willing to grant that asking ChatGPT to apply its embedded gleaned knowledge to a particular problem is basically that kind of interpolation, but in practice it is far more useful than making entertaining mashups. In my case, especially for technical tasks - as I previously quoted Dave Winer:
ChatGPT is like having a programming partner you can try ideas out on, or ask for alternative approaches, and they're always there, and not too busy to help out. They know everything you don't know and need to know, and rarely hallucinate (you have to check the work, same as with a human btw). It's remarkable how much it is like having an ideal human programming partner. It's the kind of helper I aspire to be.

Interesting world.