Saturday, September 23, 2023

on the limits of the ChatGPT crowd

An elegant and powerful new result seriously undermines large language models. LLMs such as ChatGPT make interesting mistakes that point out the limits of what they do, and they may prove to be a side branch on any path to general "true" Artificial Intelligence. They aren't building their own model of the world "in their head"; instead, they are phenomenally good at predicting what our model of the world - communally established in the form of written language - would say when confronted with a new variation.

But that lack of a model is weird! The case study they give (which I duplicated via ChatGPT/GPT-4) is that it can't tell you who Mary Lee Pfeiffer is the parent of... but it can tell you that one of Tom Cruise's parents is Mary Lee Pfeiffer. And this kind of gap was predicted in discussions of earlier forms of neural networks - which may indicate it's a fundamental problem, a shortcoming that can't readily be bridged.

It reminds me of Marvin Minsky's late-60s work with Perceptrons. ChatGPT was able to remind me of the details -

Minsky and Papert proved that a single-layer perceptron cannot solve problems that aren't linearly separable, like the XOR problem. Specifically, a single-layer perceptron cannot compute the XOR (exclusive or) function.
Of course, later multilayer networks (with backpropagation, and later transformers (not the cartoon toys)) overcame these limits, and gave us the LLMs we are now establishing our wary relationships with. So who knows if there could be another breakthrough.
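The XOR limit is easy to see for yourself. Here's a minimal sketch (plain Python, no libraries, my own illustrative code rather than Minsky and Papert's notation) of the classic perceptron learning rule: it converges on a linearly separable function like AND, but no choice of weights can ever get all four XOR cases right, because no single line separates XOR's two classes.

```python
def train_perceptron(data, epochs=100, lr=0.1):
    """Classic single-layer perceptron rule: threshold w.x + b at zero,
    nudging weights toward each misclassified example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred  # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(w, b, data):
    return sum(
        (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
        for x, t in data
    ) / len(data)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print("AND accuracy:", accuracy(w, b, AND))  # converges to 1.0
w, b = train_perceptron(XOR)
print("XOR accuracy:", accuracy(w, b, XOR))  # never reaches 1.0
```

However long you train, the XOR run stays stuck below perfect accuracy - you need at least one hidden layer to carve the plane into the two regions XOR requires.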

But the results we get with LLMs are astounding - they are a type of "slippery thinking" that is phenomenally powerful... Hofstadter and Sander called their book "Surfaces and Essences: Analogy as the Fuel and Fire of Thinking", and I would argue that so much of intelligence is analogy and metaphor - branching far from the human situation of having to build a model of the world with ourselves as bodily agents in it.

And as we find more uses for LLMs, we need to be careful of adversaries thinking one step ahead. The once seemingly unstoppable, alien intelligence of AlphaGo-derived Go players can now be beaten by amateur players - once other machine learning figured out what those players can't see on the board.

Suddenly, Captain Kirk tricking intelligent computers with emotional logic doesn't seem as far-fetched as it once did...
