aiaia
Aeaea, of course, was the island where Circe lived, and turned men into beasts.
I want to get down the obligatory “AI thinkpiece” — a bunch of thoughts about large language models, some McLuhan-isms, and so forth. This will hopefully be reasonably coherent, but it might age very poorly — the state of LLMs where I work, in software, has transformed in the last 6 months, and may (or may not!) transform again pretty quickly.
(n.b. that I don’t think that LLMs should be used for any kind of human to human communication, at all — but they sure can write code.)
The Code
A few years ago, I wrote a few words in this parish about how code, which one would think would be a hot, deterministic, high-information medium, has been getting colder and colder, as programming languages move up the stack, as the network and the internet become sources of indeterminism, etc.
With LLMs, especially with the current move to “the code does not matter, only measuring does”, the code is getting ice cold, approaching absolute zero. It’s a very strange feeling to ask an LLM to write some code, try the results, and ship it — especially when the LLM correctly recalls a piece of syntax that you had remembered wrong.
This coldness has consequences, though, in that we don’t know how our systems work. We know what they do, and perhaps why, but the how will become unintelligible.
This is, to say the least, a reversal of where writing software has been for the past umpteen years. We appear to now be working more like “real” engineers, who design, specify, verify, and occasionally go to Utah to yell at construction workers at a mine that we designed.
(Except that “real” engineers have a few hundred years of culture and process to make sure that their work does not break … whereas we have 30 years of “move fast and break things”, and we may end up having other, different LLMs write the code that verifies the first LLM’s code — hmm.)
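To make the “LLMs verifying other LLMs” idea concrete, here is a minimal toy sketch. The `complete()` function is a hypothetical stand-in for any LLM API (no real provider is assumed); it returns canned strings here so the sketch runs offline. A real harness would prompt two different models and execute the second model’s tests against the first model’s code.

```python
# Toy sketch: model A writes code, model B writes tests, we run the tests.
# complete() is a hypothetical stand-in for an LLM call, stubbed with
# canned responses so this runs without any API.

def complete(model: str, prompt: str) -> str:
    canned = {
        "coder": "def add(a, b):\n    return a + b\n",
        "verifier": (
            "def test_add():\n"
            "    assert add(2, 3) == 5\n"
            "    assert add(-1, 1) == 0\n"
        ),
    }
    return canned[model]

def cross_check() -> bool:
    """Execute the coder model's output, then the verifier model's tests."""
    namespace = {}
    exec(complete("coder", "write add(a, b)"), namespace)
    exec(complete("verifier", "write tests for add"), namespace)
    try:
        namespace["test_add"]()
        return True
    except AssertionError:
        return False

print(cross_check())  # prints True
```

The catch, of course, is the one in the parenthetical above: the verifier is only as trustworthy as the model that wrote it, so a passing `cross_check()` tells you the two models agree, not that either is right.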
There is an idea that the craft of writing code will go away. This is appealing in many ways — but it is not without risk. Pursuant to my prior point about coldness, when the machines can’t save us, someone will still need to move down a layer of abstraction: debug the kernel, patch core Postgres code, and so on.
There’s further an idea, via the Jevons paradox, that as code gets cheaper we will write more of it, not less. Based on the Feature Request list at my current company, we’re certainly nowhere close to running out — but we may end up in a world where code is so “cheap” that people stop paying for software in toto. As yet, unclear.
The Culture
To riff back to McLuhan and coldness, McLuhan talked about oral culture vs print culture, but it is print culture that defines our entire* history of education. When all we do is talk to LLMs (and some people are literally moving from typing to chatbots to speaking with them!), where and how will we study, in the traditional sense? And what will we learn?
(* Since … 1075 AD or so)
I also want to touch quickly on hosting. It is madness to me that we currently have this magic in the hands of 3 to 5 companies — but the hosting and training of these models is no joke. The phrase “I ran it on the University AI” keeps running through my brain, much more so than “I ran it on the city AI”, or “the government AI”.
Finally — if computers can do everything, what will we do? I’d point you to Iain M. Banks’s Culture series as a utopian take, but I also want to point to a very current thing. Chess became pointless for humans in 2006, and it is now pointless for everyone, all the time — my phone has a strong enough CPU to beat the strongest human players every time.
And yet, in 2020, care of Netflix, the pandemic, and a dude from Norway with a good jawline, we had, and are still having, a chess boom. Why? What’s the point? Any answer other than “Because it’s there” is beyond the scope of this post, but “Because it’s there” is a pretty good answer.
To close on a cautionary note — I also keep thinking about Rudy Rucker, and his bizarro-world “Big Pig” internet, and the “oh, I’ll get the info from the Big Pig” addiction. It sure looks like much of LLM usage will not keep our minds exercised; we should take care of our minds, as they’re all we’ve got.
