Time for another “nuance blog”.

I think LLMs are bad for us, on balance. The problems are entirely social, though.

First, I am a “traditional Luddite”: I despair at the human cost of these “innovations”. I’ve already seen management decide to let go of skilled professionals in the belief that an LLM will do the work. So already, we have a pretty big negative. I am generally not a fan of any technology that puts people out of work, especially in such an early phase of its life.

Second, I worry about the more credulous members of society. LLMs can generate very real-sounding but entirely fake texts. Some call them “bullshit generators” precisely for this reason. Right now we live in something of a “choose your own facts” world, where extremely powerful people push blatantly (sometimes obviously) false narratives or ideas. We saw what happened as Facebook and YouTube radicalized people - and that was before we could generate such texts at the push of a button. I’m not hopeful.

Lastly, I think the embrace of LLMs is driven largely by Silicon Valley’s need to be in a perpetual hype cycle, and it has been lacking one. Web3 fizzled out - obvious horseshit from the start, backed only by monied interests and gambling addicts, it burned bright, but once everyone saw it in practice it rightfully died. LLMs are, for many, a quick pivot to the next hype cycle. They need it, because otherwise they’ll have to get real jobs, or something.

I think the technology itself should be explored; it is clearly useful. Technologies like this are meant to augment humans, not replace them - a bicycle for the mind, and all that. It just feels like we are going all-in on them far too soon.