Do you remember four years ago when we first saw those AI generated photos of people’s faces and were told “this person does not exist”?
I remember the disorientation of that moment. It seemed incredible that an AI had acquired such a deep knowledge of the complexities of the human face, as well as the capability to render them with photographic realism. Confronted only with the flawless images, it was easy to jump to that conclusion.
Those images were created with an AI model called StyleGAN. It’s the same family of model I’ve been using in a lot of my work, including the video I shared last month, “Cosmic Insignificance Therapy”.
These days, I'm more familiar with its mistakes.
StyleGAN’s mistakes aren’t simple factual errors like colouring the pupils green or drawing the eyes too big. The model seems to lose its grasp of physical reality entirely. Its failures fall into the uncanny valley, that disturbing gulf between the cute and the lifelike where zombies and ghosts belong.
The weirdness of deconstructed intelligence
The weird is a particular kind of perturbation. It involves a sensation of wrongness: a weird entity or object is so strange that it makes us feel that it should not exist, or at least it should not exist here. Yet if the entity or object is here, then the categories which we have up until now used to make sense of the world cannot be valid.
Mark Fisher, The Weird and the Eerie
I think StyleGAN’s uncanny mistakes can be disturbing because they violate my gut assumptions about how intelligence manifests in this world. I’m used to seeing human-like intellectual capabilities together as a bundle, what I consider human intelligence. If a human can draw photorealistic faces, I might assume they have mastered many other intellectual abilities, like a deep sensitivity to human physiology and how it exists in physical reality.
But the sight of that woman’s face slowly degenerating through deformity into smudges reveals a thinking process lacking these abilities. To feel in the presence of some part of intelligence without the rest is weird. It disrupts my assumptions of what I can expect from reality, shaking me into a world where something approximating human-like intelligence can arise from unrecognisable ingredients. It’s scary in the way a zombie is scary: it has enough agency to animate a human corpse into violence, but lacks the capacity for compassion, reason or pain that might stop it.
The danger of seeing intelligence as a spectrum
StyleGAN takes one piece of the intelligence bundle, isolates it and amplifies it. Those initial uncanny images forcefully unbundled my conception of intelligence. This feels like an important experience.
StyleGAN is now over four years old. But I’ve spoken to a few people who I think might be experiencing a similar uncanny reckoning with ChatGPT. Initial awe at its impressive abilities is followed by disappointment on finding its limitations. In some cases this is met with a sense of vindictive relief. Maybe this AI is not so intelligent after all. Thank God for that.
There are indeed good reasons to talk down the “intelligence” of AI. For example, last month the Romanian Prime Minister Nicolae Ciuca unveiled an “AI advisor” called Ion which will eventually inform the government of the thoughts of the population so it can make better decisions. The story was reported in the Guardian with zero critical analysis after a press launch featuring Ciuca speaking to a mirror that was pretending to be an AI.
To give Ion its human name, and call it an AI, implies some kind of intellectual authority. It leans into the intuition that intelligence is a singular trait, and so one intellectual capability implies all the others. But Ion is, at best, a software package for statistical analysis, and possibly little more than a PR stunt. And call me a cynic, but in my experience statistics tend to be used by those in power to justify their decisions rather than inform them. I can’t imagine a more poignant image of a government AI advisor than the PM speaking into a mirror.
But let’s take care. Even if ChatGPT is little more than turbocharged statistical analysis of the web, to call it “unintelligent” is as problematic as calling it “intelligent”. It can likely do things - intellectual things - beyond what we can even dream up right now. It may have just a slice of the intellectual capabilities of a human, but it operates at a scale that gives qualitatively different results.
For example, ChatGPT can code, test its code to see if it works, modify that code and then iteratively build up a piece of software much like a human coder. But, unlike with humans, we can spawn thousands of instances of ChatGPT to work in parallel for a relatively tiny cost. Someone can (and so probably will) give it an internet connection, a list of their enemies and ask it to discover new hacking techniques to dig up dirt on them all.
It’s difficult to anticipate how far it would get. Ten years ago, many were expecting self-driving cars to be the dominant mode of transport by now. Sometimes you need far more parts of the bundle we call intelligence than it seems at first. This may also be true of ChatGPT coding, but its limited capacity for original critical analysis is a much weaker signal of its limits than it would be for a human.
Intelligence is far more complex than a bundle of intellectual capabilities, but I’m finding the bundle a more useful analogy than the ascending ladder of abilities with humans at the top. Some capabilities which have previously only been present in humans are now in machines. Others are coming into existence that we’ve never seen before.
Montreal, 21 April 2023