Meandering through memories with a hallucinating AI

An AI-rendered image from The Limits of Abstraction by Tim Murray-Browne

What we understand but could never explain

How do computers change the way we think? How does our sense of who we are and what we’re capable of become shaped by the digital tools we live with? What new forms of thinking might be unlocked by different forms of computational interaction? These questions have shaped much of my thinking in the past few years.

In February, I shared Sonified Body, a research project where we translated the live movements of dancers into sound. What was most exciting for me about this project is that we didn’t program exactly how that should happen. Instead, we used AI to allow a movement-based interface to emerge from observations of how a body moves.

The next step is The Limits of Abstraction: a video of the hallucinations of an AI model trained exclusively on my personal collection of tens of thousands of photos taken over the past 20 years, right back to when I was a teenager shooting on 35mm film.

The music to the video is a track I composed one evening in 2018 when I lived in Bucharest. I came across it while searching for a soundtrack and was struck by the sample of Alan Watts I’d dropped in the middle:

Because, after all, if I talk all the time I can’t hear what anyone else has to say, and if I think all the time—and by that I mean specifically talking to yourself subvocally inside your skull—if I think all the time, I have nothing to think about except thoughts. And so I’m never in touch with the real world.

Now, what is the real world? Some people have the theory that the real world is material or physical. They say it’s made of a kind of stuff. Other people have the theory that the real world is spiritual or mental. But I want to point out that both those theories of the world are concepts. They are constructions of words. And the real world is not an idea, it is not words.

When we interact with the virtual, be it editing a document or viewing a social media profile, we do so by manipulating virtual objects: abstractions usually designed initially to represent something from the real world, such as a paper document or a living person. As our lives become ever more integrated with digital systems, we spend more time thinking and acting through these virtual abstractions.

Both Sonified Body and the video above are interactive systems that aim to sidestep the need for the human to think like a computer. Instead, the computer thinks a bit more like a human. With Sonified Body, this was by teaching the computer to see a body in terms of how it moves in reality, rather than as a set of body parts floating in space. With the visual hallucinations above, I’ve shaped the language of abstraction around the many thousands of photos that capture what I find special when I look at the world.
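For the technically curious, the idea of a latent mapping can be sketched in a few lines. This is a hypothetical illustration, not the actual Sonified Body code: the dimensions, weights and sound parameters here are made-up stand-ins, where a real system would load an encoder trained (e.g. as a variational autoencoder) on recordings of a dancer's movement.

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 75     # assumed: e.g. 25 tracked joints x 3 coordinates
LATENT_DIM = 8    # a small latent space learned from movement data

# Random stand-in weights; a real system would use a trained VAE encoder.
W = rng.normal(scale=0.1, size=(LATENT_DIM, POSE_DIM))
b = np.zeros(LATENT_DIM)

def encode(pose: np.ndarray) -> np.ndarray:
    """Map a raw pose vector into the latent space (values in (-1, 1))."""
    return np.tanh(W @ pose + b)

def latent_to_sound_params(z: np.ndarray) -> dict:
    """Map latent coordinates to synthesis parameters (illustrative ranges)."""
    return {
        "pitch_hz": 220.0 * 2 ** z[0],                     # up to one octave up/down
        "filter_cutoff_hz": 500.0 + 4500.0 * (z[1] + 1) / 2,
        "grain_density": float(np.clip(z[2] + 0.5, 0.0, 1.0)),
    }

pose = rng.normal(size=POSE_DIM)   # one frame of motion-capture data
params = latent_to_sound_params(encode(pose))
print(params)
```

The point of the sketch is the shape of the pipeline: the interface is not hand-designed joint by joint, but emerges from a compressed representation of how the body actually moves.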

Now if you find AI learning about how you move and see to be more terrifying than exciting, then I’m with you. This is not an apolitical project. I have a lot to say on this topic, and I’ll return to it in future updates. I haven’t totally arrived at a name, but I think I may end up calling the whole project The Limits of Abstraction.

An AI-rendered image from The Limits of Abstraction by Tim Murray-Browne

And as well as the art, there is the science:

I recently published a paper with my collaborator Panagiotis in the peer-reviewed journal Applied Sciences to explore these ideas in more detail. The paper considers what is lost through the use of abstraction within digital interfaces and speculates on alternatives. It’s available open access: Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers. I’d guess it’s most likely of interest to those into the philosophical realms surrounding computer science and embodied cognition.

For those who work on very similar things to me, we also published a short paper at the New Instruments for Musical Expression (NIME) conference which describes how the Sonified Body system works and what we think is special about the approach. That is here: Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders.

And finally, I’m planning to share a prototype for a new IRL interactive piece as part of this project at an event in Glasgow on 27 November 2021. Watch this space for updates.

As always, reply if you have any thoughts or feedback you’d like to share.

Tim

Glasgow, 4 October 2021