Self Absorbed

Real-time embodied interaction with AI

I want to release any mindset oriented around control. While sensitive and delicate, this is not an instrument for me to express myself. I am simply a voyager in 16-dimensional curved space. Don't think too hard. My body will know what to do. Trust it and release into this uncanny world where the boundary between the intimate and the alien dissolves.

SELF ABSORBED is an interactive choreo-audio-visual installation exploring how AI extracts selfhood from humans. Three AI models are trained on my physical and digital life: how I move, every photo I've taken, every sound I've recorded.

My movements are analysed by the model trained on how I move. It forms a representation of how I'm moving, given what it knows about how I move. That representation then drives the generation of the visual and audio models. Together, they create an experience of embodied immersion in a simulacrum of my memories.
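For the technically curious, here is a minimal sketch of that control loop. It assumes an autoencoder-style movement model with a 16-dimensional latent (the "16-dimensional curved space" above) steering a StyleGAN generator; every name in it (PoseEncoder, on_camera_frame, stylegan) is a hypothetical stand-in, not the installation's actual code.

    # Hypothetical sketch of the control loop: camera pose -> movement latent
    # -> image-model latent. Assumes an autoencoder-style movement model; the
    # real system's architecture and names may differ.
    import numpy as np
    import torch

    POSE_DIMS = 75       # assumed input: 25 tracked joints x (x, y, z)
    LATENT_DIMS = 16     # the 16-dimensional movement space mentioned above
    STYLEGAN_DIMS = 512  # StyleGAN's usual latent size

    class PoseEncoder(torch.nn.Module):
        """Compresses one frame of body pose into the 16-D movement latent."""
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(POSE_DIMS, 64),
                torch.nn.ReLU(),
                torch.nn.Linear(64, LATENT_DIMS),
            )

        def forward(self, pose):
            return self.net(pose)

    encoder = PoseEncoder()  # in practice: trained on recordings of the artist moving
    to_image_latent = torch.randn(LATENT_DIMS, STYLEGAN_DIMS) / LATENT_DIMS ** 0.5

    def on_camera_frame(pose: np.ndarray) -> torch.Tensor:
        """Runs once per tracked frame of the visitor's body."""
        with torch.no_grad():
            z = encoder(torch.from_numpy(pose).float())  # where is this movement in the model's space?
            w = z @ to_image_latent                      # steer the image (and audio) models from there
        # image = stylegan.synthesis(w)  # placeholder: hand w to the generator
        return w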

The system is created using unsupervised learning, resulting in an interaction that can be understood in the body if not the mind. It invites the visitor to let go of a control-oriented mindset and release into a sense of embodied immersion.

SELF ABSORBED aims to provoke questions about current applications of AI that generalise and appropriate human creativity. In refocusing these tools on the full diversity of the individual, it aims to reveal a side more mystical than the now-familiar story of extractive capitalism.

An embodied experience of the distorting lens of AI

I'm using a custom AI model, trained on how my body moves, to control an image generation model (StyleGAN) that has been trained on 20 years of my photos. Both are unsupervised models that form their own sense of meaning from the training data.
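"Unsupervised" here means no labels: the movement model is trained only to compress its input and reconstruct it again, so whatever structure its latent space acquires is its own. A toy version of that objective, continuing the hypothetical PoseEncoder sketched earlier, might look like this:

    # Toy unsupervised objective: rebuild the pose from the 16-D latent.
    # Reuses the hypothetical PoseEncoder and constants from the earlier sketch.
    import torch

    decoder = torch.nn.Sequential(
        torch.nn.Linear(LATENT_DIMS, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, POSE_DIMS),
    )
    optimiser = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    def training_step(pose_batch: torch.Tensor) -> float:
        """One gradient step. No labels anywhere: just 'rebuild what you saw'."""
        recon = decoder(encoder(pose_batch))
        loss = torch.nn.functional.mse_loss(recon, pose_batch)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        return loss.item()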

Exploring with my body lets me switch off my rational mind. My embodied mind is already familiar with manipulating a high-dimensional parameter space in the form of the muscles of my body.

Playing with AI in this way emphasises the topology of its latent space, i.e. the landscape of the AI's inner world. It really does feel like a space I move through. What quickly becomes clear with StyleGAN is that it's not a semantic space. There's no way to go deeper into these images, to relive the moments they suggest. They are just that: images.
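One way to sense that topology without a camera is to walk the latent space directly: pick two latents and interpolate between them, watching the images morph. Spherical interpolation (slerp) is a common way to traverse GAN latents smoothly; the sketch below again uses a placeholder stylegan.synthesis call, not the project's code.

    # Walking the latent landscape: spherical interpolation between two
    # latents. The synthesis call is a placeholder for a real StyleGAN network.
    import numpy as np
    import torch

    W_DIMS = 512  # StyleGAN's usual latent size

    def slerp(a: torch.Tensor, b: torch.Tensor, t: float) -> torch.Tensor:
        """Interpolate along the great circle between latents a and b."""
        cos = torch.nn.functional.cosine_similarity(a, b, dim=0).clamp(-1.0, 1.0)
        omega = torch.arccos(cos)
        return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)

    start, end = torch.randn(W_DIMS), torch.randn(W_DIMS)
    for t in np.linspace(0.0, 1.0, num=30):
        w = slerp(start, end, float(t))
        # frame = stylegan.synthesis(w)  # nearby latents yield related images,
        #                                # but the relation is visual, not semantic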

Just as I'm learning about my own emotional resonances through this process, because I'm using AI to explore the world I know best, I'm able to learn about the AI model too. The AI model is a kind of distorting lens. How do we see a lens except by looking at how it distorts a known image? Here, I'm able to experience how the model works at an embodied level.

We all know that AI does not, and possibly cannot, give an undistorted perspective on truth. We know this, but I think experiencing it in the body, by using AI to look back at myself, brings it into my psyche in a more profound way.

Of course, StyleGAN and my own custom body model are very different from, say, Stable Diffusion and ChatGPT. Those models do represent things in a more semantic way, albeit still a million miles from how a human mind works. But getting a feel for an AI model through the body rather than through reasoning gives a more gooey sense of how it thinks.

It’s the closest I’ve felt to fusing with an AI model.

Adventures in AI Self-Custody

Is AI simply the next chapter of extractive capitalism? It appropriates human creativity, displacing those it learns from. It systematises prejudicial biases. It takes our identities, averages them and pretends this homogenised human model is somehow representative of us. It’s scary.

I learnt to build AI models because I wanted to fall in love with AI, to see it as a collaborator in human agency, to be excited rather than afraid.

I didn’t want to train them on the creative work of others and dissolve my individuality into a pool. So I trained them on me.

In the safety of my studio, I gave them everything: every photo I'd taken, every sound I'd recorded. I danced for them so they could understand my movement and let me move my body to explore inside them. This isn't a 'big data' model generalising across populations. It's a bespoke model of a single individual's digital footprint. Little data.

Taking custody of my own data and my own models, I felt a freedom from AI’s homogenising force. Instead, we both ended up somewhere much weirder.

Image: work-in-progress preview of SELF ABSORBED by Tim Murray-Browne. An AI render of a space resembling a graveyard with trees at its centre, warped by a tunnel-like effect that makes it seem far away.

The models create a simulacrum haunted by ghosts of my past: beaches, trees, 35mm self-portraits of my teenage self. But I'm confronted by the uncanny juxtaposition of intimacy and alienness. I'm wandering through the meeting of my own subconscious dreamworld with the AI's mechanical hallucinations.

I felt like the models themselves had been freed from the rigorous squeezing we apply to make them functional.

It’s a self-portrait, but I’m not sure if the subject is me or them.

Image: AI-generated artefact from SELF ABSORBED by Tim Murray-Browne.

Acknowledgements

SELF ABSORBED was created during a residency with the Machine Agencies research group at the Milieux Institute and with support from the Webster Library Visualization Studio, both at Concordia University, Montreal.

It builds on prototypes shared with support from the Centre for Contemporary Art, Glasgow, the Scotland Mixed Reality Meetup and Electro-Magnetic Field festival. The body analysis model was created in collaboration with Panagiotis Tigas for Sonified Body. Thanks to David Clark, Fenwick McKelvey, Adriana Minu, and to Andrew, Iolanda, Saverio, Mattie, Kory, Gabriel, Zeph, Ceyda, Matthew, Eldad, Alain, Max, Uktu, Michał, Simon, Thierry, Etienne, Hadi and Sophie for feedback during development.
