
“We are at risk of turning into the machines we create.” – Iain McGilchrist
How can we interact with machines without thinking like a machine? Without moving like a machine?
The Wilds is an ongoing body of artistic research, begun in 2020, exploring how AI can unlock new forms of interaction between humans and machines. By analysing human dance with unsupervised AI, I devise body-centric interfaces that let humans remain wild humans rather than being domesticated into machine operators.
The body of work includes texts, performances, interactive installations and audio-visual works (see below).
There are three critical focuses:
1. Embodiment. Work with our non-rational intelligence. Embrace the full complexity of the moving body without classifying it into ‘gestures’ or other reductive abstractions. Being with instead of using the interface. Play and resonance over the manipulation of virtual realms.
2. Emergence (Non-designed Interaction). Let interaction between human and machine emerge from their existing affordances and ways of being. Build tools for expression that aren't infused with the preconceptions of their designers. Embrace not knowing.
3. Extreme Individualisation. Remember, a perfectly unbiased AI is one that eliminates individual differences. What happens if we go for maximal bias and each have our own AI? Avoid appropriating the creativity of others by training on myself. AI that is all of me rather than a tiny bit of everyone.
The Wilds rejects inevitabilism, instead approaching AI as an opportunity to disentangle our minds from the hyper-rationalism of our technological epoch.
Points 1 and 2 are critical of general tendencies of interactive technologies to marginalise the human body and to funnel human expression into predefined forms (posts, selfies, etc.). My manifesto Against Interaction Design articulates this in more detail.
Here, AI is a beacon of hope. Works like Sonified Body speculate on more human-friendly forms of human-computer interaction rooted in embodiment, resonance and feelings of belonging.
Point 3 is critical of AI itself: its tendency to appropriate human creativity and to seduce us into homogeneous forms of expression. Self Absorbed addresses this by refocusing AI's extractive capabilities entirely onto myself.
These are the most important parts of The Wilds to me. The full list follows.
Interactive Works
Audio-visual Works
Texts

Against Interaction Design
Look at the words on my keyboard: ⌘command, ⌃control, fn function, ⇧shift. These describe the processes of a factory or a military unit, not a conversation, nor a dance, nor friends eating together. What of the rest of being human?
Behind every interface is a model world.
People are modelled as profiles.
Emotions are modelled as emojis 🤷.
The interface defines how I can sense and shape that world.
It defines my relationship to that world: what I can do, who I am, and who I can be.
reference
T. Murray-Browne, “Against Interaction Design,” arXiv:2210.06467 [cs.HC], 30 Sep 2022.
bibtex
@misc{murray-browne2022against-interaction-design, author = {Murray-Browne, Tim}, title = {Against Interaction Design}, year = {2022}, month = {September}, day = {30}, publisher = {arXiv}, doi = {10.48550/arXiv.2210.06467}, url = {https://timmb.com/against-interaction-design} }

Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers
The philosophical underpinning of Sonified Body.
Human-computer interaction is currently dominated by a paradigm where abstract representations are manipulated. We propose an alternative paradigm of emergence and resonance, which can be realised through unsupervised machine learning. This shift requires us to release the requirement that interactions with computers be precise and explainable, and instead embrace vagueness, ambiguity and embodied cognition.
reference
T. Murray-Browne and P. Tigas, “Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers,” Applied Sciences, 11(18): 8531, 2021.
abstract
Most human–computer interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.
bibtex
@article{murraybrowne2021emergent-interfaces, author = {Murray-Browne, Tim and Tigas, Panagiotis}, journal = {Applied Sciences}, volume = {11}, number = {18}, pages = {8531}, title = {Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers}, year = {2021} }

Authored and Unauthored Content: A Newly Necessary Distinction
Imagine waking up to find a car parked on your lawn. Before AI, there was one explanation: somebody did this. It required human agency, so it was, on some level, an authored act. I can trace this state of affairs back to someone’s intentions. What does it mean? Why did they do this? Now imagine waking up to find a self-driving robo-taxi parked on your lawn. Did someone program it to park there? Or is this some kind of glitch? In other words, is this act authored or unauthored? With AI in the picture, the answer is no longer self-evident. I can use AI to author work with full presence and intention. I can also unwittingly set off a complex chain of events that leads to the exact same work.
Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders
A short technical paper for the NIME conference that describes the approach Panagiotis and I developed in Sonified Body to interpret human movement. We use an AI model trained entirely on an individual's movement to form a compressed representation of how their body moves. We use this to generate a primary mapping for transforming movement into sound that is based entirely on their existing vocabulary of movement.
In the paper we argue why this approach is effective for open-ended creative work, such as the real-time transformation of a dancer's movement into sound.
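To make the approach concrete, here is a minimal sketch, assuming a PyTorch environment, of how such a latent mapping might be built. It is illustrative rather than the Sonified Body source: the pose format, layer sizes, and names such as PoseVAE and latent_mapping are assumptions. A small variational autoencoder is trained on unlabelled pose frames from a single mover, and the dimensions of its latent space are then routed to sound-control parameters.

# Minimal latent-mapping sketch (illustrative, not the Sonified Body source).
# A small VAE is trained on unlabelled pose frames from one mover;
# each latent dimension then drives one sound-synthesis parameter.
import torch
import torch.nn as nn
import torch.nn.functional as F

POSE_DIM = 75    # e.g. 25 joints x 3 coordinates (assumed skeleton format)
LATENT_DIM = 8   # number of expressive control dimensions we want

class PoseVAE(nn.Module):
    def __init__(self, pose_dim=POSE_DIM, latent_dim=LATENT_DIM):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, pose_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)   # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar, beta=1.0):
    # Reconstruction error keeps the latent space faithful to the mover's corpus;
    # the KL term keeps it smooth and compact.
    recon_err = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + beta * kl

def train(model, poses, epochs=200, lr=1e-3):
    # poses: (N, POSE_DIM) tensor of unlabelled frames recorded from one mover.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, mu, logvar = model(poses)
        loss = vae_loss(poses, recon, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def latent_mapping(model, pose_frame):
    # Map one live pose frame to LATENT_DIM sound-control values in [0, 1].
    with torch.no_grad():
        mu, _ = model.encode(pose_frame.unsqueeze(0))
    return torch.sigmoid(mu).squeeze(0)  # squash to a synth-friendly range

if __name__ == "__main__":
    corpus = torch.randn(1000, POSE_DIM)          # stand-in for recorded movement data
    model = train(PoseVAE(), corpus)
    controls = latent_mapping(model, corpus[0])   # e.g. send these values to a synth via OSC
    print(controls)

Using the encoder mean as the live mapping keeps the interface deterministic, while the training corpus, rather than a designer's chosen labels, decides what each control dimension responds to.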
reference
T. Murray-Browne and P. Tigas, “Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), online, 2021.
abstract
In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.
bibtex
@inproceedings{murray-browne2021latent-mappings, author = {Murray-Browne, Tim and Tigas, Panagiotis}, booktitle = {International Conference on New Interfaces for Musical Expression}, day = {29}, doi = {10.21428/92fbeb44.9d4bcd4b}, month = {4}, note = {\url{https://doi.org/10.21428/92fbeb44.9d4bcd4b}}, title = {Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders}, url = {https://doi.org/10.21428/92fbeb44.9d4bcd4b}, year = {2021}, }

Embodied thinking: The antidote to being in the zone
Earlier this month I was at a residency called the Choreographic Coding Lab with an amazing group of dancers and coders. I shared my recent work on building complex full-body interfaces with AI. I was encouraged by how people responded, not just to how it feels but also to the critical ideas I'm exploring through this work: how digital interfaces shape the way we think and move.
I've heard it said that if you want to know what tech everyone will be into in 10 years then look at what those in the Silicon Valley bubble are excited about today. When it comes to tech, they see just a bit further.
Today, this is, apparently, transhumanism and augmented reality glasses. I'd also add self-custody of our digital content. Ten years ago – I have to stretch my memory – it was online privacy and bitcoin?
I think there may be an equivalent observation with dancers. If you want to know what everyone will be doing in 10 years to recover body and mind from technology, then observe how dancers work today. When it comes to the body, dancers sense just a bit deeper.
Acknowledgements
This project kicked off through Sonified Body, a collaboration with artist and AI researcher Panagiotis Tigas, which was funded by Creative Scotland with support from Present Futures festival and the Centre for Contemporary Art, Glasgow, Feral and Preverbal.
Sonified Body was developed through residencies with dancers Adilso Machado, Catriona Robertson and Divine Tasinda and film-maker Alan Paterson, with mentorship from Ghislaine Boddington.
Self Absorbed was developed with support from the Milieux Institute, Concordia University, Montreal.
