We are at risk of turning into the machines we create.
How can we interact with machines without thinking like a machine? Without moving like a machine?
Embodied Dialogues is a research project to construct AI models that interpret human movement without looking for anything in particular.
We use these to automatically generate a full-body interface between human and machine, bespoke to an individual's existing vocabulary of movement rather than asking them to learn a designed gestural language.
This project kicked off through a collaboration with artist and AI researcher Panagiotis Tigas.
Look at the words on my keyboard: ⌘command, ⌃control, fn function, ⇧shift. These describe the processes of a factory or a military unit, not a conversation, nor a dance, nor friends eating together. What of the rest of being human?
Behind every interface is a model world.
People are modelled as profiles.
Emotions are modelled as emojis 🤷.
The interface defines how I can sense and shape that world.
It defines my relationship to that world: what I can do, who I am, and who I can be.
The philosophical underpinning of Sonified Body.
Human-computer interaction is currently dominated by a paradigm where abstract representations are manipulated. We propose an alternative paradigm of emergence and resonance, which can be realised through unsupervised machine learning. This shift requires us to release the requirement that interactions with computers be precise and explainable, and instead embrace vagueness, ambiguity and embodied cognition.
A short technical paper for the NIME conference that describes the approach Panagiotis and I developed in Sonified Body to interpret human movement. We use an AI model trained solely on an individual's movement to form a compressed representation of how their body moves. From this we generate a primary mapping that transforms movement into sound, based entirely on their existing vocabulary of movement.
In the paper we argue that this approach is effective in open-ended creative work, such as the real-time transformation of a dancer's movement into sound, as we're doing.
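To make the idea concrete, here is a minimal sketch of the pipeline shape described above: fit an unsupervised compression on an individual's own recorded movement, then map the resulting latent dimensions to sound-synthesis parameters. Everything here is illustrative and assumed, not the paper's actual implementation: the compression is a stand-in PCA (the real system may use a different unsupervised model, such as a variational autoencoder), and the keypoint count, latent size, and parameter mappings are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "movement corpus": 500 frames of 17 keypoints in 2D (34 values).
# In practice this would be motion-capture or pose-estimation data
# recorded from the individual dancer.
frames = rng.normal(size=(500, 34))

# Fit an unsupervised compression (PCA via SVD) on the dancer's own
# movement, so the latent space reflects *their* vocabulary of movement.
mean = frames.mean(axis=0)
centered = frames - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:4]  # keep a 4-dimensional latent space

def encode(pose):
    """Project a single pose frame into the learned latent space."""
    return (pose - mean) @ components.T

def latent_to_sound(z):
    """Map latent coordinates to synthesis parameters (invented mapping)."""
    pitch = 220.0 * 2.0 ** np.tanh(z[0])   # Hz, within an octave of A3
    amplitude = 1.0 / (1.0 + np.exp(-z[1]))  # squashed to 0..1
    return pitch, amplitude

pitch, amp = latent_to_sound(encode(frames[0]))
```

Because the compression is fitted only on this individual's movement, ordinary movement for them spans the latent space, so the sonic mapping responds to how they already move rather than to a designed gestural alphabet.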
Earlier this month I was at a residency called the Choreographic Coding Lab with an amazing group of dancers and coders. I shared my recent work on building complex full-body interfaces with AI. I was encouraged by how people responded, not just to how it feels but also to the critical ideas I'm exploring through this work: how digital interfaces shape the way we think and move.
I've heard it said that if you want to know what tech everyone will be into in 10 years then look at what those in the Silicon Valley bubble are excited about today. When it comes to tech, they see just a bit further.
Today, this is, apparently, trans-humanism and augmented reality glasses. I'd also add self-custody of our digital content. Ten years ago? I have to stretch my memory: online privacy and bitcoin.
I think there may be an equivalent observation with dancers. If you want to know what everyone will be doing in 10 years to recover body and mind from technology, then observe how dancers work today. When it comes to the body, dancers sense just a bit deeper.