Sonified Body

Transforming dance into sound with AI

Sonified Body uses AI to transform dance into sound. We train an AI model on free creative movement to create a personalised body-computer interface that responds to the human body holistically and as it already exists. It forms part of research into non-symbolic interaction between humans and computers.

Conventional interfaces rely on the manipulation of abstract entities (parameters, documents). Here, we instead construct a bespoke language of interaction from how an individual naturally moves.

Using a Variational Autoencoder, we devise a machine-learned interpretation of the body based on many hours of recordings of one person moving.
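As a rough sketch of the data flow, not the project's actual architecture: a pose frame of tracked joints is flattened into one vector and encoded into a small latent vector. The joint count, network shape, and weights below are illustrative assumptions standing in for the trained VAE encoder.

```python
import math
import random

# Illustrative numbers only: the real skeleton-tracking system and VAE
# define these. We assume 25 tracked joints in 3D and the 16-dimensional
# latent space the system describes.
NUM_JOINTS = 25
POSE_DIM = NUM_JOINTS * 3   # x, y, z per joint
LATENT_DIM = 16

rng = random.Random(0)

# Stand-in for a trained encoder: one random linear layer. In practice
# the encoder's weights are learned from many hours of recorded movement.
weights = [[rng.uniform(-0.1, 0.1) for _ in range(POSE_DIM)]
           for _ in range(LATENT_DIM)]

def encode(pose):
    """Map one flattened pose frame to a 16-dimensional latent vector."""
    assert len(pose) == POSE_DIM
    return [math.tanh(sum(w * x for w, x in zip(row, pose)))
            for row in weights]

# One fake pose frame: in a live system this arrives from a camera or
# motion-capture sensor many times per second.
frame = [rng.uniform(-1.0, 1.0) for _ in range(POSE_DIM)]
latent = encode(frame)
```

Because every latent dimension sums over every joint coordinate, no single body part owns any single output: moving anything shifts the whole latent vector, which is the entanglement described below.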

This creates an entangled, multifaceted interface. Any body part will affect any part of the sound. Like singing or riding a bike, you can't learn it through reason and analysis. Through play and exploration it starts to make sense in the body.

It takes practice to get a feel for it, but the result is something complex and expressive.

We derive the interaction from the dancer's existing vocabulary of movement rather than designing it in our heads. This helps us to avoid imposing onto the dancer a designer's preconceptions of how a body might move and behave. Rather than interaction design, it is more an interaction negotiation: the system attempts to meet the human as they are, rather than the human needing to adapt their movement and body to the system.

Rather than navigating the abstract space of a user interface, we navigate the embodied space of our own bodies.

Rather than thinking and planning, we improvise, guided by sensual exploration.

Photo of dancer Catriona Robertson working with Tim Murray-Browne in a studio at the Centre for Contemporary Arts, Glasgow. Photo by Alan Paterson.

Sonified Body results from art-led research into Negotiated Interaction between human and computer.

It speculates on a future where computers let humans be human.

It reclaims AI from its use as a weapon wielded by states and corporations to categorise and manipulate people, reframing it as a gateway to bring embodied individual expression into the digital sphere.

The Sonified Body System

An unsupervised AI model seeks out the shared space where a person’s existing movement language overlaps with the computer’s capability to sense that movement.

We use the AI’s internal representation to drive a range of synthesisers.

The AI interprets the body holistically without reducing it to explainable components. Its interpretation is 16-dimensional, entangled, and unexplainable. This means any movement of the body can affect every parameter of the interpretation. And while consistent in behaviour, there is no logical reasoning that will explain how the interface interprets movement. Like singing or riding a bike, it must be learnt first-hand through play and embodied exploration.
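A minimal sketch of how such a latent vector might drive synthesiser parameters. The parameter names, the rescaling, and the smoothing are hypothetical assumptions for illustration, not the project's actual mapping.

```python
# Hypothetical synthesiser parameters; the real system drives a range of
# synthesisers whose controls are not specified here.
PARAM_NAMES = ["pitch", "cutoff", "grain_size", "reverb"]

def latent_to_params(latent):
    """Rescale 16 latent values from (-1, 1) into (0, 1) control signals,
    then mix several latent dimensions into each synth parameter."""
    controls = [(v + 1.0) / 2.0 for v in latent]
    params = {}
    for i, name in enumerate(PARAM_NAMES):
        # Each parameter averages every 4th latent dimension, so no
        # single dimension (and no single body part) owns one sound.
        group = controls[i::len(PARAM_NAMES)]
        params[name] = sum(group) / len(group)
    return params

def smooth(prev, new, alpha=0.2):
    """Exponential smoothing so the sound glides rather than jumps
    between pose frames."""
    return {k: prev[k] + alpha * (new[k] - prev[k]) for k in new}

state = {name: 0.5 for name in PARAM_NAMES}
latent = [0.0] * 16          # stand-in for one encoded pose frame
state = smooth(state, latent_to_params(latent))
```

Because every parameter mixes several entangled latent dimensions, the mapping stays consistent yet resists verbal explanation, which is why it must be learnt through play rather than analysis.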

A dancer also considers the body holistically. They weave together body and mind to attune for sensitivity, intuition, inhibition and action. This allows movement to be improvised in continual resonance with the dynamics of the space. The complex relationship between body and sound can be explored and understood at an embodied level.

Between sessions, the system can be retrained based on the dancer’s evolving language of movement. The updated model will define a completely new landscape of sound to explore. This is not a loss because the dancer relies on knowledge of their body rather than knowledge of the system.