Sonified Body uses AI to transform dance into sound. We train an AI model on free creative movement to create a personalised body-computer interface that responds to the human body holistically and as it already exists. It forms part of research into non-symbolic interaction between humans and computers.
Conventional interfaces rely on the manipulation of abstract entities (parameters, documents). Here, we instead construct a bespoke language of interaction from how an individual naturally moves.
Using a Variational Autoencoder, we devise an automatic, machine-learned interpretation of the body from many hours of recordings of one person moving.
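As a rough illustration of this approach (a sketch, not the production system), the code below trains a small Variational Autoencoder on flattened skeleton frames. The pose dimensions, network sizes and names are assumptions; only the 16-dimensional latent space matches the system described below.

```python
# Minimal VAE sketch for learning a latent representation of pose data.
# Assumes each training example is a flattened skeleton frame
# (e.g. 25 joints x 3 coordinates = 75 values); all names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

POSE_DIM = 75    # flattened joint coordinates per frame (assumed)
LATENT_DIM = 16  # the 16-dimensional interpretation described below

class PoseVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(POSE_DIM, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, LATENT_DIM)
        self.to_logvar = nn.Linear(128, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, POSE_DIM))

    def encode(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterisation trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a unit Gaussian prior.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl
```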
The process creates an entangled, multifaceted interface. Any body part will affect any part of the sound. Like singing or riding a bike, you can't learn it through reason and analysis. Through play and exploration, it starts to make sense in the body.
It takes practice to get a feel for it, but the result is something complex and expressive.
We derive the interaction from the dancer's existing vocabulary of movement rather than designing it in our heads. This helps us avoid imposing onto the dancer a designer's preconceptions of how a body might move and behave. Rather than interaction design, it is more interaction negotiation: the system attempts to meet the human as they are, rather than the human needing to adapt their movement and body to the system.
Rather than navigating the abstract space of a user interface, we navigate the embodied space of our own bodies.
Rather than thinking and planning, we improvise, guided by sensual exploration.
Sonified Body results from art-led research into Negotiated Interaction between human and computer.
It speculates on a future where computers let humans be human.
It reclaims AI from its use as a weapon wielded by states and corporations to categorise and manipulate people, reframing it instead as a gateway to bring embodied individual expression into the digital sphere.
The Sonified Body System
An unsupervised AI model seeks out the shared space where a person’s existing movement language overlaps with the computer’s capability to sense that movement.
We use the AI’s internal representation to drive a range of synthesisers.
The AI interprets the body holistically without reducing it to explainable components. Its interpretation is 16-dimensional, entangled, and unexplainable. This means any movement of the body can affect every parameter of the interpretation. And while the system is consistent in behaviour, no logical reasoning will explain how the interface interprets movement. Like singing or riding a bike, it must be learnt first-hand through play and embodied exploration.
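To make this entanglement concrete, here is a hedged sketch of how a live frame could drive sound: the frame is encoded into the 16-dimensional latent space and each coordinate is broadcast to synthesisers over OSC. It assumes the PoseVAE sketch above and the python-osc package; the OSC address pattern and port are illustrative, not the system's actual configuration.

```python
# Sketch: drive synthesiser parameters from the 16-dimensional latent
# interpretation of a live pose frame. Assumes the PoseVAE above and the
# python-osc package; the address pattern and port are illustrative.
import torch
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # e.g. a synth patch listening here

@torch.no_grad()
def sonify_frame(model, pose_frame):
    """Encode one pose frame and send its latent coordinates as OSC."""
    mu, _ = model.encode(torch.as_tensor(pose_frame, dtype=torch.float32))
    # Every joint influences every latent dimension: the mapping is entangled,
    # so no single body part maps cleanly to a single sound parameter.
    for i, value in enumerate(mu.tolist()):
        client.send_message(f"/sonified/latent/{i}", value)
```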
A dancer also considers the body holistically. They weave together body and mind to attune for sensitivity, intuition, inhibition and action. This allows movement to be improvised in continual resonance with the dynamics of the space. The complex relationship between body and sound can be explored and understood at an embodied level.
Between sessions, the system can be retrained on the dancer's evolving language of movement. The updated model defines a completely new landscape of sound to explore. This is not a loss, because the dancer relies on knowledge of their body rather than knowledge of the system.
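A minimal sketch of that retraining step, continuing the assumed PoseVAE above. Training from scratch rather than fine-tuning is our assumption here, consistent with the updated model defining a completely new landscape; the epoch count and learning rate are illustrative.

```python
# Sketch: retrain between sessions on the dancer's latest recordings.
# `frames` is a tensor of shape (num_frames, POSE_DIM); a fresh model is
# trained from scratch (assumed choice), yielding a new latent landscape.
def retrain(frames, epochs=50, lr=1e-3):
    model = PoseVAE()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, mu, logvar = model(frames)
        loss = vae_loss(recon, frames, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```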
Sonified Body uses AI to create a personalised interface. We intentionally ‘overfit’ the model to a single individual. The generated interface is a unique language between the person and the system.
Rather than attempting to design an interface optimised for everybody, we generate a bespoke interface optimised for one person.
Sonified Body aims to provoke a rethink of the relationship between AI and diversity. Training big models on diverse datasets creates one-size-fits-all systems that, paradoxically, deny that diversity. Sonified Body instead aims for diversity in interfaces, each fit exclusively to the individual using it.
Rather than asking the person to learn how to use a proprietary system, we ask the system to learn to respond to the person. The person develops agency by deepening their relationship with their body.
The value of this approach lies in artistic and social contexts where open-ended activities, authenticity and sensitivity are more important than efficiency and accuracy.
Dance fits these criteria. It can express lived experience with all its vulnerability, ambiguity and humanity, without recourse to the specific and explicit. It is also a fundamental form of human expression, neglected by conventional interfaces.
But our process could be applied to other open-ended creative acts, such as vocalisation, positioning objects or moulding a material. Likewise, the interface can drive other creative outputs, such as image generation.
Events
Artistic residency
Centre for Contemporary Arts, Glasgow. Jan 2022. (Video above)
Latent Voyage (interactive installation version with AI-generated visuals)
Public sharing, Centre for Contemporary Arts, Glasgow, Jan 2022.
Research presentation
Live online presentation at Present Futures festival, featuring recorded dances from R+D labs, Feb 2021. Video
Publications
T. Murray-Browne, “Against Interaction Design,” arXiv:2210.06467 [cs.HC], 30 Sep 2022.
BibTeX
@misc{murray-browne2022against-interaction-design,
  author    = {Murray-Browne, Tim},
  title     = {Against Interaction Design},
  year      = {2022},
  month     = {September},
  day       = {30},
  publisher = {arXiv},
  doi       = {10.48550/arXiv.2210.06467},
  url       = {https://timmb.com/against-interaction-design}
}
T. Murray-Browne and P. Tigas, “Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), online, 2021.
Abstract
In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.
BibTeX
@inproceedings{murray-browne2021latent-mappings,
  author    = {Murray-Browne, Tim and Tigas, Panagiotis},
  title     = {Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders},
  booktitle = {International Conference on New Interfaces for Musical Expression},
  year      = {2021},
  month     = {April},
  day       = {29},
  doi       = {10.21428/92fbeb44.9d4bcd4b},
  url       = {https://doi.org/10.21428/92fbeb44.9d4bcd4b}
}
Press
- Springback Magazine, Present Futures, 22 Mar 2021
Credits
Created by
Tim Murray-Browne
Dancers
Catriona Robertson
Adilso Machado
Divine Tasinda
Camera
Alan Paterson
Top photo: Alan Paterson
Mentor
Ghislaine Boddington (body>data>space)
Acknowledgements
Created with support from Creative Scotland, Present Futures festival and the Centre for Contemporary Arts, Glasgow.
Produced by Feral.
Technical production by Preverbal.