Sonified Body

Transforming dance into sound with AI

Sonified Body uses AI to transform dance into sound. We train an AI model on free creative movement to create a personalised body-computer interface that responds to the human body holistically and as it already exists. It forms part of research into non-symbolic interaction between humans and computers.

Conventional interfaces rely on the manipulation of abstract entities (parameters, documents). Here, we instead construct a bespoke language of interaction from how an individual naturally moves.

Using Machine Learning, specifically a Variational Autoencoder, we devise an automatic interpretation of the body based on many hours of recordings of one person moving.
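As a rough sketch of the idea, a Variational Autoencoder's encoder compresses each frame of motion-capture data into a small latent vector. The joint count, dimensions and randomly initialised weights below are illustrative assumptions, not the project's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 25            # assumed skeleton size from a depth camera
POSE_DIM = N_JOINTS * 3  # x, y, z per joint
LATENT_DIM = 16          # compact latent representation of the pose

# Random weights stand in for an encoder trained on hours of recorded movement.
W_mu = rng.normal(0, 0.1, (LATENT_DIM, POSE_DIM))
W_logvar = rng.normal(0, 0.1, (LATENT_DIM, POSE_DIM))

def encode(pose):
    """Map one pose vector to a 16-D latent sample (reparameterisation trick)."""
    mu = W_mu @ pose
    logvar = W_logvar @ pose
    eps = rng.normal(size=LATENT_DIM)          # sample around the mean
    return mu + np.exp(0.5 * logvar) * eps

pose = rng.normal(size=POSE_DIM)  # one frame of motion capture
z = encode(pose)                  # the body, interpreted as 16 numbers
```

In the trained system, the encoder's weights are learnt from the dancer's own recordings, so the latent space reflects how that individual actually moves.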

This creates an entangled, multifaceted interface. Any body part can affect any part of the sound. Like singing or riding a bike, you can't learn it through reason and analysis. Through play and exploration, it starts to make sense in the body.

It takes practice to get a feel for it, but the result is something complex and expressive.

We derive the interaction from the dancer's existing vocabulary of movement rather than designing it in our heads. This helps us to avoid imposing onto the dancer a designer's preconceptions of how a body might move and behave. Rather than interaction design, it is more interaction negotiation, as the system attempts to meet the human as they are, rather than the human needing to adapt their movement and body to the system.

Rather than navigating the abstract space of a user interface, we navigate the embodied space of our own bodies.

Rather than thinking and planning, we improvise, guided by sensual exploration.

Photo of dancer Catriona Robertson working with Tim Murray-Browne in a studio at the Centre for Contemporary Arts, Glasgow. Photo by Alan Paterson.

Sonified Body results from art-led research into Negotiated Interaction between human and computer.

It speculates on a future where computers let humans be human.

It reclaims AI from its use as a weapon wielded by states and corporations to categorise and manipulate people, reframing it as a gateway to bring embodied individual expression into the digital sphere.

The Sonified Body System

An unsupervised AI model seeks out the shared space where a person’s existing movement language overlaps with the computer’s capability to sense that movement.

We use the AI’s internal representation to drive a range of synthesisers.

The AI interprets the body holistically without reducing it to explainable components. Its interpretation is 16-dimensional, entangled, and unexplainable. This means any movement of the body can affect every parameter of the interpretation. And while consistent in behaviour, there is no logical reasoning that will explain how the interface interprets movement. Like singing or riding a bike, it must be learnt first-hand through play and embodied exploration.
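One way to picture this entanglement: a dense mapping from the 16-dimensional latent interpretation to synthesiser controls, where every latent dimension influences every sound parameter. The parameter count and random mapping matrix below are illustrative assumptions, not the system's actual synthesis patch:

```python
import numpy as np

rng = np.random.default_rng(1)

LATENT_DIM = 16      # the AI's 16-dimensional interpretation of the body
N_SYNTH_PARAMS = 8   # assumed synth controls: filter cutoff, grain size, etc.

# A dense matrix: every latent dimension affects every synth parameter,
# so no single body part maps cleanly to any single aspect of the sound.
M = rng.normal(size=(N_SYNTH_PARAMS, LATENT_DIM))

def latent_to_synth(z):
    """Squash a latent vector into [0, 1] control values for a synthesiser."""
    raw = M @ z
    return 1.0 / (1.0 + np.exp(-raw))  # logistic squash to a valid control range

z = rng.normal(size=LATENT_DIM)  # one latent interpretation of a pose
params = latent_to_synth(z)      # eight entangled control values
```

Because the mapping is dense rather than one-to-one, the relationship between movement and sound resists analysis and must instead be learnt through exploration.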

A dancer also considers the body holistically. They weave together body and mind to attune for sensitivity, intuition, inhibition and action. This allows movement to be improvised in continual resonance with the dynamics of the space. The complex relationship between body and sound can be explored and understood at an embodied level.

Between sessions, the system can be retrained based on the dancer’s evolving language of movement. The updated model will define a completely new landscape of sound to explore. This is not a loss because the dancer relies on knowledge of their body rather than knowledge of the system.

Sonified Body uses AI to create a personalised interface. We intentionally ‘overfit’ the model to a single individual. The generated interface is a unique language between the person and the system.

Rather than attempting to design an interface optimised for everybody, we generate a bespoke interface optimised for one person.

Sonified Body aims to provoke a rethink of the relationship between AI and diversity. Training big models on diverse datasets creates one-size-fits-all systems that, paradoxically, deny that diversity. Sonified Body instead aims for diversity in interfaces, each fit exclusively to the individual using it.

Rather than asking the person to learn how to use a proprietary system, we ask the system to learn to respond to the person. The person develops agency by deepening their relationship with their body.

The value of this approach lies in artistic and social contexts where open-ended activities, authenticity and sensitivity are more important than efficiency and accuracy.

Dance fits these criteria. It can express lived experience with all its vulnerability, ambiguity and humanity without recourse to the specific and explicit. It is also a fundamental form of human expression, neglected by conventional interfaces.

But our process could be applied to other open-ended creative acts, such as vocalisation, positioning objects or moulding a material. Likewise, the interface can drive other creative outputs, such as image generation.

Catriona Robertson dancing in the studio at the Centre for Contemporary Arts, Glasgow, with a PC and 3D camera in the foreground. Photo by Tim Murray-Browne.


Artistic residency

Centre for Contemporary Arts, Glasgow. Jan 2022. (Video above)

Latent Voyage (interactive installation version with AI-generated visuals)

Public sharing, Centre for Contemporary Arts, Glasgow, Jan 2022.

Research presentation

Live online presentation at Present Futures festival, featuring recorded dances from R+D labs, Feb 2020. Video


  • T. Murray-Browne, “Against Interaction Design,” arXiv:2210.06467 [cs.HC], 30 Sep 2022.



  • T. Murray-Browne and P. Tigas, “Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers,” Applied Sciences, 11(18): 8531, 2021.



    Most human–computer interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.

  • T. Murray-Browne and P. Tigas, “Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), online, 2021.



    In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.




Created by

Tim Murray-Browne

Panagiotis Tigas


Dancers

Catriona Robertson

Adilso Machado

Divine Tasinda


Alan Paterson

Top photo: Alan Paterson


Ghislaine Boddington (body>data>space)


Created with support from Creative Scotland, Present Futures festival and the Centre for Contemporary Arts, Glasgow.

Produced by Feral.

Technical production by Preverbal.