Against Interaction Design

“The more interaction is designed, the more human agency becomes constrained to fit within the models conceived by the designer.” A short manifesto that distils a position that’s emerged through a decade of creating interactive art.

reference

T. Murray-Browne, “Against Interaction Design,” arXiv:2210.06467 [cs.HC], 30 Sep 2022.

bibtex
@misc{murray-browne2022against-interaction-design,
  author = {Murray-Browne, Tim},
  title = {Against Interaction Design},
  year = {2022},
  month = {September},
  day = {30},
  publisher = {arXiv},
  doi = {10.48550/arXiv.2210.06467},
  url = {https://timmb.com/against-interaction-design}
}

Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers

Academic paper detailing the philosophical underpinning of my project The Wilds. Human-computer interaction is currently dominated by a paradigm where abstract representations are manipulated. How can we instead build interfaces out of our capacity for emergence and resonance?

reference

T. Murray-Browne and P. Tigas, “Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers,” Applied Sciences, 11(18): 8531, 2021.

abstract

Most human–computer interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.

bibtex
@article{murraybrowne2021emergent-interfaces,
    author = {Murray-Browne, Tim and Tigas, Panagiotis},
    journal = {Applied Sciences},
    number = {18},
    pages = {8531},
    title = {Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers},
    volume = {11},
    year = {2021}
}

Rewilding Human-Computer Interaction

What if every step we take to limit toxic online behaviour is instead fuelling that behaviour? To let humanity leak into digital space, we need to increase our bandwidth with computers, to let in more noise.

reference

T. Murray-Browne, “Rewilding Human-Computer Interaction.” https://timmb.com/rewilding-human-computer-interaction, 30 Nov 2022.

bibtex
@misc{murray-browne2022rewilding-human-computer-interaction,
  author = {Murray-Browne, Tim},
  howpublished = {\url{https://timmb.com/rewilding-human-computer-interaction}},
  month = {November},
  day = {30},
  title = {Rewilding Human-Computer Interaction},
  year = {2022}
}

AI’s weird and uncanny mistakes reveal the gaps in how I perceive intelligence

I’m used to seeing human-like intellectual capabilities bundled together as what I consider human intelligence. If a human can draw photorealistic faces, I might assume they have also mastered many other intellectual abilities, like a deep sensitivity to human physiology and how it exists in physical reality.

The satnav effect: Is AI stopping me from learning?

Is coding with AI like driving with a satnav? Will the parts of my brain that learn from doing atrophy as I delegate the brainwork to ChatGPT? Or does it leave me as the director of a film, able to focus on the bigger picture?

Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders

A short technical paper for the NIME conference describing how AI is used in Sonified Body to interpret human movement to control live sound.

reference

T. Murray-Browne and P. Tigas, “Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), online, 2021.

abstract

In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.
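The core idea can be sketched in a few lines. The paper trains a Variational Autoencoder on a corpus of unlabelled gestural data and uses its latent space as the mapping; in this minimal sketch PCA (via SVD) stands in as a simpler unsupervised encoder, and the dimensions, names and latent-to-sound mapping are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Corpus of unlabelled "gestural" data: 500 frames of a hypothetical
# 17-joint pose, flattened to 51-D vectors (x, y, z per joint).
corpus = rng.normal(size=(500, 51))

# Learn the emergent representation: centre the corpus and keep the
# top two principal directions as a 2-D latent space.
mean = corpus.mean(axis=0)
_, _, vt = np.linalg.svd(corpus - mean, full_matrices=False)
components = vt[:2]  # encoder: 51-D pose -> 2-D latent point

def encode(pose):
    """Project one pose vector into the learned latent space."""
    return (pose - mean) @ components.T

def latent_to_sound_params(z):
    """Map latent coordinates to sound controls (arbitrary choice):
    squash each axis into (0, 1), e.g. filter cutoff and grain size."""
    return 1.0 / (1.0 + np.exp(-z))

live_pose = rng.normal(size=51)  # one incoming frame from a dancer
params = latent_to_sound_params(encode(live_pose))
```

The point of the open-ended mapping is that no input/output pairs are ever labelled: the creator explores the latent space through movement and keeps a mapping when they find one that feels expressive.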

bibtex
@inproceedings{murray-browne2021latent-mappings,
    author = {Murray-Browne, Tim and Tigas, Panagiotis},
    booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
    day = {29},
    doi = {10.21428/92fbeb44.9d4bcd4b},
    month = {4},
    title = {Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders},
    url = {https://doi.org/10.21428/92fbeb44.9d4bcd4b},
    year = {2021}
}
The dancer Divine Tasinda improvising during a lab for Sonified Body by Tim Murray-Browne and Panagiotis Tigas. Photo: Panagiotis Tigas.

Who is that voice in your head speaking to?

On the observation that much of my thinking seems to take the form of imaginary conversation with people, I start to pay attention to who those people are.

The Cave of Sounds: An interactive installation exploring how we create music together

A paper at the NIME conference describing the experimental, non-hierarchical and organic creative process that led to Cave of Sounds.

reference

T. Murray-Browne, D. Aversano, S. Garcia, W. Hobbes, D. Lopez, P. Tigas, T. Sendon, K. Ziemianin and D. Chapman, “The Cave of Sounds: An Interactive Installation Exploring How We Create Music Together,” in Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 307–310, London, UK, 2014.

abstract

The Cave of Sounds is an interactive sound installation formed of eight new musical instruments exploring what it means to create instruments together. Each instrument was created by an individual but with the aim of forming a part of this new ensemble, with the final installation debuting at the Barbican in London in August 2013. In this paper, we describe how ideas of prehistoric collective music making inspired and guided this participatory musical work, both in creation process and in the audience experience of musical collaboration. Following a detailed description of the installation itself, we reflect on the successes, lessons and future challenges of encouraging creative musical collaboration among members of an audience.

bibtex
@inproceedings{murray-browne2014cave-of-sounds,
    address = {London, UK},
    author = {Murray-Browne, Tim and Aversano, Dom and Garcia, Susanna and Hobbes, Wallace and Lopez, Daniel and Sendon, Tadeo and Tigas, Panagiotis and Ziemianin, Kacper and Chapman, Duncan},
    booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
    pages = {307--310},
    title = {The {C}ave of {S}ounds: An Interactive Installation Exploring How We Create Music Together},
    year = {2014}
}
Cave of Sounds exhibited at the Barbican 2013. The photo shows a circle of plinths facing inwards with different Digital Musical Instruments with various people playing them. Photo: Tim Murray-Browne.

Interactive Music: Balancing Creative Freedom with Musical Development

My PhD. What is interactive music? What is the point of making interactive music systems? What makes them actually engaging and interesting? What brings depth and meaning to a creative participatory experience?

reference

T. Murray-Browne. Interactive Music: Balancing Creative Freedom with Musical Development. PhD thesis, Queen Mary University of London, 2012.

abstract

This thesis is about interactive music – a musical experience that involves participation from the listener but is itself a composed piece of music – and the Interactive Music Systems (IMSs) that create these experiences, such as a sound installation that responds to the movements of its audience. Some IMSs are brief marvels commanding only a few seconds of attention. Others engage those who participate for considerably longer. Our goal here is to understand why this difference arises and how we may then apply this understanding to create better interactive music experiences.

I present a refined perspective of interactive music as an exploration into the relationship between action and sound. Reasoning about IMSs in terms of how they are subjectively perceived by a participant, I argue that fundamental to creating a captivating interactive music is the evolving cognitive process of making sense of a system through interaction.

I present two new theoretical tools that provide complementary contributions to our understanding of this process. The first, the Emerging Structures model, analyses how a participant's evolving understanding of a system's behaviour engages and motivates continued involvement. The second, a framework of Perceived Agency, refines the notion of ‘creative control’ to provide a better understanding of how the norms of music establish expectations of how skill will be demonstrated.

I develop and test these tools through three practical projects: a wearable musical instrument for dancers created in collaboration with an artist, a controlled user study investigating the effects of constraining the functionality of a screen-based IMS, and an interactive sound installation that may only be explored through coordinated movement with another participant. This final work is evaluated formally through discourse analysis.

Finally, I show how these tools may inform our understanding of an oft-cited goal within the field: conversational interaction with an interactive music system.

bibtex
@phdthesis{murray-browne2012phd,
    author = {Murray-Browne, Tim},
    school = {Queen Mary University of London},
    title = {Interactive Music: Balancing Creative Freedom with Musical Development},
    year = {2012}
}
Photo of dancer Nicole Johnson performing with the Serendiptichord, a wearable instrument in red leather made by Di Mainstone and Tim Murray-Browne.