An image of two people. The person on the right is lifelike. The person on the left starts off lifelike but the nose is deformed and the eye transforms into paint-like smudges.

AI’s weird and uncanny mistakes reveal the gaps in how I perceive intelligence

Do you remember four years ago when we first saw those AI-generated photos of people’s faces and were told “this person does not exist”?

I remember the disorientation of that moment. It seemed incredible that an AI had acquired such a deep knowledge of the complexities of the human face, as well as the capability to render it with photographic realism. When all I saw were the flawless images, it was easy to jump to that conclusion.

Embryonic Self-portrait in Latent Space. Artwork by Tim Murray-Browne. A grid of images showing abstract swirls faintly reminiscent of Tim's face

The satnav effect: Is AI stopping me from learning?

A couple of years ago I started trying to drive without using a satnav (or more precisely, Google Maps on my phone).

I’ve always been suspicious of satnavs. All they do is bark out orders. It seems harmless enough, but I suspect somewhere in my subconscious it’s reinforcing the narrative: ‘better do what this machine tells me to’. I can’t help but feel it’s a step towards the subjugation of humans by machines. Perhaps people felt this way when traffic lights were introduced. Grumble grumble.

But my issue with the satnav was more practical. I noticed I wasn’t learning my way around when driving in the same way I did when walking or cycling.

A photograph of a web of branches by Tim Murray-Browne.

Authored and Unauthored Content: A Newly Necessary Distinction

Imagine waking up to find a car parked on your lawn. Before AI, there was one explanation: somebody did this. It required human agency, therefore it is, on some level, an authored act. I can trace this state of affairs back to someone’s intentions. What does it mean? Why did they do this? Now imagine waking up to find a self-driving robo-taxi parked on your lawn. Did someone program it to park there? Or is this some kind of glitch? In other words, is this act authored or unauthored? With AI in the picture, the answer is no longer self-evident. I can use AI to author work with full presence and intention. I can also unwittingly set off a complex chain of events that leads to the exact same work.

Photo of a derelict plot in Glasgow abundant with green plants. Photo by Tim Murray-Browne.

Rewilding Human-Computer Interaction

What if every step we take to limit toxic online behaviour is instead fuelling that behaviour?

Sometimes it seems the internet is a magnet for our worst instincts and the worst people.
Sometimes it seems to transform even the best people into the worst.

And we seem to be polarised across two unappealing options. One is content moderation: the widespread censorship of human expression by algorithms governed by corporations. The other is what we might call ‘let it rip’: just putting up with toxic spaces filled with bullying, abuse, propaganda and spambots.

But this is a false dilemma. There is a third option: tapping into the human superpower of self-organising into mutually supportive communities that cooperate and get on. The same one that made free society possible in the physical world.

reference

T. Murray-Browne, “Rewilding Human-Computer Interaction.” https://timmb.com/rewilding-human-computer-interaction, 30 Nov 2022.

bibtex
@misc{murray-rewilding-human-computer-interaction,
  author = {Murray-Browne, Tim},
  howpublished = {\url{https://timmb.com/rewilding-human-computer-interaction}},
  month = {November},
  day = {30},
  title = {Rewilding Human-Computer Interaction},
  year = {2022}
}
A photo of rocks stacked into human figures on a beach, taken by Tim Murray-Browne.

Against Interaction Design

Look at the words on my keyboard: ⌘command, ⌃control, fn function, ⇧shift. These describe the processes of a factory or a military unit, not a conversation, nor a dance, nor friends eating together. What of the rest of being human?

Behind every interface is a model world.
People are modelled as profiles.
Emotions are modelled as emojis 🤷.
The interface defines how I can sense and shape that world.
It defines my relationship to that world: what I can do, who I am, and who I can be.

reference

T. Murray-Browne, “Against Interaction Design,” arXiv:2210.06467 [cs.HC], 30 Sep 2022.

bibtex
@misc{murray-browne2022against-interaction-design,
  author = {Murray-Browne, Tim},
  title = {Against Interaction Design},
  year = {2022},
  month = {September},
  day = {30},
  publisher = {arXiv},
  doi = {10.48550/ARXIV.2210.06467},
  url = {https://timmb.com/against-interaction-design}
}
An AI generated image of people in an art gallery staring at their phones

Desensitising to the Endless Soma Bliss of Optimised Art

AI that generates images from text is hitting the mainstream.

… But when I tell participants the model's trained on my own visual experiences, I feel a shift in how they receive it. They're no longer just navigating a hallucinatory machine, but glimpsing the human entangled inside that machine.

Catriona Robertson dancing with the Sonified Body system by Tim Murray-Browne and Panagiotis Tigas during a residency at the Centre for Contemporary Arts, Glasgow.

Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers

  • Tim Murray-Browne and Panagiotis Tigas
  • Peer-reviewed journal article

The philosophical underpinning of Sonified Body.

Human-computer interaction is currently dominated by a paradigm where abstract representations are manipulated. We propose an alternative paradigm of emergence and resonance, which can be realised through unsupervised machine learning. This shift requires us to relinquish the requirement that interactions with computers be precise and explainable, and instead embrace vagueness, ambiguity and embodied cognition.
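
To make the contrast concrete, here is a minimal sketch of the two paradigms. It is illustrative only: PCA stands in for the unsupervised model, and the joint indices, dimensions and names are assumptions rather than anything from Sonified Body.

# Illustrative sketch only: PCA stands in for the unsupervised model;
# joint indices and dimensions are assumed, not taken from Sonified Body.
import numpy as np
from sklearn.decomposition import PCA

N_JOINTS = 33                       # assumed skeleton size
FRAME_DIM = N_JOINTS * 3            # one pose frame, flattened

# Designed abstraction: the designer decides which features matter.
def designed_controls(frame):
    """frame: (N_JOINTS, 3) pose. Hand-picked features become sound params."""
    left_wrist, right_wrist = frame[15], frame[16]   # assumed joint indices
    hand_height = left_wrist[1]
    hand_spread = np.linalg.norm(left_wrist - right_wrist)
    return np.array([hand_height, hand_spread])

# Emergent representation: control axes are derived from the mover's own data.
corpus = np.random.randn(1000, FRAME_DIM)   # stand-in for recorded improvisation
emergent_model = PCA(n_components=2).fit(corpus)

def emergent_controls(frame):
    """Same pose, but projected onto axes the corpus itself defines."""
    return emergent_model.transform(frame.reshape(1, -1))[0]

pose = np.random.randn(N_JOINTS, 3)
print(designed_controls(pose), emergent_controls(pose))

The designed mapping bakes in the designer’s assumptions about which movements matter; the emergent mapping lets the corpus of the mover’s own movement define the control axes.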

reference

T. Murray-Browne and P. Tigas, “Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers,” Applied Sciences, 11(18): 8531, 2021.

abstract

Abstract

Most human–computer interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.

bibtex
@article{murraybrowne2021emergent-interfaces,
    author = {Murray-Browne, Tim and Tigas, Panagiotis},
    journal = {Applied Sciences},
    number = {18},
    pages = {8531},
    title = {Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers},
    volume = {11},
    year = {2021}
}

Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders

  • Tim Murray-Browne and Panagiotis Tigas
  • Peer-reviewed conference paper

A short technical paper for the NIME conference describing the approach Panagiotis and I developed in Sonified Body to interpret human movement. We use an AI model trained entirely on an individual's movement to form a compressed representation of how their body moves. From this we generate a primary mapping for transforming movement into sound that is based entirely on their existing vocabulary of movement.

In the paper we argue why we feel this approach is effective for open-ended creative work such as ours: the real-time transformation of a dancer's movement into sound.
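
As a rough sketch of how such a latent mapping can work (an illustration under assumed dimensions and names, not the code from the paper): a small variational autoencoder is trained on unlabelled pose frames, and the latent coordinates of each live frame are then used directly as sound-control signals.

# A minimal sketch, not the paper's implementation: dimensions, names and
# the synth hand-off are assumptions for illustration.
import torch
import torch.nn as nn

N_JOINTS = 33                 # assumed skeleton size; frames flattened below
INPUT_DIM = N_JOINTS * 3
LATENT_DIM = 8                # each latent dimension becomes one control signal

class PoseVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(INPUT_DIM, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, LATENT_DIM)
        self.to_logvar = nn.Linear(64, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, INPUT_DIM))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.decoder(z), mu, logvar

def train(model, corpus, epochs=50, lr=1e-3):
    # corpus: (N, INPUT_DIM) tensor of unlabelled movement frames
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, mu, logvar = model(corpus)
        recon_loss = ((recon - corpus) ** 2).mean()
        kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
        loss = recon_loss + 0.01 * kl  # modest KL weight keeps latents expressive
        opt.zero_grad()
        loss.backward()
        opt.step()

def latent_mapping(model, live_frame):
    # Map one live pose frame to LATENT_DIM control values in (0, 1)
    with torch.no_grad():
        mu = model.to_mu(model.encoder(live_frame))
    return torch.sigmoid(mu)  # squashed so they can drive synth parameters

model = PoseVAE()
corpus = torch.randn(1024, INPUT_DIM)   # stand-in for recorded improvisation
train(model, corpus)
controls = latent_mapping(model, torch.randn(1, INPUT_DIM))
print(controls)                         # e.g. route to filter/pitch parameters

Because the latent space is fitted to one person’s corpus, its axes follow the directions along which that body actually varies, which is what makes the mapping feel bespoke.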

reference

T. Murray-Browne and P. Tigas, “Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), online, 2021.

abstract

Abstract

In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.

bibtex
@inproceedings{murray-browne2021latent-mappings,
    author = {Murray-Browne, Tim and Tigas, Panagiotis},
    booktitle = {International Conference on New Interfaces for Musical Expression},
    day = {29},
    doi = {10.21428/92fbeb44.9d4bcd4b},
    month = {April},
    title = {Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders},
    url = {https://doi.org/10.21428/92fbeb44.9d4bcd4b},
    year = {2021},
}
A still of Adi in the kitchen, from a video by Tim Murray-Browne.

Who is that voice in your head speaking to?

There seems at first something a bit pathetic about spending your time rehearsing conversations you might have with people. And yet, I recently observed that this seems to be how my thinking process operates. My internal monologue, the inner voice of the mind, is almost exclusively in the form of an imaginary conversation with somebody. I have a hunch that this might be quite common. After all, it took me a while to admi…

The Cave of Sounds: An Interactive Installation Exploring How We Create Music Together

  • Tim Murray-Browne, Dom Aversano, Susanna Garcia, Wallace Hobbes, Daniel Lopez, Tadeo Sendon, Panagiotis Tigas, Kacper Ziemianin and Duncan Chapman
  • Academic conference paper

A paper at the NIME conference describing the experimental, non-hierarchical and organic creative process that led to Cave of Sounds.

reference

T. Murray-Browne, D. Aversano, S. Garcia, W. Hobbes, D. Lopez, T. Sendon, P. Tigas, K. Ziemianin and D. Chapman, “The Cave of Sounds: An Interactive Installation Exploring How We Create Music Together,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), pp. 307–310, London, UK, 2014.

abstract

Abstract

The Cave of Sounds is an interactive sound installation formed of eight new musical instruments exploring what it means to create instruments together. Each instrument was created by an individual but with the aim of forming a part of this new ensemble, with the final installation debuting at the Barbican in London in August 2013. In this paper, we describe how ideas of prehistoric collective music making inspired and guided this participatory musical work, both in creation process and in the audience experience of musical collaboration. Following a detailed description of the installation itself, we reflect on the successes, lessons and future challenges of encouraging creative musical collaboration among members of an audience.

bibtex
@inproceedings{murray-browne2014cave-of-sounds,
    address = {London, UK},
    author = {Murray-Browne, Tim and Aversano, Dom and Garcia, Susanna and Hobbes, Wallace and Lopez, Daniel and Sendon, Tadeo and Tigas, Panagiotis and Ziemianin, Kacper and Chapman, Duncan},
    booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
    pages = {307--310},
    title = {The {C}ave of {S}ounds: An Interactive Installation Exploring How We Create Music Together},
    year = {2014},
}
A photo of people walking about a building at Resonate Festival 2014, by Tim Murray-Browne.

Highlights from Resonate 2014

The past few days I've been in Belgrade for Resonate Festival. It's been a fantastic event pulling in a diverse group of individuals working in creative technology. I'm currently at the beginning of a new collaboration with dancer Jan Lee to create a dance work involving interactive sound and visuals, and much of my attention was grabbed by presentations and discussions in this area.

Here are some of the highlights for me.

On the first night Klaus Obermaier, Kyle McD…

Interactive Music: Balancing Creative Freedom with Musical Development

I completed my PhD in 2012 at the Centre for Digital Music, Queen Mary University of London, under the invaluable supervision of Mark D. Plumbley and Nick Bryan-Kinns. It was examined by Atau Tanaka and François Pachet.

My thesis explores how we can create interactive music experiences that let you be creatively involved in what you hear, but also draw you into the composer’s musical world, maintaining the hypnotic connection we are familiar with from linear music. For me, interactive music experiences are at their best when they are a shared creation between composer and audience where music is something that happens with you rather than to you. In this way, composing interactive music is as much about musical actions as it is about sound. It requires you to move and to consider how your behaviour affects the environment around you.

Creating a captivating interactive music experience is challenging. How can we create a musical narrative and shape our audience’s experience without reducing their sense of creative freedom? Addressing this question has led me to examine musical structure and the perception of skill through perspectives rooted in information theory, social psychology and human-computer interaction. My thesis draws upon a number of fields and methodologies and considers composed instruments, interactive music systems, narrative structures within interactive art, the perception of agency within music and a brief analysis of conversational interaction.

I created a number of artworks as part of the PhD: the Serendiptichord, the Manhattan Rhythm Machine and finally IMPOSSIBLE ALONE, which encapsulated many of the ideas on narrative and agency that I had developed.

Transcripts of interviews referred to within the thesis may be found here.

reference

T. Murray-Browne. Interactive Music: Balancing Creative Freedom with Musical Development. PhD thesis, Queen Mary University of London, 2012.

abstract

Abstract

This thesis is about interactive music – a musical experience that involves participation from the listener but is itself a composed piece of music – and the Interactive Music Systems (IMSs) that create these experiences, such as a sound installation that responds to the movements of its audience. Some IMSs are brief marvels commanding only a few seconds of attention. Others engage those who participate for considerably longer. Our goal here is to understand why this difference arises and how we may then apply this understanding to create better interactive music experiences.

I present a refined perspective of interactive music as an exploration into the relationship between action and sound. Reasoning about IMSs in terms of how they are subjectively perceived by a participant, I argue that fundamental to creating a captivating interactive music is the evolving cognitive process of making sense of a system through interaction.

I present two new theoretical tools that provide complementary contributions to our understanding of this process. The first, the Emerging Structures model, analyses how a participant's evolving understanding of a system's behaviour engages and motivates continued involvement. The second, a framework of Perceived Agency, refines the notion of ‘creative control’ to provide a better understanding of how the norms of music establish expectations of how skill will be demonstrated.

I develop and test these tools through three practical projects: a wearable musical instrument for dancers created in collaboration with an artist, a controlled user study investigating the effects of constraining the functionality of a screen-based IMS, and an interactive sound installation that may only be explored through coordinated movement with another participant. This final work is evaluated formally through discourse analysis.

Finally, I show how these tools may inform our understanding of an oft-cited goal within the field: conversational interaction with an interactive music system.

bibtex
@phdthesis{murray-browne2012phd,
    author = {Murray-Browne, Tim},
    school = {Queen Mary University of London},
    title = {Interactive Music: Balancing Creative Freedom with Musical Development},
    year = {2012}
}