What does Interactive Art have to teach us about AI?

The interactive sound installation “Cave of Sounds” by Tim Murray-Browne and the Music Hackspace Ensemble, at 90dB Sonic Arts Festival, Rome, 2014. A man at the edge of the installation plays an instrument by casting shadows on a plinth with his hands, while people stand around watching. Photo: Tim Murray-Browne

On Thursday I’m doing an AMA (Ask Me Anything) for Music Hackspace on AI and Artistic Identity. Here’s my starting point for it. Details for the event at the bottom.

We need better questions to understand how new technology changes us. There are a lot of hot takes on how AI is going to transform human creativity, for better or worse. But there are many possible futures ahead of us. It’s on all of us to navigate towards a future we want to live in. To help, I have three questions to share from my experiences making interactive art.

Digital Interactive Art is a space to explore the whole gamut of possible human-machine interactions. Most technology is made to solve a problem, and for profit. But as art, technology is freed from utility, and it becomes easier to explore it for what it is rather than for what it can do for us. My interest is in how our dynamic with a machine changes us - as individuals and as a society. Building interactive art lets me put an audience in the place of someone who doesn’t understand how a technology works, or even how it should work.

My favourite place to do this is in the gallery, in the form of a new physical interface. Here, you see firsthand someone’s journey of discovery. You see how social dynamics affect things. You see how vulnerable they are as they try to retain their identity and express something of themselves when confronted with an opaque system whose reactions remain unpredictable.

The first question I ask of a new interactive system is this: What roles are being assumed here?

Is this machine a tool to help me do something? Is it a space for me to explore? Is it a creative partner to collaborate with? Is it a petty official with opinions about what I should be doing? A toy? A trusted colleague? Who am I? A user? An explorer? An artist? A subject?

Of course, it’s just bits and bytes. But we can’t help but project what we know onto what we encounter. The roles we project are how we get a sense of what ‘should’ be happening, and when a system isn’t working properly.

The roles we project aren’t random. There are many cues, both within the system and in how it’s presented. The person who made it imbued it with assumptions about the roles you and it would have, whether or not they were conscious of those assumptions. With any designed interaction, there is always an invitation to take a role.

In my own niche, an example of this is whether I call an artwork a musical instrument or an interactive sound installation. Many things I make could be described as either, but an instrument puts the person in the role of a creator, whereas an installation puts them in the role of an audience. It leads to a different experience.

A sense of the roles helps reveal the second question: Who feels responsible for what happens?

I don’t just mean who we decide is responsible, or even who is actually responsible, but who we feel is responsible. If something unexpected and exciting is heard in a gallery, people look at where the sound came from. Then they look at whoever they assume is responsible, if that person isn’t in the same place as the sound. I don’t think we even need to think about this - our minds seem to always have an intuition ready, as part of our basic survival instincts.

If you interact with a system and its output becomes unexpectedly pornographic, do you feel embarrassed? Like, people are going to assume this is somehow a reflection on you?

In my PhD, I described this as perceived agency. Who do we feel is in command of what happens? Who gets the credit? Who gets the blame?

Often, how we perceive agency in a technological interaction doesn’t align with who objectively has agency. This leads to my final question: Is this an honestly designed system?

Does the system actually give you the power it seems to imply it does? Does it promise you the world and leave you feeling like a failure when it doesn’t deliver (I’m looking at you, dating apps and social media)? Does it present as a musical instrument but only let you make the same tired tunes as everyone else? Does it present as an open communication platform but actually choose who gets heard through an opaque algorithm that selects content based on third-party interests (again, social media...)?

The dishonesty usually goes one way. It’s easier to sell people power and then hand them a toy than it is to sell people a toy that’s actually a powerful tool.

Dishonest systems can humiliate people. They make us feel ashamed for the bad things they do, and like failures for the good things they don’t do. Even in the contained world of an interactive artwork in a gallery, I’ve seen people humiliated by my own work when it fails to deliver on the promises it implied. An instrument that works for everybody else but then breaks when they try to use it. As the designer, it’s my fault, yet I see them blame themselves.

When things really count, in socialising, in dating, in politics, I feel it should be a lot less surprising that technology turns so many people aggressive or sad. We may know rationally that a system is rigged, but I suspect that at a deeper level we can’t quite escape the intuition to blame the people involved.

Tim
Los Angeles, 12 June 2024

Details for the AMA: It’s on Zoom, costs $7, on Thursday 13 June 2024 at 6pm UK time. Registration is here: AI and Artistic Identity: A talk with Digital Interaction Artist Tim Murray-Browne.
