Diffeomorphic Landscapes

Image series
A photo of framed prints on the wall. Taken at Tim Murray-Browne's exhibition “Small Frame Infinite Canvas” at South Block Project Space, Glasgow, Dec 2023. Photo: Tim Murray-Browne.

Diffeomorphic Landscapes is a series of images on the periphery of real and unreal, rendered by AI through a process of creative coding, glitch and personal memories.

Online exhibition


“World Without End (1075)” by Tim Murray-Browne, rendered using a custom-trained AI model. Part of his print series “Diffeomorphic Landscapes”.

World Without End (1075)

knots (50333)


woods (50169)

pastoral (50097)


Cosmic Insignificance Therapy (0112)

garden (50169)


lake blue sky (1458)

woody lake (2839)


caravan (1536)

Process

I trained the StyleGAN AI on my entire lifetime of photos, then modified it to reveal the surreal realms of fractal forms that normally remain invisible beyond the borders of the frame.

The exhibition of large prints concludes three years of research into alternative ways of creating with AI, ways that reject appropriation and the pursuit of ideal imitation, and instead offer something more personal, weird and – in its own way – humane.

Each image is generated from the same customised AI, which I trained to imagine new memories based on my lifetime archive of 25,000 photos. It has only seen what I myself saw at one point and felt moved to record. The images it generates echo my visual world: Scottish landscapes, tree branches criss-crossing under the sky, rock textures from Cornish beaches and the contours of the human body. (I excluded photos of other people’s faces, but kept many close-up studies of the human body.)

Yet my photo archive is a more diverse range of images than this AI was designed to handle. It struggles to make its creations ‘lifelike’ and veers from the uncanny into more abstract realms. In doing so, it reveals the algorithmic marks of its generative processes.

I modified the AI to expand its frame, creating ever larger images that bring these hidden glitches to the foreground. Each image here contains a small square frame near its centre. As we move away from that optimised frame, the image disintegrates into an endless canvas of offcuts and mathematical building blocks.

Our relationship with AI is still fluid. In my mind, it jostles for contradictory roles: a tool, a collaborator, a pathological copycat, a new kind of mind – sometimes all these at once. The technology is racing ahead faster than my intuitions can.

A common approach to training AI is to harvest huge amounts of images off the web, aiming for a monolithic AI. When I began researching AI in 2018, I felt a strong instinct to train my own system using my own data in the safety of my own PC. If we all express ourselves through the same systems handed down from above, we will surely all find the same paths and end up saying the same things. I needed a way to make it my own, to fuse its world with mine.

In 2021, I stepped back from the overwhelming stream of new AI systems to dig deeper into what’s possible with this single AI I trained. In the years since, I’ve developed a relationship with it. As with many AIs, at first everything is overwhelmingly magnetic. Then it all starts looking the same. But then, after some time, some things emerge that resonate more deeply.

In making the leap to the physical world, I wanted to embrace the scale of the images. They combine detail and scale in a way that can be lost in the fluidity of digital display.

Artist Talk

I gave a talk at the exhibition opening. You can watch the recording or read the tidied-up transcript below.


Edited transcript

For the preliminary thank-yous, see the acknowledgements below.

AI is very much in the zeitgeist at the moment. It’s a field moving so fast that I think it’s hard to have proper intuitions about it. As soon as we get a feel for it, it’s changed again.

There’s a lot of anxiety and controversy around AI. Much of this is justified to some extent, in my opinion. A lot of things about it are scary. There are a lot of problems with it. So I want to describe why I’m working with it, and my own critical angle.

First of all, what is AI? It’s presented in the media as this mysterious new thing, this new thing that’s suddenly been invented. But the technologies behind today’s AI come from a topic of computer science that’s been going on for 20 years: machine learning.

The regular, old school way of programming a computer is to design a program step by step. You code every single thing that the computer does, building from the bottom up. Complexity emerges from the combination of simple elements that somebody, somewhere knows inside-out.

But in today’s machine learning, the approach is instead to just make a really, really complicated random program. We call this a model. Your program is defined by thousands, millions, maybe billions of numbers. It’s more complicated than we could think up from scratch.

As a random program it’s not yet useful. Next, you get a load of examples of how it should work. That’s your training data. You’re saying “when I give you this input, I want to see this output”.

And then you go through these examples, again and again. Each time, you see how well your program does, then tweak the program a tiny amount to make it better on that example. If you get a few things right, over time, your program will start to converge to do the things you want.

Nobody can really understand fully how the thing you end up with works - why those numbers solve the problem. It’s got that mystery to it.
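To make that loop concrete, here is a toy sketch in plain Python, not anything from StyleGAN or a real machine-learning framework: a “model” that is nothing but a single number, tweaked a tiny amount on each example until it converges. The task (learning to multiply by three) and every name here are illustrative assumptions.

```python
import random

random.seed(0)

# A "model" defined entirely by numbers: here, just one weight.
w = random.uniform(-1.0, 1.0)

# Training data: examples of "given this input, I want this output".
# The behaviour we want the model to learn is output = 3 * input.
examples = [(x, 3.0 * x) for x in range(1, 6)]

def loss(weight, x, y):
    """How wrong the model is on one example."""
    return (weight * x - y) ** 2

# Go through the examples again and again, tweaking the weight
# a tiny amount whenever the tweak reduces the error.
step = 0.01
for epoch in range(1000):
    for x, y in examples:
        if loss(w + step, x, y) < loss(w, x, y):
            w += step
        elif loss(w - step, x, y) < loss(w, x, y):
            w -= step

print(round(w, 2))  # converges towards 3.0
```

Real systems use gradients rather than trial tweaks, and billions of weights rather than one, but the shape of the process is the same: random program, examples, tiny repeated corrections.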

Machine learning is the technique behind the AI that’s dominating the headlines: generative AI, meaning ChatGPT and image generation systems where you type in “I want a picture of a tomato” and out comes a picture of a tomato.

But machine learning is also at play when you swipe on your iPhone and the phone recognises this clump of contact points as a one-finger swipe instead of a two-finger swipe.

I give this introduction to AI because it’s relevant, I think, to the critical angle of this series of prints.

There are many problems with AI. The ones I’m most interested in here are those affecting us right now rather than these possible apocalyptic scenarios in the future.

Firstly, AI can solidify many of the prejudices that we have in society today. It puts them into concrete form and then duplicates and automates them. It’s displacing people from work. It’s eroding the sense of value in being human, in the things we do that we’ve worked really hard to develop. And it’s also being used to rip off the creative work of people who have invested a lot of their lives into developing that creativity.

For me, a lot of these problems are not problems with AI itself. As I said, it’s already ubiquitous in many non-problematic ways. These are problems with how the technology is being used and introduced. Many of them emerge from the basic selling point of AI, which is that it takes things that were previously quite complicated to do and makes them really easy to do again and again, at scale.

So on the question of prejudice: we all have prejudices, but because we’re all different people, there is some diversity that allows many of these to balance out. And even for those that remain, we have hope as we’re continuously changing.

But when you take a monolithic AI system, and you train it on all of the data of humanity, and then you clone that system and put it out in the world and reproduce it, then all of the problems and quirks get reproduced in the same way. The problems don’t get balanced out by diversity. Instead they get amplified.

The focus of my practice is how we use computers for self-expression. I’m particularly interested in how we’re using AI for self-expression. For example, we can use ChatGPT to have a conversation, but we can also use ChatGPT to help us express ourselves, like to help us write a letter to somebody. But when we do that, we’re all using the same ChatGPT, so we all end up expressing ourselves with the same set of identical quirks and problems that ChatGPT has.

We end up with a homogenization of expression. We’re outsourcing expressing ourselves to a singular tool that’s being used over and over again.

And when I’m using it, you get this sense of power. I can use it to do this thing, and that thing. I can generate a thousand funding applications in a day. That’s great. But everybody else has exactly the same tool. So your sense of identity, the thing that makes you who you are or makes you feel special or makes you feel you have something to contribute: it’s getting diluted in the same way. By expressing ourselves through the same singular tools, we’re merging into the same person.

When I began this AI research project in 2020, I decided from the outset to only use my own data in training these models. I want to put myself into it as much as I can. Even if I don’t understand what it’s doing, I know everything it creates comes from images that have come from me. This gives me a sense of authenticity in what I’m seeing. Even though it’s unpredictable, I can trust that it’s coming from me.

The AI model I’m using is called StyleGAN3. It’s not as big and capable as the systems where you type in your text and you get an image out. Those are huge systems by comparison. The way StyleGAN works is you give it a collection of loads of images and train it to make new images that look like they could belong to that collection.

The benefit is StyleGAN3 is small enough that I can train it from scratch on my own computer in my own studio. I gave it every single photo I’ve ever taken. Well, I took out photos of other people’s faces, but that still left 25,000 photos. It took six weeks to train.

So this AI is effectively trying to make photos that look like something that I’ve seen before. It’s only ever seen what I’ve seen. But my entire set of photos is actually more diverse than what this model can handle. It started glitching and making errors and these kind of uncanny mistakes.

To tell you the truth, with 25,000 photos, I don’t need any more. My goal was not imitation, but to explore how these glitches and mistakes can sometimes be quite beautiful. Sometimes they were mysterious too. They almost seemed to reveal something about how we perceive the world - certainly, they reveal something about how this AI perceives the world. An image might start off looking like a face or a tree, but then dissolve into these repeating forms.

I also found that the AI was creating this little square picture, but it had the capability to move that picture. I moved it to the side and found more and more of the picture hidden beyond the frame borders. The image continued forever, but the further away I got from that little frame, the weirder and more glitchy it got.

So I modified the system to move its square frame around and build up a larger image by tiling these squares together. That’s what makes these images here. If you look, you’ll see in the centre it’s the most lifelike. As you get further out, the image disintegrates more and more.
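The tiling idea can be sketched like this (my paraphrase in Python, not the actual StyleGAN3 code): a stand-in `render_tile` function plays the role of the model rendering its square frame at an offset from the centre, and a loop assembles those offsets into one large canvas. Here the “glitching” is faked with noise that grows with distance from the centre; the sizes and names are all assumptions for illustration.

```python
import numpy as np

TILE = 64   # size of the model's "small square frame", in pixels
GRID = 5    # tiles per side in the assembled image

def render_tile(cx, cy):
    """Stand-in for the model rendering its square frame at offset
    (cx, cy) from the optimised centre. Returns a toy image that
    grows noisier the further the frame moves from the centre."""
    rng = np.random.default_rng(abs(cx) * 1000 + abs(cy))
    dist = (cx ** 2 + cy ** 2) ** 0.5
    base = np.full((TILE, TILE), 0.5)
    noise = rng.random((TILE, TILE)) * min(1.0, dist / (GRID * TILE))
    return base + noise

# Move the square frame across a grid of offsets and tile the
# results into one large canvas: lifelike centre, glitchy edges.
canvas = np.zeros((GRID * TILE, GRID * TILE))
for row in range(GRID):
    for col in range(GRID):
        cx = (col - GRID // 2) * TILE   # offset from the centre frame
        cy = (row - GRID // 2) * TILE
        canvas[row * TILE:(row + 1) * TILE,
               col * TILE:(col + 1) * TILE] = render_tile(cx, cy)

print(canvas.shape)  # (320, 320)
```

The centre tile, at offset (0, 0), comes out clean, while the corner tiles carry the most noise, mirroring how the prints disintegrate towards their edges.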

What I’m seeing here is the mathematical forms and the building blocks of how the AI is making these images. For me, that’s super exciting, because I get to understand it as a medium rather than as a mystery.

Working with it for a few years – including this interactive work Self Absorbed where you can morph from one image to another by moving your body – was another way of exploring not just what images I can make with it, but how those images sit together in some kind of landscape of possibilities.

For me, the first thing is the personal nature of it, the fact that it feels like it’s me. It’s this kind of collaboration between my memories and the medium of this AI model.

But another thing I found – I actually started generating these images about two and a half years ago. When I first saw them, I just didn’t know what to make of them. Part of the problem I had is that when you work with technology, and some new technology comes out, it’s sometimes very easy to make something that looks cool, that you feel is yourself. But then when you put it out there, you find everyone else making stuff that looks very similar. The technology seduces you into feeling like these creations are your own voice, when actually they are already latent in the tool.

So I’ve spent a couple of years working with this model, watching how the field of AI has evolved around it, getting this ever larger set of images, and finding just some that resonate with me, and a huge amount that doesn’t so much.

The images you see here I built up over that time. They have emerged through the relationship I have with the many thousands of images I’ve generated. They’re each connected in some way to everything I’ve seen in my life, but some seem more resonant than others to me.

And then a final thing: I got annoyed, because James, who was doing the promo, told me that one reviewer said he wasn’t coming because he doesn’t want to review any more works that glorify AI. This position, that AI is bad and we need to criticize it at every possible opportunity, is built on a reductive idea of what AI is, based on how it’s being used in certain contexts.

AI is already here. It’s been around us for a while. Right now we’re in a moment of transformation and it’s on us to not just criticize how we don’t want it to be used, but to build the ways we do want it to be used, to imagine the world that we want to have, and to put that into being.

My real intention with all of this is to give an example of my own little weird journey into AI, what it does for me, why I’m still using it, why I’m still very excited about it, and, hopefully, offer a speculation on how our relationship with it might be different.

Thank you very much for coming.

Exhibitions

  • 1 Dec 2023 – 9 Jan 2024, South Block Project Space, Glasgow, UK (under the title “Small Frame Infinite Canvas”)
    • 1 Dec 5.30pm, Opening and Artist Talk

Credits

Created by

Tim Murray-Browne

Creative Consultant

Adriana Minu

Printing

Lighthouse Photographics

Framing

The Framing Workshop

Marketing

Opening Lines Media

Acknowledgements

Created with support from Creative Scotland awarding funds from the National Lottery, Wasps Studios and Preverbal Studio.

This work uses the StyleGAN3 AI model, developed at NVIDIA by Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen and Timo Aila.

Special thanks to Fenwick Mckelvey and Caitlin Callaghan.

Part of the project Diffeomorphism.

A still from Cosmic Insignificance Therapy, an audio-visual work from the series Diffeomorphism by Tim Murray-Browne.

Diffeomorphism

2023 · My visual lifetime in AI

Green abstract image. Still from “Cosmic Insignificance Therapy” by Tim Murray-Browne.

Cosmic Insignificance Therapy

2023 · AI-rendered short film

AI-rendered abstract image reminiscent of dense foliage. Still from “World Without End” by Tim Murray-Browne.

World Without End

2023 · AI-rendered short film

A still from ‘A short ride through hyperspace’ by Tim Murray-Browne. The AI-rendered image shows a dry plain seemingly suspended in a blue sky.

A Short Ride Through Hyperspace

2023 · AI-rendered immersive AV installation

Adventures in AI Self-Custody

I learnt to build AI models because I wanted to fall in love with AI, to see it as a collaborator in human agency, to be excited rather than afraid.

AI’s weird and uncanny mistakes reveal the gaps in how I perceive intelligence

I’m used to seeing human-like intellectual capabilities together as a bundle, what I consider human intelligence. If a human can draw photorealistic faces, I might assume they have mastered many other intellectual abilities, like a deep sensitivity to human physiology and how it exists in physical reality.
