Diffeomorphism is a series of audio-visual works that aims to see AI on its own terms. Rather than taking the utilitarian view that a model should match the ‘ground truth’ of its training set, I explore an AI’s unique way of seeing the world.
Between 2021 and 2023, I worked with the StyleGAN family of AI image generation models and developed a sensitivity to their unique visual language. I traversed their latent spaces searching for images unlike anything I see elsewhere, and wrote custom software to animate them into the audio-visual works of Diffeomorphism.
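The text doesn't detail how the custom animation software works, but a common technique for traversing a GAN's latent space smoothly is spherical interpolation (slerp) between latent vectors. The sketch below is an illustrative assumption, not the project's actual code; the 512-dimensional latent size matches StyleGAN's usual configuration.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Interpolating along the sphere (rather than a straight line)
    keeps intermediate vectors at a plausible norm for a Gaussian
    latent space, which tends to produce smoother animations.
    """
    u0 = z0 / np.linalg.norm(z0)
    u1 = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Vectors are nearly parallel; fall back to linear interpolation.
        return (1.0 - t) * z0 + t * z1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * z0 + (np.sin(t * omega) / so) * z1

# Example: an eight-frame traversal between two random 512-dim latents.
# Each frame would be fed to the generator to render one image.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(512), rng.standard_normal(512)
frames = [slerp(a, b, t) for t in np.linspace(0.0, 1.0, 8)]
```

In practice each interpolated vector is passed through the generator network to produce one video frame; chaining many such segments yields a continuous journey through the model's inner world.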
The official pre-trained versions of StyleGAN are trained on uniform datasets, such as headshots, to give ‘good’ results. In contrast, I trained on every photo I’ve taken (excluding images of other people for ethical reasons, which still left 25,000 images). The resulting visuals hold echoes of my own photos, but the diversity of this training data glitches the model, pushing it into abstract realms that reveal the nature of its inner world.
In topology, a diffeomorphism is a smooth correspondence between two manifolds. A manifold is a world that appears like our familiar reality at a local level but can have a different structure at a global scale. The curved space-time of our universe is one manifold. The smooth folding between images that makes up StyleGAN’s inner world feels, to me, like another.
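For readers curious about the term itself, the standard mathematical definition (general background, not specific to this project) is:

```latex
f : M \to N \text{ is a diffeomorphism} \iff
f \text{ is a bijection, } f \in C^{\infty}, \text{ and } f^{-1} \in C^{\infty}.
```

That is, a diffeomorphism is an invertible map between smooth manifolds where both the map and its inverse are smooth, so neither direction tears or creases the space.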
Unlike text-to-image models such as Stable Diffusion, StyleGAN doesn’t know concepts or objects. The series is a reminder that within “AI” there is a diversity of models, each with its own way of perceiving the world. StyleGAN perceives by finding visual commonalities between images. It's a way of seeing without classifying, something that adult humans can attain only with training or psychedelics.
Spending time with the StyleGAN model has shifted my sense of reality. In it, I see my familiar world nestled as a brief moment within a complex, beautiful hyperspace, reminiscent of our world existing as a thin slice running through a higher-dimensional universe.
Works
Acknowledgements
Thank you to the Webster Library Visualization Studio at Concordia University, Montreal, for support while developing this work. Thanks to Adriana, Lynn, Andrew, Iolanda, Saverio, Mattie, Kory, Gabriel, Zeph, Ceyda, Matthew, Eldad, Alain, Max, Michał, Simon, Thierry, Etienne, Hadi and Sophie for feedback during development.
Many works in this project are created using a customised version of Nvidia's StyleGAN3 AI model.
Created with support from Creative Scotland awarding funds from the National Lottery, Wasps Studios and Preverbal Studio.