Self-Absorbed Face Dance is a short video of distortions of my face. It was created using a GAN (a type of generative AI model) trained to reproduce images of me. I moved through the range of output images by moving my body in a live dialogue. The underlying system is the same as the Latent Voyager prototype I shared in the last newsletter.
A hallucinating AI trained on around 50,000 photos that I’ve taken over the past 20 years.
Part of a project exploring new forms of dialogue between human and computer that are not based on the manipulation of virtual objects. Instead, I’m working with the embodied, lived experience and our capacity for resonance.
Sonified Body is a research project using AI to create an instrument that transforms the moving body into sound in real time, created in collaboration with Panagiotis Tigas. Our aim is to create a system that responds holistically and continuously to movements of the body in a way that feels intuitive.
Post-Truth and Beauty aims to create a sensory experience analogous to the ungraspable nature of ‘truth’ by presenting partial glimpses into an abstract world of light and sound. Visitors are invited to interact with the work one at a time by entering the speaker ring. Light and sound both change in response to where the viewer’s head is positioned. Different parts of this world are revealed as the viewer moves their head and changes their perspective.
Movement Alphabet combines interactive technology with immersive performance to visualise the physical personality embodied in how we move in daily life.
Each visitor to the interactive artwork is led by a guide on a journey through their own memories and stories. Along the way, their movements are analysed and rendered into a Movement Portrait through a process akin to a digital calligraphy of the whole body.
Anamorphic Composition (No. 1) is an interactive sound installation experienced through head movement. A frozen moment of music is scattered into shards of sound, cutting through physical space and audible when touched by the listener’s head. This sound can no longer be sensed holistically in an instant but must instead be explored as individual parts. The areas where these shards intersect create sweet spots, where fragments of a greater harmony echo ephemerally.
Music to Swim to is a piece composed for swimmers, performed at the Olympic swimming pool in the London Aquatics Centre on 25 November 2015 as part of Bigga Fish Festival.
The piece is a 25-minute meditation inspired by the mechanics of the swimming body as it propels itself through coordinated cycles of releasing energy into the water.
This Floating Tracker is an open source release of the software used in This Floating World.
It includes algorithms to extract a pixel skeleton from the 3D silhouette returned by the Kinect. This gives a more stable analysis than the Kinect’s skeleton tracking algorithms when dealing with the diverse body dynamics of a dancer. It also analyses optical flow, tracks the skeleton’s end points over time using a Kalman filter, and computes other features such as the centroid and bounding box.
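To give a flavour of the end-point tracking, here is a minimal constant-velocity Kalman filter for smoothing one tracked 2D point over successive frames. This is an illustrative sketch, not code from the release: the state layout and noise values are assumptions chosen for clarity.

```python
import numpy as np

class PointTracker:
    """Constant-velocity Kalman filter for one 2D skeleton end point.

    State is [x, y, vx, vy]; each frame we observe a noisy (x, y).
    Noise scales here are illustrative, not values from the project.
    """
    def __init__(self, x, y, process_noise=1e-2, measurement_noise=1.0):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)                      # state covariance
        self.F = np.eye(4)                      # transition matrix
        self.F[0, 2] = self.F[1, 3] = 1.0       # position += velocity (dt = 1 frame)
        self.H = np.zeros((2, 4))               # we only observe position
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * process_noise
        self.R = np.eye(2) * measurement_noise

    def update(self, zx, zy):
        # Predict forward one frame
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the new measurement
        z = np.array([zx, zy])
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state[:2]  # smoothed position estimate
```

Because the filter carries a velocity estimate, it keeps an end point moving plausibly through the brief dropouts and jitter that silhouette-based tracking produces.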
This Floating World is a dance solo performed in an interactive environment of computer generated visuals and sound. The piece is inspired by vines whose form is cast by the buildings against which they grow, and riverbeds that both guide and are moulded by the water they carry.
The dancer’s movement on stage is tracked using a 3D camera. Custom software analyses this movement. In combination with live input from a nearby laptop, this generates visuals for the onstage projection and modulates the musical score.
The Cave of Sounds is an interactive sound installation exploring the power of music to bind individuals together and the visceral urge to use technology to broadcast our identity. Inspired by the prehistoric origins of music, the work is formed of eight original musical instruments, arranged in a circle facing inwards. Each instrument has been designed and created by an individual as an embodiment of their own artistic practice, but also to exist together as a new ensemble. In the hands of its audience, the work is crafted to provoke participants to connect and resonate with each other through musical expression.
On 25 November 2012, I ran a workshop for the Music Hackspace on creating a musical instrument from the laptop’s webcam. Drawing inspiration from Memo Akten’s Webcam Piano 2.0, in the class we looked at the basics of openFrameworks, video processing with OpenCV, driving Ableton Live with MIDI, and musical geometries such as the Tonnetz.
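The Tonnetz is a grid of pitch classes in which one axis moves by perfect fifths and the other by major thirds, so neighbouring cells always form consonant intervals. As a quick illustration of the idea (not code from the workshop itself), the pitch class at any grid coordinate can be computed directly:

```python
def tonnetz_pitch(u, v, root=0):
    """Pitch class at grid coordinate (u, v) on the Tonnetz:
    u steps of a perfect fifth (7 semitones) and v steps of a
    major third (4 semitones), relative to the given root."""
    return (root + 7 * u + 4 * v) % 12
```

Starting from C (root 0), the cells (0, 0), (1, 0) and (0, 1) give C, G and E, so any small triangle of adjacent cells sounds as a triad. This makes the grid forgiving to imprecise webcam input.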
Slides and source code from the project are available in the Webcam Instruments repo on GitHub.
IMPOSSIBLE ALONE is an interactive installation exploring the space between musical improvisation, creative movement and games. Using Kinects to track the pose and movement of two individuals, the installation hides a myriad of sonic interactions — invisible instruments that both provoke and demand creative movement. However, this interactive soundscape may only be explored by the participants whilst they mirror each other’s movements exactly. Responding to both each other and what is heard, they inevitably provoke each other to move in unfamiliar ways. How long can they persist in the uncharted journey of shared creation?
The Manhattan Rhythm Machine is an interactive generative beat maker. Loops for each instrument are represented as cut-up segments of a circle. These are moved through a two-dimensional space of rhythms whose axes, edginess and density, are mapped to rhythms through a beat hierarchy derived from how off-beat each position in the bar is.
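A toy reconstruction of such a mapping (the names and thresholds are my assumptions, not the piece’s actual algorithm): rank each step of the bar by how off-beat it is, then let density set how many steps sound and edginess bias the choice towards on-beat or off-beat steps.

```python
def beat_hierarchy(steps=16):
    """Rank each step of the bar by how off-beat it is.
    The downbeat gets rank 0; odd sixteenths get the highest rank."""
    ranks = []
    for i in range(steps):
        level = 0
        div = steps
        while div > 1 and i % div != 0:
            div //= 2  # descend the metrical hierarchy
            level += 1
        ranks.append(level)
    return ranks

def generate_loop(density, edginess, steps=16):
    """Choose which steps sound, with density and edginess in [0, 1].
    Low edginess favours strong (on-beat) steps; high edginess
    favours off-beat ones. Density sets how many steps sound."""
    ranks = beat_hierarchy(steps)
    n_hits = max(1, round(density * steps))
    # Order steps by how close their off-beatness is to the target.
    order = sorted(range(steps),
                   key=lambda i: abs(ranks[i] / max(ranks) - edginess))
    chosen = set(order[:n_hits])
    return [1 if i in chosen else 0 for i in range(steps)]
```

At low density and zero edginess this yields a four-on-the-floor pattern; pushing edginess up shifts the same number of hits onto syncopated positions, which matches the feel of moving a loop through the rhythm space.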
The result of a cross-disciplinary investigation spanning fashion, technology, music and dance, the Serendiptichord is a wearable musical instrument that entices the user to explore a soundscape through touch and movement.
This curious device is housed in a bespoke box and viewed as part of a performance. Unpacked and explored on and around the body, the Serendiptichord only reveals its full potential through the intrepid curiosity of its wearer…