CV

Education

  • PhD in Electronic Engineering, Queen Mary University of London, 2008-2012
    • Research focus: Interactive art and music systems
  • Master of Mathematics and Computer Science, University of Oxford, 2004-2008
    • First-class honours, combined bachelor's/master's degree

Awards

  • Intersección (International Selection), 2023
  • Ars Electronica STARTS Prize (Nomination), 2019
  • Zealous Emerge Digital Art Award (Shortlist), 2018
  • New European Media Art Award (Shortlist), 2017
  • PIARS International Sonic Arts Award (Winner), 2014

Reviews

“Cave of Sounds is one of the most successful examples of an interactive installation I’ve ever seen. Its elegant simplicity and expert execution drew me to it: this work is inclusive and fun, and most of all sounds great.” – Martin Scheuregger, Curator at Google Cultural Institute

“This Floating World drives towards its crescendo with an intensity teetering on the dogged ... A startlingly emotive and natural end to a piece with technology at its core, showing just how much you can do with clarity, control and computer programming.” – Lilia Prier Tisdall, official critic at The Place

“I loved your piece «Agency of chaos, unmoved», there really is something uncanny with how you combined different models and I think it’s one of the best examples of how deep learning can create something else.” – Antoine Caillon, IRCAM, creator of the ‘RAVE’ AI model

“More beguiling was the beautiful IMPOSSIBLE ALONE.” – Liz Elise, New Scientist CultureLab

Solo shows

  • Cave of Sounds, Leonardo da Vinci Museum of Science and Technology, Milan, 2022-2023.
  • Cave of Sounds, Music Hackspace, London, 2018.
  • Movement Alphabet, G.A.S. Station, London, 2016.
  • Anamorphic Composition No. 1, Genesis Cinema, London, 2016.
  • This Floating World, Arebyte Gallery, London, 2015.

Selected group shows

  • Frequency Festival, Lincoln, UK, 2023.
  • It’s Complicated, Museum of Discovery, Adelaide, 2021.
  • Present Futures, Glasgow/online, 2021.
  • Sonica, Centre for Contemporary Arts, Glasgow, 2019.
  • Silk Roads, UNESCO Peace Garden Peace Festival, Beijing, 2018.
  • Athens Science Festival, Technopolis, Athens, 2018.
  • The Engine Room, Morley Gallery, London, 2017 & 2022.
  • Uniqlo Lates, Tate Modern, London, 2016.
  • Sweet Gongs Vibrating, San Diego Art Institute, San Diego, 2016.
  • Digital Performance Weekender, Watermans Arts Centre, London, 2016.
  • DRIFT, Centro Popular de Conspiração Gargarullo, Miguel Pereira, Brazil, 2015.
  • Resolution, The Place, London, 2015.
  • Live Performers Meeting, Nuovo Cinema Aquila, Rome, 2015.
  • Trip the light fantastic toe, Tripspace, London, 2015.
  • THEMUSEUM, Waterloo, Canada, 2015.
  • 90dB Sonic Arts Festival, Rome, 2014.
  • Digital Drop-in, Victoria and Albert Museum, London, 2013 & 2016.
  • Hack the Barbican, Barbican, London, 2013.
  • Creativity & Cognition Conference, Berkeley Art Museum, Berkeley, 2009.

Residencies

  • Machine Agencies and LePARC research clusters, Concordia University, Montreal, 2022-2023.
  • Leonardo da Vinci Museum of Science and Technology, Milan, 2022.
  • Choreographic Coding Lab, Chatham, UK, 2022.
  • Centre for Contemporary Art, Glasgow, 2022.
  • Cove Park, Argyll & Bute, UK, 2021.
  • Studio Wayne McGregor: Questlab, London, 2019.
  • Temp Studio, Lisbon, 2018.
  • Associate Artist with Music Hackspace, Somerset House Studios, London, 2017–2018.
  • G.A.S. Station (ZU-UK), London, 2016.
  • DRIFT (ZU-UK), Rio de Janeiro, 2015.
  • Sound and Music: Embedded with Music Hackspace, London, 2012–2013.

Publications

  • T. Green, L. Picinali, T. Murray-Browne, I. Engel and C. Henry, “Speech-in-noise pupillometry data collected via a virtual reality headset,” in Proceedings of the 15th Speech in Noise Workshop, Potsdam, Germany, 2024.

    Abstract

    In an initial exploration of virtual reality (VR) for assessing speech recognition and listening effort in realistic environments, pupillometry data collected via a consumer Vive Pro-Eye headset were compared with data obtained via an Eyelink 1000 system, from the same normally-hearing participants presented with similar non-spatialized auditory stimuli. Vive testing used a custom platform based on Unity for video playback and MaxMSP for headphones-based audio rendering and overall control. For Eyelink measurements, head movements were minimized by a chin rest and participants fixated on a cross presented on a grey screen. No such constraints were present for Vive measurements which were conducted both using a 360° cafe video and with a uniformly grey visual field. Participants identified IEEE sentences in babble at two signal-to-noise ratios (SNRs). The lower SNR (-3 dB) produced around 65% word recognition, while performance for the higher SNR (+6 dB) was at ceiling. As expected, pupil dilation was greater for the lower SNR, indicating increased listening effort. Averaged across participants, pupil data were very similar across systems, and for the Vive, across the different visual backgrounds, thereby supporting the idea that VR systems have the potential to provide useful pupillometric listening effort data in realistic environments.

    Ongoing experiments, conducted solely in VR and using spatialised sound, examine the effects of incorporating different aspects of realistic situations. In the aforementioned café video three computer monitors, each associated with a particular talker, are visible on tables. In one condition, as above, target sentences from a single talker are presented audio-only. In a second single-talker condition, a video of the talker reading the target sentence is also presented. In a third condition the target talker varies quasi-randomly across trials. A visual cue presented 2s prior to the target video allows the participant to orient to the correct screen. In each visual condition, target sentences are presented against a background of café noise plus additional babble at two SNRs, differing by 9 dB. Comparison across conditions will be informative of the extent to which pupillometry data may be affected by factors such as dynamic changes in visual background and head movements of the type typically involved in multi-talker conversations.

    BibTeX
    @inproceedings{green2024speech-in-noise,
        author = {Green, Tim and Picinali, Lorenzo and Murray-Browne, Tim and Engel, Isaac and Henry, Craig},
        booktitle = {Proceedings of the 15th Speech in Noise Workshop},
        title = {Speech-in-noise pupillometry data collected via a virtual reality headset},
        month = {January},
        year = {2024},
    }
    
  • T. Murray-Browne, “Against Interaction Design,” arXiv:2210.06467 [cs.HC], 30 Sep 2022.

    BibTeX
    @misc{murray-browne2022against-interaction-design,
      author = {Murray-Browne, Tim},
      title = {Against Interaction Design},
      year = {2022},
      month = {September},
      day = {30},
      publisher = {arXiv},
      doi = {10.48550/arXiv.2210.06467},
      url = {https://timmb.com/against-interaction-design}
    }
    
  • T. Murray-Browne and P. Tigas, “Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers,” Applied Sciences, 11(18): 8531, 2021.

    Abstract

    Most human–computer interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.

    BibTeX
    @article{murraybrowne2021emergent-interfaces,
        author = {Murray-Browne, Tim and Tigas, Panagiotis},
        journal = {Applied Sciences},
        number = {18},
        pages = {8531},
        title = {Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers},
        volume = {11},
        year = {2021}
    }
    
  • T. Murray-Browne and P. Tigas, “Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), online, 2021.

    Abstract

    In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.

    BibTeX
    @inproceedings{murray-browne2021latent-mappings,
        author = {Murray-Browne, Tim and Tigas, Panagiotis},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
        day = {29},
        doi = {10.21428/92fbeb44.9d4bcd4b},
        month = {April},
        title = {Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders},
        year = {2021},
    }
    
  • T. Murray-Browne, D. Aversano, S. Garcia, W. Hobbes, D. Lopez, T. Sendon, P. Tigas, K. Ziemianin and D. Chapman, “The Cave of Sounds: An Interactive Installation Exploring How We Create Music Together,” in Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 307-310, London, UK, 2014.

    Abstract

    The Cave of Sounds is an interactive sound installation formed of eight new musical instruments exploring what it means to create instruments together. Each instrument was created by an individual but with the aim of forming a part of this new ensemble, with the final installation debuting at the Barbican in London in August 2013. In this paper, we describe how ideas of prehistoric collective music making inspired and guided this participatory musical work, both in creation process and in the audience experience of musical collaboration. Following a detailed description of the installation itself, we reflect on the successes, lessons and future challenges of encouraging creative musical collaboration among members of an audience.

    BibTeX
    @inproceedings{murray-browne2014cave-of-sounds,
        address = {London, UK},
        author = {Murray-Browne, Tim and Aversano, Dom and Garcia, Susanna and Hobbes, Wallace and Lopez, Daniel and Sendon, Tadeo and Tigas, Panagiotis and Ziemianin, Kacper and Chapman, Duncan},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
        pages = {307-310},
        title = {The {C}ave of {S}ounds: An Interactive Installation Exploring How We Create Music Together},
        year = {2014},
    }
    
  • T. Murray-Browne, D. Mainstone, N. Bryan-Kinns and M. D. Plumbley, “The Serendiptichord: Reflections on the collaborative design process between artist and researcher,” Leonardo, 46(1): 86-87, 2013.

    Abstract

    The Serendiptichord is a wearable instrument, resulting from a collaboration crossing fashion, technology, music and dance. This paper reflects on the collaborative process and how defining both creative and research roles for each party led to a successful creative partnership built on mutual respect and open communication. After a brief snapshot of the instrument in performance, the instrument is considered within the context of dance-driven interactive music systems followed by a discussion on the nature of the collaboration and its impact upon the design process and final piece.

    BibTeX
    @article{murray-browne2013leonardo,
        author = {Murray-Browne, T. and Mainstone, D. and Bryan-Kinns, N. and Plumbley, M. D.},
        journal = {Leonardo},
        number = {1},
        pages = {86-87},
        title = {The {S}erendiptichord: {R}eflections on the collaborative design process between artist and researcher},
        volume = {46},
        year = {2013},
    }
    
  • T. Murray-Browne. Interactive Music: Balancing Creative Freedom with Musical Development. PhD thesis, Queen Mary University of London, 2012.

    Abstract

    This thesis is about interactive music – a musical experience that involves participation from the listener but is itself a composed piece of music – and the Interactive Music Systems (IMSs) that create these experiences, such as a sound installation that responds to the movements of its audience. Some IMSs are brief marvels commanding only a few seconds of attention. Others engage those who participate for considerably longer. Our goal here is to understand why this difference arises and how we may then apply this understanding to create better interactive music experiences.

    I present a refined perspective of interactive music as an exploration into the relationship between action and sound. Reasoning about IMSs in terms of how they are subjectively perceived by a participant, I argue that fundamental to creating a captivating interactive music is the evolving cognitive process of making sense of a system through interaction.

    I present two new theoretical tools that provide complementary contributions to our understanding of this process. The first, the Emerging Structures model, analyses how a participant's evolving understanding of a system's behaviour engages and motivates continued involvement. The second, a framework of Perceived Agency, refines the notion of ‘creative control’ to provide a better understanding of how the norms of music establish expectations of how skill will be demonstrated.

    I develop and test these tools through three practical projects: a wearable musical instrument for dancers created in collaboration with an artist, a controlled user study investigating the effects of constraining the functionality of a screen-based IMS, and an interactive sound installation that may only be explored through coordinated movement with another participant. This final work is evaluated formally through discourse analysis.

    Finally, I show how these tools may inform our understanding of an oft-cited goal within the field: conversational interaction with an interactive music system.

    BibTeX
    @phdthesis{murray-browne2012phd,
        author = {Murray-Browne, Tim},
        school = {Queen Mary University of London},
        title = {Interactive Music: Balancing Creative Freedom with Musical Development},
        year = {2012},
    }
    
  • T. Murray-Browne and M. D. Plumbley, “Harmonic Motion: A Toolkit for Processing Gestural Data for Interactive Sound,” in Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 213-216, London, UK, 2014.

    Abstract

    Note from Tim: I messed this one up. After submitting the paper, I hit some flaws in the underlying architecture that required a significant rewrite. It was beyond the funding I had available so I had to pull the plug. I learnt my lesson not to submit the paper until the code was finalised.

    We introduce Harmonic Motion, a free open source toolkit for artists, musicians and designers working with gestural data. Extracting musically useful features from captured gesture data can be challenging, with projects often requiring bespoke processing techniques developed through iterations of tweaking equations involving a number of constant values – sometimes referred to as ‘magic numbers’. Harmonic Motion provides a robust interface for rapid prototyping of patches to process gestural data and a framework through which approaches may be encapsulated, reused and shared with others. In addition, we describe our design process in which both personal experience and a survey of potential users informed a set of specific goals for the software.

    BibTeX
    @inproceedings{murray-browne2014harmonic-motion,
        address = {London, UK},
        author = {Murray-Browne, Tim and Plumbley, Mark D.},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
        pages = {213-216},
        title = {Harmonic Motion: A Toolkit for Processing Gestural Data for Interactive Sound},
        year = {2014},
    }
    
  • A. Otten, D. Schulze, M. Sorensen, D. Mainstone and T. Murray-Browne, “Demo hour,” Interactions, 18(5): 8-9, 2011.

    BibTeX
    @article{otten2011demo-hour,
        author = {Otten, Anthony and Schulze, Daniel and Sorensen, Mie and Mainstone, Di and Murray-Browne, Tim},
        journal = {interactions},
        number = {5},
        pages = {8-9},
        title = {Demo hour},
        volume = {18},
        year = {2011},
    }
    
  • T. Murray-Browne, D. Mainstone, N. Bryan-Kinns and M. D. Plumbley, “The medium is the message: Composing instruments and performing mappings,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), pp. 56-59, Oslo, Norway, 2011.

    Abstract

    Many performers of novel musical instruments find it difficult to engage audiences beyond those in the field. Previous research points to a failure to balance complexity with usability, and a loss of transparency due to the detachment of the controller and sound generator. The issue is often exacerbated by an audience’s lack of prior exposure to the instrument and its workings.

    However, we argue that there is a conflict underlying many novel musical instruments in that they are intended to be both a tool for creative expression and a creative work of art in themselves, resulting in incompatible requirements. By considering the instrument, the composition and the performance together as a whole with careful consideration of the rate of learning demanded of the audience, we propose that a lack of transparency can become an asset rather than a hindrance. Our approach calls for not only controller and sound generator to be designed in sympathy with each other, but composition, performance and physical form too.

    Identifying three design principles, we illustrate this approach with the Serendiptichord, a wearable instrument for dancers created by the authors.

    BibTeX
    @inproceedings{murraybrowne2011nime,
        address = {Oslo, Norway},
        author = {Murray-Browne, Tim and Mainstone, Di and Bryan-Kinns, Nick and Plumbley, Mark D.},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
        pages = {56-59},
        title = {The medium is the message: Composing instruments and performing mappings},
        year = {2011},
    }
    
  • T. Murray-Browne, D. Mainstone, N. Bryan-Kinns and M. D. Plumbley, “The Serendiptichord: A wearable instrument for contemporary dance performance,” in Proceedings of the 128th Convention of the Audio Engineering Society, London, UK, 2010.

    Abstract

    We describe a novel musical instrument designed for use in contemporary dance performance. This instrument, the Serendiptichord, takes the form of a headpiece plus associated pods which sense movements of the dancer, together with associated audio processing software driven by the sensors. Movements such as translating the pods or shaking the trunk of the headpiece cause selection and modification of sampled sounds. We discuss how we have closely integrated physical form, sensor choice and positioning and software to avoid issues which otherwise arise with disconnection of the innate physical link between action and sound, leading to an instrument that non-musicians (in this case, dancers) are able to enjoy using immediately.

    BibTeX
    @inproceedings{murray-browne2010aes,
        address = {London, UK},
        author = {Murray-Browne, Tim and Mainstone, Di and Bryan-Kinns, Nick and Plumbley, Mark D.},
        booktitle = {Proceedings of the 128th Convention of the Audio Engineering Society},
        title = {The {S}erendiptichord: {A} wearable instrument for contemporary dance performance},
        year = {2010},
    }
    
  • T. Murray-Browne and C. Fox, “Global expectation-violation as fitness function in evolutionary composition,” in Proceedings of the 7th European Workshop on Evolutionary and Biologically Inspired Music, Sound, Art and Design, pp. 538-546, Tübingen, Germany, 2009.

    Abstract

    Previous approaches to Common Practice Period style automated composition – such as Markov models and Context-Free Grammars (CFGs) – do not well characterise global, context-sensitive structure of musical tension and release. Using local musical expectation violation as a measure of tension, we show how global tension structure may be extracted from a source composition and used in a fitness function. We demonstrate the use of such a fitness function in an evolutionary algorithm for a highly constrained task of composition from pre-determined musical fragments. Evaluation shows an automated composition to be effectively indistinguishable from a similarly constrained composition by an experienced composer.

    BibTeX
    @inproceedings{murraybrowne2009,
        address = {T\"ubingen, Germany},
        author = {Murray-Browne, Tim and Fox, Charles},
        booktitle = {Proceedings of the European Workshop on Evolutionary and Biologically Inspired Music, Sound, Art and Design},
        pages = {538-546},
        title = {Global Expectation-Violation as Fitness Function in Evolutionary Composition},
        year = {2009}
    }
    

Selected Teaching

  • Interactive Digital Art, four-day lecture series for Master's students at Filmuniversität Babelsberg Konrad Wolf, Potsdam, Germany, 2017.
  • Music and Movement, two-day workshop at the Victoria and Albert Museum, London, 2017.
  • Creative Coding, four-day lecture series for Master's students at Filmuniversität Babelsberg Konrad Wolf, Potsdam, Germany, 2017.
  • Beginning Processing, three-day workshop, Space, London, 2013.
  • Webcam Instruments, evening workshop, Music Hackspace, London, 2012.

Selected Talks

  • AI Mysticism, (un)Stable Diffusions, Concordia University, Montreal, 2023.
  • Dancing with Machines, Milieux Institute, Concordia University, Montreal, 2023.
  • Staying Wild with Technology, Digital Body Lab, Berlin, 2022.
  • Sonified Body, Choreographic Coding Lab, Chatham, UK, 2022.
  • Staying human while interacting with machines, Creative Jam, HUSH, New York, 2022.
  • Rewilding Human-Computer Interaction, Electromagnetic Field, Ledbury, UK, 2022.
  • Humachines/Interactions, invited panel speaker, Goldsmiths, University of London, 2021.
  • Interaction at the level of the body, guest lecture, Royal College of Art, 2021.
  • Questioning the ideology beneath interaction design, Art in Flux, online, 2020.
  • Sonica Talk: Science as Art, invited panel speaker, The Lighthouse, Glasgow, 2019.
  • Thinking without words, Hackoustic, Iklektik, London, 2018.
  • Post-Truth and Beauty, New European Media Conference, Museo Reina Sofía, Madrid, 2017.
  • Body Sound Space (with Jan-Ming Lee), Music Hackspace, London, 2016.
  • Ensemble, Sound and Music Programme Launch, Somerset House, London, 2013.
  • Can interactive art encourage creative self-expression in its audiences? (Or are they just kidding themselves?), Reasons to be Creative, Brighton, 2012.