IIL at NIME 2024

Sophie, Victor, Jack, Giacomo & Nicola present at NIME in Utrecht!
Wed Aug 28 2024

Here are our contributions to NIME 2024, taking place September 4-6 in Utrecht, the Netherlands. We hope to see you there!

New Interfaces for Musical Expression 2024, Utrecht, the Netherlands


Paper 1: “Querying the Ghost: AI Hauntography in NIME”

  • Nicola Privato (presenter), Thor Magnusson
  • Wednesday Sept. 4th, 11:00, Paper Session 2, “Embracing Resistance and Failure in Practice Cultures”
  • Read the full paper: PDF.

Abstract

The discourse around creative AI is populated by eerie and otherworldly presences, often evoked by artists to reflect on the social and cultural paradoxes that this technology embodies. This tendency of AI art to bring forth the uncanny, also emerging in my design and performative work with NIMEs, echoes the methods of an artistic movement known as sonic hauntology. In this paper I elaborate on Derrida’s and Fisher’s notion of hauntology, a theoretical framework investigating ontology’s liminalities and an artistic current addressing the paradoxes of postmodern aesthetics through the magnification of the technological uncanny. I then apply this paradigm to creative AI, arguing that the model’s algorithmic manipulation of the training data reproduces and exponentially accelerates the processes of temporal and semantic flattening that characterise postmodern aesthetics. The frictions produced by creative AI as it operates with and within the culture bring forth hauntological disjunctures that artists might harness as an instrument of critique, and scholars as a novel epistemic method. Finally, I introduce AI hauntography, a practice-based methodology combining artistic practice and observation to investigate the phenomenological aspects of creative AI as they intersect with the broader technical and sociopolitical discourse.

Paper 2: “Tölvera: Composing With Basal Agencies”

  • Jack Armitage (presenter), Victor Shepardson, Thor Magnusson
  • Wednesday Sept. 4th, 16:45, Paper Session 3, “Data, Materials, and More-than-Human Influence in NIMEs”
  • Read the full paper: PDF.
  • Visit the Tölvera website and join the Discord server.

Tölvera Video for NIME 2024

Abstract

Diverse intelligence is an emerging field that views intelligence as a spectrum, drawing insights from e.g. cell-based models and their evolution, and recognising the combined agential properties of biological and engineered materials across disciplines. Within diverse intelligence, basal cognition encapsulates the simpler end of the continuum, focusing on broadly applicable insights from the behaviour of single-celled organisms and simulation. Based on a desire for more diversity of real-time AI in NIME, we developed a library called Tölvera, initially for composable artificial life. In this paper we present Tölvera’s design and the practice-based methodology that drove it, reviewing artistic works and emergent themes from design-practice iteration cycles. We reflect on how an early influence of artificial life gave way to an interest in reading Tölvera as a basal art medium, and how unexpected tendencies and capabilities play in perturbative aesthetic tension with compositional decisions. We describe how basal agency research, aesthetics, and toolkits are influencing the direction of both design and practice, and review work-in-progress features. Finally, we reflect on how a cell’s-eye view of agency is shaping our thinking towards AI in NIME.
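
To give a flavour of what “composable artificial life” means in practice, here is a minimal Tölvera sketch along the lines of the flocking example in the library’s documentation; treat the exact names as assumptions from the docs rather than a definitive reference:

```python
# Minimal Tolvera sketch (assumed from the documented flocking example):
# particles (tv.p) are updated by a flocking behaviour (tv.v.flock) and
# drawn into a pixel buffer (tv.px) on every frame.
from tolvera import Tolvera, run

def main(**kwargs):
    tv = Tolvera(**kwargs)

    @tv.render
    def _():
        tv.px.diffuse(0.99)                    # fade the previous frame
        tv.v.flock(tv.p)                       # apply flocking to particles
        tv.px.particles(tv.p, tv.s.species())  # draw particles per species
        return tv.px

if __name__ == '__main__':
    run(main)
```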

Paper 3: “Stacco: Exploring the Embodied Perception of Latent Representations in Neural Synthesis”

  • Nicola Privato (presenter), Victor Shepardson (presenter), Giacomo Lepri, Thor Magnusson
  • Friday Sept. 6th, 10:25, Paper Session 10, “User Perception and Audience Participation”
  • Read the full paper: PDF.

Abstract

The application of neural audio synthesis methods for sound generation has grown significantly in recent years. Among such systems, streaming autoencoders such as RAVE are particularly suitable for instrument design, as they map audio to and from control signals in an abstract latent space with acceptable latency. Despite the uptake of autoencoders in NIME design, little research has been done to characterize the latent spaces of audio models and to investigate their affordances in practical musical scenarios. In this paper we present Stacco, an instrument specifically designed for the intuitive control of neural audio synthesis latent parameters through the displacement of magnetic objects on a wooden board with four magnetic attractors. We then examine models trained on the same data with different seeds, explore strategies for more consistent mappings from audio to latent space, and propose a method for stitching the latent space of one model to another. Finally, in a user study, we investigate whether and how these techniques are perceived through embodied practice with Stacco.
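
As a rough illustration of the kind of latent control the paper describes, the sketch below drives a pretrained RAVE decoder from four sensor readings; the model path, the four-dimensional latent, and the sensor values are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch: map four sensor readings (e.g. from magnetic
# attractors) to one latent frame of a RAVE model exported to TorchScript.
import torch

model = torch.jit.load("rave_export.ts")       # assumed path to a RAVE export
sensors = torch.tensor([0.1, -0.3, 0.7, 0.0])  # assumed normalised readings
z = sensors.reshape(1, 4, 1)                   # (batch, latent dims, time)
audio = model.decode(z)                        # decode one latent frame to audio
```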

Demo 1: “iipyper: Rapid Prototyping of Real-time Music Interaction with the Python Software Ecosystem”

  • Victor Shepardson, Jack Armitage
  • Friday Sept. 6th, 15:00, Demo session 3

Abstract

There is widespread interest across music in exploring real-time interaction with artificial intelligence models and processes. However, since most tools in this space reside in the Python ecosystem, creative coders face a challenge in setting up communication protocols and real-time event loops. Though many Python-based Open Sound Control (OSC) and MIDI libraries exist, they are alien to those with only machine learning backgrounds, and slow to get started with in a domain where rapid iteration is crucial, even for seasoned music technologists. To address these issues, we introduce iipyper, a Python library optimised for rapid exploration of real-time music interaction with artificial intelligence. iipyper makes trivial the creation of event-loop servers that communicate over OSC and MIDI, using highly flexible Python decorators for routing a variety of data types, including sending and receiving n-dimensional arrays. We show a multitude of iipyper examples, including integrations with clients such as Max/MSP, Pure Data, and SuperCollider, and describe future work for improving the library.
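
To make the boilerplate problem concrete, here is a minimal OSC receive loop written against the general-purpose python-osc library; this is a sketch of the status quo that iipyper’s decorators aim to replace, not iipyper’s own API:

```python
# Minimal OSC receiver with python-osc: explicit dispatcher, handler
# registration, and a blocking server loop.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

dispatcher = Dispatcher()

def on_ctrl(address, *args):
    print(address, args)  # react to incoming control messages

dispatcher.map("/ctrl", on_ctrl)

server = BlockingOSCUDPServer(("127.0.0.1", 9999), dispatcher)
server.serve_forever()  # blocks the thread until interrupted
```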

Demo 2: “Tungnaá: a Hyper-realistic Voice Synthesis Instrument for Real-Time Exploration of Extended Vocal Expressions”

  • Victor Shepardson, Jonathan Reus, Thor Magnusson
  • Friday Sept. 6th, 15:00, Demo session 3

Abstract

This demo showcases Tungnaá, a new voice synthesis system and software instrument for real-time musical exploration of deep voice synthesis. The design of Tungnaá emphasizes real-time interaction and customization, enabling artists to manipulate various aspects of the synthesis process and to explore aesthetic artefacts unique to autoregressive neural synthesis. The synthesis engine achieves real-time streaming generation of paralinguistic and extended forms of vocal expression, controlled by symbolic text notations drawn from the entire Unicode character set, allowing for the creation of new notation systems. The interface provides a visual display and mouse- or OSC-controllable interventions into the machine vocalisations. The demo presents Tungnaá on a laptop with headphones and a MIDI controller, allowing participants to explore the instrument via both textual and physical interfaces.
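
Since the abstract mentions OSC-controllable interventions, a client could in principle send control text along these lines; the port and the "/text" address below are assumptions for illustration, not Tungnaá’s documented interface:

```python
# Hypothetical sketch: send a Unicode control string to a voice synthesis
# server over OSC. Host, port, and address are assumed, not documented.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)   # assumed host/port
client.send_message("/text", "mmm~aaa\u0298")  # arbitrary Unicode text
```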

Poster 1: “About TIME: Textile Interfaces for Musical Expression”

  • Sophie Skach, Victor Shepardson, Thor Magnusson
  • Wednesday Sept. 4th, 11:30, Demo session 1

Abstract

Research in new musical interfaces includes exploring new and sometimes unconventional materials, drawing from a wide range of sources but typically affiliated with the computing industry. These interfaces are made out of plastic, metal, glass, and rubber, built with sensors that seek precision and ergonomic control. However, there are other, more unconventional interfaces, such as flexible and soft instruments, that allow for different types of interaction. One of the materials that has not been explored extensively in this context is textiles, in particular e-textiles. Here we survey the NIME archive and provide an overview of the work on non-rigid interfaces. Further, this paper presents a new textile-based musical interface and evaluates its potential as a malleable expressive instrument through a case study with 6 musicians. The findings of the qualitative analysis yield a set of guidelines for the development of future e-textile interfaces.

Concert 1: “Magnetologues”

  • Nicola Privato, Giacomo Lepri
  • Wednesday Sept. 4th, 14:15, After Lunch Concert, TivoliVredenburg, Vredenburgkade 11

Abstract

Stacco is a new NIME embedding magnetic attractors that detect variations in the surrounding magnetic fields. It attracts and repels magnetic spheres displaced by the performer, allowing both fine musical control and the emergence of unpredictable interactions out of its interlaced magnetic forces. Stacco has been designed to perform with neural synthesis models, where multidimensional sonic spaces can be navigated through the exploration of tangled control parameters. It comes with two graphical scores that function as guides for the exploration of the neural synthesis model. By drawing on the surface of the interface, it is also possible to create new scores and layer them on top of each other. We propose a performance titled “Magnetologues” with two Staccos: one operating as a neural audio engine and the other controlling the sound in space in real time via Ambisonics.

Concert 2: “Learning Machine Knit Minput Intertextiles”

  • Victor Shepardson, Sophie Skach
  • Wednesday Sept. 4th, 18:30, De Nijverheid Café, Nijverheidskade 15

Abstract

What lies among these loops of copper, wool, and steel? Skin, yarn, and wire form a circuit; machine precision meets somatic sense, and tactility elaborates algorithmic entanglements. In this performance, e-textile pieces are patched into a no-input mixer while the player uses interactive machine learning tools to embroider sonic layers onto a repeated gesture. Analog audio signals from a no-input mixing board are routed through a piezoresistive mat which has been knitted from wool and steel yarns. The performer first develops a patch on the no-input mixing board to sound a two-handed pressing gesture on the mat. Then they alternate between performing the gesture and refining mappings between a machine listening algorithm and additional layers of sound, using the anguilla interactive machine learning package.