Miguel Crozzoli will be in Troy, New York, for ICAD 2024 to present two papers from the lab. For full details about the conference, visit the ICAD 2024 website.
The International Conference on Auditory Display (ICAD) 2024 is being held in Troy, NY, USA.
Miguel Crozzoli and Thor Magnusson
Read the full paper: PDF.
Designing sonification for music composition involves aesthetic and narrative manipulation through strategies based on compositional intent, sparking discussions about data clarity and understanding. This paper describes an autobiographical process of crafting an aesthetic sonification from climate change data and its further musification. In this approach, the sonification is used as a material for composition and transformed into an interactive musical instrument within contemporary aesthetics and live performance contexts. The sonification object becomes an interface for composition and performance. The resulting piece seeks to amplify engagement through affectivization while displaying data information via symbolic abstraction and musical narrative. The paper also describes the techniques and results of blending sonification sounds with mixed notation derived from the sonification object. The composed piece was recorded and played by an electroacoustic contemporary ensemble to an audience, which later gave feedback. The conclusion drawn from this musification project was that even though the source data was not directly perceivable in detail by the audience, the piece does convey information through the power of emotional affect, which aligns with the original intention of the project.
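For readers unfamiliar with sonification, a minimal parameter-mapping sketch in Python gives a flavour of where such a process can begin. This is a generic illustration, not the mapping described in the paper, and the anomaly values below are made-up placeholders.

# Generic parameter-mapping sonification sketch (illustrative only; not the
# authors' method). Hypothetical yearly temperature anomalies are rescaled
# linearly onto a MIDI pitch range, so warmer years sound higher.

anomalies = [-0.2, 0.1, 0.4, 0.8, 1.1]  # placeholder anomalies in degrees C

def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi]."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# One MIDI note per data point, spanning C3 (48) to C6 (84).
pitches = [round(map_range(a, -0.5, 1.5, 48, 84)) for a in anomalies]
print(pitches)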
Jack Armitage, Miguel Crozzoli, and Daniel Jones
Read the full paper: PDF.
Multimodal displays that combine interaction, sonification, visualisation, and perhaps other modalities are seeing increased interest from researchers seeking to take advantage of cross-modal perception by increasing display bandwidth and expanding affordances. To support researchers and designers, many new tools are being proposed that aim to consolidate these broad feature sets into Python libraries, owing to Python's extensive ecosystem, which in particular encompasses the domain of artificial intelligence (AI). Artificial life (ALife) is a domain of AI that is seeing renewed interest, and in this work we share initial experiments exploring its potential in interactive sonification through the combination of two new Python libraries, Tölvera and SignalFlow. Tölvera is a library for composing self-organising systems, with integrated open sound control, interactive machine learning, and computer vision; SignalFlow is a sound synthesis framework that enables real-time interaction with an audio signal processing graph via standard Python syntax and data types. We demonstrate how these two tools integrate, and the first author reports on usage in creative coding and artistic performance. So far we have found it useful to consider ALife as affording synthetic behaviour as a display modality, making use of human perception of complex, collective, and emergent dynamics. In addition, we think ALife also implies a broader perspective on interaction in multimodal display, blurring the lines between data, agent, and observer. Based on our experiences, we offer possible future research directions for tool designers and researchers.
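To give a concrete sense of how such an integration can look, here is a minimal sketch that drives a SignalFlow oscillator from a simulation loop. It assumes SignalFlow's documented AudioGraph, SineOscillator, play(), and set_input() API; the flock() function standing in for a Tölvera simulation step is a hypothetical placeholder, not Tölvera's actual API.

# Minimal sketch: driving a SignalFlow synthesis graph from a simulation
# loop. Assumes SignalFlow's AudioGraph / SineOscillator / set_input() API;
# flock() below is a hypothetical stand-in for a Tölvera ALife step.
import math
import time
from signalflow import AudioGraph, SineOscillator

graph = AudioGraph()       # start the real-time audio processing graph
osc = SineOscillator(440)  # one oscillator as the display voice
osc.play()                 # connect the oscillator to the graph output

def flock(t):
    """Hypothetical stand-in for an ALife step: returns a value in [0, 1]."""
    return 0.5 + 0.5 * math.sin(t * 0.5)

t = 0.0
while t < 10.0:
    # Map the emergent quantity onto frequency (220-880 Hz); SignalFlow
    # node inputs can be set from plain Python numbers.
    osc.set_input("frequency", 220 + 660 * flock(t))
    time.sleep(0.05)
    t += 0.05

In a real Tölvera patch, flock() would be replaced by a quantity read from the running simulation (for example, a statistic over particle positions), but the shape of the mapping loop stays the same.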