I’m a research professor at the University of Iceland and a professor of future music in the Music Department at the University of Sussex. I’ve recently served as Edgard Varèse Guest Professor at the Technische Universität Berlin. My research interests include musical performance, improvisation, new technologies for musical expression, live coding, musical notation, artificial intelligence, and computational creativity. My research has roots equally in practice and theory, and recent books include 'Sonic Writing: Technologies of Material, Symbolic and Signal Inscription' and 'Live Coding: A User's Manual', published by Bloomsbury Academic and MIT Press respectively.
I hold a BA in Product Design from the Icelandic University of the Arts (2023) and a diploma in web design. I prefer working close to people, hands-on. My interest in human/material connections has led to my previous and ongoing projects. In collaboration with the Textile Lab in Blönduós, I researched unused Icelandic materials, secondary wool and horse hair, for possible use as natural upholstery materials, looking into old methods with new material combinations and machinery. Currently, I am focusing on getting to know the “Icelandic” material, aluminum, and working hands-on with its positive recycling properties on a small scale, reshaping it into new forms and objects using sand-casting methods.
I am a doctoral researcher in the Intelligent Instruments Lab at the University of Iceland. Previously I worked as a machine learning engineer on neural models of speech, and before that I studied Digital Musics at Dartmouth College and Computer Science at the University of Virginia. My interests include machine learning, artificial intelligence, electronic and audiovisual music, and improvisation. In my current research, I approach the lived experience of people with AI via design of, and performance with, new musical instruments. My projects include the Living Looper, which reimagines the live looping pedal through neural synthesis algorithms, and Notochord, a probabilistic model for MIDI performances.
I’m a PhD candidate in Cultural Studies, conducting my research at the Intelligent Instruments Lab. Previously, I studied Electronic Music at the Conservatory of Padua (MA), Jazz Improvisation and Composition at the Conservatory of Trieste (BA) and Modern Languages and Cultures at the University of Padua (BA). In the last ten years I have been curating musical events and festivals, composing, performing and teaching music. My current interests include alternative forms of notation, improvisation, composition, and Human-Computer Interaction in performative contexts. My project focuses on AI explainability in music performances.
I am a composer, saxophonist, and Ph.D. candidate in Cultural Studies, currently conducting my research project at the Intelligent Instruments Lab in Reykjavik. I have studied European composition, jazz studies, and cultural management in Argentina, and I hold an Advanced Postgraduate Diploma and a master's degree from the Rhythmic Music Conservatory in Copenhagen, where I conducted practice-based research on the aesthetic potential of sonifications. I believe in the power of sound as a transformative experience for societies, and I am particularly interested in data democratization and knowledge dissemination. Through my research interest in music, new technologies, data, and AI, I explore questions about the role of the arts in critical thinking and perception.
I am the inventor of the halldorophone, an electro-acoustic string instrument intended for working with string-based feedback. For the past decade I have been seeking out and working with musicians to make music with halldorophones, noting their thoughts and feelings on the process to inform further development. I am currently working on a PhD documenting and expanding on this work under the supervision of Thor Magnusson and Chris Kiefer at the University of Sussex. Besides this project, I am currently funded by an innovation grant from the Icelandic Technology Development Fund for further development of halldorophones. I enjoy using my skills as a fabricator to collaborate with musicians and instrument makers in the NIME context.
I am a postdoctoral fellow at the Intelligent Instruments Lab and active as a performer, composer and curator within contemporary music and ecological sound art. I obtained my PhD in artistic research in music from Lund University. My research looked at agencies at play in artistic processes, explored through micro-labs set within and outside of institutional environments. I continue along such a context-sensitive trajectory at the lab, focusing on the mediation that occurs in the performance of pre-existing design and the prototyping of an AI-augmented violin.
I'm a transmedia designer-researcher working globally to explore how systems make worlds. My practice asks how technological innovation can invite more-than-human networks (animal, vegetal, machine, and algorithmic) to walk together across new terrains. I direct the critical design studio We&Us, which since 2020 has explored co-creation for systems change across Europe, Asia & Canada. At IIL, I'll be exploring the lab itself as a living, breathing more-than-human instrument. From autumn 2024, I will be Professor of Transversal Design at IDXM (Institute Experimental Design & Media Cultures), FHNW Academy of Art and Design Basel. Photo credit: Jessica Jacobson.
I am a postdoctoral researcher at the Intelligent Instruments Lab. I am a composer, performer, and sound artist. My research interests deal with different manifestations of feedback processes (i.e., audio feedback, data feedback, interpersonal and behavioural feedback). I did my PhD at the University of Sussex, where I explored cybernetic theory principles to devise generative Music Theatre performances. In my previous research I approached the performance space as the interface for interaction, looking into the notion of space as an instrument. Currently at the lab, I am working with feedback acoustic string instruments (feedback double bass and halldorophone). I am interested in the musical possibilities these instruments offer and in the conceptual frontiers they push in different contexts and setups (solo, ensemble, interlinked, etc.).
I am interested in the interplay between human and non-human agencies in music, embodiment in networked music performance, and interactive machine learning. I have recently designed a networkable feedback string instrument with embedded machine learning called the Sophtar. At IIL I am working on extending the Sophtar with actuators and machine learning models to make it respond to my playing in ways that are not easy to predict yet meaningful and inspiring. I see the research and development work on the Sophtar as a way to probe broader research questions on musical improvisation and co-creativity with machines and algorithms.
I’m a sound artist, composer, and DJ. As a research assistant at the Intelligent Instruments Lab, my main task is documenting, organising, and expanding the Organium library of technical elements. Currently, I attend the Master's program in Music Technology and Contemporary Practices at the Music Department of the Kapodistrian University of Athens. I have previously studied Applied Mathematics and Physics at the National Technical University of Athens. My fields of artistic and research interest include, inter alia, data sonification, sonic exploration of inaudible wavescapes, audio feature extraction, and corpus-based synthesis.
I am a performer, composer, and sonic researcher. I create solo double bass music, often with the FAAB (feedback-actuated augmented bass), a feedback double bass with embedded signal processing. I am fascinated by instruments that exhibit some sense of autonomy and afford interaction and improvisation. In this context, I investigate how terms such as musical mastery, virtuosity, resistance, and failure may become reframed (or rendered obsolete) through the decentering of human agency, as we engage with machines that don’t always do what we expect them to. I make sound installations, create and exhibit 3D-printed sculptures generated from live music performance, and have created sound for theatre, dance, film, and image. My work has been performed in the US, Japan, Australia, and Europe, and I appear on around 50 releases, three of these solo albums. I am currently undertaking a PhD in music at SARC (Sonic Arts Research Centre), Queen’s University Belfast, Northern Ireland.
I am mostly an enthusiastic dancer and live coder/musician, but also a digital artist and researcher from Medellín, Colombia, where I took part in many collaborations, album recordings, and live sessions. I have also worked as a software developer on the Chmusick toolkit, a library that makes the ChucK language an algorave language. I am the author of CQenze, a DSL designed to be a first-experience language for non-coders, and I developed CineVivo, a graphics render engine. Currently, I am a PhD candidate in Computer Music at CCRMA, Stanford, working with the ChucK development team.
I am a UK-based artist-researcher whose work explores new ways in which sound and technology can illuminate our understanding of the world, translating patterns and data into living musical forms. My research interests include algorithmic composition, systems music, spatial audio, and sonification. As one half of Jones/Bulley, I am director of a practice-led research studio creating sculptural sound installations that articulate the hidden dynamics of environmental systems, with past projects portraying forest ecosystems, weather patterns, global audio ecologies, and FM radio transmissions. In my solo practice, I explore the integration of dynamical systems and elemental forces as active agents in musical co-creation. From 2020 to 2024, I was research leader and co-founder of Sonos Advanced Technology London, a research laboratory incubating next-generation audio technologies. I hold a PhD in artificial life from Goldsmiths, University of London.
I am a freelance composer based in Los Angeles. My work explores the overlap between the experience of performing and the experience of listening, attempting to break down the barriers between listener and performer, audience and artist. My work with machine learning focuses on the application of autoencoders as generative synthesis tools, and on their use as a ghost-in-the-machine inside augmented and intelligent instruments and architectural spaces, altering and amplifying variation in how the instruments and spaces respond to the performer. On the side, I co-run Carrier Records, a label for new and experimental music, with Sam Pluta, Katie Young, and Jeff Snyder.
I teach Fine Arts at the UPV/EHU University in Bilbao, where I did my doctoral thesis on generative music and the txalaparta. I studied Art in Bilbao and MA Design for Interactive Media in London. I have been part of www.ixi-audio.net since 2001 and I like developing software to make weird music. I am interested in that place where popular music, contemporary music, art, and the creative use of technology meet.
I work as an artist in socially engaged art practice and as a Senior Lecturer at Camberwell College of Art, University of the Arts London. My work deals with the colonisation of space and resources caused by gentrification, poverty, human trafficking, and climate change. Early projects included work with women in prison, Gypsy and Traveller communities, homeless people, and those being displaced from the Elephant & Castle in London. Since 2016 I have been focusing on projects that confront the climate emergency, using participatory and collaborative methodologies to move people beyond passive spectatorship towards active social agency. Currently my main mediums are knitting & weaving, which I conceptualize as ethical photography (for their binary, time-based qualities, and dependence on plant-based photosynthesis as the primary source of “writing with light”). Since there is no route back to the pre-industrial age, and we will need technology to meet the needs of humanity, I am exploring e-textiles as a synthesis of craft skills with modern technology.
I am a recent BEng graduate of the Computer Systems program at Carleton University. I have previously worked on various embedded software projects involving microcontrollers, sensors, and electronics. Recently, I have been learning about the music technology space, which led me to my project for the summer: developing software tools to aid in the exploration of embedded machine learning possibilities on Bela computers. I will be working on this project with support from Google Summer of Code, the BeagleBoard Foundation, and IIL, hopefully ending with some tools, tips, and tricks for others looking to use machine learning with the Bela platform.
I build digital instruments, play with them, and try to critically think through them. My research crosses the domains of electroacoustic improvisation/composition, human-computer interaction, and cultural studies. In short, I am specialised in not being specialised. I hold a master's degree in 'Instruments and Interfaces' from STEIM / Institute of Sonology and a PhD in Media and Art Technology from Queen Mary University of London. At IIL I explore compositional strategies for the mediation of sociocultural values and technological agencies, considering the practice of sonic interaction design as an opportunity to play with illusions and magic.
I think about what I do as bridges between music and computers: I work with performers, composers, and machines to facilitate their relationship to each other, building instruments, interfaces, and performance pieces. My work revolves around feedback, both as in the Larsen effect and as in the human-in-the-loop of a listening-performing process. When I’m lost, I ritually play no-input mixer to re-negotiate my agency and musical desire. I’m currently working on a digitally controllable no-input mixer to see if introducing algorithmic strategies can help explore new grounds for this negotiation, or if it defeats the purpose altogether. Other works I’ve done are about the presence of technology in life, such as sonification of WiFi devices' never-ending chit-chat, or computer RAM streams of consciousness. I’m interested in machine learning too, but I am looking for ways of being critical and perverse with it rather than an optimistic testimonial, or, to say it with Dadabots, to keep it weird. And I play noise. From Spinoza, to Deleuze, to Søren Kjærgaard, to me and you: “we don’t know what a body can do”.
I compose music and code, sometimes before execution and sometimes during. My research interests include the design of languages, environments, and interfaces for composing and performing music with computational, generative, or procedural aspects, while my artistic practice includes algorithmic composition, instrumental improvisation, and live coding. Previously, I studied electrical engineering, computer science, and music at MIT, where I earned a BSc and MEng and helped found the MIT Laptop Ensemble. Currently, I'm a PhD student in Music Technology at Georgia Tech in the Computational Music for All lab.
Jack Armitage is a musician, designer, and technologist based in Reykjavík, and the founder of Afverhju Ekki - The Absolutely Everything Studio. Jack is a postdoctoral researcher at the Intelligent Instruments Lab, University of Iceland, and has a PhD in Media & Arts Technologies from Queen Mary University of London. Jack's work spans experimental concerts, electronic club performances and DJ sets, multimedia installations, interface design, sound design, music production and composition, and more. Jack's project Lil Data has released on the PC Music label, with co-production credits including Charli XCX and Jónsi.
I am a homeostatic animal experimenting within a vibrant ecosystem of cultures and tools. I co-founded the instrumentinventorsinitiative (iii) and Platform for Thought in Motion in The Hague, and worked for many years on artist-led research at the former Studio for Electro-Instrumental Music (STEIM). I am currently exploring what it means to have a voice.
I am an MSc student in Computer Science at Reykjavík University, as well as a student at the FÍH school of music. I have a BA degree in psychology and philosophy from the University of Iceland and subsequently worked in software development for 14 years, ranging from web design to app development. My main interests in life and work are music, AI, UX, and creativity. I am currently working on teaching a computer to play the txalaparta at the Intelligent Instruments Lab. I have been playing and creating music for as long as I can remember; I am in a death metal band, but have also been involved in various other musical projects, as well as composing more experimental music in darkened rooms.
I am a machine learning engineer with experience researching, developing, and productionizing machine learning systems. My interest in music technology was sparked by my time working as an undergraduate researcher with the Robotic Musicianship group at the Georgia Tech Center for Music Technology, under Prof. Gil Weinberg. I am particularly interested in creative AI development and applications, and in creating new musical instruments, interfaces, and interactions.
I am an artist, performer, stage director, and scholar weaving together contemporary performance, new media art, and interactive computer music since the early 2000s. My artistic and scholarly work is rooted in feminist theories, critical theory, and critical disability studies. I manipulate bodies, create choreographies, engineer machines, and compose sounds to create artworks and performances that can speak critically of ritual, power, and technology. I have a Ph.D. in performing arts, computing, and body theory from Goldsmiths, University of London. Recently, I held Research Fellowships at the Dortmund Academy for Theater and Digitality and at the Berlin University of the Arts, in collaboration with the Neurorobotics Research Laboratory, in Germany. I am a co-founder of the artist group Fronte Vacuo, based in Berlin. Photo credit: Dario J Laganà | Norte.it
As a saxophonist, composer, and audio-visual artist, I explore the interplay between humans and machines by combining improvisation with generative algorithms, machine learning, brain-computer interfaces, and sonification of bio-signals. My current work focuses on real-time sonification and visualisation of EEG and ECG data in a live-performance context, creating a feedback loop between musicians, machines, and sound. In the context of interactive audio-visual installations, these systems allow audience members to intuitively explore the effects of this feedback loop on themselves.
I am a creative musician in the field of applied experimental music. I have studied Conducting, Composition, Production, Cello, Programming, 3D printing, and Sonology in Brazil, Germany, and the Netherlands. In my research I develop artworks merging those practices with technology, social inclusion, sustainability, and fair music distribution. I am a member of Netherlands Coding Live and the instrument inventors initiative, and the creator of Knurl, an interactive and polyphonic instrument. In my first months at IIL, I will be consolidating the interface of Knurl with the assistance of Thor and Halldor and through collective practice with local musicians. In the third month, Adam Pultz and I will develop and test our performance 'Sound energy harvest', in which the sounds of our instruments will harvest energy for their own application.
I'm a graduating student in Composition - New Media at LHI. My interests involve sonification, sonic interaction design, and DIY approaches to multimedia creation. Previously I studied Biotechnologies at La Sapienza University of Rome, and before that I studied guitar, violin, and sound engineering. I have released music as Bob Hermit since 2016 and co-founded the transnational label BohReal? Records in 2020. In my work and in my music, I explore what it means to be human and to communicate in an information-overloaded world; I use abjection, discomfort, and displacement to investigate human responses to aesthetic patterns. Currently I'm experimenting with sensors and Bela boards to create interactive sculptures and wearable music instruments. I'm working with IIL all summer, and it's going to be great!
I have a BFA from the Studio of Interrelated Media at MassArt in Boston and recently received a Master's in Performing Arts from Listaháskóli Íslands, where I focused on bringing my background in interactive and kinetic sculpture into a performative and socially engaged practice. Inspired by the local Icelandic arts and music scene, I have worked with the Reykjavík Dance Festival, Sequences Art Festival, Nylistasafnið, Kling og Bang, Listahátið, Raflost, Rask, Mengi, Spectral Assault Records, and the grassroots organizations Post-Dreifing, RUSL Fest, Fúsk, and King og Bong. My primary goal as an artist is to create an engaging experience that encourages interaction through the performative nature of objects and the sensation of experience.
I currently study vocal arts at Söngskólinn í Reykjavík and computer science at the University of Iceland. Previously I studied fashion design at Studio Berçot in Paris, and MA Design at the Iceland University of the Arts. In my work I am continuously interested in the relationship between the concept of identity and technology. My previous work addresses the ways in which we attempt to upload our physical selves into the virtual world, virtual and digital fashion, augmented reality experiments, and machine learning experiments. In collaboration with IIL, I am currently working on an AI Vocal Avatar, in which I attempt to donate my voice to a machine learning algorithm, thereby outsourcing the process of singing from my physical body.
I'm a postdoctoral researcher at Queen Mary University of London, where I also obtained my PhD as part of the Media & Arts Technology programme. Trained as a fashion designer (BA, MA), I have worked in industry for larger companies as well as on my own projects. With this background in fashion and textile design, my research explores 'smart' clothing as a ubiquitous, wearable sensing system for applications in social interaction, soft robotics, and intelligent instruments.
I make interactive sound systems and instruments that I improvise with or exhibit for the public to play. I'm currently a music technology doctoral researcher at the Leverhulme Trust funded be.AI Centre (University of Sussex, Brighton, UK). My research explores enactive metaphors for collaborative musical instruments that enhance intra-dependent actions between human and non-human agents.
I'm a composer and artist exploring the intricate relationship between humans and technology. Through my audiovisual creations, I delve into how our interactions with fabricated environments shape our experiences. My works challenge the conventional understanding of how we perceive and engage with technology-infused spaces; I aim to disrupt our habitual consumption of these spaces and interactions, prompting us to venture beyond our comfort zones and actively explore new possibilities. Currently I am focusing on my Fast Forward Trajectory, supported by the Performing Arts Fund NL, which enables me to work with renowned international partners such as Percussions de Strasbourg and the IIL. This collaboration allows me to build valuable new bridges between bits and atoms, deepening the relationship between the human and the algorithmic by bringing together musicians and artificial performers on stage.