LETTER FROM THE EDITOR

February 12th, 2022

As a practitioner of electronic and computer music for over twenty years, I have often been jealous of my counterparts of the 1960s and 1980s, who saw one new technique after another upend the field of sound production to reveal entirely new vistas of possibility. From feedback systems to granular synthesis to FM synthesis to phase vocoders to improvising computers, the latter half of the 20th century was a whirlwind of new ideas in music technology, resulting in a rapidly shifting field with constantly changing sonic horizons.

Conversely, the central driving force behind change over the past two decades has not been the creation of new techniques, but rather the increased computing power available to anyone with a laptop, allowing us to do more and more, faster, and more and more live. My own focus has been on user interface and data structure - the manipulation of the shape of computer programs and the data stored therein, so that anything the performer can do is available to them at any moment. The signal chain is longer and more complex than in earlier attempts, but the elements are the same as they always were.

However, recent developments in machine listening and machine learning, along with the associated technologies inherited from big data research, while not genuinely new (see Lee 1991; Dannenberg 2001; Tudor 1995), are only now at a point where we have the terabytes and teraflops - along with the toolsets (Fiebrink 2009; Tremblay 2021) - to make creation with these technologies feasible for a broad range of users. Yet we are still at a nascent stage. Ahead lie horizons of what we can do, and beyond that, of how we do it. There are infuriatingly difficult roadblocks ahead: one need only look at the development of the self-driving car to see Big Tech's struggles with its own monstrous creation.

But this is exciting! What do we do? I don’t know! It is new. It is strange. And I, for one, can't wait to see where this technology takes us.

In this issue of the Archive we find three individuals and groups at the forefront of musical thinking around AI. Max Ardito explores the wonders and the terrors of composing with hyper-powerful black-box AI technology, largely developed in the wake of the military-industrial complex and promulgated through Big Tech's crusade to consume everything. Ted Moore shares his experiences using machine learning as an Algorithmic Collaborator in composition and improvisational practice. Norah Lorway, Edward Powley, and Arthur Wilson of Beesting Labs share a manifesto about their Scorch programming language, which uses AI collaborators to open computational composition to a wider demographic of future musickers.