HOW TO WRITE A DOCKERFILE————
I met improviser and composer John Zorn for the first time outside of the Whitebox Gallery in downtown Manhattan. A concert was happening at the gallery that night, and I happened to leave midway through the show just as Zorn was arriving. We passed each other on the street corner and microscopically nodded at one another in a sort of mutually parasocial way.
Zorn wasn’t yet aware of the situation that had just occurred which led me to leave the gallery in the first place. That is, just a few moments earlier, some of the performers had started violently attacking one of the audience members. Despite the typical loudness of an improvised music show, this audience member had somehow modulated their voice to a loudness and timbre that precisely masked the majority of sounds produced in the performance. One of the musicians found this to be a disturbance, and resorted to physical confrontation.
However, the audience member wasn’t really interrupting anything. Because if you listened closely to the spectral shapes they were producing, you’d realize that they were actually just asking questions—questions a curious child might ask about experimental music. For some reason, I decided not to tell Zorn about this. He looked happy, and I figured that he should discover the situation for himself. After some awkward loitering, he suddenly looked at me and started speaking…
JOHN ZORN: “...about new sounds in contemporary music?”
MAX ARDITO: “Load balancing.”
I’m not entirely sure why I said this; it was just the first thing that came to mind. The “load balancing” I was referring to was the load balancing that takes place in a server farm. Recent advancements in processing hardware have allowed musicians to—whether conscious of it or not—use cloud-hosted AI tools in the compositional process, the promotion and distribution process, and the process of organizing and governing artist communities.
In a way, I was performing my own sort of neurological load balancing at that moment, because truthfully I didn’t actually meet John Zorn in any physical space that night. But I did meet him somewhere deep in the latent coordinates of my unconscious. With the various signs and symbols contextually at my disposal, I was able to tell myself this story—to dream of this encounter as if it were a tangible event in my life.
Nonetheless, I’ve been thinking about this dream quite a lot recently. I find myself comparing it to a dream that artist Hito Steyerl describes in an essay of hers titled Duty Free Art. In this essay, Steyerl goes above and beyond the traditional scope of the gallery artist, linking a broad variety of geopolitical topics to the contemporary art world—including WikiLeaked documents from Asma al-Assad, a modern art museum turned Kurdish refugee camp in Diyarbakır, Turkey, and Muammar Gaddafi’s son’s brief foray into the art world. But the “main character” of Steyerl’s essay, so to speak, is something called the freeport art storage unit. Freeport units are luxury art bunkers located around the world in various tax-free zones. She focuses particularly on one freeport unit located in Geneva that is rumored to contain thousands of Picasso paintings. Steyerl notes that these works of art sit idle, while legally speaking they are eternally in transit, residing in zones completely devoid of national jurisdiction.
In Steyerl’s dream, she is looking at a diagram from a text by philosopher Peter Osborne that represents the genealogy of contemporary art. Contemporary art is represented as a circle, surrounded by outer rings representing other epochs of art history like Avant-Garde, Romantic, and Modern. Viewed from directly above, the diagram shows only the circle labeled “contemporary art”; viewed at an angle, however, it becomes clear that the diagram has depth. With the addition of this dimension, the outer rings that contain their corresponding epochs become visible to the observer.
Suddenly, Steyerl is in outer space, and Peter Osborne appears next to her wearing an astronaut suit. The diagram has turned into a sort of impact crater in the earth’s surface, covered by a thin screen bearing the image of a target. From behind his astronaut visor, Osborne says...
This is the role of contemporary art. It is a proxy, a stand-in. It is projected onto a site of impact, after time and space have been shattered into a disjunctive unity—and proceed to collapse into rainbow-colored stacks designed by starchitects.
Contemporary art is a kind of layer or proxy which pretends that everything is still ok, while people are reeling from the effects of shock policies, shock and awe campaigns, reality TV, power cuts, any other form of cuts, cat GIFs, tear gas—all of which are completely dismantling and rewiring the sensory apparatus and potentially also human faculties of reasoning and understanding by causing a state of shock and confusion, of permanent hyperactive depression.
You don’t know what’s going on behind the doors of the freeport storage rooms either, do you? Let me tell you what’s happening in there: time and space are smashed and rearranged into little pieces like in a freak particle accelerator, and the result is the cage without borders called contemporary art today.
In reality, we actually don’t know much about what exactly is inside the Geneva freeport. At one point in Steyerl’s essay, she claims that there could even be nothing at all. It is in this sense that the freeport museum starts to resemble a freakish symptom of art’s strange yet completely normalized intersection with global markets, imbibing both the chaotic models of supply chains and the world historical dialectics of shock and confusion. Over the course of the last ten years or so, music has been inducted into a similar world. But contemporary music’s memetic properties are quite unlike those of contemporary art. Throughout history, music has existed instead as a series of strange encryption practices—memorization, notation, transduction, datafication, and more—all of which perpetually keep the auditor from confronting the ways in which sound itself might mimic the patterns of dialectical materialism. Where on earth is music’s freeport storage unit hiding?
According to the resource section of docker.com, a container is “a standardized unit for development, shipment, and deployment”. Founded in the early 2010s, Docker is a software company that has grown rapidly over the past decade, drastically shifting the common practices and paradigms of software engineering. To illustrate this, we can use the music streaming site Bandcamp as an example. Bandcamp, both as software and as ontological substance, inhabits thousands of places at once. Its code is constantly being downloaded, edited, tested, built, run, and pushed back to its servers by different engineers every day. Bandcamp’s code is also made up of thousands of dependencies—small open-source packages of software made by third-party developers that Bandcamp depends on in order for the site to function. On top of that, users are also passing their data through Bandcamp thousands of times a day, which in turn triggers millions of little changes in both the front end and back end of the website.
Docker is not the first organization to conceive of a “containerized” approach to software development, but they’ve certainly paved the way for it to become the new standard. The issue at hand is about maintaining the gestalt of an application, keeping it in its most symbolically stable form despite the fact that it’s constantly being compiled, built, and modified by its users and developers. Docker’s solution to this is containerization, a protocol that allows code to be shipped, built, edited, and deployed very efficiently by keeping it within the borders of a hyper-minimal, isolated environment called a container (which, unlike a full virtual machine, shares the kernel of its host operating system). To greatly simplify the mechanics of this process, instead of shipping around the actual files and folders belonging to Bandcamp’s codebase, developers instead ship around a “container” holding a stripped-down operating environment that contains nothing but a single part of Bandcamp’s code. We can think of Bandcamp as it exists online as a sort of cargo ship, and each container aboard the cargo ship as a specific category of processes, dependencies, and data. Each part of the app—the user interface, the back-end, the machine learning algorithms—can then run isolated from the others while inside their separate containers, yet still communicate with each other through IP routing.
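The shape of this arrangement can be sketched in a short Compose file. The service names and images below are hypothetical, an illustration of the pattern rather than Bandcamp’s actual architecture:

```yaml
# Hypothetical sketch: two parts of an app in separate containers,
# isolated from each other but able to communicate over an internal network.
services:
  frontend:
    image: example/frontend:latest   # serves the audio user interface
    ports:
      - "80:8080"                    # the only door opened to the outside world
    depends_on:
      - backend
  backend:
    image: example/backend:latest    # streams audio, talks to the database
    expose:
      - "9000"                       # reachable only from other containers

# Docker's embedded DNS resolves the name "backend" to that container's
# internal IP, so the frontend can call http://backend:9000 without ever
# knowing the address—or anything else about what is inside.
```

Each service remains blind to the other’s internals; all that passes between them is a name that resolves to an IP.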
Perhaps the argument could be made that the Docker container is music’s freeport storage unit, at least in recent years during COVID-19 related lockdowns and quarantines. When live music fails to exist, recorded music flourishes, propagating itself exclusively throughout the “cage without borders” called platform. Practically every modern platform has, at one point or another, spent some time inside of a Docker container. User-generated content (UGC) sites like Facebook, Google, Twitter, and Spotify all integrate container orchestration in their deployment architectures, and the audio-related tools and interfaces that exist within these platforms are no exception. From 2020 onwards, organized sound gradually became static, existing exclusively on the databases of server farms. Yet just like the freeport units, it also entered into a peculiar state of constant metaphysical transit. For instance, when a piece of music is played on Bandcamp, a client might interact with the music by using a familiar audio user interface that sends requests to the servers to deliver the music. UI elements such as these are one of the many entities packed inside of a Docker container and delivered to users around the world, much like wholesale commodities delivered in bulk from a factory.
Except the user never really opens the container. It remains closed just like the freeport museum. Its only entrypoint is through an IP address that serves as a proxy. So how can the auditor know what really happens inside the Docker container? In her dream, Osborne tells Steyerl that inside the freeport units, “time and space are smashed and rearranged into little pieces like in a freak particle accelerator.” A simple audio UI is not going to rip through the fabric of time and space. But what about something larger, like a database?
To go further, we need to dig a bit deeper into containerized software’s technical infrastructure. The instructions for how to package and deploy a Docker container are written in a file called the Dockerfile. This file is usually located in the same folder as the code that we want to containerize. Below is an example of a Dockerfile. I wrote it to containerize Google’s DDSP style-transfer model, which I was using to generate sound for a recent piece. Let’s call this container spectral-seance. Maybe we’ll find what we’re looking for in here. Maybe we’ll find something paranormal. I added some comments that might come in handy.
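(The gin files and flags below follow the stock DDSP training recipe; the base image and paths are my own, so treat the specifics as illustrative rather than canonical.)

```dockerfile
# spectral-seance: a container for training a DDSP timbre-transfer model.
# Start from a GPU-enabled TensorFlow image—DDSP is built on top of TF.
FROM tensorflow/tensorflow:2.11.0-gpu

# Summon Google's DDSP library into the container.
RUN pip install --no-cache-dir ddsp

# Copy in the training audio, already preprocessed into TFRecord files.
# These dozens of recordings are the raw material of the latent space.
COPY data/ /spectral-seance/data/
WORKDIR /spectral-seance

# The seance itself: train the model. Discrete spectral fragments are
# encoded and interpolated into a multidimensional latent space, out of
# which timbres that never existed can later be channeled.
ENTRYPOINT ["ddsp_run", \
  "--mode=train", \
  "--save_dir=/spectral-seance/model", \
  "--gin_file=models/solo_instrument.gin", \
  "--gin_file=datasets/tfrecord.gin", \
  "--gin_param=TFRecordProvider.file_pattern='/spectral-seance/data/train.tfrecord*'", \
  "--alsologtostderr"]
```

Build it with `docker build -t spectral-seance .`, run it, and the training unfolds entirely within the container’s walls.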
Within this container, a neural network is being trained to recreate latent timbres based on dozens of provided audio files. The key word here is latent, meaning the timbres don’t actually exist yet. The timbres are instead being channeled by traversing the manifolds of an encoded multidimensional latent space. This space is created by training the neural network model on a large batch of provided audio files. Dare to peek inside the Docker container during a neural network’s training and some weird things start to happen. Like the rearrangement of particles that might occur within the walls of the freeport museum, discrete spectral fragments are rearranged and encoded through the process of interpolation. What comes out of the other end of this training is the ability to generate audio files that are completely new and synthetic, yet hauntologically tied down to the past. And once again, this latent space never actually reveals itself to the auditor. Instead, the Docker container acts as a wrapper around the latent space, hiding its complex dimensionality by rerouting its endpoint IP to a container holding ergonomically familiar user interfaces in bulk.
An important dichotomy exists here between technology and packaging. This is a distinction that acclaimed techno-anarchist Ted Nelson makes in a series of talks titled Computers for Cynics. He makes the claim that devices like smartphones, websites, and app interfaces aren’t actually technology, they’re merely “packaging.” And perhaps we can think of the Docker container as the most generic type of packaging we have today: a wrapper that can exist around any deployable computational process including but not limited to platforms, web apps, and gargantuan AI models, all while keeping the real technologies—tensor math, DNS, pointer arithmetic—totally hidden inside the container. Ted Nelson’s meditations on the difference between packaging and technology culminate in a certain belief that it is absolutely crucial for the user to reckon with the technologies hidden behind the various tools that they use, since they possess powerful hermeneutic forces that morph and manipulate the user’s everyday behavior.
Musicians who use computation in both composition and distribution are no exception to this call to action; it is through this techno-anarchical practice that the musician might be able to envision radical new forms of music. But throughout the last few decades, the musician has seldom played the part of the one envisioning these new realities. New technology is instead delivered to the artist top-down, starting at the level of large institutions. And these technologies are almost always originally developed for the purpose of state-backed military and intelligence research.
Modern warfare as we know it is the cognitive machine that has churned out the contemporary world’s aesthetics, as war incubates technology and technology incubates art. In the 20th century, music happened to be an extremely vital weapon, becoming forever militarized in its integration with and representation through electrical signal. From this point onward, all forms of music came to share a certain adjacent relation to the military-industrial complex. These moments in history transformed music theory from abstract models of musical representation to mathematical models of signal processing, computational systems, information retrieval, cryptography, and feature extraction. Any music that has been stored on a computer, a magnetic tape, a record, or any medium that is eventually translated into voltage, can be genealogically traced back to a technology born out of military research. In a sense, this is what we call contemporary music. Because practically all contemporary music shares one thing in common: its essence has transubstantiated into the form of digital signal, and has—at least once—passed through an electronic device that uses technologies originally intended for military-industrial purposes.
It’s worth examining how a concept like the experimental exists within the militaristic traces of contemporary music history. If contemporary music is a product of military-industrial technologies, “experimental music” is a term we might reserve for a sort of music that uses these technologies while also exposing them for the ways in which they deceive us. In this regard, technology and musical practice become practically equivalent. But this equivalence should not be taken the wrong way; a technologically focused definition such as this one may seem at first to neglect the cultural and historical frameworks of various kinds of music. But when we simply discuss musical practice without talking about music technology, we end up ignoring music’s role in much larger historical structures. Different communities are surrounded by different systems and interfaces. Various pieces of technology exist spread out among the nexus of global commodity circulation, reflecting the dialectical movements of capital. And while we use various musical and non-musical interfaces, they consistently and silently infect our behavior. Their embedded histories and designs continuously (mis)guide us towards different patterns and secret messages. Sometimes they even radicalize us. Therefore, experimental music’s overarching goal might be to expose and dissect these technological symptoms, mimicking a sort of deconstruction by proxy of one’s compositional process.
In the documentary The Last Angel of History (1996), members of the Black Audio Film Collective present a history of black British and American electronic music in the post-war 20th century. A recurring character called the Data Thief is introduced at the beginning of the film. The Data Thief presents us with what is perhaps the central thesis of the film: “If you can find the crossroads, a crossroads, this crossroads, if you can make an archeological dig into this crossroad, you’ll find fragments. Techno fossils. And if you can put those elements, those fragments together, you’ll find the code. Crack that code and you have the keys to your future.” Based on this prompt, electronic music pioneers Juan Atkins, Derrick May, Goldie, George Clinton, Carl Craig, A Guy Called Gerald, and DJ Spooky are interviewed about this “crossroads,” gradually digging up the various archeological and historical fragments of techno, jungle, drum & bass, dub, and funk.
Alongside the group of electronic musicians is critical art/music theorist Kodwo Eshun, who provides a provocative analysis regarding the reenvisioning of military technology in black electronic music:
Computer technology started out in the military sphere in the post-war period. By the time you get to the 80s and 90s it’s true that computer technology [and] cheap software [have] now created an ecology: an analog or digital ecology by which you can use technology, synthesizers, sequencers, programmers, [and] work stations. You can use them as ways to create sonic worlds, and some of those sonic worlds will secede from mainstream worlds and some will be antagonistic towards them. The point is that the route through the cybernetic, the route through the drum machine allows much more possibility for that. The point is the explosion, and the perforation, and mutation of African derived rhythms. [...] Techno and Underground Resistance are waging war on mediocre audiovisual programming.
Interwoven between these interviews, the Data Thief slowly links together the fragments of this archeological excavation. Like Eshun, he makes a connection between the crumbling milieu of military technologies and black electronic DJs and producers “composing the soundtrack to the end of the industrial epoch,” a connection even further developed in Steve Goodman’s rhizomatic masterpiece Sonic Warfare, which commences with an analysis of Eshun’s meditations on mediocre audiovisual programming. Goodman interprets what Eshun is describing as “audio assault as a kind of cultural hacking against the ‘mediocre audiovisual programming’ of the ‘programmers.’” The “colonized of the empire strike back through rhythm and sound, Afrofuturist sonic process is deployed into the networked, diasporic force field that Paul Gilroy termed the Black Atlantic.” Through the ability to “perforate and mutate” the various histories of the black diaspora, black artists such as those affiliated with Underground Resistance wage a war on mediocrity not only through the power of the contemporary musical interface as a memetic device, but also through a certain awareness of the technological interface’s history as an industrial weapon and a vehicle of mathematical decontextualization. What these musical ecologies thus represent in the larger context of both experimental music and contemporary music is a crucial instance in which 20th century technologies were cybernetically re-envisioned by way of a musical practice: the digital audio file, the ROM, and the frequency bandwidth all radically reimagined and reappropriated.
In the 1960s in France, another community was experimenting with the reappropriation of electronic equipment. A unique experimental musical practice began to propagate and mutate from the surplus debris of military technology. In this case, it was among the European white establishment through the means of state funding and technological resources belonging to Radio France and the GRM (Groupe de Recherches Musicales). But the early pioneers of what’s now called acousmatic music claim that its early days were similarly subversive. In an essay titled Space In Question, acousmatic composer François Bayle writes about the birth of the acousmonium and its overlap with various protests of May ‘68.
Up until 1968, in France electroacoustic music was an ‘underground’ art form that attracted quite an audience. Of course, we didn’t have many resources to work with, but the desire for marginality was strong. This music was the manifestation of an unconventional form that overturned social prohibitions. It was usually performed using rudimentary equipment: just two or four loudspeakers.
When the ‘events’ of May ‘68 took place, suddenly everything went quiet. The masses had had their fill of the ‘underground’, and freedom had been expressed on the streets, sweeping away any desire to come back into an auditorium to listen to a concert of electroacoustic music! So there were a couple of dark years then, with sometimes only ten people in the audience, when not long before we’d had a queue at the entrance that would have filled the hall twice over. Two years on: no one at all! And we wondered: Will they come back? [...]
How can you make sure that a work will take on its true dimensions when presented to an audience? This is how the idea of the Acousmonium as an orchestra of sound projection devices developed, bit by bit, according to a logical placement of ‘sound screens’ around the space of the stage and the hall. It is by way of this ‘staging’ and because of the results it delivers—the musical performance—that audiences have come back. But even more importantly than the audiences, the composers have also returned.
Acousmatic music is not far off from traditional Western classical composition. Similar to the orchestra and its different types of instruments, the acousmonium is an array of dozens of different types of loudspeakers placed throughout an auditorium. A composer writes a piece of electronic music for these loudspeakers to play, and these pieces are traditionally interpreted as a curation of different recorded and synthesized “sound objects”—a term stemming from earlier musique concrète traditions in Paris, referring to a sound alienated from its original context by way of it being captured and manipulated through recording technology. These sound objects are mixed throughout the array of loudspeakers by a sort of maestro sound engineer who diffuses and interprets the entire piece live on a mixing board in the middle of the auditorium.
In François Bayle’s conception of the acousmonium, a similar cybernetic leap had to be taken in reenvisioning the various pieces of military technology involved. Curating the placement of speakers in the form of the acousmonium was an experimental act insofar as it became a sort of potential for how space itself could be composed along with sound. The acousmonium thus composes a listening practice unto the auditor by using the sound object’s motion through space as a cybernetic phenomenon, continuously traveling through multiple loudspeakers placed at disparate locations within the auditorium. This was also a setting in which timbre developed a newfound potential through its projection out of the uniquely crafted array of loudspeakers used in the acousmonium.
There was, however, a major problem with the conception of the acousmonium. The problem lay in the extent to which the acousmonium enabled the sound object to become stripped of its context and virtualized within a space completely isolated from its original element. This introduces a familiar problem. In one sense, the acousmonium at its worst might act as a sort of sound-colonizing interface. But on a more abstract level, the acousmonium turns sound into a type of proto-data, something completely alien to anything beyond its surrounding musical setting. From what Bayle describes in his essay, Paris witnessed a radical reenvisioning of the potential for electronic music through which the act of protest and public critique during May ‘68 superseded electroacoustic music’s place within the culture industry. But shortly after these countercultural times, instead of attempting to reshape the technologies of electronic music into systems that reflected the revolutionary discourse and thus imagined potential new futures, the communities surrounding electronic music remained unaware and uncritical of the inner technological structures of the acousmonium. The “composers returned,” as Bayle says, unconsciously settling into a military-industrial compositional practice: packaging, isolating, organizing, and decontextualizing sound.
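What follows is my own reconstruction of the general shape of such a file, a thin wrapper around Google’s Bigtable emulator, rather than a verbatim copy; the base image and commands are illustrative:

```dockerfile
# Illustrative reconstruction, not a verbatim file: a container
# wrapping Google's Cloud Bigtable emulator.
FROM google/cloud-sdk:alpine

# Add the beta command group and the Bigtable emulator components.
RUN gcloud components install beta bigtable --quiet

# The emulator speaks the Bigtable gRPC protocol on this port.
EXPOSE 8086

# Start the emulator, listening on all interfaces inside the container.
CMD ["gcloud", "beta", "emulators", "bigtable", "start", \
     "--host-port=0.0.0.0:8086"]
```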
This is a Dockerfile from Spotify’s official GitHub page. It’s a containerized version of Google Cloud’s Bigtable, which is a tool used to perform low-latency analytics on massive datasets. No info is given about what data might be analyzed inside the container when it’s used internally at Spotify, but let’s assume that it could be just about anything—from something as basic as parsing search histories, to more complex machine listening algorithms that cluster the spectral similarities between different audio files. Then again, like the Geneva freeport, maybe this container—costing an Yves Bouvier-sized fortune in cloud computing resources, with entry-tier prices running around $10k/month—sits on Spotify’s servers completely empty…
Experimental music suffers from a symptom that acousmatic music fully predicted. Acousmatic music created a system in which sound objects are stripped from their context and passed through the organized spatial framework of the acousmonium. Now, music resides in a different spatial framework—a latent spatial framework—decontextualized inside the Docker containers of Bandcamp, Spotify, YouTube, and Facebook, subject to the gaze of machine listening and surveillance capitalism. Even if a musical practice strives to reject the implications of the sound object, it will inherently take on the form of the sound object by consenting to the terms and conditions of any UGC platform. In this regard, Spotify itself might be the most experimental musical artist that exists today, insofar as it continues to both standardize normative listening unto millions of users and propagate the myth of corporate personhood. It is no longer the artists but rather the music sharing platforms, neoliberal in their design, who are the ones composing our listening.
What would experimental music really look like right now? Who’s waging Underground Resistance’s war on “mediocre audiovisual programming?” Experimental music must somehow expose the structures of the hegemonic technologies of the 21st century. Yet we are at a disorienting point in time, because practically our entire world is hegemonically technological. We are unconsciously ruled by the choice architectures of everyday digital interfaces, and thus, maybe it’s simply not enough for the act of deconstruction to take place from within a musical system. What would it mean to exist completely separated from these systems? Perhaps experimental music needs to be hidden—hidden away from all forms of sanctioned physical and digital space, hidden like the freeport art storage units, or like the contents of a Docker container.
STEP I: DISCRIMINATIVE MUSIC————
The connectivity and simultaneity slowly started being graspable, applicable and livable as a system of computational processes 30 years into the digital age; our relationship to our own mind has been altered by the existence of hyperspaces where subjectivity emerges at its own space and time. Yet the subjectivity that the computer software allows us to express and share depends highly on a description of the world as it has to be, rather than the world as it is. Thus, the obsession with the process of idealization of the world as it is, has become the primary basis of the utilitarian promise of digital change.
A tailor made reality, customized for people adhering to majoritarily endorsed ideas, forms a social fabric that is like a homogenous circle. In each pixel of this circle lies the copy of another. The algorithm always manages to come full circle and keep us from experiencing nuance. The lack of nuance makes us bound to repress our sensitivities about the world, and each other. The saturation of self repeating information patterns emerges when the profitability of the information is proved by the scale, the efficiency, the linear growth.
- Elif Sansoy, The Evolution of Digital Intimacy
Actually, my recent obsession with containerization did not start with the Docker container. It began on a trip to Istanbul in November 2019. I was walking near Rumelihisarı, along the European side of the Bosphorus Strait, when suddenly I had a minor out-of-body experience. This small hallucination was triggered by an enormous incoming cargo ship, a monolithic vessel carrying hundreds of real shipping containers, gracefully flowing through the strait from the Black Sea to the Sea of Marmara.
Elif Sansoy—a dear friend, collaborator, and the writer of the text above—was walking next to me. She looked at the ship too and proceeded to tell me: “You know, last summer a cargo ship crashed into the European side after engine failure.” A gentle disruption in the rhythms of everyday globalatinized commerce, a blip in the symbolic pacing of world trade. A little over a year later, the Ever Given, a cargo ship about as long as the Empire State Building is tall, remained stuck in the Suez Canal for six days, causing a short global trade panic.
Perhaps these cargo containers were shipping Picasso paintings from Yves Bouvier’s art bunker to a Sotheby’s auction in the United States. Maybe they were physical proxies for the Docker containers of cyberspace, which normally travel across the canals of Google Cloud and Amazon Web Services delivering containerized aesthetic content in the form of encoded audio files and album metadata. What’s certain is that the containers aboard these two non-functional vessels momentarily existed in what might be described as a state of total disruption. These broken containers began to represent something closer to their underlying material technology, as each one struggled to maintain symbolic singularity with itself.
While continuing down the shore I thought about the sum total of all Docker containers carrying various audio-related paraphernalia on the internet, and I imagined those containers aboard a cargo ship crashing headfirst into the shores of Istanbul. Soon enough, in my mind’s eye, I begin to see a very familiar looking container on the edge of the ship, bisected almost completely down the middle, with one end hanging off the deck, its edges sharply warped outwards. I look closer, and to my surprise I recognize our container from earlier, spectral-seance. But its massive multidimensional space has collapsed onto itself and become totally exposed. Its latent timbral coordinates begin to unravel, and its morbid cybernetic histories begin to decode themselves. The dead come back to life as the people, objects, and sounds once contained in the latent space begin to take on their physical forms once again, falling right out of the container directly into the water.
What do unexpected and disruptive events like the Bosphorus shipwreck or the Suez debacle say about our relationship with the past? In 2014 at the Multimedijalni institut (MaMa) in Zagreb, the late theorist Mark Fisher gave a lecture titled The Slow Cancellation of The Future. The hypothesis presented in this lecture was pretty straightforward: the year 1994 was the exact year in which music history ended, and since then we’ve been conditioned to no longer expect any radical sonic breaks from the past. After 1994, music has instead manifested itself through different types of zombie forms, in which aesthetic content may change, diversify, and grow more specific, but never takes on the form of something completely unexpected and unprecedented. One reading of this phenomenon is that it is a microcosm of something much larger that happened at the very same time. That is, in the mid 90s, the constantly unexpected and chaotic became normalized—a neo-Hegelian end of history ushered in by newfound global economic liberalism.
Somewhere in Elif’s meditations on technology there exists a crossroads: a bridge between Mark Fisher’s ideas about the endless repetition of aesthetics and Ted Nelson’s meditations on the deceptive nature of technological packaging. With this relation in mind, we can allude to the idea of the Docker container as a type of packaging so thinly and abstractly placed around the circumference of digital media platforms that it has appropriated even the most experimental listening practices into acts of complete surplus enjoyment. Is the experimental music that exists on platforms really all that experimental? Or does it merely use the latent timbral space it inhabits to communicate with the dead and continually rehash the nihilistic aesthetics of the past?
Web 2.0’s containerized infrastructure provides something akin to the borders of a nation state. A nation state is an entity that poses as a provider of asylum but instead antagonistically negates itself to act solely as a closed system, excluding the various people and ideas that are unable to be defined by its strictly typed object-oriented structures. Do the sound objects found in Bandcamp’s experimental music section push the borders of what Elif calls the “homogenous circle”? In digital spaces like Bandcamp, music is pristinely and virtuosically packaged in its containerized acousmatic environment. It is, as Elif says, “tailor made” for its surroundings, down to its comprehensive metadata. It fits perfectly within its confines by inhabiting a completely discrete and stateful space. And in no way does it attempt to push past the limits of this surrounding packaging. The sound objects are so crisp, so crystal clear that they resemble the priceless art objects of the freeport storage unit, packaged tightly but seldom if ever seen. The unspoken formats and rules of the platform guide the artist into thinking that perhaps this is the only way sounds must exist. Auditory substance becomes subject by reflecting itself through the fields of an upload form, the interfaces of an audio application, and the solutionism through which one comes to realize a latent parameter set.
Like bulk materials implicitly taking on the commodity form while inside the maritime shipping container, perhaps all sound must take on the sound-object form when confined within the borders of the Docker container. Attempts at making any remotely radical music will backfire, as the container reappropriates its own content in almost every circumstance. Because within its confines, diverse and polyglot aesthetics can quickly become a reinforcement learning model’s most valuable data. What might start in the physical world as an attempt by the artist to experiment, intersect, and collaborate will inevitably take the form of a discriminative exercise in aesthetic taxonomization from within the container, as a model will throw even the most fringe forms of music into a spatial cluster of implicitly decipherable features and labels. Traversing this symbolic musical feature space with a militaristic accuracy, the model then feeds its findings back to its users to refuel the information economy. And throughout this feedback process, extreme aesthetic outliers might be used as reference points by which the model can become an even stronger classifier of sound.
Most importantly, the auditor might unconsciously adapt to the listening practice of the model. The experimental musician’s attempt at forming new potentials for musical aesthetics ends up becoming a surveillance listening exercise. It is therefore the musician seeking “new sounds” who becomes most complicit in strengthening the profit-driven feature vectors, aesthetic labels, and musical taxonomies of a big-data platform by filling up every last stylistic crevice of the Docker container, minimizing the loss functions of hypothetical regression algorithms, and perfectly molding their audio content to the shape of the container’s inner tensor space.
It’s tempting to try and construct solutions to the problems at hand here. But the problem at hand might be the very concept of solutionism itself—a concept so deeply embedded into the meaning of the word “technology” that the two are practically inseparable. A solution is nothing more than the repression of a symptom, a mode of invisibility in which the ergonomic properties of a technology allow its historical and material context to disappear from plain sight. In this regard, music technologies are no different, and if there’s a single solution worth constructing to the problem of searching for new sounds in a time of mass digital surveillance, it is to try and envision what it might mean to design a completely anti-solutional musical technology.
For instance, how do you turn an AI into something anti-solutional? An adversarial attack is an attack on an AI model in which the user tries to trick the model into failing. Adversarial attacks can be implemented in many ways, two prominent examples being in the image and speech recognition domains. Below is a representation of how an adversarial function might lead an image-classifying convolutional neural network into thinking that a kitten is actually an ostrich.
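To make the mechanics concrete, here is a minimal sketch of the fast gradient sign method (FGSM), the textbook form of such an attack. Everything in it is invented for illustration: the “classifier” is a random linear model rather than a convolutional network, the “image” is a random vector, and the perturbation budget is exaggerated so the flip happens in a few lines.

```python
import numpy as np

# Toy FGSM attack. The "classifier" is a random linear model
# standing in for a CNN; class 0 plays the kitten, class 1 the ostrich.
rng = np.random.default_rng(0)

W = rng.normal(size=(64, 2))     # pretend-pretrained weights: 64 "pixels", 2 classes
x = rng.normal(size=64)          # the flattened "kitten" image

def predict(v):
    return int(np.argmax(v @ W))

label = predict(x)               # whatever the model currently says
target = 1 - label               # the class we want to trick it into

# FGSM steps along the sign of the gradient of the target-class margin.
# For a linear model that gradient is simply the weight difference.
grad = W[:, target] - W[:, label]
eps = 1.0                        # exaggerated budget; real attacks keep eps tiny
x_adv = x + eps * np.sign(grad)
```

Against a real CNN the gradient would be obtained by backpropagation, and eps would be kept small enough that the perturbed image still looks like a kitten to a human observer.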
At one point in history, it felt as though experimental music’s role was like that of the adversarial attack. The electronic music communities mentioned earlier—Underground Resistance, early musique concrète in Paris—appeared to be adversarially reworking the tools of warfare in a way that shifted their technological and social purposes into something completely new. Now the meaning of experimental music seems to have inverted: no longer adversarial but rather discriminative, moving towards the goal of a complete and total aesthetic refinement of both audio signal and metadata, such that any given platform’s algorithm can accurately target, find, and label an artist’s music. Experimental music has thus become synonymous with a sort of pinpoint accuracy in the conception of distinct and intersectional sound worlds for purposes of cultural capital. And as aesthetic accuracy increases, so in turn does an obsession with the morbid act of signification. Through astute categorization, what’s actually experimental is consistently kept at bay. Within the walls of the Docker container—a non-place where Bandcamp’s experimental music selection might be shared with the gargantuan machine listening models of Spotify, the natural language processing models of the U.S. military, and the product recommendation systems of Amazon—the album becomes a zombie album, the artist becomes an AI, and the auditor becomes the loss function of a regression algorithm.
That being said, musicians and listeners didn’t become machine learning models overnight. It happened gradually over the last decade or so. During this time, discriminative AI models became integrated into the stacks of various UGC platforms. In the context of music sharing sites, this integration has manifested itself through a blossoming of hyper-refined sonic niches, readily available through music recommendation systems. The BigTable Dockerfile is just one example of how companies like Spotify are known to cluster and map gargantuan amounts of music into relational space by way of their aesthetic similarity, plotting even the smallest crevices of stylistic difference into these feature spaces. These refined markets of musical aesthetics thus accelerate uncontrollably towards an almost Keynesian listening practice that excavates capital out of virtually nothing. The experimental musician then becomes complicit in this machine listening exercise by way of the artist’s search for new and diverse sonic worlds, further helping a normative platform like Spotify map out the manifolds of aesthetic taxonomy into the coordinates of discriminative feature space. Search by tags while on these big music databases: noise experimental DIY drone ambient improvisation free-jazz musique concrète and you’ll probably get what you’re looking for. If you don’t find what you’re looking for, you can rest assured that most models will learn from their errors and bend their tensor space to provide a future user with the correct results.
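The clustering described above can be caricatured in a few lines. The sketch below runs plain k-means over invented two-dimensional “audio feature” vectors; real platforms operate on learned embeddings with hundreds of dimensions, and nothing here reflects any company’s actual pipeline.

```python
import numpy as np

# Toy aesthetic taxonomization: k-means over fake 2-D "audio features".
# Three invented stylistic niches, each a blob of 50 "tracks".
rng = np.random.default_rng(3)
tracks = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
                    for c in ([0, 0], [3, 3], [0, 3])])

k = 3
centroids = tracks[rng.choice(len(tracks), size=k, replace=False)]
for _ in range(20):
    # assign every track to its nearest centroid ...
    dists = np.linalg.norm(tracks[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # ... then move each centroid to the mean of its cluster
    # (keeping the old centroid if a cluster happens to empty out)
    centroids = np.array([tracks[labels == j].mean(axis=0)
                          if np.any(labels == j) else centroids[j]
                          for j in range(k)])
```

Each track ends up with a cluster label, and each centroid becomes a prototype of a “niche”: the discriminative gesture the essay describes, reduced to its arithmetic skeleton.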
But there’s a bit more to AI than clustering and categorizing: there’s also the idea of the dream. And it’s always fascinating how in dreams, the people who we’ve never met in real life—celebrities, authors, dead people—are, for lack of a better phrase, on our side. The roles that they play become pure interpolations between our own thoughts and feelings, solely based on the compiled experiences of our everyday lives. Like the fictitious Peter Osbourne in Hito Steyerl’s dream, or the John Zorn in my own dream, we usually witness these people acting as divine interpreters, or analysts. They often attempt to give us answers to the problems and philosophical quandaries that we experience in real life.
The three people above do not exist in any real world, but they do exist in a dream world. They were completely imagined and rendered by something called a Generative Adversarial Network (GAN). Recall that our Docker container from earlier (spectral-seance) created “latent” timbres: sounds that don’t actually exist yet. These timbres were created by building up a large tensor space based on audio files and interpolating between the space’s coordinates. The photos above were generated from this same process, except instead of audio, this tensor space was built up from images.
The goal of generative modeling is to create something new by providing gargantuan amounts of data—audio, visual, linguistic—and interpolating between this data. Interpolation requires that we conceptualize the data as a sort of multidimensional coordinate space, which in this case consists of tens of thousands of headshots. If designed correctly, the structure of this space is not determined arbitrarily. Instead, it is precisely through the discrimination of data that the space ends up accurately reflecting a sophisticated and high-dimensional aesthetic ordering. Multilayered symbolic relations exist throughout the dimensions of the space, reflecting both the physiological and phenomenological structure of thought. Whether it’s dreaming or telling a story or trying to come up with a provocative idea, thinking is a spatial act. It is spatial like the topographies of neuron pathways within our brain, or like the load balancing algorithms performed across a warehouse of cloud servers. It’s no coincidence that the GPU—the chip used to train most neural networks—is the same chip used to render video games and visual media, as the multidimensionality of the task at hand becomes the task at hand, and the problem of making something new becomes a problem of mapping, modeling, and understanding complex topologies.
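The interpolation at the heart of this process can be sketched directly. The 512-dimensional latent codes below are random stand-ins (512 is a size that StyleGAN-like models commonly use, but the number is an assumption here); in a real system each point on the path would be decoded into an image, or a timbre.

```python
import numpy as np

# Walking a straight line through a latent space. The codes are
# invented; a generator network (not shown) would render each one.
rng = np.random.default_rng(1)

z_a = rng.normal(size=512)    # latent code for "face A"
z_b = rng.normal(size=512)    # latent code for "face B"

def lerp(z0, z1, t):
    """Straight-line interpolation: t=0 gives z0, t=1 gives z1."""
    return (1.0 - t) * z0 + t * z1

# five evenly spaced points between the two codes
path = [lerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 5)]
```

In practice, straight-line interpolation is often swapped for spherical interpolation (slerp), since Gaussian latent codes concentrate on a thin shell and the straight line cuts through low-probability territory; that refinement is omitted here for brevity.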
There are many different types of generative AI models, but the GAN is a rather strange model. It is one part generator, and one part discriminator. And the way it learns to generate content is by playing a bizarre game with itself in which its generator tries to fool the discriminator by repetitively and ouroborically quizzing itself with both real and fake content. Over the course of the training process, it learns from its mistakes and slowly gets better and better at generating believably real content—in this case, human faces.
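The two-player game can be shown end to end in a deliberately tiny setting. The model below is not a face GAN: the “data” are one-dimensional numbers near 4.0, the generator is a single affine map, the discriminator a single logistic unit, and all gradients are derived by hand. Its only purpose is to make the generator-versus-discriminator loop visible.

```python
import numpy as np

# A toy GAN. Real data ~ N(4, 0.5); generator g(z) = a*z + b;
# discriminator D(x) = sigmoid(w*x + c). Each side climbs its own
# objective: D learns to tell real from fake, G learns to fool D.
rng = np.random.default_rng(4)

a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr = 0.05

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

for step in range(4000):
    real = rng.normal(4.0, 0.5, size=64)   # samples G must learn to imitate
    z = rng.normal(size=64)
    fake = a * z + b

    # discriminator ascent on  log D(real) + log(1 - D(fake))
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # generator ascent on  log D(fake), i.e. fooling the critic
    s_f = sigmoid(w * fake + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

# the generator's output distribution should have drifted toward the data
fake_mean = float(np.mean(a * rng.normal(size=10_000) + b))
```

Over training, the generator’s offset b drifts from 0 toward the real data’s mean, not because it ever sees the data directly, but because the discriminator’s gradient keeps telling it which way “more believable” lies.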
In the spirit of the Data Thief, let’s attempt to make an archeological dig. If we excavate the spaces of discriminative learning, there’s not much more than what meets the eye—a repressed concave network of relations between entities that share similar features. But if we dig into the GAN’s auto-encoded latent space, we find some strange artifacts. Its ability to convexly create and resurrect histories points to a potential for some rather mysterious outputs. Through an understanding of the model’s mathematical structures, extreme contradictions can coexist and histories can intersect. These cybernetic tensors can then be used to synthetically add and subtract aesthetics from any given section in the space. If we continue with our set of headshots, we are able to add and subtract glasses, multiply by hair length, and take the dot product of skin color, totally virtualizing almost all aspects of human semiotic perception. Play with these features even more and we begin to find small disruptions. Latent space then becomes a sort of Tarkovsky-esque “zone” where the predictable patterns of everyday life simply don’t make sense anymore.
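This “latent arithmetic” is literally vector arithmetic. In the sketch below the attribute directions are random stand-ins; a real pipeline would estimate them from labeled examples, for instance as the mean latent code of faces with glasses minus the mean of faces without.

```python
import numpy as np

# Latent arithmetic with invented vectors. Nothing here comes from a
# trained model; the directions are hypothetical placeholders.
rng = np.random.default_rng(2)

z_face = rng.normal(size=512)       # some face's latent code
v_glasses = rng.normal(size=512)    # hypothetical "glasses" direction
v_hair = rng.normal(size=512)       # hypothetical "hair length" direction

# add glasses, subtract a bit of hair length
z_edited = z_face + 1.5 * v_glasses - 0.5 * v_hair

def cos(u, v):
    """Cosine similarity: how much u points along v."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

before = cos(z_face, v_glasses)     # near zero for a random code
after = cos(z_edited, v_glasses)    # now clearly positive
```

The edited code measurably “contains glasses” in a way the original did not; decoded through a generator, that shift is what surfaces as an altered face, and pushing such coefficients to extremes is precisely where the small disruptions of the “zone” start to appear.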
Sound has a very different informatic framework from visual data. In a strictly physical context, the act of making sound results in the generation of vibrations in the air that die away after a few seconds. Immediately we are dealing with a type of signal that is comfortable with the idea of its own mortality. But somewhere along the way, the advent of recording technology provided a sort of coffin or crypt for people to hold onto these sonic messages as fixed semiotic objects. Electronic music systems of the 20th century like Bayle’s acousmonium represented a sort of platform through which the user could tell stories using these sonic carcasses. And in the 21st century, these datafied histories are coming back to haunt us.
The above clip was generated with OpenAI’s Jukebox model. Jukebox is trained on 1.2 million tracks of recorded music. One interesting way you can generate audio from the model is by prompting it with a short audio file of your own. The model will then map that audio to a point in latent space and interpolate between nearby aesthetic features to continue generating content for the rest of the song. I prompted the model with 15 seconds of FM synth demos and let the model attempt to fit these demos into Jukebox’s latent space.
However, I also did something a bit out of the ordinary. Using a V100 GPU, it takes about two days to fully generate an audio file from this model. But I stopped the generation process at about 10 hours. What we get as a result is a sort of cybernetic debris, just like the small disruptions in the headshots above—something not fully formed, a little bit broken, and extremely vulnerable in its technologically non-functional state. Through a technique such as this one, the latent space itself is sonified instead of its coordinates—the system instead of its data, the form instead of the content. The histories of these sound corpuses are once again resurrected, and the musical vibe shifts intermittently as the space channels the dead artists, sounds, and aesthetics.
What do you hear in the noise? Maybe you hear something familiar. The GAN can act as a warped mirror in that sense, a sort of “null space” where the feature vectors in your mind and the feature vectors in the latent space cancel each other out to zero. The question of space has held up well since François Bayle envisioned his version of spatialized electronic music in the 1960s. But now, in our current epoch, the question begins to concern a different sort of space. The sound object that a GAN produces gives way to something that is not actually a sound object. It is instead something one step further removed from the sound object. It is a latent sound object. And through this latent sound object, the vibrations of the past begin to haunt us in new ways—ways that intersect with the surveillance of our lives, both as they exist on UGC platforms and as they map our strangest fantasies and fixations into the manifolds of a discriminative topology.
We might say that for the first time since the 90s, technology has mutated into a form through which there exists a chance at deciphering the encrypted keys to a possible future for music. The hauntological properties of the GAN might even hold a hidden mapping, composably pointing towards Mark Fisher’s historical feedback loop. Indeed, hauntology is the one and only thing that the GAN relies on, depending solely on the discrimination of fragments from the past in order to create new aesthetic spaces. Musical zombie forms cyclically re-occur within the cybernetic, and the repetition of history triumphs, propelling the endless growth of both contemporary music and neoliberalism forward. It is thus tempting to view the GAN as a symbol of a post-colonial aesthetic ideology—one part discriminator and one part generator—resembling a totalizing feedback loop of virtualization and appropriation.
But in reality, there actually isn’t that much to fear in the GAN, because the GAN is one of those uniquely anti-solutional pieces of technology. One would still be forgiven for feeling as though OpenAI’s Jukebox represents an intuitively frightening totem—a convex representation of how solutional technologies listen to the world. However, maybe the GAN can be deconstructed in the same way that the military technologies of the 20th century were sonically reworked. In an effort to figure out what exactly experimental music has the potential to become in the 21st century, would it be possible to use music’s latent space to confront the past and attempt to move on? The process might be, at its worst, a seance resurrecting the repressed aesthetic nightmares of history, perpetuating an endless repetition of technocratic neoliberalism. But at its best, through a reappropriation of the technology, this process could serve as a path towards a new adversarial potential for not only sonic organization, but maybe even societal organization. By way of an analytic session with the GAN, it could potentially act as an undoing of the preconceived notions—colonial notions, monetary notions—of the artist’s place in a world stuck in 1994.
STEP II: SINTHOMATIC MUSIC————
Soon after I visited Istanbul at the end of 2019, Elif became increasingly invested in digital intimacy, a field of research dealing with the phenomenology of digital interface. On a weekly basis we would catch up on video calls between Jackson Heights and Kadıköy, during which I would receive alt apps, underground social media platforms, and bizarre websites that provide spaces where people can find solace from the normative platforms that make up the majority of the internet. Some notable examples include iOS bot therapists, a Chrome browser extension that turns cookie acceptance into a video game, and reissued DIY literature from the days of homebrew computing, like Ted Nelson’s Computer Lib/Dream Machines.
One of our favorite works of art that tackles the issue of digital intimacy is called Serious Games, an installation by video artist Harun Farocki. Throughout the four parts of the film, a single piece of technology is used in two radically different scenarios. In Serious Games I: Watson is Down, Farocki films a US Marine drill in which 3D video game simulations of combat in Afghanistan are shown on one screen, while the other screen depicts a room full of US soldiers playing these same computer games. These durational shots are later juxtaposed with Serious Games III: Immersion, in which a workshop organized by the Institute for Creative Technologies experiments with the use of VR as a therapeutic device for former soldiers suffering from post-traumatic stress. In this part of the film, various soldiers put on a VR headset and begin to relive their wartime traumas by sharing their stories with a therapist, while in real time, the therapist places 3D assets into the virtual world that correspond to the narratives. Assets that are shown in the film include IEDs, gunshot sounds, enemies, coke cans, dog carcasses, and other fragments of memory that the patient describes in the moment. Of course, the central irony of Farocki’s film is that these 3D vistas and assets that help soldiers recover are the very same ones used by the U.S. government to train soldiers for combat in the first place. Similar to the GAN, they represent both a discriminative and a generative side of the same technology. Farocki’s film can be seen as an experimental exploration of early VR, as it takes apart both the concave and convex structures of its underlying technological practice completely from within the medium.
In 2021, we began sketching out a similar analytic process involving intimacy and immersion within digital space. But instead of immersing the subject inside a replica of an already existing space—in Farocki’s case, a completely synthetic Afghanistan—we wanted to immerse the subject in a latent space. We wanted to use latent space not as a solution, but as a potential through which something like music history can come face to face with its contemporary symptoms and begin to slowly heal itself. This path to recovery, a literal seance, came to be known as the sinthomatic process. Through a series of performances, lectures, installations, and impossibly unrealizable ideas, we have begun to slowly produce a body of work.
In order to talk about the sinthomatic process, it’s necessary to revisit the question of space and topology once more, not only as we already did in the context of the GAN, or the acousmonium, but rather as it relates to the field of psychoanalysis. The sinthome in “sinthomatic” is a term borrowed from one of Jacques Lacan’s seminars—a sort of old French way of writing the word “symptom,” as well as a nifty play on words (Synthetic-Artificial Man, Saint Homme, SinThomasAquinas, and perhaps in our case Synthesized Sound). In Lacanian psychoanalysis, a patient’s symptoms are crucial to our general understanding of concepts such as the new, the unexpected, and the disruptive. Popular Lacan interpreter Slavoj Žižek summarizes this connection in his book The Sublime Object of Ideology, shedding light onto our earlier analysis of industrial maritime disasters. He doesn’t mention cargo ships like in the Bosphorus or the Suez, but he does talk about an event that most people are familiar with…
Th[e] dialectics of overtaking ourselves towards the future and simultaneous retroactive modification of the past - dialectics by which the error is internal to the truth, by which the misrecognition possesses a positive ontological dimension - has [...] its limits; it stumbles on to a rock upon which it becomes suspended. This rock is of course the Real, that which resists symbolization: the traumatic point which is always missed but none the less always returns, although we try - through a set of different strategies - to neutralize it, to integrate it into the symbolic order. In the perspective of the last stage of Lacanian teaching, it is precisely the symptom which is conceived as such a real kernel of enjoyment, which persists as a surplus and returns through all attempts to domesticate it, to gentrify it (if we may be permitted to use this term adapted to designate strategies to domesticate the slums as ‘symptoms’ of our cities), to dissolve it by means of explication, of putting-into-words its meaning…
...the wreck of the Titanic made such a tremendous impact not because of the immediate material dimensions of the catastrophe but because of its symbolic overdetermination, because of the ideological meaning invested in it: it was read as a ‘symbol’, as a condensed, metaphorical representation of the approaching catastrophe of European civilization itself. The wreck of the Titanic was a form in which society lived the experience of its own death, and it is interesting to note how both the traditional rightist and leftist readings retain this same perspective, with only shifts of emphasis. From the traditional perspective, the Titanic is a nostalgic monument of a bygone era of gallantry lost in today’s world of vulgarity; from the leftist viewpoint, it is a story about the impotence of an ossified class society.
In the final years of his life, Lacan began to focus his work towards a sort of “universalization of the symptom.” This focus, according to Žižek, became so extreme that the symptom practically ended up becoming his answer to the most fundamental philosophical questions, like why there is even “something” instead of “nothing.” In other words, the symptom itself existed for Lacan as a kernel embedded so deeply into society and language that it must be recognized entirely as its own systematic entity.
Here is where the sinthome comes in, precisely Lacan’s term for this embedded kernel. Whereas a ‘symptom’ acts as a sort of isolated occurrence—a void signifier that the subject latches onto through fantasy and fixation—the sinthome is symptom as it exists beyond fantasy. It represents the encoded symptom as it approaches the subject from the dimension of the future, eventually able to be decoded through language and signifiers, yet still persistent as symptom nonetheless. The sinthome is, in a sense, a space that links together our symptoms as they exist decontextualized from fantasy.
As we exist with one foot in physical space and one foot in cyberspace, we have in a very real sense come face to face with the sinthome while not even being aware of it. And since Lacan’s death in 1981, one could potentially say that our collective idea of what is symptom and what is sinthome has radically shifted. Somewhere along the way—perhaps between the birth of the internet and the integration of computer modeling into macroeconomics—we have normalized and digitized the symptoms of 20th century capitalism, its bygone violent fantasies included, while still endlessly experiencing its sinthomatic zombie forms from within digital space. Most importantly, we still hear these fantasies of the 20th century in various symptomatic sound objects, perhaps as an array of endlessly orbiting musical debris surviving inside the semiotic worlds of drum machines, Max patches, tape recorders, and proprietary streaming services.
But these 20th century symptoms now exist in another space, one that is beyond fantasy—more specifically, a sinthomatic latent space. Lacan claims in Seminar XXIII: Le Sinthome that in order to make the conceptual leap necessary to conceive of the sinthome, he spent two months trying to solve a seemingly random topological proof that consisted of tying four trefoil knots together into a Borromean structure. He was convinced it was impossible, until two mathematicians, Soury and Thomé (pay attention to these names: S-et-Thomé), came to show him that it was indeed completely possible and soundly provable. This fourth trefoil knot came to represent the sinthome, the gray ring in the above drawing.
Why did Lacan put so much emphasis on topology in his seminar? Perhaps he wanted to demonstrate that in order for both the analyst and the patient to break free from the symptomatic and repetitive cycles of thought, a reckoning with the inner mathematical structures of any system is necessary. By performing topological surgery and attempting to craft a proof concerning the structures underlying the unconscious, Lacan was able to envision the sinthome. And in turn, the patient then might be able to learn what to do with their symptoms by partaking in a similar exercise during an analytic session. Both topology and Lacanian psychoanalysis point to the conclusion that there’s more to an object’s structure than what meets the eye. Imagine briefly a donut and a coffee cup: according to the axioms of topology, they are the same object, because one can be transformed into the other without breaking or tearing its abstract material. In the exact same way, the topological structure of a psychoanalytic symptom might appear to the subject as a fear of bugs, or as a sexual fantasy, but both nonetheless might point to the same topologically equivalent sinthome.
Attempt once more to envision this same process using the topology of an AI system’s latent space. In an abstract sense, the advent of the GAN represents a bizarre new point in history in which an artist is able to come face-to-face with the sinthomatic structure behind their aesthetic symptoms. The GAN in particular forms a certain topology, one which has the potential to model an artist’s latent sound objects, given enough data. Similar to Lacan’s dimension of the Real, these latent sound objects exist in the future: beyond an artist’s decipherable fantasies, yet strangely enough also deeply embedded in the zombified and datafied aesthetics of the past. Using our example of the headshots, we’ve seen how it is possible to warp this inner topology just as the analyst warps the topology of the subject’s relationship with language. By digging into the actual mathematics of this topology and reckoning with its power to decontextualize, the artist just like the patient is able to learn what to do with their symptoms by seeing the underlying latent tensors, matrices, and vectors that exist in the system. And it is only at this point—the point at which the sinthomatic properties of the system become visible—that the GAN stops being something hauntological and repetitive, and instead turns into something adversarial.
In Learning What To Do With Your Sinthome Instead of Enjoying It / The Evolution of Digital Intimacy, both Elif and I try to craft a series of sounds, words, and images through which this adversarial state of anti-solution can be used as a potential for reimagining the structures of the world. We try to reappropriate the sinthome in order to differentiate our research from other music and visual art that might otherwise use AI as a means of solving an artistic problem. Furthermore, this sinthomatic process must always attempt to exist as an anti-solutional approach to the use of not just the GAN, but perhaps all technologies used in the compositional and artistic process. It must manifest itself not as a coordinate in space, but as a rip into the fabric of space itself, like the sliced container hanging off of the Bosphorus cargo ship. It must cut the GAN’s cyclical repetition of history at the point at which it trains itself, deconstructing the hauntological manifolds of sound by sonifying and exposing the raw mathematical spaces in which they live.
STEP III: ENCRYPTED MUSIC————
Where there is a single internet packet traveling to an Amazon Echo, here we can imagine a single cargo container. The dizzying spectacle of global logistics and production would not be possible without the invention of this simple, standardized metal object. Standardized cargo containers allowed the explosion of the modern shipping industry, which made it possible to model the planet as a massive, single factory. In 2017, the capacity of container ships in seaborne trade reached nearly 250,000,000 dead-weight tons of cargo, dominated by giant shipping companies like Maersk of Denmark, the Mediterranean Shipping Company of Switzerland, and France’s CMA CGM Group, each owning hundreds of container vessels. For these commercial ventures, cargo shipping is a relatively cheap way to traverse the vascular system of the global factory, yet it disguises much larger external costs.
In recent years, shipping boats produce 3.1% of global yearly CO2 emissions, more than the entire country of Germany. In order to minimize their internal costs, most of the container shipping companies use very low grade fuel in enormous quantities, which leads to increased amounts of sulphur in the air, among other toxic substances. It has been estimated that one container ship can emit as much pollution as 50 million cars, and 60,000 deaths worldwide are attributed indirectly to cargo ship industry pollution related issues annually. Even industry-friendly sources like the World Shipping Council admit that thousands of containers are lost each year, on the ocean floor or drifting loose. Some carry toxic substances which leak into the oceans. Typically, workers spend 9 to 10 months at sea, often with long working shifts and without access to external communications. Workers from the Philippines represent more than a third of the global shipping workforce. The most severe costs of global logistics are borne by the atmosphere, the oceanic ecosystem and all it contains, and the lowest paid workers.
- Kate Crawford, Anatomy of an AI System
But it is through the anti-solutionism inherent in the sinthomatic process that the subject is transported back into an anthropocenic reality. In the above quote, the shipping container once again appears in its deconstructed form, a Lacanian point de capiton between the physical container and the digital container: AI ethicist Kate Crawford describes the surplus of cargo as it lies leaking toxic waste at the bottom of the ocean. In her paper, Crawford shows the various symptoms of capital’s dialectical manifestations by including a detailed diagram of both the physical and digital supply chains involved in the manufacturing of the Amazon Echo. These defunct shipping containers that once stored electrical and computational components are just one part of this massive dialectical structure.
What shall we call this container at the bottom of the ocean floor? It’s certainly not a digital container, despite potentially containing digital components. It might be physical in its form but certainly not in its function. It’s similar to the containers idling in the Suez Canal or on the shores of the Bosphorus, but it remains motionless for much longer than a week or a month. It is rather a residue of the disruption, left to deteriorate and sublimate beyond its meaning. How does ocean life interpret this shipping container? Obviously, aquatic life doesn’t perceive a shipping container the way people do; the shipping container is merely a human invention, envisioned and maintained systematically in the collective consciousness. Outside of human spaces, the container is stripped of its meaning. For a pod of whales passing by, or for aquatic bacterial life, its context is totally meaningless. We could say that the cybernetic semblance of the shipping container is bottlenecked, encoded, and encrypted into its surrounding space.
This is fitting, because the shipping container owes its entire existence to its phenomenal ability to decontextualize. Decontextualization was pertinent not only in the digital realm but also in the physical one: in the past, the advent of the shipping container posed a grave danger to longshoremen’s unions. At the bottom of the ocean, the shipping container undergoes a decontextualization one step further removed from its own. Its original meaning hides not in its ergonomic nature, but in its absolute meaninglessness, cybernetically encrypted from the surrounding aquatic life.
The implications of the shipping container are not far off from the ways that the late anarchist and anthropologist David Graeber interprets the genealogy of money. In his book Debt: The First 5,000 Years, Graeber traces the roots of money back to the origin of human economies. Through rigorous anthropological work, he makes the claim that it was in fact the human that became the first object that represented a truly unpayable debt within societal and tribal credit systems. And by envisioning the human as a decontextualized object—stripped from their relational, cultural, and familial environment—masses of people could now be seen as objects of fixed value, manifesting first through the birth of patriarchal society in which men used women as objects of credit, and eventually leading to giant global credit systems such as the transatlantic slave trade.
In this sense, human economies eventually sublimated towards the advent of currency, channeling the human’s capability to be alienated from their context through the commodity object’s decontextual form. It was at this moment that money came to be: value completely stripped from its context. In the modern world, slave labor and human economies still exist everywhere, a notable example being the mining of rare earth metals needed to produce CPUs and GPUs. And in a broader sense, traces of the human economy exist—on a microcosmic scale—in the lives of those who are part of the global workforce, insofar as we must periodically and temporarily alienate part of our lives for the sake of receiving decontextual credit through physical labor.
Do these traces also exist in the stacks of the internet? The above illustration—originally from a paper by Michael Bergman—was recently cited in an article written by Caroline Busta about the various structures underlying the web. In the article, she defines the term clearnet, which alludes to the surface-level, “observable” internet. Busta informs us that in the U.S. the clearnet runs on the GAFA stack (Google, Apple, Facebook, Amazon), whereas in other countries the clearnet might run on a different stack—China’s is BAT (Baidu, Alibaba, Tencent). In the illustration, the clearnet represents the upper surface of the ocean, a surface on which content is shipped around internationally on massive container orchestration vessels (Google’s Kubernetes, Docker’s Swarm).
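The digital containers shipped on these orchestration vessels each begin life as a Dockerfile, a short blueprint that decontextualizes an application from the machine it runs on. A minimal sketch, assuming a hypothetical Python service (the base image, app.py, and requirements.txt are placeholders, not any particular platform’s setup):

```dockerfile
# Start from a standardized base image: the empty, certified container.
FROM python:3.12-slim

# Copy the application inside, stripped from its original filesystem context.
WORKDIR /app
COPY . /app

# Seal the dependencies into the container itself.
RUN pip install --no-cache-dir -r requirements.txt

# Expose a single port: the container's only interface with the outside world.
EXPOSE 8000

# The command that runs when the container reaches its destination and is opened.
CMD ["python", "app.py"]
```

Each instruction adds a sealed, content-addressed layer; like the standardized cargo container, the finished image can be loaded onto any vessel (a laptop, a Kubernetes cluster, a Swarm) without regard for what is inside.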
It’s tempting to find an “alternative” space somewhere beyond this ocean, somewhere experimental music can reach its full expressive and imaginative potential. I often find myself going back to the image of the shipwrecked vessel. I’m imagining a Docker container from Spotify falling off one of these cargo ships and sinking to the bottom of this digital ocean, all the way down to the dark web. It’s leaking industrial waste. The surrounding aquatic life is unfazed by its encrypted aesthetic content. Can we call this space a true musical asylum?
Three years ago I had another dream about load balancing, another parasocial encounter, not with John Zorn but with a different improviser and composer. This dream didn’t happen in any latent space; it happened in physical space—a real event, retroactively modified by my own neurological load balancing. While on a break from teaching an electronic music class in Paris, I once took a personal pilgrimage forty-five minutes by train outside of the city to the town of Reims. My main intention for the visit was to see the Notre Dame Cathedral, where composer Guillaume de Machaut premiered his polyphonic setting of the Latin mass around 1365, said to be the first notated polyphonic setting of the mass ordinary by a single composer.
Machaut was a huge influence on me as an improvising violinist, especially as I was just starting to perform frequently in my hometown of Brooklyn. In the late 2010s I ended up involved in a number of underground music circles, playing violin sets at various basement shows and punk houses. At the time, my artistic practice was concerned with trying to find a sort of crossroads between New York noise/no-wave music and the music of French medieval/renaissance composers. For whatever reason, these musical milieus both resonated with me: both were fleeting, somber, poorly archived, and seemingly independent from the infectious systems of thought established during the European Enlightenment.
I spent about three hours total inside Reims Cathedral. Yet in a disruptive spatiotemporal occurrence, I couldn’t find a single artifact concerning Machaut or his mass. Nothing existed, at least not in my immediate surroundings. There were blurbs about Chagall’s stained glass windows, even about Messiaen’s organ improvisations, but nothing about Machaut.
I walked around a slightly drizzling Reims all day and started to feel like I was coming down with a cold. On the train back to Paris, I tried Googling Machaut to find more information about his position at the cathedral. That’s when I witnessed something very odd.
I went back to my flat where I reflected upon the day by listening to the Credo from Machaut’s mass. I thought about Machaut’s ability to encrypt himself into the fabric of both physical and digital space.
Two years later I checked whether Machaut still subverted Google’s SEO and whether his mistaken digital identity remained. It’s often posited that both the Middle Ages and the Renaissance were epochs of European history in which signified meaning was incredibly loose, times in which linguistic and symbolic signifiers were objects not of taxonomy but of poetry—Michel Foucault develops this argument in the first part of his history of epistemology, The Order of Things. The early internet was an epistemologically similar time. To be mapped into digital systems, language first had to be completely broken down and then codified. And even from the grave, in a time of violently enhanced discrimination and codification of data, Machaut is able to implement an encryption algorithm on his own identity.
His music, too, remains a bizarre open-source project, with early music scholars and musicians attempting to resurrect accurate sound objects, yet constantly failing. Historical performance ensembles of the 20th century once erroneously interpreted this encrypted counterpoint, trying to resurrect its encoded timbres using contemporary performance practices and classical-era instruments. More musicologically experimental groups like Ensemble Organum and Ensemble Musica Nova now treat the work with greater awareness of its contextual, cultural, and temporal complexity. But a mysterious barricade still exists between these interpretations and the original signal.
Personally, I think Machaut is curating his SEO from the grave. I’m convinced that he’s purposely keeping Antonio Vivaldi at the top of Google's search results in an effort to keep his identity totally encrypted, poetically distributed within the memetic space without being subsumed by its gaze. I’ve tried to channel Machaut from within latent space a few times and I can never get it quite right.
My love and gratitude goes out to Elif Sansoy, Buse Çetin, Dominic Coles, Sam Tarakajian, Marija Kovačević, and Isabelle Galet-Lalande.