
From the Machine: Computer Algorithms and Acoustic Music

Western music has made use of rule-based compositional techniques for centuries, but with the advent of realtime computing and modern networking technologies, new possibilities can be imagined. A composer’s live data input can work in concert with conditional, aleatoric, probabilistic, and pre-composed materials to produce what might be called a “realtime composition” or an “interactive score” for acoustic musicians in live performance.

Written By

Joseph Branciforte

The possibility of employing an algorithm to shape a piece of music, or certain aspects of a piece of music, is hardly new. If we define algorithmic composition broadly as “creating from a set of rules or instructions,” the technique is in some sense indistinguishable from musical composition itself. While composers prior to the 20th century were unlikely to have thought of their work in explicitly algorithmic terms, it is nonetheless possible to view aspects of their practice in precisely this way. From species counterpoint to 14th-century isorhythm, from fugue to serialism, Western music has made use of rule-based compositional techniques for centuries. It might even be argued that a period of musical practice can be roughly defined by which musical parameters it derives axiomatically and which it leaves open to “taste,” serendipity, improvisation, or chance.

A relatively recent development in rule-based composition, however, is the availability of raw computational power capable of millions of calculations per second and its application to compositional decision-making. If a compositional process can be broken down into a specific enough list of instructions, a computer can likely perform them—and usually at speeds fast enough to appear instantaneous to a human observer. A computer algorithm is additionally capable of embedding non-deterministic operations such as chance procedures (using pseudo-random number generators), probability distributions (randomness weighted toward certain outcomes), and realtime data input into its compositional hierarchy. Thus, any musical parameter—e.g. harmony, form, dynamics, or orchestration—can be controlled in a number of meaningful ways: explicitly pre-defined, generated according to a deterministic set of rules (conditional), chosen randomly (aleatoric), chosen according to weighted probability tables (probabilistic), or continuously controlled in real time (improvisational). This new paradigm allows one to conceive of the nature of composition itself as a higher-order task, one requiring adjudication among ways of choosing for each musically relevant datum.
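
To make this concrete, here is a minimal sketch in Python, in which a single parameter, the dynamic marking of a phrase, is determined by each of the five strategies in turn. The function names and musical values are my own invention, not any particular system’s:

```python
import random

DYNAMICS = ["pp", "p", "mp", "mf", "f", "ff"]

def predefined():
    """Explicitly pre-defined: the value is fixed in advance."""
    return "mf"

def conditional(previous):
    """Deterministic rule: rebound toward the middle after an extreme."""
    return "mp" if previous in ("pp", "ff") else previous

def aleatoric():
    """Chance procedure: uniform choice via a pseudo-random generator."""
    return random.choice(DYNAMICS)

def probabilistic():
    """Weighted probability table: quiet markings favored over loud ones."""
    return random.choices(DYNAMICS, weights=[4, 4, 4, 1, 1, 1], k=1)[0]

def improvisational(fader):
    """Realtime input: map a live fader position (0.0-1.0) to a marking."""
    return DYNAMICS[min(int(fader * len(DYNAMICS)), len(DYNAMICS) - 1)]

# The higher-order compositional task: deciding, parameter by parameter,
# which of these modes of choosing applies.
print(predefined(), conditional("ff"), aleatoric(),
      probabilistic(), improvisational(0.8))
```

The same five modes could of course govern harmony, form, or orchestration just as readily as dynamics.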

Our focus here will be the application of computers toward explicitly organizational, non-sonic ends.

Let us here provisionally distinguish between the use of computers to generate/process sound and to generate/process compositional data. Computers themselves, it is true, make no such distinction, but drawing it will allow us to bracket questions of digital sound production (synthesis or playback) and digital audio processing (DSP) for the time being. There is little doubt that digital synthesis, sampling, digital audio processing, and non-linear editing have had—and will continue to have—a profound influence on music production and performance. Yet it is my sense that these areas have tended to dominate discussions of the musical uses of computers, overshadowing the ways in which computation can be applied to questions of compositional structure itself. Our focus here will therefore be the application of computers toward explicitly organizational, non-sonic ends; we will be satisfied leaving sound production to traditional acoustic instruments and human performers. (This, of course, requires an effective means of translating algorithmic data into an intelligible musical notation, a topic which will be addressed at length in next week’s post.)

Let us further distinguish between two compositional applications of algorithms: pre-compositional use and performance use. Most currently available and historical implementations of compositional data processing are of the former type: they are designed to aid in an otherwise conventional process of composition, where musical data might be generated or modified algorithmically, but is ultimately assembled into a fixed work by hand, in advance of performance.[1]

A commonplace pre-compositional use of data processing might be the calculation of a musical motif’s retrograde inversion in commercial notation software, or the transformation of a MIDI clip in a digital audio workstation using operations such as transposition, rhythmic augmentation/diminution, or randomization of pitch or note velocity. On the more elaborate end of the spectrum, one might encounter algorithms that translate planets’ orbits into rhythmic relationships, prime numbers into harmonic sequences, probability tables into melodic content, or pixel data from a video stream into musical dynamics. Given the temporal disjunction between the run time of the algorithm and the subsequent performance of the work, a composer can audition such operations in advance, selecting, discarding, editing, re-arranging, or subjecting materials to further processing until an acceptable result is achieved. Pre-compositional algorithms are thus a useful tool when a fixed, compositionally determinate output is desired: the algorithm is run, the results are accepted or rejected, and a finished result is inscribed—all prior to performance.[2]
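
As a sketch of the simpler end of this spectrum, the following Python fragment applies a few of the operations named above to a motif represented as (MIDI pitch, duration, velocity) triples. The representation and function names are illustrative only, not any notation program’s actual API:

```python
import random

# A motif as (MIDI pitch, duration in beats, velocity) triples.
motif = [(60, 1.0, 80), (64, 0.5, 72), (67, 0.5, 72), (72, 2.0, 96)]

def retrograde_inversion(notes, axis=60):
    """Reverse the order of the notes and mirror each pitch around an axis."""
    return [(2 * axis - p, d, v) for (p, d, v) in reversed(notes)]

def transpose(notes, semitones):
    """Shift every pitch by a fixed interval."""
    return [(p + semitones, d, v) for (p, d, v) in notes]

def augment(notes, factor=2.0):
    """Rhythmic augmentation: scale every duration by a factor."""
    return [(p, d * factor, v) for (p, d, v) in notes]

def randomize_velocity(notes, spread=10):
    """Perturb each velocity within +/- spread, clamped to the MIDI range."""
    return [(p, d, max(1, min(127, v + random.randint(-spread, spread))))
            for (p, d, v) in notes]

# Chain transformations, audition the result, accept or re-run at leisure.
print(randomize_velocity(augment(transpose(retrograde_inversion(motif), 5))))
```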

It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance.

With the advent of realtime computing and modern networking technologies, however, new possibilities can be imagined beyond the realm of algorithmic pre-composition. It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance. A composer might trigger sections of a musical composition in non-linear fashion, use faders to control dynamic relationships between instruments, or directly enter musical information (e.g. pitches or rhythms) that can be incorporated into the algorithmic process on the fly. Such techniques have, of course, been common performance practice in electronic music for decades; given the possibility of an adequate realtime notational mechanism, they might become similarly ubiquitous in notated acoustic composition in the coming years.
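
A minimal sketch of such live input might look like the following, which listens for plain-text messages over a UDP socket. The port and message format here are invented for illustration; in practice one might use OSC, MIDI, or a custom protocol:

```python
import socket

# Shared performance state, to be read by the notation/playback layer.
state = {"section": "A", "faders": {}}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9000))  # hypothetical local port

while True:
    data, _ = sock.recvfrom(1024)
    parts = data.decode().split()
    if parts[0] == "section":          # e.g. "section C": non-linear form
        state["section"] = parts[1]
    elif parts[0] == "fader":          # e.g. "fader violins 0.65": balance
        state["faders"][parts[1]] = float(parts[2])
    print(state)                       # hand off to the realtime score here
```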

Besides improvisational flexibility, performance use of compositional algorithms offers composers the ability to render aleatoric and probabilistic elements anew during each performance. Rather than being frozen into fixed form during pre-composition, such variables retain their fundamentally indeterminate nature, producing musical results that vary with each realization. By precisely controlling the range, position, and function of random variables, composers can define sophisticated hierarchies of determinacy and indeterminacy in ways that would have been unimaginable to the early pioneers of aleatoric and indeterminate composition.

Thus, in addition to strictly pre-compositional uses of algorithms, a composer’s live data input can work in concert with conditional, aleatoric, probabilistic, and pre-composed materials to produce what might be called a “realtime composition” or an “interactive score.”

We may, in fact, be seeing the beginnings of a new musical era, one in which pre-composition, generativity, indeterminacy, and improvisation are able to interact in heretofore unimaginable ways. Instances in which composers sit alongside a chamber group or orchestra during performance, modifying elements of a piece such as dynamics, form, and tempo in real time via networked devices, may become commonplace. Intelligent orchestration algorithms equipped with transcription capabilities might allow a pianist to improvise on a MIDI-enabled keyboard and have the results realized by a string quartet in (near) real time. A musical passage might be constructed by composing a fixed melody along with a probabilistic table of possible harmonic relationships (or, conversely, by composing a fixed harmonic progression with variable voice leading and orchestration), creating works that blur the lines between indeterminacy and fixity, composition and improvisation, idea and realization. Timbral or dynamic aspects of a work might be adjusted during rehearsal in response to the specific acoustic character of a performance space. Formal features, such as the order of large-scale sections, might be modified by a composer mid-performance according to audience reaction or whim.
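
One of these scenarios, a fixed melody paired with a probabilistic table of harmonic relationships, is simple enough to sketch directly. The pitches, intervals, and weights below are illustrative only:

```python
import random

melody = [67, 65, 64, 62, 60]  # fixed, pre-composed MIDI pitches

# For each melody note: candidate intervals (in semitones) at which a
# second voice may sound below it, with relative weights.
OPTIONS = [3, 4, 7, 9]   # minor 3rd, major 3rd, 5th, major 6th below
WEIGHTS = [2, 3, 4, 1]

def realize(notes):
    """Sample a harmonization; a new result at every performance."""
    return [(p, p - random.choices(OPTIONS, weights=WEIGHTS, k=1)[0])
            for p in notes]

print(realize(melody))  # e.g. [(67, 60), (65, 61), ...] varies per run
```

Here the melody is fully determinate while the harmony remains indeterminate within composed bounds: one rung of the kind of hierarchy described above.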

While the possibilities are no doubt vast, the project of implementing a coherent, musically satisfying realtime algorithmic work is still a formidable one: many basic technological pieces remain missing or underdeveloped (requiring a good deal of programming savvy on a composer/musician’s part), the practical requirements for performance and notation are not yet standardized, and even basic definitions and distinctions remain to be theorized.

In this four-part series, I will present a variety of approaches to employing computation in the acoustic domain, drawn from my own work as well as that of fellow composer/performers. Along the way, I will address specific musical and technological questions I’ve encountered, such as strategies for networked realtime notation, algorithmic harmony and voice leading, rule-based orchestration, and more. While I have begun to explore these compositional possibilities only recently, and am surely only scratching the surface of what is possible, I have been fascinated and encouraged by the early results. It is my hope that these articles might serve as a springboard for conversation and future experimentation for those who are investigating—or considering investigating—this promising new musical terrain.



1. One might similarly describe a piece of music such as John Cage’s Music of Changes, or the wall drawings of visual artist Sol LeWitt, as works based on pre-compositional (albeit non-computer-based) algorithms.


2. Even works such as Morton Feldman’s graph pieces can be said to be pre-compositionally determinate in their formal dimension: while they leave the performer free to choose pitches from a specified register, their structure and pacing are fixed and cannot be altered during performance.


Joseph Branciforte

Joseph Branciforte is a composer, multi-instrumentalist, and recording/mixing engineer based out of New York City. As composer, he has developed a unique process of realtime generative composition for instrumental ensembles, using networked laptops and custom software to create an “interactive score” that can be continuously updated during performance. As producer/engineer, Branciforte has lent his sonic touch to over 150 albums, working with such artists as Ben Monder, Vijay Iyer, Tim Berne, Kurt Rosenwinkel, Steve Lehman, Nels Cline, Marc Ribot, Mary Halvorson, Florent Ghys, and Son Lux along the way. His production/engineering work can be heard on ECM, Sunnyside Records, Cantaloupe Music, Pi Recordings, and New Amsterdam. He is the co-leader and drummer of “garage-chamber” ensemble The Cellar and Point, whose debut album Ambit was named one of the Top 10 Albums of 2014 by WNYC’s New Sounds and praised by outlets from the BBC to All Music Guide. His current musical efforts include a collaborative chamber project with composer Kenneth Kirschner and an electronic duo with vocalist Theo Bleckmann.