Indeterminacy 2.0: Under the Hood

Written By

Kenneth Kirschner

Image from variant:SONiC by Joshue Ott and Kenneth Kirschner

This week, I want to talk about some of the actual work I’ve done with indeterminate digital music, with a focus on both the technologies involved and the compositional methods that have proven useful to me in approaching this sort of work. Let me open with a disclaimer that this is going to be a hands-on discussion that really dives into how these pieces are built. It’s intended primarily for composers who may be interested in writing this kind of music, or for listeners who really want to dig into the mechanics underlying the pieces. If that’s not you, feel free to just skim it or fast-forward ahead to next week, when we’ll get back into a more philosophical mode.

For fellow composers, here’s a first and very important caveat: as of right now, this is not music for which you can buy off-the-shelf software, boot it up, and start writing—real, actual programming will be required. And if you, like me, are someone who has a panic attack at the sight of the simplest Max patch, much less actual code, then collaboration may be the way to go, as it has been for me. You’ll ideally be looking to find and work with a “creative coder”—someone who’s a programmer, but has interest and experience in experimental art and won’t run away screaming (or perhaps laughing) at your crazy ideas.

INITIAL CONCEPTS

Let me rewind a little and talk about how I first got interested in trying to write this sort of music. I had used chance procedures as an essential part of my compositional process for many years, but I’d never developed an interest in working with true indeterminacy. That changed in the early 2000s, when my friend Taylor Deupree and I started talking about an idea for a series we wanted to call “Music for iPods.” An unexpected side effect of the release of the original iPod had been that people really got into the shuffle feature, and suddenly you had all these inadvertent little Cageans running around shuffling their whole music collections right from their jean pockets. What we wanted to do was to write specifically for the shuffle feature on the iPod, to make a piece that was composed of little fragments designed to be played in any order, and that would be different every time you listened. Like most of our bright ideas, we never got around to it—but it did get me thinking on the subject.

And as I thought about it, it seemed to me that having just one sound at a time wasn’t really that interesting compositionally; there were only so many ways you could approach structuring the piece, so many ways you could put the thing together. But what if you could have two iPods on shuffle at once? Three? More? That would raise some compositional questions that struck me as really worth digging into. And under the hood, what was this newfangled iPod thing but a digital audio player—a piece of software playing sound files. It increasingly seemed like the indeterminate music idea was something that should be built in software—but I had no clue how to do it.

FIRST INDETERMINATE SERIES (2004–2005)

In 2004, while performing at a festival in Spain, I met a Flash programmer, Craig Swann, who had just the skills needed to try out my crazy idea. The first piece we tried—July 29, 2004 (all my pieces are titled by the date on which they’re begun)—was a simple proof of concept, a realization of the “Music for iPods” idea; it’s basically an iPod on shuffle play built in Flash. The music itself is a simple little piano composition which I’ve never found particularly compelling—but it was enough to test out the idea.

Here’s how it works: the piece consists of 35 short sound files, each about 10 seconds long, and each containing one piano chord. The Flash program randomly picks one mp3 at a time and plays it—forever. You can let this thing go as long as you like, and it’ll just keep going—the piece is indefinite, not just indeterminate. Here’s an example of what it sounds like, and for this and all the other pieces in my first indeterminate series, you can download the functioning generative Flash app freely from my website and give it a try. I say “functioning,” but these things are getting a bit long in the tooth; you may get a big security alert that pops up when you press the play button, but click “OK” on it and it still works fine. Also potentially interesting for fellow composers is that, by opening up the subfolders on each piece, you can see and play all of the underlying sound files individually and hopefully start to get a better sense of how these things are put together.
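For fellow composers wondering what that “real, actual programming” actually amounts to, the core logic here is tiny. Below is a minimal sketch of the idea in Swift rather than the original Flash (a translation for illustration, not Craig’s actual code), assuming the fragments are bundled as local audio files: play one at random, and when it finishes, play another.

```swift
import AVFoundation

// A minimal sketch of the "iPod on shuffle" engine: play one randomly
// chosen fragment to the end, then immediately pick another, forever.
final class ShufflePlayer: NSObject, AVAudioPlayerDelegate {
    private let fragments: [URL]      // e.g. the 35 ten-second piano chords
    private var current: AVAudioPlayer?  // keep a strong reference while playing

    init(fragments: [URL]) { self.fragments = fragments }

    func start() { playRandomFragment() }

    private func playRandomFragment() {
        guard let url = fragments.randomElement(),
              let player = try? AVAudioPlayer(contentsOf: url) else { return }
        player.delegate = self
        current = player
        player.play()
    }

    // AVAudioPlayer calls this when the current fragment finishes.
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer,
                                     successfully flag: Bool) {
        playRandomFragment()
    }
}
```

That, structurally, is the whole piece: the art is in the sound files, not in the code.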

It was with the next piece, August 26, 2004, that this first series of indeterminate pieces for me really started to get interesting (here’s a fixed excerpt, and here’s the generative version). It’s one thing to play just one sound, then another, then another, ad infinitum. But what if you’ve got a bunch of sounds—two or three or four different layers at once—all happening in random simultaneous juxtapositions and colliding with one another? It’s a much more challenging, much more interesting compositional question. How do you structure the piece? How do you make it make sense? All these sounds have to “get along,” to fit together in some musically meaningful way—and yet you don’t want it to be homogenous, static, boring. How do you balance the desire for harmonic complexity and development with the need to avoid what are called, in the technical parlance of DJs, “trainwrecks”? Because sooner or later, anything that can happen in these pieces will happen, and you have to build the entire composition with that knowledge in mind.

August 26, 2004 was one possible solution to this problem. There are three simultaneous layers playing—three virtual “iPods” stacked shuffling on top of each other. One track plays a series of piano recordings, which here carry most of the harmonic content; there are 14 piano fragments, most around a minute long, each moving within a stable pitch space, and each able to transition more or less smoothly into the next. On top of that are two layers of electronics, drawn from a shared set of 21 sounds, and these I kept very sparse: each is harmonically open and ambiguous enough that it should, in theory, be able to hover over whatever piano fragment is playing as well as bump into the other electronic layer without causing too much trouble.
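In terms of the sketch above, this design is simply three of those shuffle engines running at once: one drawing from the piano pool, two drawing from the shared electronics pool. (The empty URL arrays below are hypothetical stand-ins for the real files.)

```swift
// Three independent shuffle layers, as in August 26, 2004.
let pianoPool: [URL] = []        // the 14 piano fragments
let electronicsPool: [URL] = []  // the 21 shared electronic sounds

let layers = [
    ShufflePlayer(fragments: pianoPool),        // carries the harmonic content
    ShufflePlayer(fragments: electronicsPool),  // sparse electronics, layer one
    ShufflePlayer(fragments: electronicsPool),  // sparse electronics, layer two
]
layers.forEach { $0.start() }
```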

As the series continued, however, I found myself increasingly taking a somewhat different approach: rather than divide up the sounds into different functional groups, with one group dominating the harmonic space, I instead designed all of the underlying fragments to be “compatible” with one another—every sound would potentially work with every other, so that any random juxtaposition of sounds that got loaded could safely coexist. To check out some of these subsequent pieces, you can scan through 2005 on my website for any compositions marked “indet.” And again, for all of them you can freely download the generative version and open up the folders to explore their component parts.

INTERMISSION (2006–2014)

By late 2005, I was beginning to drift away from this sort of work, for reasons both technological and artistic (some of which I’ll talk about next week), and by 2006 I found myself again writing nothing but fully “determinate” work. Since I lacked the programming skills to push the work forward myself, indeterminacy became less of a focus—though I still felt that there was great untapped potential there, and hoped to return to it one day.

Another thing holding the pieces back was, quite simply, the technology of the time. They could only be played on a desktop computer, which just wasn’t really a comfortable or desirable listening environment then (or, for that matter, now). These pieces really cried out for a mobile realization, for something you could throw in your pocket, pop some headphones on, and hit the streets with. So I kept thinking about the pieces, and kept kicking around ideas in my head and with friends. Then suddenly, over the course of just a few years, we all looked up and found that everyone around us was carrying in their pockets extremely powerful, highly capable computers—computers that had more firepower than every piece of gear I’d used in the first decade or two of my musical life put together. Except they were now called “phones.”

THE VARIANTS (2014–)

In 2014, after years of talking over pad kee mao at our local Thai place, I started working with my friend Joshue Ott to finally move the indeterminate series forward. A visualist and software designer, Josh is best known in new music circles for superDraw, a “visual instrument” on which he improvises live generative imagery for new music performances and on which he has performed at venues ranging from Mutek to Carnegie Hall. Josh is also an iOS developer, and his app Thicket, created with composer Morgan Packard, is one of the best examples out there of what can be achieved when you bring together visuals, music, and an interactive touch screen.

Working as artists-in-residence at Eyebeam, Josh and I have developed and launched what we’re calling the Variant series. Our idea was to develop a series of apps for the iPhone and iPad that would bring together the generative visuals of his superDraw software with my approach to indeterminate digital music, all tightly integrated into a single interactive experience for the user. Our concept for the Variant apps is that each piece in the series will feature a different visual composition of Josh’s, a different indeterminate composition of mine—and, importantly, a different approach to user interactivity.

When I sat down to write the first sketches for these new apps, my initial instinct was to go back and basically rewrite August 26, 2004, which had somehow stuck with me as the most satisfying piece of the first indeterminate series. And when I did, the results were terrible—well, terribly boring. It took me a little while to realize that I’d perhaps learned a thing or two in the intervening decade, and that I needed to push myself harder—to try to move the indeterminate pieces forward not just technologically, but compositionally as well. So I went back to the drawing board, and the result was the music for our first app, variant:blue (here’s an example of what it sounds like).

It’s immediately clear that this is much denser than anything I’d tried to do in the first indeterminate series—even beyond the eight tracks of audio running simultaneously. It’s denser compositionally, with a more dissonant and chromatic palette than I would have had the courage to attempt ten years earlier. But the piece is actually not that complex once you break it down: each audio file contains a rhythmically loose, repeating instrumental pattern (you can hear an example of one isolated component here to give you a sense of it), with lots of silent spaces in between the repetitions. The rhythms, however, are totally free (there’s no overarching grid or tempo), so as you start to layer this stuff, the patterns begin to overlap and interfere with each other in complex, unpredictable ways. For variant:blue, there are now 48 individual component audio files; the indeterminate engine grabs one sound file at random and assigns it to one of the eight playback tracks, then grabs the next and assigns it to the next available track, and so forth. One handy feature of all of the Variant apps is that, when you click the dot in the lower right, a display will open that shows the indeterminate engine running in real time, which should hopefully give you a better sense of how the music is put together.
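A rough sketch of that engine’s logic, again in Swift and again purely illustrative rather than the app’s actual code: eight playback slots, each refilled at random from the shared pool of 48 files whenever its current file ends.

```swift
import AVFoundation

// Sketch of a variant:blue-style engine: eight playback slots, each
// refilled at random from the shared pool whenever its file ends.
final class IndeterminateEngine: NSObject, AVAudioPlayerDelegate {
    private let pool: [URL]            // the 48 component audio files
    private var tracks: [AVAudioPlayer?]

    init(pool: [URL], trackCount: Int = 8) {
        self.pool = pool
        self.tracks = Array(repeating: nil, count: trackCount)
    }

    func start() {
        for slot in tracks.indices { fill(slot) }
    }

    private func fill(_ slot: Int) {
        guard let url = pool.randomElement(),
              let player = try? AVAudioPlayer(contentsOf: url) else { return }
        player.delegate = self
        tracks[slot] = player
        player.play()
    }

    // When any track's file finishes, refill that slot with a new
    // randomly chosen file from the pool.
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer,
                                     successfully flag: Bool) {
        if let slot = tracks.firstIndex(where: { $0 === player }) {
            fill(slot)
        }
    }
}
```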

In one way, though, the music for variant:blue is very much like my earlier indeterminate pieces: it’s straight-up indeterminate, not interactive. The user has no control over the audio, and the music evolves only according to the indeterminate engine’s built-in chance procedures. For variant:blue, the interaction design focuses solely on the visuals, giving you the ability to draw lines that are in turn modified by the music. True audio interactivity, however, was something that would become a major struggle for us in our next app, variant:flare.

The music for variant:flare has a compositional structure that is almost the diametrical opposite of variant:blue’s, showing you a very different solution to the problem of how to bring order to these indeterminate pieces. Where the previous piece was predominantly atonal and free-floating, this one is locked to two absolute grids: a diatonic scale (C# minor, though sounding more like E major much of the time), and a tight rhythmic grid (at 117 bpm). So you can feel very confident that whatever sound comes up is going to get along just fine with the other sounds that are playing, in terms of both pitch and rhythm. Within that tightly quantized, completely tonal space, however, there’s actually plenty of room for movement—and each of these sounds gets to have all sorts of fun melodically and especially metrically. The meters, or lack thereof, are where it really gets interesting, because the step sequencer that was used to create each audio file incorporated chance procedures that occasionally scrambled whatever already-weird meter the pattern was playing in. Thus every individual line runs in a different irregular meter, and also occasionally changes and flips around into new and confusingly different patterns. Try following the individual lines (like this one); it’s a big fun mess, and you can listen to an example of the full app’s music here.
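To make those mechanics concrete, here is a toy version of that chance-inflected sequencing in Swift. It is not the actual tool I used, and the specific numbers (the 11/16 meter, the 20% scramble chance, the eight repetitions) are invented for illustration; but it shows the principle: pitches locked to C# natural minor, timing locked to a sixteenth-note grid at 117 bpm, and a pattern that occasionally gets shuffled into a new shape.

```swift
import Foundation

// A toy version of the step-sequencer idea: an irregular 11-step
// pattern of scale tones and rests, repeated on a strict grid, with
// a small per-repetition chance of being scrambled.
let csMinor = [61, 63, 64, 66, 68, 69, 71]   // C# natural minor, MIDI notes
let sixteenth = 60.0 / 117.0 / 4.0           // seconds per grid step at 117 bpm

// Each step is either a random scale tone or a rest.
var pattern: [Int?] = (0..<11).map { _ in
    Bool.random() ? csMinor.randomElement() : nil
}

for repetition in 0..<8 {
    if Double.random(in: 0..<1) < 0.2 {      // occasionally flip the pattern
        pattern.shuffle()                    // into a confusingly new shape
    }
    for (step, note) in pattern.enumerated() {
        let index = repetition * pattern.count + step
        let time = Double(index) * sixteenth
        if let n = note {
            print(String(format: "t = %6.3f s   note %d", time, n))
        }
    }
}
```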

We were very happy with the way both the music and the visuals for the app came together—individually. But variant:flare unexpectedly became a huge challenge in the third goal of our Variant series: interactivity. Try as we might, we simply couldn’t find a way to make both the music and the visuals meaningfully interactive. The musical composition was originally designed to just run indeterminately, without user input, and trying to add interactivity after the fact proved incredibly difficult. What brought it all together in the end was a complete rethink that took the piece from a passive musical experience to a truly active one. The design we hit on was this: each tap on the iPad’s screen starts one track of the music, up to six. After that point, each tap resets a track: one currently playing track fades out and is replaced by another randomly selected one. This allows you to “step” through the composition yourself, to guide its evolution and development in a controlled, yet still indeterminate fashion (because the choice of sounds is still governed by chance). If you find a juxtaposition of sounds you like, one compelling point in the “compositional space” of the piece, leave it alone—the music will hover there, staying with that particular combination of sounds until you’re ready to nudge it forward and move on. The visuals, conversely, now have no direct user interactivity and are controlled completely by the music. While this was not at all the direction we initially anticipated taking, we’re both reasonably satisfied with how the app’s user experience has come together.
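Reduced to a sketch, the tap design described above looks something like the Swift model below. This is an illustrative model only, not the shipping code; playback, fades, and the visual side are all omitted, and the pool size is hypothetical.

```swift
// An illustrative model of the variant:flare tap logic. Track state
// is modeled simply as the names of the currently sounding files.
struct FlareModel {
    let pool: [String]              // all of the piece's component files
    var playing: [String] = []
    let maxTracks = 6

    mutating func tap() {
        guard let next = pool.randomElement() else { return }
        if playing.count < maxTracks {
            playing.append(next)    // first six taps: start a new track
        } else {
            let slot = Int.random(in: 0..<playing.count)
            playing[slot] = next    // later taps: swap one track out
        }
    }
}

var model = FlareModel(pool: (1...20).map { "sound\($0)" })  // hypothetical pool
(1...8).forEach { _ in model.tap() }
print(model.playing)                // one possible six-track state of the piece
```

The key point is that the user chooses when the piece moves, but never what it moves to; that choice stays with the chance procedures.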

After this experience, my goal for the next app was to focus on building interactivity into the music from the ground up—not to struggle with adding it into something that was already written, but to make it an integral part of the overall plan of the composition from the start. variant:SONiC, our next app, was commissioned by the American Composers Orchestra for the October 2015 SONiC Festival, and my idea for the music was to take sounds from a wide cross-section of the performers and composers in the festival and build the piece entirely out of those sounds. I asked the ACO to send out a call for materials to the festival’s participants, asking each interested musician to send me one note—a single note played on their instrument or sung—with the idea of building up a sort of “virtual ensemble” to represent the festival itself. I received a wonderfully diverse array of material to work with—including sounds from Andy Akiho, Alarm Will Sound (including Miles Brown, Michael Clayville, Erin Lesser, Courtney Orlando, and Jason Price), Clarice Assad, Christopher Cerrone, The Crossing, Melody Eötvös, Angelica Negron, Nieuw Amsterdams Peil (including Gerard Bouwhuis and Heleen Hulst), and Nina C. Young—and it was from these sounds that I built the app’s musical composition.

When you boot up variant:SONiC, nothing happens. But tap the screen and a sound will play, and that sound will trigger Josh’s visuals as well. Each sound is short, and you can keep tapping—there are up to ten sounds available to you at once, one for each finger, so you can almost begin to play the piece like an instrument. As with our other apps, each tap is triggering one randomly selected element of the composition at a time—but here there are 153 total sounds, so there’s a lot for you to explore. And with this Variant you get one additional interactive feature: hold down your finger, and whatever sound you’ve just triggered will start slowly looping. Thus you can use one hand, for example, to build up a stable group of repeating sounds, while the other “solos” over it by triggering new material. variant:SONiC is a free app, so it’s a great way to try out these new indeterminate pieces—but for those who don’t have access to an iOS device, here’s what it sounds like.
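For the curious, here is a sketch of that interaction in Swift. It is illustrative rather than the app’s actual code, and the half-second hold threshold in particular is my own assumption:

```swift
import UIKit
import AVFoundation

// A sketch of the variant:SONiC interaction: each new touch triggers
// one randomly chosen sound, up to ten at once; a held touch turns
// its sound into a loop until the finger lifts.
final class SonicView: UIView {
    var pool: [URL] = []                        // the 153 component sounds
    private var voices: [UITouch: AVAudioPlayer] = [:]

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true           // allow one voice per finger
    }
    required init?(coder: NSCoder) { fatalError("not used in this sketch") }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches where voices.count < 10 {
            guard let url = pool.randomElement(),
                  let voice = try? AVAudioPlayer(contentsOf: url) else { continue }
            voices[touch] = voice
            voice.play()
            // If the finger is still down after half a second, start looping.
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { [weak self] in
                if self?.voices[touch] === voice { voice.numberOfLoops = -1 }
            }
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            voices[touch]?.numberOfLoops = 0    // stop looping, let it finish
            voices.removeValue(forKey: touch)
        }
    }
}
```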

variant:SONiC is, for me, the first of our apps where the audio interactivity feels natural, coherent, and integral to the musical composition. And to me it illustrates how—particularly when working with touchscreen technology—indeterminacy quite naturally slides into interactivity with this kind of music. I’m not sure whether that’s just because iPhone and iPad users expect to be able to touch their screens and make things happen, or whether there’s something inherent in the medium that draws you in this direction. Maybe it’s just that having the tools on hand tempts you to use them; to a composer with a hammer, everything sounds like a nail.

In the end, though, much as I’m finding interactive music to be intriguing and rewarding, I do still believe that there’s a place for indeterminate digital music that isn’t interactive. I hope to work more in this direction in the future—though to call it merely “passive” indeterminate music sounds just as insulting as calling a regular old piece of music “determinate.” I guess what I’m trying to say is that, despite all these wonderfully interactive technologies we have available to us today, there’s still something to be said for just sitting back and listening to a piece of music. And maybe that’s why I’ve called this series Indeterminacy 2.0 rather than Interactivity 1.0.

Next week, our season finale: “The Music of Catastrophe.”