From the Machine: Conversations with Dan Tepfer, Kenneth Kirschner, Florent Ghys, and Jeff Snyder

They discuss their approaches to the compositional and performance possibilities that computers offer for generating, manipulating, processing, and displaying musical data for acoustic ensembles, potentially one of the most promising vistas for musical discovery in the coming years.

Written By

Joseph Branciforte

Over the last three weeks, we’ve looked at various techniques for composing and performing acoustic music using computer algorithms, including realtime networked notation and algorithmic approaches to harmony and orchestration.

This week, I’d like to open up the conversation to include four composer/performers who are also investigating the use of computers to generate, manipulate, process, and display musical data for acoustic ensembles. While all four share a similar enthusiasm for the compositional and performance possibilities offered by algorithms, their methods differ substantially, from the pre-compositional use of algorithms, to the realtime generation of graphic or traditionally notated scores, to the use of digitally controlled acoustic instruments and musical data visualizations.

Pianist/composer Dan Tepfer, known both for his expressive jazz playing and his interpretations of Bach’s Goldberg Variations, has recently unveiled his Acoustic Informatics project for solo piano. In it, Tepfer uses realtime algorithms to analyze and respond to note data played on his Yamaha Disklavier piano, providing him with an interactive framework for improvisation. Through the use of musical delays, transpositions, inversions, and textural elaborations of his input material, he is able to achieve composite pianistic textures that would be impossible to realize with a human performer or computer alone.
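As a rough illustration of this kind of call-and-response transformation (not Tepfer’s actual software), a few lines of Python can echo each incoming note back after a delay, inverted around an axis pitch; the axis, delay time, and input data here are invented for the example.

```python
# Toy sketch of a delay-plus-inversion response; not Tepfer's actual code.
AXIS = 60          # invert around middle C (assumption)
DELAY = 1.0        # echo each note one second later (assumption)

def invert(pitch, axis=AXIS):
    """Mirror a pitch around the axis: axis + (axis - pitch)."""
    return 2 * axis - pitch

def respond(played_notes):
    """Yield delayed, inverted echoes for (onset_time, pitch) input notes."""
    for onset, pitch in played_notes:
        yield onset + DELAY, invert(pitch)

played = [(0.0, 64), (0.5, 67), (1.0, 72)]   # stand-in for Disklavier input
for onset, echo in respond(played):
    print(f"play MIDI note {echo} at t={onset:.1f}s")
```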

Composer Kenneth Kirschner has been using computers to compose electronic music since the 1990s, manipulating harmonic, melodic, and rhythmic data algorithmically to create long-form works from minimal musical source material. Several of his electronic works have recently been adapted to the acoustic domain, raising questions of musical notation for pieces composed without reference to fixed rhythmic or pitch grids.

Florent Ghys is a bassist and composer who works in both traditional and computer-mediated compositional contexts. His current research is focused on algorithmic composition and the use of realtime notation to create interactive works for acoustic ensembles.

Jeff Snyder is a composer, improviser, and instrument designer who creates algorithmic works that combine animated graphic notation and pre-written materials for mixed ensembles. He is also the director of the Princeton Laptop Orchestra (PLOrk), providing him with a wealth of experience in computer networking for live performance.

THE ROLE OF ALGORITHMS

JOSEPH BRANCIFORTE: How would you describe the role that computer algorithms play in your compositional process?

KENNETH KIRSCHNER: I come at this as someone who was originally an electronics guy, with everything done on synthesizers and realized electronically. So this computer-driven approach is just the way I work, the way I think compositionally. I’ve never written things with pencil and paper. I work in a very non-linear way, where I’m taking patterns from the computer and juxtaposing them with other patterns—stretching them, twisting them, transposing them.

I have to have that feedback loop where I can try it, see what happens, then try it again and see what happens.

A lot of my obsession over the last few years has been working with very reduced scales, often four adjacent semitones, and building patterns from that very restricted space. I find that as you transpose those and layer them over one another, you get a lot of very interesting emergent patterns. In principle, you could write that all out linearly, but I can’t imagine how I would do it, because so much of my process is experimentation and chance and randomness: you take a bunch of these patterns, slow this one down, transpose this one, layer this over that. It’s very fluid, very quick to do electronically—but hopelessly tedious to do if you’re composing in a linear, notated way. My whole development as a composer presupposes that realtime responsiveness. I have to have that feedback loop where I can try it, see what happens, then try it again and see what happens.
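A minimal sketch of the kind of process Kirschner describes, with all parameters invented for illustration, might look like this in Python: a short pattern drawn from four adjacent semitones is layered against stretched and transposed copies of itself.

```python
import random

def make_pattern(root=60, length=8, seed=1):
    """Random pattern drawn from four adjacent semitones above `root`."""
    rng = random.Random(seed)
    return [root + rng.randrange(4) for _ in range(length)]

def layer(pattern, transpose=0, stretch=1.0, step=0.5):
    """Return (onset_time, pitch) pairs for one stretched, transposed copy."""
    return [(i * step * stretch, p + transpose) for i, p in enumerate(pattern)]

source = make_pattern()
layers = [
    layer(source, transpose=0,   stretch=1.0),   # original
    layer(source, transpose=3,   stretch=1.5),   # slowed down, up a minor third
    layer(source, transpose=-12, stretch=2.0),   # much slower, down an octave
]

# Merge into a single event list to audition the emergent composite
events = sorted(e for lyr in layers for e in lyr)
for t, pitch in events:
    print(f"{t:5.2f}  note {pitch}")
```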

FLORENT GHYS: That’s very interesting, because we don’t come from the same background, but we ended up with algorithmic music for the same reasons. I come from a background of traditional acoustic music composition: writing down parts and scores for musicians. But I realized that the processes I was using as I was composing—canons, isorhythms, transpositions, stretching out durations—were very easy to reproduce in Max/MSP. I began by working with virtual instruments on the computer, fake sounds that gave me an idea of what it might sound like with a real ensemble. It was fascinating to listen to the results of an algorithmic process in real time—changing parameters such as density of rhythm, rhythmic subdivision, transposition, canonic relationships—and being able to hear the results on the spot. Even something as simple as isorhythm—a cell of pitches and a cell of rhythms that don’t overlap—writing something like that down takes some time. With an algorithmic process, I can go much faster and generate tons of material in a few minutes, rather than spending hours in Sibelius just to try out an idea.
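To make the isorhythm example concrete, here is a minimal Python sketch (the cells themselves are invented): a five-note pitch cell cycles against a four-value rhythm cell, so the pairings drift and only realign after twenty events.

```python
from itertools import cycle, islice

pitches = [62, 65, 69, 67, 60]          # 5-note pitch cell (the "color")
durations = [1.0, 0.5, 0.5, 2.0]        # 4-value rhythm cell (the "talea")

def isorhythm(pitch_cell, rhythm_cell, n_events):
    """Pair independently cycling pitch and rhythm cells into events."""
    return list(islice(zip(cycle(pitch_cell), cycle(rhythm_cell)), n_events))

# The two cells realign every lcm(5, 4) = 20 events
for pitch, dur in isorhythm(pitches, durations, 20):
    print(pitch, dur)
```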

DAN TEPFER: I’ve used algorithms in a number of ways. I’ve done stuff where I’ve generated data algorithmically that then gets turned into a relatively traditional composition, with notes on a page that people play. I’ve also experimented with live notation, which is more improvisationally based, but with some algorithmic processing in there too. And then there’s the stuff I’ve been doing recently with the Disklavier, where the algorithms react to what I’m improvising on the piano in real time.

With the live notation stuff, I’ve done it with string quartet, or wind quartet, and me on piano. I did one show where it was both of them together, and I could switch back and forth or have them both playing. I have a controller keyboard on top of the piano, and I can play stuff that gets immediately sent out as staff notation. There’s some processing where it’ll adapt what I’m playing to the ranges of each instrument, doubling notes or widening the register. Then there are musical controls where I can save a chord and transform it in certain ways just by pushing a button. At the rhythmic level, there’s usually a beat happening and this stuff is floating above it, a bit of an improvisational element where the musicians can sink into the groove.
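A hypothetical sketch of that range-adaptation step might look like the following; the instrument ranges, the doubling rule, and the input chord are assumptions for illustration, not Tepfer’s actual mapping.

```python
RANGES = {                      # approximate ranges in MIDI note numbers
    "violin 1": (55, 96),
    "violin 2": (55, 91),
    "viola":    (48, 84),
    "cello":    (36, 76),
}

def fit_to_range(pitch, low, high):
    """Octave-shift a pitch until it falls inside [low, high]."""
    while pitch < low:
        pitch += 12
    while pitch > high:
        pitch -= 12
    return pitch

def distribute(chord, ranges=RANGES):
    """Assign chord tones to instruments, doubling notes if needed."""
    out = {}
    for i, name in enumerate(ranges):
        note = chord[i % len(chord)]          # cycle through chord tones
        out[name] = fit_to_range(note, *ranges[name])
    return out

print(distribute([48, 52, 59, 62]))   # e.g. a four-note keyboard voicing
```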

JEFF SNYDER: I’ve got two main pieces that I would say fall into this category of realtime notation. The first is called Ice Blocks, which combines graphic notation with standard notation for open instrumentation. And then another one called Opposite Earth, which uses planets’ orbits as a graphic notation device. There are ten concentric circles, each one assigned to a performer. Each musician is a particular planet on an orbit around the sun. As the conductor, I can introduce vertical or horizontal lines from the center. The idea is that when your planet crosses one of those lines, you play a note. I have control over how fast each planet’s orbit is, as well as the color of the lines, which refer to pitch materials. There are five different colors that end up being five different chords. So it sets up a giant polyrhythm based on the different orbits and speeds.

Each planet can also rotate within itself, with additional notches functioning the same way as the lines do, although using unpitched sounds. That basically gives me another rhythmic divider to play with. I can remove or add orbits to thin out the texture or add density. It’s interesting because the piece allows me to do really complicated polyrhythms that couldn’t be executed as accurately with traditional notation. You might be playing sixteen against another person’s fifteen, creating this really complicated rhythmic relationship that will suddenly line up again. This makes it really easy: all you’re doing is watching a line, and each time you cross, you make a sound. You can do it even if the players aren’t particularly skilled.
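The rhythmic logic is easy to sketch. Assuming, hypothetically, that each planet triggers a note whenever it crosses a radial line, two planets whose orbital periods stand in a 16:15 ratio produce exactly the kind of slowly realigning polyrhythm Snyder describes:

```python
# Rough sketch of orbit-crossing rhythms; not Snyder's actual software.
def crossing_times(period, n_lines, duration):
    """Times at which a planet with the given orbital period (in seconds)
    crosses any of n_lines equally spaced radial lines, up to `duration`."""
    times = []
    interval = period / n_lines      # time between successive crossings
    t = 0.0
    while t < duration:
        times.append(t)
        t += interval
    return times

# Periods of 15 s and 16 s against a single line: 16 notes against 15 over
# 240 seconds, after which the two planets cross the line together again.
fast = crossing_times(period=15.0, n_lines=1, duration=240)
slow = crossing_times(period=16.0, n_lines=1, duration=240)
print(len(fast), "notes against", len(slow))
```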

PERFORMANCE PRACTICE AND USER EXPERIENCE

JB: I’m really interested in this question of performer “user experience” when working with realtime notational formats. What were the performers’ responses to dealing with your dynamic graphic notation, Jeff?

JS: The piece was played by PLOrk, which is a mix of composition grad students, who are up for anything, and then undergrads who are a mix of engineers and other majors. They get excited about the fact that it’s something different. But I’ve worked with more conservative ensembles and had performers say, “I’ve worked for so many years at my instrument, and you’re wasting my skills.” So people can have that response as well when you move away from standard notation.

With PLOrk, I was able to workshop the piece over a few months and we would discover together: “Is this going to be possible? Is this going to be too difficult? Is this going to be way too easy?” I could experiment with adding staff notation or using different colors to represent musical information. For me, it was super valuable because I wasn’t always able to gauge how effective certain things would be in advance. None of this stuff has a history, so it’s hard to know whether people can do certain things in a performance situation. Can people pay attention to different gradations of blue on a ring while they’re also trying to perform rhythms? I just have to test it, and then they’ll tell me whether it works.

JB: There’s always that initial hurdle to overcome with new notational formats. I’ve been using traditional notation in my recent work, albeit a scrolling version where performers can only see two measures at a time, but I remember a similar adjustment period during the first rehearsal with a string quartet. We set everyone up, got the Ethernet connections between laptops working, tested the latencies—everything looked good. But for the first fifteen minutes of rehearsal, the performers were all complaining that the software wasn’t working properly. “It just feels like it’s off. Maybe it’s not synced or something?” So I did another latency check, and everything was fine, under two milliseconds of latency.

DT: So the humans weren’t synced!

It’s just a new skill. Once performers get used to it, then they don’t want it to change.

JB: I reassured them that everything was working properly, and we kept rehearsing. After about thirty minutes, they started getting the hang of the scrolling notation—things were beginning to sound much more comfortable. So after rehearsal, as everyone was packing up, I said, “Is there anything you’d like me to change in the software, anything that would make the notation easier to deal with?” And they all said, “No! Don’t change a thing. It’s perfect!” And then I realized: it’s just a new skill. Once performers get used to it, then they don’t want it to change. They just need to know that it works and that they can rely on it.

But beyond the mechanics of using the software, I sometimes wonder whether it’s harder for a performer to commit to material that they haven’t seen or rehearsed in advance. They have no idea what’s coming next and it’s difficult to gain any sense of the piece as a whole.

FG: I think you’re touching on something related to musicianship. In classical music, the more you play a piece, the better you’re going to understand the music, the more you’re going to be able to make it speak and refine the dynamics. And within the context of the ensemble, you’ll understand the connections and coordination between all the musicians. So the realtime notation is going to be a new skill for musicians to learn—to be able to adapt to material that’s changing. It’s also the job of the composer to create a range of possibilities that musicians can understand. For instance, the piece uses certain types of rhythms or scales or motives; a performer might not know exactly what it’s going to be, but they understand the range of things that can happen.

KK: They need to be able to commit to the concept of the piece, rather than any of the specific details of the narrative.

DT: I think a key word here is culture. You’re seeing a microcosm of that when, in the time span of a rehearsal, you see a culture develop. At the beginning of the rehearsal, musicians are like, “It’s not working,” and then after a certain time they’re like, “Oh, it is working.” Culture is about expectations about what is possible. And if you develop something in the context of a group, where it is understood to be fully possible, then people will figure out ways to do it. It might start with a smaller community of musicians who can do it at first. But I think we’re probably not far from the time when realtime sight-reading will just be a basic skill. That’s going to be a real paradigm shift.

I think we’re probably not far from the time when realtime sight-reading will just be a basic skill. That’s going to be a real paradigm shift.

JB: How do you deal with the question of notational pre-display in your live notation work, Dan?

DT: It happens pretty much in real time.

JB: So you play a chord on your MIDI keyboard and it gets sent out to the musicians one measure at a time?

DT: They’re just seeing one note. There’s no rhythmic information. The real difficulty is that I have to send the material out about a second early in order to have any chance of maintaining consistency in the harmonic rhythm. It takes some getting used to, but it’s surprisingly intuitive after a while.

JS: That’s something I wasn’t able to address in the planets piece by the time of the performance: there was no note preparation for them, so lines just show up. I told the performers, “Don’t worry if a line appears right before your planet is about to cross it. Just wait until the next time it comes around again.” But it still stressed them out. As performers, they’re worried about “missing a note,” especially because the audience could see the notation too. So perhaps in the next version I could do something where the lines slowly fade in to avoid that issue.

JB: I have to sometimes remind myself that the performers are part of the algorithm, too. As much as we want the expanded compositional possibilities that come from working with computers, I think all of us value the process of working with real musicians.

KK: With these recent acoustic adaptations of my pieces, it was a whole different experience hearing it played by an actual pianist and cellists. It was a different piece. And I thought, “There is something in here that I want to pursue further.” There’s just a level of nuance you’re getting, a level of pure interpretation that’s not going to come through in my electronic work. But the hope is that by composing within the electronic domain, I’m stumbling upon compositional approaches that one might not find writing linearly.

COMPUTER AS COMPOSITIONAL SURROGATE

JB: I want to discuss the use of the computer as a “compositional surrogate.” The premise is that instead of working out all of the details of a piece in advance, we allow the computer to make decisions on our behalf during performance, based on pre-defined rules or preferences. There’s an argument that outsourcing these decisions to the computer is an abdication of the fundamental responsibility of being a composer, the subjective process of selection. But I’ve begun to see algorithm design as a meta-compositional process: uncovering the principles that underlie my subjective preferences and then embedding them into the algorithmic architecture itself.

KK: Right. There’s a sense that when something works musically, there’s a reason for it. And what we’re trying to do is uncover those reasons; the hope is that some of those rules that are affecting our aesthetic judgment are able to be discovered. Once you begin to codify some of that, you can offload it and shift some of the compositional responsibility to the computer. The idea is to build indeterminate pieces that have a degree of intelligence and adaptation to them. But that requires us to understand what some of those underlying mechanisms are that make us say “this is good” or “this is bad.”

For me, something might sound good one day, and another day I might hate it. I don’t know if you’re ever going to find a “rule” that can explain that.

FG: I don’t know. I’m a little skeptical. For me, something might sound good one day, and another day I might hate it. I don’t know if you’re ever going to find a “rule” that can explain that; there are so many factors that go into musical perception.

JB: A dose of skepticism is probably warranted if we’re talking about machines being able to intervene in questions of aesthetics. But to me, the beauty of designing a composer-centric framework is that it allows you to change your preferences from day to day. You can re-bias a piece to conform to whatever sounds good to you in the moment: a different tempo, more density, a slightly different orchestration. I’m not sure that we even need to understand the nature of our preferences, or be able to formalize them into rules, in order to have the computer act as an effective surrogate. Economists have a concept called “revealed preference,” where instead of looking at what consumers say they want, you look at their purchasing habits. That kind of thing could be applied to algorithm design, where the algorithm learns what you like simply by keeping track of your responses to different material.

KK: I’ve had a similar thought when working on some of my indeterminate pieces—that you want a button for “thumbs up” or “thumbs down.” If you could record the aggregate of all those decisions, you could begin to map them to a parameter space that has a greater chance of giving you good outcomes. You could also have different profiles for a piece. For example, I could do my “composer’s version” that contains my preferences and builds the piece in a certain direction; then I could hand it off to you, hit reset, and have you create your own version of the piece.
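As a sketch of how such ratings could steer an indeterminate piece, the following Python fragment (the parameter names and the sampling strategy are invented for the example) stores thumbs-up settings and proposes new ones near the liked region of the parameter space:

```python
import random

class PreferenceSampler:
    """Propose parameter settings biased toward previously liked ones."""

    def __init__(self, param_names, jitter=0.1, seed=None):
        self.param_names = param_names
        self.jitter = jitter
        self.liked = []                       # settings rated thumbs-up
        self.rng = random.Random(seed)

    def propose(self):
        """Random settings, or a perturbed copy of a liked setting."""
        if not self.liked:
            return {name: self.rng.random() for name in self.param_names}
        base = self.rng.choice(self.liked)
        return {name: min(1.0, max(0.0, value + self.rng.gauss(0, self.jitter)))
                for name, value in base.items()}

    def rate(self, settings, thumbs_up):
        if thumbs_up:
            self.liked.append(dict(settings))

sampler = PreferenceSampler(["density", "tempo", "register"], seed=2)
settings = sampler.propose()
sampler.rate(settings, thumbs_up=True)       # composer likes this version
print(sampler.propose())                     # next proposal stays nearby
```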

FG: In a lot of the algorithms I’ve been designing lately, I have a “determinacy-to-randomness” parameter where I can morph from something I’ve pre-written, like a melody or a series of pitches, to a probabilistic set of pitches, to a completely random set of pitches. With the probabilities, I allow the computer to choose whatever it wants, but I tell it, “I’d like to have more Gs and G#s, but not too many Cs.” So, weighted probabilities. We know that the random number generator in Max/MSP, without any scaling or probabilities, sounds like crap.
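A minimal version of that determinacy-to-randomness control might look like this in Python; the melody and the pitch weights (“more Gs and G#s, not too many Cs”) are placeholders, not Ghys’s actual material:

```python
import random

MELODY = [67, 68, 60, 67, 72, 68]            # pre-written pitch series

# Weighted pitch probabilities: favor G (67) and G# (68), discourage C (60, 72)
WEIGHTS = {60: 0.5, 62: 1.0, 64: 1.0, 67: 3.0, 68: 3.0, 72: 0.5}

def weighted_choice(weights, rng):
    return rng.choices(list(weights), weights=list(weights.values()), k=1)[0]

def generate(melody, weights, randomness, seed=None):
    """Morph from the written melody (0.0) to weighted randomness (1.0)."""
    rng = random.Random(seed)
    return [weighted_choice(weights, rng) if rng.random() < randomness else note
            for note in melody]

print(generate(MELODY, WEIGHTS, randomness=0.0))   # exactly the melody
print(generate(MELODY, WEIGHTS, randomness=0.5))   # half determined
print(generate(MELODY, WEIGHTS, randomness=1.0))   # fully probabilistic
```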

KK: It needs constraints.

JB: Finding ways to constrain randomness—where it’s musically controlled, but you’re getting new results with every performance—that’s become a major compositional concern for me. As an algorithm grows from initial idea to a performance-ready patch, the parameters become more abstract and begin to more closely model how I hear music as a listener. At the deepest level of aesthetic perception, you have things like balance, long-range form, tension/resolution, and expectation. I think probabilistic controls are very good at dealing with balance, and maybe not as good with the others.

FG: Yeah, when you deal with algorithms you go to a higher level of thinking. I’ve done things where I have a pattern that I like, and I want the computer to generate something else like it. And then eventually I know I want it to transform into another pattern or texture. But the tiny details of how it gets from A to B don’t really matter that much. It’s more about thinking of the piece as a whole.

NETWORKED NOTATION

JB: Jeff, I wanted to ask you about something a little more technical: when dealing with live notation in PLOrk, are you using wired or wireless connections to the performers’ devices?

JS: I’ve done live notation with both wireless and wired connections. In any kind of networking situation, we look at that question on a case-by-case basis. If we’re going to do wired, it simplifies things because we can rely on reasonable timing. If we’re going to do wireless, we usually have issues of sync that we have to deal with. For a long time, our solution has been LANdini, which was developed by Jascha Narveson. Recently, Ableton Link came out and that simplifies things. So if you don’t need certain features that LANdini offers—if you just need click synchronization—then Link is the simpler solution. We’ve been doing that for anything in which we just need to pulse things and make sure that the pulses show up at the same time, like metronomes.

JB: In my notation system, there’s a cursor that steps through the score, acting as a visual metronome to keep the musicians in sync. So transfer speed is absolutely critical there to make sure there’s as little latency as possible between devices. I’ve been using wired Ethernet connections, which ensures good speed and reliability, but it quickly becomes a real mess on stage with all the cables. Not to mention the hundreds I’ve spent on Ethernet adapters! Perhaps the way to do it is to have Ableton Link handle the metronome and then use wireless TCP/IP to handle the notation messages.
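A bare-bones sketch of that division of labor, with the host, port, and message format all assumed for illustration, might send each measure of notation as a small JSON message over a TCP socket while a separate clock handles the beat:

```python
import json
import socket

NOTATION_HOST = "192.168.1.20"   # hypothetical performer laptop
NOTATION_PORT = 9000

def send_measure(measure_number, notes):
    """Send one measure of note data as a newline-delimited JSON message."""
    message = json.dumps({"measure": measure_number, "notes": notes}) + "\n"
    with socket.create_connection((NOTATION_HOST, NOTATION_PORT),
                                  timeout=1.0) as sock:
        sock.sendall(message.encode("utf-8"))

# e.g. push measure 12 to a player's display ahead of the beat cursor
send_measure(12, [{"pitch": 64, "duration": 0.5},
                  {"pitch": 67, "duration": 0.5}])
```

Opening a connection per message keeps the sketch short; a real system would presumably hold a persistent connection per player and handle dropouts gracefully.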

JS: That’s what I was just about to suggest. With Link, you can actually get information about which beat number you’re on; it’s not just a raw pulse.

JB: Does it work well with changing time signatures?

JS: That’s a good question, I haven’t tested that. I have discovered that any tempo changes make it go nuts. It takes several seconds to get back on track when you do a tempo change. So it’s limited in that way. But there are other possibilities that open up when you get into wireless notation. Something I’ve really wanted to do is use wireless notation for spatialization and group dynamics. So say you had a really large ensemble and everybody is looking at their own iPhone display, which is giving them graphic information about their dynamics envelopes. You could make a sound move through an acoustic ensemble, the same way electronic composers do with multi-speaker arrays, but with a level of precision that couldn’t be achieved with hand gestures as a conductor. It’d be easily automated and would allow complex spatial patterns to be manipulated, activating different areas of the ensemble with different gestures. That’s definitely doable, technically speaking, but I haven’t really seen it done.
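Snyder’s spatialization idea is straightforward to prototype. In the sketch below (the seating positions, falloff curve, and scaling are all assumptions), a focus point sweeps across the stage and each player’s displayed dynamic level falls off with distance from it:

```python
import math

PLAYERS = {                       # hypothetical (x, y) stage positions in meters
    "flute": (0.0, 0.0), "clarinet": (2.0, 0.0),
    "horn": (4.0, 0.0), "bassoon": (6.0, 0.0),
}

def dynamics_at(focus, players=PLAYERS, width=1.5):
    """Map each player to a 0-1 dynamic level based on distance to the focus."""
    fx, fy = focus
    levels = {}
    for name, (x, y) in players.items():
        distance = math.hypot(x - fx, y - fy)
        levels[name] = math.exp(-(distance / width) ** 2)   # Gaussian falloff
    return levels

# Sweep the focus from left to right: the "sound" travels across the row.
for step in range(7):
    focus = (step * 1.0, 0.0)
    print({name: round(level, 2) for name, level in dynamics_at(focus).items()})
```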

BRINGING THE COMPOSER ON STAGE

Do you think that having the composer on stage as a privileged type of performer is potentially in conflict with the performers’ ownership of the piece?

JB: With this emerging ability for the composer to manipulate a score in realtime, I wonder what the effects will be on performance culture. Do you think that having the composer on stage as a privileged type of performer is potentially in conflict with the performers’ ownership of the piece?

FG: Bringing the composer on stage changes the whole dynamic. Usually instrumentalists rule the stage; they have their own culture. Now you’re up there with them, and it totally changes the balance. “Whoa, he’s here, he’s doing stuff. Why is he changing my part?”

JB: Right, exactly. In one of my early realtime pieces, I mapped the faders of a MIDI controller to the individual dynamic markings of each member of the ensemble. This quickly got awkward in rehearsal when one of the violinists said half-jokingly, “It seems like I’m playing too loudly because my dynamic markings keep getting lower and lower.”

DT: It’s like Ligeti-style: you go down to twelve ps! [laughs]

JB: From that point, I became very self-conscious about changing anything. I suddenly became aware of this strange dynamic, where I’m in sole control of the direction of the piece but also sitting on stage alongside the musicians.
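The fader-to-dynamics mapping described above is itself trivial to sketch; quantizing a 0–127 controller value to a displayed dynamic marking (the eight-step scale here is an assumption) could be as short as:

```python
DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def marking_from_cc(cc_value):
    """Quantize a 0-127 controller value to one of eight dynamic markings."""
    index = min(len(DYNAMICS) - 1, cc_value * len(DYNAMICS) // 128)
    return DYNAMICS[index]

print(marking_from_cc(0))     # 'ppp'
print(marking_from_cc(64))    # 'mf'
print(marking_from_cc(127))   # 'fff'
```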

DT: You know, it’s interesting—come to think of it, in everything I’ve done with live notation, I’m performing as well. I think that makes a huge difference, because I can lead by example.

KK: And you’re also on stage and you’re invested as a performer. Whereas Joe is putting himself in this separate category—the puppet master!

FG: I wonder if it also comes down to the instrumentalists’ perception of what you’re doing. In Dan’s case, they totally get what he’s doing: he’s playing a chord, it’s getting distributed, they have their note. It’s pretty simple. With more complex algorithmic stuff, they might not get exactly what you’re doing. But then they see an obvious gesture like lowering a fader, and they think, “Oh, he’s doing that!”

DT: Something nice and simple to complain about!

FG: Otherwise, you’re doing this mysterious thing that they have no idea about, and then they just have to play the result.

KK: This is why I think it’s really important to start working with a consistent group of musicians, because we’ll get past this initial level and start to see how they feel about it in the longer term as they get used to it. And that might be the same response, or it might be a very different response.

DT: Has anyone taken that step of developing this kind of work over a couple of years with the same group of people? I think then you’ll see performers finding more and more ways of embracing the constraints and making it their own. That’s where it gets exciting.


Well, that about does it for our four-part series. I hope these articles have sparked conversation about the many possible uses of computer algorithms in acoustic music, and perhaps provided inspiration for future work. I truly believe that the coupling of computation and compositional imagination offers one of the most promising vistas for musical discovery in the coming years. I look forward to the music we will collectively create with it.

Comments and questions about the series are very much welcome, either via the comments section below or any of the following channels:

josephbranciforte.com // facebook // twitter // instagram