
Live Sound Processing and Improvisation


Written By

Dafna Naphtali

Intro to the Intro

I have been mulling over writing about live sound processing and improvisation for some time, and finally I have my soapbox!  For two decades, as an electronic musician working in this area, I’ve been trying to convince musicians, sound engineers, and audiences that working with electronics to process and augment the sound of other musicians is a fun and viable way to make music.

Also a vocalist, I often use my voice to augment and control the sound processes I create in my music, which encompasses both improvised and composed projects. I have also been teaching (Max/MSP, Electronic Music Performance) for many years. My opinions are influenced by my experiences as both an electronic musician who is a performer/composer and a teacher (who is forever a student).

A short clip of my duo project with trombonist Jen Baker, “Clip Mouth Unit,” where I process both her sound and my voice.

Over the past 5-7 years there has been an enormous surge in interest among musicians, outside of computer music academia, in discovering how to enhance their work with electronics and, in particular, how to use electronics and live sound processing as a performable “real” instrument.

So many gestural controllers have become part of the fabric of our everyday lives.

The interest has increased because (of course) so many more musicians have laptops and smartphones, and so many interesting game and gestural controllers have become part of the fabric of our everyday lives. With so many media tools at our disposal, we have all become amateur designers/photographers/videographers, and also musicians, both democratizing creativity (at least for those with the funds for laptops/smartphones) and exponentially increasing, and therefore diluting, the resulting pool of new work.

Image of a hatted and bespectacled old man waving his index finger with the caption, "Back in my day... no real-time audio on our laptops (horrors!)"

Back when I was starting out (in the early ’90s), we did not have real-time audio manipulations at our fingertips—nothing easy to download or purchase or create ourselves (unlike the plethora of tools available today).  Although Sensorlab and iCube were available (but not widely), we did not have powerful sensors on our personal devices, conveniently with us at all times, that could be used to control our electronic music with the wave of a hand or the swipe of a finger. (Note: this is quite shocking to my younger students.) There is also a wave of audio analysis tools using Music Information Retrieval (MIR) and alternative controllers, previously only seen at research institutions and academic conferences, all going mainstream. Tools such as the Sunhouse sensory percussion/drum controller, which turns audio into a control source, are becoming readily available and popular in many genres.

In the early ’90s, I was a performing rock-pop-jazz musician, experimenting with free improv/post-jazz. In grad school, I became exposed for the first time to “academic” computer music: real-time, live electroacoustics, usually created by contemporary classical composers with assistance from audio engineers-turned-computer programmers (many of whom were also composers).

My professor at NYU, Robert Rowe, and his colleagues George Lewis, Roger Dannenberg, and others were composer-programmers dedicated to developing systems that could get their computers to improvise, or to building other kinds of interactive music systems. Others, like Cort Lippe, were developing pieces for an early version of Max running on a NeXT computer, using complex real-time audio manipulations of a performer’s sound both as the sole electroacoustic (and live) sound source and as the source for all control (a concept that I personally became extremely interested and invested in).

As an experiment, I decided to see if I could create simplified versions of the live sound processing ideas I was learning about. I started to bring them to my free avant-jazz improv sessions and to my gigs, using a complicated Max patch I made to control an Eventide H3000 effects processor (which was much more affordable than the NeXT machine, plus we had one at NYU). I did many performances with a core group of people willing to let me put microphones on everyone and process their sound during our performances.
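For readers curious what “controlling an outboard processor from a computer” can look like, here is a rough, hypothetical sketch in Python using the mido library. My actual instrument was (and is) a Max patch; the port name and controller number below are placeholders, not the H3000’s real parameter map.

```python
# Hypothetical sketch (not my actual Max patch): sweeping one parameter of
# an outboard effects unit by sending MIDI control-change messages from a
# laptop. Assumes the mido library and a connected MIDI interface; the
# port name and CC number are placeholders.
import time
import mido

PORT_NAME = "MIDI Interface Port 1"  # list real names with mido.get_output_names()

with mido.open_output(PORT_NAME) as outport:
    # Sweep a parameter assumed to be mapped to CC 12 on the hardware unit.
    for value in range(0, 128, 8):
        outport.send(mido.Message('control_change',
                                  channel=0, control=12, value=value))
        time.sleep(0.05)  # spread the sweep over time so the change is audible
```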

Collision at Baktun 1999. Paul Geluso (bass), Daniel Carter (trumpet), Tom Beyer (drums), Dafna Naphtali (voice, live sound processing), Kristin Lucas (video projection / live processing), and Leopanar Witlarge (horns).

Around that time I also met composer/programmer/performer Richard Zvonar, who had made a similarly complex Max patch as “editor/librarian” software for the H3000, to enable him to create all the mind-blowing live processing he used in his work with Diamanda Galás, Robert Black (State of the Bass), and others. Zvonar was very encouraging about my quest to control the H3000 in real-time via a computer. (He was “playing” his unit from the front panel.)  I created what became my first version of a live processing “instrument” (which I dubbed “kaleid-o-phone” at some point). My subsequent work with Kitty Brazelton and Danny Tunick, in What is it Like to be a Bat?, really stretched me to find ways to control live processing in extreme and repeatable ways that became central and signature elements of our work together, all executed while playing guitar and singing—no easy feat.

Six old laptops all open and lined up in two rows of three on a couch.

Since then, over 23 years, 7 laptops, many gigs and ensembles, and a few CD releases, I have worked on that same “instrument” all along, updating my Max patch, trying out many different controllers and ideas, and adding real-time computer-based audio once that became possible on a laptop in the late ’90s. I’m just that kinda gal; I like to tinker!

In the long run, what is more important to me than the Max programming I did for this project is that I developed an aesthetic practice and a set of rules for my live sound processing, rules about respecting the sound and independence of the other musicians, which help me to make good music when processing other people’s sound.

The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

Many people, of course, use live processing on their own sound, so what’s the big deal? Musicians are excited to extend their instruments electronically and there is much more equipment on stage in just about every genre to prove it. The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

However, I am primarily interested in learning how a performer can use live processing on someone else’s sound in a way that allows it to become a truly independent voice in an ensemble.

What is Live Sound Processing, really?

To perform with live sound processing is to alter and affect the sounds of acoustic instruments, live, in performance (usually without the aid of pre-recorded audio), and in this way create new sounds, which in turn become independent and unique voices in a musical performance.
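As a bare-bones illustration of that definition (and emphatically not my own setup), the sketch below routes a live microphone signal through a single transformation, ring modulation, and straight back out. It assumes the python-sounddevice library and a full-duplex audio interface.

```python
# Minimal sketch, assuming the python-sounddevice library and a full-duplex
# audio device: take the live input, multiply it by a sine wave (ring
# modulation), and send the result straight to the output.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 48000
MOD_FREQ = 220.0   # modulator frequency in Hz (arbitrary choice)
phase = 0          # running sample counter so the sine stays continuous

def callback(indata, outdata, frames, time, status):
    global phase
    t = (np.arange(frames) + phase) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * MOD_FREQ * t)[:, None]
    outdata[:] = indata * modulator   # ring-modulate the live input
    phase += frames

with sd.Stream(channels=1, samplerate=SAMPLE_RATE, callback=callback):
    input("Processing live input -- press Enter to stop.")
```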

Factoring in the acoustic environment of the performance space, it’s possible to view each performance as site-specific, as the live sound processor reacts not only to the musicians and how they are playing but also to the responsiveness and spectral qualities of the room.

Although, in the past, the difference between live sound processing and other electronic music practices has not been readily understood by audiences (or even many musicians), in recent years the complex role of the “live sound processor” musician has evolved to often be that of a contributing, performing musician, sitting on stage within the ensemble and not relegated, by default, to the sound engineer position in the middle or back of the venue.

Performers as well as audiences can now recognize electroacoustic techniques when they hear them.

With faster laptops and the more widespread use and availability of classic live sound processing as software plugins, these techniques have gradually become more accepted over the past 20 years—and in many music genres are practically expected (not to mention the huge impact these technologies have had on more commercial manifestations of electronic dance music, or EDM). Both performers and audiences have become savvier about many electroacoustic techniques and sounds and can now recognize them when they hear them.

We really need to talk…

I’d like to encourage a discourse about this electronic musicianship practice: to empower live sound processors to use real-time (human/old-school) listening and analysis of the sounds being played by others, and to develop skills for making real-time (improvised) decisions about how to respond to and manipulate those sounds in a way that allows their electronic sounds to be heard—and understood—as a separate performing (and musicianly) voice.

In this way, the live sound processor is not always dependent on and following the other musicians (who are their sound source), and their contributions are not simply “effects” relegated to the background. Nor is the live sound processor browbeating the other musicians into integrating themselves with, or simply following, the inflexible sounds and rhythms of their electronics, expressed as an immutable/immobile/unresponsive block of sound that everyone else must adapt to.

My Rules

My self-imposed guidelines, developed over several years of performances and sessions, are:

  1. Never interfere with a musician’s own musical sound, rhythm or timbre. (Unless they want you to!)
  2. Be musically identifiable to both co-players and audience (if possible).
  3. Incorporate my body, using some kind of physical interaction between the technology and myself, whether through controllers, the acoustics of the sound itself, or my own voice.

I wrote about these rules in “What if Your Instrument is Invisible?”, my chapter contribution to the excellent book Musical Instruments in the 21st Century: Identities, Configurations, Practices (Springer, 2016).

The first two rules, in particular, are the most important ones and will inform virtually everything I will write in coming weeks about live sound processing and improvisation.

My specific area of interest is live processing techniques used in improvised music, and in other settings in which the music is not all pre-composed. Under such conditions, many decisions must be made by the electronic musician in real-time. My desire is to codify the use of various live sound processing techniques into a pedagogical approach that blends listening techniques, a knowledge of acoustics / psychoacoustics, and tight control over the details of live sound processing of acoustic instruments and voice. The goal is to improve communication between musicians and optional scoring of such work, to make this practice easier for new electronic musicians, and to provide a foundation for them to develop their own work.

You are not alone…

There are many electronic musicians who work as I do with live sound processing of acoustic instruments in improvised music. Though we share a bundle of techniques as our central mode of expression, there is a very wide range of possible musical approaches and aesthetics, even within my narrow definition of “Live Sound Processing” as the real-time manipulation of the sound of an acoustic instrument to create an identifiable and separate musical voice in a piece of music.

In 1995, I read a preview of what Pauline Oliveros and the Deep Listening Band (with Stuart Dempster and David Gamper) would be doing at their concert at the Kitchen in New York City. Still unfamiliar with DLB’s work, I was intrigued to hear about E.I.S., their “Expanded Instrument System,” described as an “interactive performer controlled acoustic sound processing environment” giving “improvising musicians control over various parameters of sound transformation” such as “delay time, pitch transformation” and more. (It was 1995, and they were working with the Reson8 for real-time processing of audio on a Mac, which I had only seen done on NeXT machines.) The concert was beautiful and mesmerizing. But lying on the cushions at the Kitchen, bathing in the music’s deep tones and sonically subtle changes, I realized that though we were both interested in the same technologies and methods, my aesthetics were radically different from those of DLB. I was, from the outset, more interested in noise/extremes and highly energetic rhythms.

It was an important turning point for me: I realized that assuming what I was aiming to do was musically equivalent to DLB’s work, simply because the technological ideas were similar, was a little like lumping together two very different guitarists just because they both play Telecasters. Later, I was fortunate enough to get to know both David Gamper and Bob Bielecki through the Max User Group meetings I ran at Harvestworks, and to have my many questions answered about the E.I.S. system and their approach.

There is now more improvisation than I recall witnessing 20 years ago.

Other musicians important for me to mention who have been working with live sound processing of other instruments and improvisation for some time: Lawrence Casserley, Joel Ryan (both in their own projects and in long associations with saxophonist Evan Parker’s “ElectroAcoustic” ensemble), Bob Ostertag (influential in all his modes of working), and Satoshi Takeishi and Shoko Nagai’s duo Vortex. More recently: Sam Pluta (who creates “reactive computerized sound worlds” with Evan Parker, Peter Evans, Wet Ink, and others), and Hans Tammen. (Full disclosure: we are married to each other!)

Joel Ryan and Evan Parker at STEIM.

In academic circles, computer musicians, always interested in live processing, have more often taken to the stage as performers operating their software (moving from the central/engineer position). It seems there is also more improvisation than I recall witnessing 20 years ago.

But as for me…

In my own work, I gravitate toward duets and trios, so that it is very clear what I am doing musically, and there is room for my vocal work. My duos are with pianist Gordon Beeferman (our new CD, Pulsing Dot, was just released), percussionist Luis Tabuenca (Index of Refraction), and Clip Mouth Unit—a project with trombonist Jen Baker. I also work occasionally doing live processing with larger ensembles (with saxophonist Ras Moshe’s Music Now groups and Hans Tammen’s Third Eye Orchestra).

Playing with live sound processing is like building a fire on stage.

I have often described playing with live sound processing as like “building a fire on stage,” so I will close by taking the metaphor a bit further. There are two ways to start a fire: with a lot of planning, or with improvisation. Which method we choose depends on environmental conditions (wind, humidity, location), the tools we have at hand, and also what kind of person we are (a planner/architect, or someone more comfortable thinking on our feet).

In the same way, every performance environment affects the responsiveness and acoustics of the musical instruments used there. This is even more pertinent when “live sound processing” is the instrument. The literal weather, humidity, room acoustics, even how many people are watching the concert, all affect the de facto responsiveness of a given room, and can greatly affect the outcome, especially when working with feedback or short delays and resonances. Personally, I am a bit of both personality types—I start with a plan, but I’m also ready to adapt. With that in mind, I believe the improvising mindset is needed to work most effectively with live sound processing as an instrument.
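To make the point about short delays and feedback concrete: feeding a delayed copy of a signal back into itself forms a comb filter, and as the feedback gain creeps toward 1.0 its resonant peaks ring longer and louder, which is exactly where a room’s own response starts to push things over the edge. Here is a hypothetical NumPy sketch (an illustration only, not my performance patch):

```python
import numpy as np

def comb_filter(signal, sample_rate=48000, delay_ms=5.0, feedback=0.85):
    """Feed a delayed copy of the output back into itself (a comb filter).

    With a 5 ms delay, resonant peaks fall every 200 Hz (1 / 0.005 s);
    as `feedback` approaches 1.0 the peaks ring longer and louder, which
    is why room response and mic placement matter so much in practice.
    """
    delay = int(sample_rate * delay_ms / 1000.0)
    out = np.zeros(len(signal))
    for n in range(len(signal)):
        delayed = out[n - delay] if n >= delay else 0.0
        out[n] = signal[n] + feedback * delayed
    return out
```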

A preview of upcoming posts

What follows in my posts this month will be ideas about how to play better as an electronic musician using live acoustic instruments as sound sources. These ideas are (I hope) useful whether you are:

  • an instrumentalist learning to add electronics to your sound, or
  • an electronic musician learning to play more sensitively and effectively with acoustic musicians.

In these upcoming posts, you can read some of my discussions, explanations, and musings about delay as a musical instrument, acoustics/psychoacoustics, feedback fun, filtering/resonance, pitch-shift and speed changes, and the role of rhythm in musical interaction and being heard. These are all ideas I have tried out on many of my students at New York University and The New School, where I teach Electronic Music Performance, as well as in a Harvestworks presentation and in my one-week course on the subject at the UniArts Summer Academy in Helsinki (August 2014).


Dafna Naphtali creating music from her laptop which is connected to a bunch of cables hanging down from a table. (photo by Skolska/Prague)

Dafna Naphtali is a sound-artist, vocalist, electronic musician, and guitarist. As a performer and composer of experimental, contemporary classical, and improvised music since the mid-1990s, she creates custom Max/MSP programming incorporating polyrhythmic metronomes, Morse code, and incoming audio signals to control her sound-processing of voice and other instruments. Her other projects include music for robots, audio augmented-reality sound walks, and “Audio Chandelier” multi-channel sound works. Her new CD Pulsing Dot, with pianist Gordon Beeferman, is on Clang Label.