
Delays as Music


Written By

Dafna Naphtali

As I wrote in my previous post, I view performing with “live sound processing” as a way to make music by altering and affecting the sounds of acoustic instruments—live, in performance—and to create new sounds, often without the use of pre-recorded audio. These new sounds have the potential to forge an independent and unique voice in a musical performance. However, their creation requires, especially in improvised music, a unique set of musicianship skills and knowledge of the underlying acoustics and technology being used. And it requires that we consider the acoustic environment and spectral qualities of the performance space.

Delays and Repetition in Music

The use of delays in music is ubiquitous.  We use delays to locate a sound’s origin, to create a sense of size/space, to mark musical time, to create rhythm, and to delineate form.


As a musical device, echo (or delay) predates electronic music. It has been used in folk music around the world for millennia to repeat short phrases: from Swiss yodels to African call and response, in songs sung in the round and in complex canons, and in performances that take advantage of unusual acoustic spaces (e.g. mountains and canyons, churches, and unusual buildings).

In contemporary music, too, delay and reverb effects from unusual acoustic spaces have figured in the Deep Listening cavern music of Pauline Oliveros, in experiments using the infinite reverbs in the Tower of Pisa (Leonello Tarbella’s Siderisvox), and in organ work at the Cathedral of St. John the Divine in New York using its 7-second delay. For something new, I’ll recommend the forthcoming work of my colleague, trombonist Jen Baker (Silo Songs).

Of course, delay was also an important tool in the early studio tape experiments of Pierre Schaeffer (Étude aux chemins de fer) as well as Terry Riley and Steve Reich. The list of early works using analog and digital delay systems in live performances is long and encompasses many genres of music outside the scope of this post—from Robert Fripp’s Frippertronics to Miles Davis’s electric bands (where producer Teo Macero altered the sound of Sonny Sharrock’s guitar and many other instruments) and Herbie Hancock’s later Mwandishi Band.

The use of delays changed how the instrumentalists in those bands played.  In Miles’s work we hear not just the delays, but also improvised instrumental responses to the sounds of the delays and—completing the circle—the electronics performers responding in kind by manipulating their delays. Herbie Hancock was using delays to expand the sound of his own electric Rhodes, and as Bob Gluck has pointed out (in his 2014 book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band), he “intuitively realized that expressive electronic musicianship required adaptive performance techniques.” This is something I hope we can take for granted now.

I’m skipping any discussion of the use of echo and delay in other styles (as part of the roots of Dub, ambient music, and live looping) in favor of talking about the techniques themselves, independent of the trappings of a specific genre, and focusing on how they can be “performed” in improvisation and treated as electronic musical sounds rather than effects.

Sonny Sharrock processed through an Echoplex by Teo Macero on Miles Davis’s “Willie Nelson” (which is not unlike some recent work by Jonny Greenwood)

By using electronic delays to create music, we can create exact copies or severely altered versions of our source audio and still recognize them as repetitions, just as we might recognize each repetition of the theme in a piece organized as a theme and variations, or a leitmotif repeated throughout a work. Besides the relationship of delays to acoustic music, the vastly different types of sounds that we can create via these sonic reflections and repetitions have a corollary in visual art, both conceptually and gesturally. I find these analogies especially useful when teaching. The works from the visual and performing arts that have inspired my own work include images, video, and dance pieces built on repetition (exact or distorted), Mandelbrot-like recursion (reflections, altered or displaced and re-reflected), shadows, and delays.  The examples below are analogous to many sound processes I find possible and interesting for live performance.


I am a musician, not an art critic/theorist, but I grew up in New York, taken to MoMA weekly by my mother, a modern dancer who studied with Martha Graham and José Limón.  It is not an accident that I want to make these connections. There are many excellent essays on the subject of repetition in music and electronic music, which I have listed at the end of this post.  I include the images and links below as a way to denote that the influences in my electroacoustic work are not only in music and audio.

In “still” visual art works:

  • The reflected, blurry trees in the water of a pond in Claude Monet’s Poplar series create new composite and extended images, a recurring theme in the series.
  • Both the woman and her reflection in Pablo Picasso’s Girl Before a Mirror are abstracted, and interestingly the mirror itself is both the vehicle for the reiteration and an object depicted in its own right.
  • There are also repetitions, patterns, and “rhythms” in work by Chuck Close, Andy Warhol, Sol LeWitt, M.C. Escher, and many other painters and photographers.

In time-based/performance works:

  • Fase, Four Movements to the Music of Steve Reich, is a dance choreographed in 1982 by Anne Teresa De Keersmaeker to four early pieces by Steve Reich. De Keersmaeker uses shadows with the dancers. The shadows create a third (and fourth and fifth) dancer that shifts in and out of focus, turning the reflected image into a kind of sleight-of-hand partner for the live dancers.
  • Iteration plays an important role in László Moholy-Nagy’s short films, shadow play constructions, and his Light Space Modulator (1930).
  • Reflection/repetition/displacement are inherent to the work of countless experimental video artists, starting with Nam June Paik, who work with video synthesis, feedback, and modified TVs/equipment.

It is also worth considering that natural, nearly exact reflections can be experienced as beautifully surreal. On a visit to the Okefenokee swamp in Georgia long ago, my friends and I rode in small flat boats on Mirror Lake and felt we were part of a Roger Dean cover for a new Yes album.

Okefenokee Swamp


Natural reflections, even when nearly exact, usually have some small change—a play in the light or color, or slight asymmetry—that gives them away. In all of my examples, the visual reflection is not “the same” as the original.   These nonlinear differences are part of the allure of the images.

These images are all related to how I understand live sound processing to act on my audio sources.

  • Perfect mirrors create surreal new images/objects extending away from the original.
  • Distorted reflections (anamorphosis) create a more separate identity for the created image, one that can be understood as emanating from the source image but that is inherently different in its new form.
  • Repetition/mirrors: many exact or near-exact copies of the same image/sound form patterns, rhythms, or textures, creating a new composite sound or image.
  • Phasing/shadows (time-based or time-connected): the reflected image changes over time in its physical placement with regard to the original, creating a potentially new composite sound.

Most of these ways of working require more than simple delay and benefit from speed changes, filtering, pitch-shift/time-compression, and other things I will delve into in the coming weeks.


We should consider the myths of Echo and Narcissus both as analogies and warning tales for live electroacoustic music. When we use delays and reverb, we hear many copies of our own voice/sound overlapping each other, and we create simple musical reflections of our own sound, smoothed out by the overlaps and amplified into a more beautiful version of ourselves!  Warning!  Just as when we sing in the shower, we might fall in love with the sound (to the detriment of the overall sound of the music).


Getting Techie Here – How Does Delay Work?

Early Systems: Tape Delay

A drawing by Mark Ballora showing the path of a piece of magnetic tape between the reels, past the erase, record, and playback heads, demonstrating how tape delay works. (Image reprinted with permission.)

The earliest method used to artificially create the effect of an echo or simple delay was to take advantage of the spacing between the record and playback heads on a multi-track tape recorder. The output of the playback head could be routed back to the record head and re-recorded on a different track of the same machine.  That signal would then be read again by the playback head (on its new track).  The signal is delayed by the amount of time it takes the tape to travel from the record head to the playback head.

The delay time is determined by the physical distance between the tape heads and by the tape speed being used.  One limitation is that delay times are limited to those that can be created at the playback speed of the tape. (e.g. At a tape speed of 15 inches per second (ips), tape heads spaced 3/4 to 2 inches apart can create echoes of 50ms to 133ms; the same spacing at 7ips yields 107ms to 285ms, etc.)
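If it helps to see that arithmetic spelled out, here is a small sketch in Python (the head spacings and tape speeds are simply the example values above): delay time is head spacing divided by tape speed.

```python
# Delay time of a tape delay: head spacing / tape speed.
# Spacings and speeds below are just the example values from the text.

def tape_delay_ms(head_spacing_inches, tape_speed_ips):
    """Delay in milliseconds for a given record-to-playback head spacing and tape speed."""
    return head_spacing_inches / tape_speed_ips * 1000.0

for speed in (15.0, 7.0):            # tape speed in inches per second (ips)
    for spacing in (0.75, 2.0):      # head spacing in inches
        print(f"{spacing} in at {speed} ips -> {tape_delay_ms(spacing, speed):.1f} ms")

# 0.75 in at 15 ips ->  50.0 ms     2.0 in at 15 ips -> 133.3 ms
# 0.75 in at  7 ips -> 107.1 ms     2.0 in at  7 ips -> 285.7 ms
```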

Here is an example of analog tape delay in use:

Longer/More delays: By using a second tape recorder, we can make a longer sequence of delays, but it would be difficult to emulate natural echoes and reverberation because all our delay lengths would be simple multiples of the first delay. Reverbs have a much more complex distribution of many, many small delays. The output volume of those delays decreases differently (more linearly) in a tape system than it would in a natural acoustic environment (more exponentially).

More noise: Another side effect of creating the delays by re-recording audio is that after many recordings/repetitions the audio signal will start to degrade, affecting its overall spectral qualities, as the high and low frequencies die out more quickly, eventually degrading into, as Hal Chamberlin has aptly described it in his 1985 book Musical Applications of Microprocessors, a “howl with a periodic amplitude envelope.”

Added noise from degradation and overlapped voice and room acoustics is turned into something beautiful in I Am Sitting In A Room, Alvin Lucier’s seminal 1969 work.  Though not technically using delay, the piece is a slowed-down microcosm of what happens to sound when we overlap and re-record many, many copies of the same sound and its related room acoustics.

A degree of unpredictability certainly enhances the use of any musical device being used for improvisation, including echo and delay. Digital delay makes it possible to overcome the inherent inflexibility and static quality of most tape delay systems, which remain popular for other reasons (e.g. audio quality, or the nostalgia discussed below).

The list of influential pieces using a tape machine for delay is canonically long.  A favorite of mine is Terry Riley’s piece, Music for the Gift (1963), written for trumpeter Chet Baker. It was the first use of very long delays on two tape machines, something Riley dubbed the “Time Lag Accumulator.”

Terry Riley: Music for the Gift III with Chet Baker

Tape delay was used by Pauline Oliveros and others from the San Francisco Tape Music Center for pieces that were created live as well as in the studio, with no overdubs, and which therefore could be considered performances and not just recordings.   The Echoplex, created around 1959, was one of the first commercially manufactured tape delay systems and was widely used in the ‘60s and ‘70s. Advances in the design of commercial tape delays, including the addition of more (and moveable) tape heads, increased the number of delays and the flexibility to change delay times on the fly.

Stockhausen’s Solo (1966), for soloist and “feedback system,” was first performed live in Tokyo using seven tape recorders (the “feedback” system) with specially adjustable tape heads to allow music played by the soloist to “return” at various delay times and combinations throughout the piece.  Though technically not improvised, Solo is an early example of tape music for performed “looping.”  All the music was scored, and the choice of which tape recorders would be used, and when, was determined prior to each performance.


Despite many advances in tape delay, today digital delay is much more commonly used, whether in an external pedal unit or computer-based. This is because it is convenient—it’s smaller, lighter, and easier to carry around—and because it is much more flexible. Multiple outputs don’t require multiple tape heads or more tape recorders. Digital delay enables quick access to a greater range of delay times, and the maximum delay time is simply a function of the available memory (and memory is much cheaper than it used to be).   Yet, in spite of the convenience and expandability of digital delay, there is continued use of analog tape delay in some circles.  I would simply characterize this as nostalgia (for the physicality of the older devices and of handling analog tape, and for the warmth of analog sound, all of which we associate with music from an earlier time).

What is a Digital Delay?

Delay is the most basic component of most digital effects systems, and so it’s critical to discuss it in some detail before moving on to some of the effects that are based upon it.   Below, and in my next post, I’ll also discuss some physical and perceptual phenomena that need to be taken into consideration when using delay as a performance tool / ersatz instrument.

Basic Design

In the simplest terms, a “delay” is simple digital storage.  One audio sample, or a small block of samples, is stored in memory and can then be read, played back at some later time, and used as output. A one-second delay (1000ms), mono, requires storing one second of audio. (At a 16-bit CD sample rate of 44.1kHz, this means about 88KB of data.) These sizes are teeny by today’s standards, but if we use many delays or very long delays it adds up. (It is not infinite or magic!)
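As a rough sketch of that “store, then read back later” idea (my own illustration, not any particular product’s code), a mono digital delay can be as simple as a circular buffer:

```python
import numpy as np

def delay_line(x, delay_samples):
    """Store each incoming sample, read it back delay_samples later (a plain mono delay)."""
    buf = np.zeros(delay_samples)        # the digital "storage": 44100 samples (~88 KB
                                         # as 16-bit audio) buys a one-second delay
    out = np.zeros(len(x))
    write = 0
    for n, sample in enumerate(x):
        out[n] = buf[write]              # play back what was stored delay_samples ago
        buf[write] = sample              # store the current input for later playback
        write = (write + 1) % delay_samples
    return out

# e.g. a 250 ms delay at 44.1 kHz (input_signal is any mono float array):
# delayed = delay_line(input_signal, int(0.25 * 44100))
```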

Besides being used to create many types of echo-like effects, a simple one-sample delay is also a key component of the underlying structure of all digital filters and many reverbs.  An important distinction between each of these applications is the length of the delay. As described below, when a delay time is short, the input sounds get filtered, and with longer delay times other effects such as echo can be heard.
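For instance (a minimal illustration, not anyone’s production filter code), the smallest possible delay, a single sample, already acts as a filter once its output is mixed back with the input:

```python
def one_sample_lowpass(x):
    """y[n] = 0.5 * (x[n] + x[n-1]): averaging a signal with a one-sample-delayed
    copy of itself cancels the highest frequencies, i.e. a simple low-pass filter."""
    prev = 0.0                 # the one-sample delay element
    out = []
    for sample in x:
        out.append(0.5 * (sample + prev))
        prev = sample
    return out
```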

Perception of Delay — Haas (a.k.a. Precedence) Effect

Did you ever drop a pin on the floor?   You can’t see it, but you still know exactly where it fell? We humans naturally have a set of skills for sound localization.  These psychoacoustic phenomena have to do with how we perceive the very small time, volume, and timbre differences between the sounds arriving at our two ears.

In 1949, Helmut Haas made observations about how humans localize sound by using simple delays of various lengths and a simple 2-speaker system.  He played the same sound (speech, short test tones), at the same volume, out of both speakers. When the two sounds were played simultaneously (no delay), listeners reported hearing the sound as if it were coming from the center point between the speakers (an audio illusion not very different from how we see).  His findings give us some clues about stereo sound and how we know where sounds are coming from.  They also relate to how we work with delays in music.

  • Between 1-10ms delay: If the delay between the sounds was anywhere from 1ms to 10ms, the sound appears to emanate from the first speaker (the first sound we hear is where we locate the sound).
  • Between 10-30ms delay: The sound source continues to be heard as coming from the primary (first sounding) speaker, with the delay/echo adding a “liveliness” or “body” to the sound. This is similar to what happens in a concert hall—listeners are aware of the reflected sounds but don’t hear them as separate from the source.
  • Between 30-50ms delay: The listener becomes aware of the delayed signal, but still senses the direct signal as the primary source. (Think of the sound of “Attention shoppers!” in a big-box store.)
  • At 50ms or more: A discrete echo is heard, distinct from the first heard sound, and this is what we often refer to as a “delay” or slap-back echo.

The important fact here is that when the delay between speakers is lowered to 10ms (1/100th of a second), the delayed sound is no longer perceived as a discrete event. This is true even when the volume of the delayed sound is the same as the direct signal. [Haas, “The Influence of a Single Echo on the Audibility of Speech” (1949)]

A diagram of the Haas effect showing how the position of the listener in relationship to a sound source affects the perception of that sound source.

The Haas Effect (a.k.a. Precedence Effect) is related to our skill set for sound localization and other psychoacoustic phenomena. Learning a little about these phenomena (Interaural Time Difference, Interaural Level Difference, and Head Shadow) is useful not only for an audio engineer, but is also important for us when considering the effects and uses of delay in Electroacoustic musical contexts.
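If you want to hear these ranges for yourself, here is a small test sketch of my own (the test tone and file names are arbitrary choices) that writes stereo files in which the right channel is simply a delayed copy of the left, at delays drawn from the ranges above:

```python
import numpy as np
import wave

sr = 44100
t = np.arange(int(0.3 * sr)) / sr
tone = np.sin(2 * np.pi * 440 * t) * np.exp(-8 * t)   # a short, decaying test tone

for ms in (0, 5, 20, 40, 80):                          # delays spanning the Haas ranges above
    d = int(sr * ms / 1000)
    left = np.concatenate([tone, np.zeros(d)])         # direct signal
    right = np.concatenate([np.zeros(d), tone])        # identical copy, delayed, same volume
    stereo = np.stack([left, right], axis=1)
    pcm = (np.clip(stereo, -1, 1) * 32767).astype(np.int16)
    with wave.open(f"haas_{ms}ms.wav", "wb") as f:
        f.setnchannels(2)
        f.setsampwidth(2)          # 16-bit samples
        f.setframerate(sr)
        f.writeframes(pcm.tobytes())
```

Listening on speakers, the 0–10ms files should still seem to come from one place, while the longer delays separate into a distinct echo.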

What if I Want More Than One?

Musicians usually want the choice to play more than one delayed sound, or to repeat their sound several times. We do this by adding more delays, or by using feedback, routing a portion of our output right back into the input. (Delaying our delayed sound is something like an audio hall of mirrors.) We usually route back only some of the sound (not 100%) so that each time the output is a little quieter and the sound eventually dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an endless repeat, and may even overload/clip if new sounds are added.
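Here is a hedged sketch of that routing (the “hall of mirrors”), extending the simple delay line above with a feedback control that decides how quickly the echoes die away:

```python
import numpy as np

def feedback_delay(x, delay_samples, feedback=0.5, mix=0.5):
    """Echoes that repeat and decay: a portion of the delayed output is
    routed back into the delay line's input."""
    buf = np.zeros(delay_samples)
    out = np.zeros(len(x))
    write = 0
    for n, dry in enumerate(x):
        wet = buf[write]                       # what went in delay_samples ago (plus feedback)
        out[n] = (1 - mix) * dry + mix * wet
        buf[write] = dry + feedback * wet      # feedback < 1.0: echoes fade out;
                                               # feedback >= 1.0: endless repeat, risk of clipping
        write = (write + 1) % delay_samples
    return out
```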

When two or more copies of the same sound event play at nearly the same time, they comb filter each other. Our sensitivity to the small differences in timbre that result is key to understanding, for instance, why the many reflections in a performance space don’t usually get mistaken for the real thing (the direct sound).   Likewise, if we work with multiple delays or feedback, when multiple copies of the same sound play over each other, they also necessarily interact and filter each other, causing changes in the timbre. (This relates again to I Am Sitting In A Room.)
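A back-of-the-envelope way to see where that comb filtering lands: summing a sound with a copy of itself delayed by t seconds cancels the frequencies whose half-period lines up with the delay. (The 1ms delay below is just an assumed example.)

```python
# Notch frequencies of a feedforward comb filter: f = (2k + 1) / (2 * t)
delay_ms = 1.0                                   # an assumed example delay
t = delay_ms / 1000.0
notches = [(2 * k + 1) / (2 * t) for k in range(4)]
print(notches)   # [500.0, 1500.0, 2500.0, 3500.0] Hz: evenly spaced dips, hence a "comb"
```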

In the end, all of the above (delay length, use of feedback or additional delays, overlap) determines how we perceive the music we make using delays as a musical instrument. I will discuss feedback and room acoustics and their potential role as a musical device in the next post later this month.


My Aesthetics of Delay

To close this post, here are some opinionated conclusions of mine based upon what I have read/studied and borne out in many, many sessions working with other people’s sounds.

  • Short delay times tend to change our perception of the sound: its timbre, and its location.
  • Sounds that are delayed longer than 50ms (or even up to 100ms for some musical sounds) become echoes, or musically speaking, textures.
  • At the in-between delay times (the 30-50ms range, give or take a little) it is the input (the performed sound itself) that determines what will happen. Speech sounds or other percussive sounds with a lot of transients (high-amplitude, short-duration) will respond differently than long resonant tones (which will likely overlap and be filtered). It is precisely in this domain that the live sound-processing musician needs to do extra listening and evaluating to gain experience and predict what might be the outcome. Knowing what might happen in many different scenarios is critical to creating a playable sound processing “instrument.”

It’s About the Overlap

Using feedback on long delays, we create texture or density, as we overlap sounds and/or extend the echoes to create rhythm.  With shorter delays, using feedback instead can be a way to move toward the resonance and filtering of a sound.  With extremely short delays, control over feedback to create resonance is a powerful way to create predictable, performable, electronic sounds from nearly any source. (More on this in the next post.)
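As a small preview (a rough rule of thumb rather than a recipe from this series), the pitch of that resonance is set by the delay time: a delay of 1/f seconds with high feedback rings at frequency f and its harmonics.

```python
sr = 44100
f0 = 440.0                              # desired resonant pitch in Hz
delay_samples = round(sr / f0)          # ~100 samples, about 2.27 ms at 44.1 kHz
# Feeding almost any input through feedback_delay(x, delay_samples, feedback=0.95)
# (the sketch above) produces a ringing, pitched resonance near 440 Hz.
```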


Live processing (for me) all boils down to these small differences in delay times—between an original sound and its copy (very short, medium, and long delays).  It is a matter of the sounds overlapping in time or not.  When they overlap (due to short delay times or use of feedback), we hear filtering.  When the sounds do not overlap (delay times are longer than the discrete audio events), we hear texture.  A good deal of my own musical output depends on these two facts.


Some Further Reading and Listening

On Sound Perception of Rhythm and Duration

Karlheinz Stockhausen’s 1972 lecture The Four Criteria of Electronic Music (Part I)
(I find intriguing Stockhausen’s discussion of unified time structuring and his description of the continuum of rhythms: from those played very fast (creating timbre), to medium fast (heard as rhythms), to very very slow (heard as form). This lecture both expanded and confirmed my long-held ideas about the perceptual boundaries between short and long repetitions of sound events.)

Pierre Schaeffer’s 1966 Solfège de l’Objet Sonore
(A superb book and accompanying CDs with 285 tracks of example audio. Particularly useful for my work and the discussion above are sections on “The Ear’s Resolving Power” and “The Ear’s Time Constant” and many other of his findings and examples. [Ed. note: Andreas Bick has written a nice blog post about this.])

On Repetition in All Its Varieties

Jean-François Augoyard and Henri Torgue, Sonic Experience: A Guide to Everyday Sounds (McGill-Queen’s University Press, 2014)
(See their terrific chapters on “Repetition”, “Resonance” and “Filtration”)

Elizabeth Hellmuth Margulis, On Repeat: How Music Plays the Mind (Oxford University Press, 2014)

Ben Ratliff, Every Song Ever (Farrar, Straus and Giroux, 2016)
(Particularly the chapter “Let Me Listen: Repetition”)

Other Recommended Reading

Bob Gluck’s book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band (University of Chicago Press, 2014)

Michael Peters’s essay “The Birth of the Loop”
http://preparedguitar.blogspot.de/2015/04/the-birth-of-loop-by-michael-peters.html

Phil Taylor’s essay “History of Delay”

My chapter “What if your instrument is Invisible?” in the 2017 book Musical Instruments in the 21st Century as well as my 2010 Leonardo Music Journal essay “A View on Improvisation from the Kitchen Sink” co-written with Hans Tammen.

LiveLooping.org
(A musician-community-built site around the concept of live looping, with links to tools, writing, events, etc.)

Some listening

John Schaefer’s WNYC radio program “New Sounds” has featured several episodes on looping:
  • Looping and Delays
  • Just Looping Strings
  • Delay Music

And finally something to hear and watch…

Stockhausen’s former assistant Volker Müller performing on generator, radio, and three tape machines