Spaces Speak, Are You Listening?

With the evolution of advanced electroacoustic tools, musical space became increasingly fluid, flexible, abstract, and imaginary.


The following excerpts are reprinted from Chapter Five, “Inventing Virtual Spaces for Music” of the book, Spaces Speak, Are You Listening? by Barry Blesser and Linda-Ruth Salter, pp. 164-170. Copyright (c) 2007 by the MIT Press. Used with permission of the publisher.

  • READ an interview with authors Barry Blesser and Linda-Ruth Salter.

    Prophetic visions of the future are sometimes found in the distant past, especially when brilliant minds anticipate what will be possible without being confined by their immediate reality. When Francis Bacon (1626) described the “sound houses” of his utopian college in his essay The New Atlantis, he was prophesying the electroacoustic world of contemporary music of the twentieth century:

    We have also diverse strange and artificial echoes, reflecting the voice many times, and, as it were, tossing it; and some that give back the voice louder than it came, some shriller and deeper; yea, some rendering the voice, differing in letters or articulation from that they receive. We have means to convey sounds in trunks and pipes, in strange lines and distances.

    Without tools for creating an aural space, spatiality remained subservient to other compositional elements, such as rhythm, melody, timbre, and tempo. But with the evolution of advanced electroacoustic tools, Bacon’s seventeenth-century ideas, once merely footnotes to history, would be rendered into sound for ordinary listeners to hear; musical space became increasingly fluid, flexible, abstract, and imaginary. This trend was most apparent in the second half of the twentieth century. From the perspective of electronic music, spatial design is an application of aural architecture without assuming a physical space. Musical space is unconstrained by the requirements for normal living, and musical artists are inclined to conceive of surreal spatial concepts.

    Like M. C. Escher’s lithograph of an imaginary space with interwoven staircases that simultaneously lead upward and downward, aural artists also have the freedom to construct contradictory spaces. As an analogy to a virtual aural space, M. C. Escher’s Relativity has elements of visual spatiality, but the space itself could not exist. Similarly for an aural space, we can create sounds that appear to come closer without moving, or a spatial volume that is simultaneously large and small. Modern audio engineers and electronic composers, without necessarily realizing their new role, became the aural architects of virtual, imaginary, and contradictory spaces. Aural spatiality can exist without a physical space.

    By abandoning conventional norms defining music and space, modern artists created contemporary music. Although this class of music is considered by some to be an irreverent and unpleasant form of noise, the new rules of space are still worth investigating because they exist apart from the compositional creations that incorporate them. These rules are interesting both because they predicted the popular music of the late twentieth century and because they suggest future directions for the twenty-first century. Even if some twentieth-century contemporary music has not left an enduring legacy, the new rules of aural space are likely to survive in other aspects of our art and culture.

    The rule that requires musicians to perform in a tight cluster on the stage and listeners in predefined seats in the audience area is readily broken, as is the rule that requires both musicians and listeners to maintain a static geometric relationship throughout the performance. Moreover, when knobs on equipment can alter virtual spatial attributes, the rule that requires spatial acoustics to remain constant and consistent during a performance is also easy to transcend. In the world of virtual spatiality, acoustic space and sound location are no longer based on the laws of physics; acoustic objects can change their size and location instantly. Acoustic space and sound location have become as dynamic as the sequence of notes in the composition. As with all artistic rule systems, however, breaking old rules is easier than replacing them with meaningful new ones. A few decades is a very short duration for refining a new art form.

    A virtual space is not only a compositional element in music, but also an experience that can be extracted from music and then applied elsewhere, for example, to auditory displays in the cockpit of an airplane, the fictional spaces of computer games, or the dual audiovisual spaces of cinema. In these applications, there may not be consistency among the different sensory modalities. In some sense, with the ubiquitous technology of the twenty-first century, the experience of spatiality frequently dominates the experience of a physical environment. Space is no longer just a geographic framework (near-far, front-back, up-down) for positioning sounds relative to listeners. Space is no longer just a response to the acoustics of the environment. The older definition of cognitive maps of space as the internal representation of an external world, introduced in chapter 2, becomes fluid, plastic, and even more subjective. Aural architects of virtual spaces are manipulating their listeners’ cognitive maps.

    Artistic Dimensions of Space and Location

    Composers have always understood, both intuitively and consciously, that the location of the musicians contributes to listeners’ experience of a musical space. The hidden problem with positioning musicians throughout a space is that sound waves move comparatively slowly. Large acoustic spaces produce large delays, which displace the temporal alignment of music arriving from different locations. Two notes beginning at the same time may arrive at a listener at different times. The spatial manifestation of time is an artistic issue for both listeners and performers, and as in advanced physics, time and space are related and connected concepts.

    When musicians are tightly clustered, the time for a direct sound to travel among them is small, and synchronization depends on their artistic skills alone. Conversely, when an orchestra is large and spread across the stage, the sound delay places a limit on aural synchronization. Because musicians separated by 20 meters (65 feet) will hear each other with a 60-millisecond delay, the visual cue of the conductor’s moving baton takes over the function of producing temporal consistency. When musicians in a large orchestra are perfectly synchronized in time, neither the conductor nor the listeners hear that temporal alignment because they are closer to some musicians than others. For example, a listener near the stage but far off to the left will hear a musician at the far right side of the stage with a delay after hearing a musician on the left, even though the two musicians are playing the same note at the same time. This problem is exacerbated if musicians are widely distributed throughout a large space.
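
    The arithmetic behind the 60-millisecond figure is straightforward; the minimal sketch below assumes a speed of sound of roughly 343 meters per second in air at room temperature (the exact value varies with temperature and humidity).

        # Propagation delay between separated musicians, assuming sound travels
        # at roughly 343 m/s in air at room temperature.
        SPEED_OF_SOUND_M_PER_S = 343.0

        def acoustic_delay_ms(distance_m: float) -> float:
            """One-way acoustic propagation delay, in milliseconds."""
            return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

        # Musicians 20 m apart hear each other about 58 ms late, consistent with
        # the 60 ms cited above; at 5 m the delay shrinks to roughly 15 ms.
        print(f"{acoustic_delay_ms(20):.0f} ms at 20 m, {acoustic_delay_ms(5):.0f} ms at 5 m")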

    Composers can compensate for audio delay in several ways. Tight synchronization is not required if the composer includes a temporal gap, perhaps silence, between sounds originating from widely distributed locations. The location of the musicians, which depends on the particular geometry of a space, can then become a compositional component, although when the composition depends on a specific spatial organization, the music cannot easily be transported to other spaces without adaptation. For this reason and because it is less flexible than other options, composers have seldom manipulated the spatial distribution of musicians.

    With the advent of electroacoustics, perceived location and intrinsic audio delays were separated. For example, deploying individual microphones and headphones for each musician removes the intrinsic delays when they listen to their colleagues. Unlike air as a medium, electrified sound moves through wires instantaneously. The sound engineer is therefore free to electroacoustically reposition musicians anywhere in the virtual space, without destroying the synchronization among them. Two musicians separated by a distance of 50 meters (165 feet) can still be heard synchronously. Aurally perceived location has nothing to do with actual location; virtual spaces and virtual locations break the relationship between time and space.
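
    To see why a wired path can be treated as instantaneous, the sketch below compares the two media for the 50-meter case, assuming a signal in cable propagates at about two-thirds the speed of light (the exact velocity factor depends on the cable); the comparison is illustrative rather than taken from the text.

        # Acoustic delay in air versus electrical delay in cable over 50 m.
        SPEED_OF_SOUND_M_PER_S = 343.0
        SPEED_IN_CABLE_M_PER_S = 0.66 * 299_792_458  # assumed cable velocity factor

        distance_m = 50.0
        air_delay_ms = distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0
        cable_delay_ms = distance_m / SPEED_IN_CABLE_M_PER_S * 1000.0

        # Roughly 146 ms through the air versus about 0.00025 ms through the wire:
        # for musical purposes the electrical path is effectively instantaneous.
        print(f"air:   {air_delay_ms:.1f} ms")
        print(f"cable: {cable_delay_ms:.5f} ms")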

    Anyone who creates a complete sound field that produces the experience of spatiality is functioning as an aural architect. Traditionally, sound sources from loudspeakers were viewed as injecting sonic events into a listening space, but with the advent of surround-sound reproduction, the sound field includes, and in some cases replaces, the experience of the listening space. This chapter traces the history and evolution of space in music, ending with the aural architecture of virtual spaces.

    Incorporating Location within Traditional Music

    Many of the spatial ideas found in contemporary music originated from an earlier period when musicians were occasionally distributed within the performance space. There is a long tradition of antiphonal music, a dialogue of call and response among distinct groups of musicians at different locations, which does not require tight synchronization or simultaneous playing. This style is found in the chanting of psalms by Jews in biblical times, and in early Christian music dating from the fourth century. In the late sixteenth century, Giovanni Gabrieli extended the tradition of cori spezzati (divided choirs) as an adaptation to the unique architecture of Saint Mark’s Cathedral in Venice (Grout, 1960). The musical space was vast, and it contained two widely separated organs and choirs at opposite sides of the cathedral. Adapting to that uniqueness, composers at Saint Mark’s featured a dramatic use of antiphony between the halves of the double choir. The penchant to divide performers was also part of the Venetian polychoral tradition, started by Adrian Willaert and culminating with nine choral groups distributed throughout the cathedral (Mason, 1976). The refinement of cori spezzati represented a musical revolution, and also appeared in secular music of this and earlier periods, such as madrigals with echoes (Arnold, 1959). By the twentieth century, the use of spatially distributed musicians became less unusual and more innovative. Richard Zvonar (1999) cites numerous examples. Charles Ives, in The Unanswered Question (1908), placed the strings offstage to contrast with the onstage trumpet soloist and woodwind ensembles. He was influenced by his father, a Civil War bandmaster and music teacher, who had experimented with two marching bands approaching the town center from different directions. Henry Brant then extended the idea in Antiphony I (1953) and Voyager Four (1963) with five ensemble groups placed along the front, back, and sides of the space. Three conductors were required.

    For modern composers, dispersing musical sources throughout a space is no longer revolutionary; location is an active component of a composition. Antiphony and spatial distribution evolved into a space-time continuum, which Maja Trochimczyk (2001) calls “spatiotemporal texture.” At any time, a musical voice could appear from any direction, and by intentionally sequencing attributes of space, time, pitch, and timbre, a voice can create the illusion of movement (changing position) and transformation (changing size). When used in this way, space is a musical dimension. Charles Hoag, in Trombonehenge (1980), used thirty trombones surrounding the audience as an imitation of Stonehenge, and R. Murray Schafer, in Credo (1981), surrounded the audience with twelve mixed choirs. Extending the blending of musicians and listeners still further, Iannis Xenakis scattered 88 musicians among the audience so that the listeners are actually inside the music; in another of his compositions, musicians moved through the space rather than remaining seated.

    Based on traditional theory, music has a temporal and pitch structure, and within those dimensions, a composer manipulates musical voices so that they either fuse into a unitary whole or remain segregated as distinct elements (musical layers). Contemporary music, however, has added a spatial dimension. Composers now require new rules for manipulating fusion and segregation. The proliferation of compositions that manipulate space signifies a new form of sound imagery (Trochimczyk, 2001).

    An analysis of contemporary music is made even more complex by the addition of two related ideas: incorporating the spatial dimension of voice location, and elevating sonic segregation over fusion and blending. During the last century, even without using space as an artistic element, Western music abandoned fusion as a prerequisite. Layered musical elements retain more of their perceptual identity when not fused. Space has become just another tool for creating musical layers. Maria Anna Harley (1998) analyzed spatial music in terms of perceptual principles that contribute to segregating musical elements. By drawing on Albert S. Bregman’s Auditory Scene Analysis (1990), she applied the principles of perceptual psychology to music. Spatial differences between sound sources that result in temporal differences at the ears augment the aurally perceived segregation of musical elements. Like differences in time, pitch, timbre, and attack, differences in spatial location are yet another means to enhance this segregation. In other words, similar but not identical sounds belong to separate musical layers when they are also spatially separated. Disparate locations de-emphasize fusion. Many modern composers, such as Bartók, Boulez, and Stockhausen, intuitively use this principle in their music.
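
    The scale of those “temporal differences at the ears” can be estimated with the Woodworth spherical-head approximation; the sketch below assumes a head radius of 8.75 centimeters, and both the model and the radius are illustrative assumptions rather than values from the text.

        import math

        # Woodworth approximation of interaural time difference (ITD) for a
        # distant source: ITD = (a / c) * (theta + sin(theta)).
        HEAD_RADIUS_M = 0.0875          # assumed average head radius
        SPEED_OF_SOUND_M_PER_S = 343.0

        def itd_microseconds(azimuth_deg: float) -> float:
            """Approximate ITD, in microseconds, for a source at a given azimuth."""
            theta = math.radians(azimuth_deg)
            return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_PER_S) * (theta + math.sin(theta)) * 1e6

        # A source 30 degrees off-center arrives at the far ear about 260 us later;
        # at 90 degrees the difference grows to roughly 660 us. Differences on this
        # scale help listeners assign sounds to separate locations and layers.
        for azimuth in (0, 30, 90):
            print(f"{azimuth:>2} deg -> {itd_microseconds(azimuth):5.0f} us")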

    That twentieth-century music drifted away from fusion is consistent with spatial separation of sound sources. As a means of preventing fusion, Brant (1967) used several artistic principles that derive from spatial separation. In one composition, he illustrated his concepts by distributing stringed instruments along the walls on the ground floor of a concert hall, as well as in the first, second, and third balconies, thereby creating a broad and intense wave of sound. Spatial separation preserved the clarity of contrasting layers, especially when different musical elements were in the same register. Because identical or harmonically related notes in two musical layers would typically fuse if not spatially separated, spatial separation afforded the composer greater musical flexibility by permitting increased complexity without concern for unintended confusion. Placing the performers below, above, behind, or to the side of listeners is not intrinsically interesting. Indeed, serializing the direction of music from a sequence of orientations or choosing an arbitrary geometric shape for performer location is, for Harley (1998), simply a failure to understand the new art. Spatial music is interesting precisely because, and only because, it allows combinations of musical elements that would otherwise be artistically weak. As if to prove this assertion, Trevor Wishart (1996) analyzed spatial movement in soundscape art, apart from a musical context, and came to a similar conclusion about space as a segmentation tool.

    In her summary of musical space, Harley (1998) concluded that “geometric floor plans and performance placement diagrams are integral, though inaudible, elements of the musical structure – as integral and inaudible as some abstract orderings in the domains of pitch and rhythm.” The spatial organization of sound sources and listener locations is a component of music. Yet even when the musical score carefully specifies an organization in time and space, the composer is still constrained by the inherent inability of physically separated performers to achieve precise timing.

    Consider two musicians located at different places but playing the same note on the same instrument. Using the concepts of Pierre Boulez (1971), there are four important cases that differ only in relative timing: simultaneous beginning and ending (fused), delayed onset of one musician’s note relative to the other’s but still overlapping (conjunctive interval), a small temporal gap between the end of one musician’s note and the beginning of the other’s (disjunctive interval), and a large delay between the two musicians’ notes (distinct sonic events). The fused case corresponds to a distributed choir singing in unison, and the last case corresponds to the historical use of antiphony. The middle two cases are interesting because they have the potential to create the perception of virtual movement, which Boulez calls “mobile distribution” or “dynamic relief.” In contrast, a fixed distribution or static relief represents a static state without kinematics. Timing has always been a critical dimension in composition, but timing combined with space becomes two-dimensional: spatiotemporal.
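
    Boulez’s four cases reduce to a comparison of onsets and offsets; the sketch below represents each note as an (onset, offset) pair in seconds, and the threshold separating a disjunctive interval from distinct sonic events is an arbitrary choice for illustration rather than a value given in the text.

        # Classify the relative timing of two notes into the four cases above.
        DISTINCT_GAP_S = 1.0  # assumed threshold for "a large delay"

        def classify_timing(note_a, note_b, distinct_gap_s=DISTINCT_GAP_S):
            (a_on, a_off), (b_on, b_off) = sorted((note_a, note_b))
            if a_on == b_on and a_off == b_off:
                return "fused"                  # simultaneous beginning and ending
            if b_on < a_off:
                return "conjunctive interval"   # delayed onset but still overlapping
            gap = b_on - a_off
            if gap < distinct_gap_s:
                return "disjunctive interval"   # small gap between end and beginning
            return "distinct sonic events"      # large delay between the notes

        print(classify_timing((0.0, 2.0), (0.0, 2.0)))  # fused
        print(classify_timing((0.0, 2.0), (1.0, 3.0)))  # conjunctive interval
        print(classify_timing((0.0, 2.0), (2.2, 4.0)))  # disjunctive interval
        print(classify_timing((0.0, 2.0), (5.0, 7.0)))  # distinct sonic events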

    This extra spatial dimension, in addition to preserving segregation of musical textures, offers other possibilities. A disjunctive interval can produce a sudden change in the aurally perceived location of a musician, and a conjunctive interval can produce a smooth transition between the two locations, a spatial glissando. However, both effects are fragile, depending on the skill of the musicians to control timing, pitch, timbre, attack onset, and termination. And both effects depend on the location of the listener relative to the musicians. Musical movement is therefore an illusion, or a metaphoric allusion, rather than an imitation of a physical process. In addition to this change in perceived location, true motion of a sound source produces a Doppler frequency shift. Whereas physical motion in physical space has a reality, virtual motion in virtual spaces is an artistic prerogative.
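
    The Doppler shift mentioned above can be made concrete with the standard formula for a source approaching a stationary listener, f_observed = f_source * c / (c - v); the numbers below are illustrative assumptions rather than figures from the text.

        # Doppler shift heard from a real moving source, which a purely virtual
        # "movement" omits unless the aural architect adds it deliberately.
        SPEED_OF_SOUND_M_PER_S = 343.0

        def doppler_shift_hz(f_source_hz: float, approach_speed_m_s: float) -> float:
            """Observed frequency for a source approaching a stationary listener."""
            return f_source_hz * SPEED_OF_SOUND_M_PER_S / (SPEED_OF_SOUND_M_PER_S - approach_speed_m_s)

        # A 440 Hz source approaching at 10 m/s is heard near 453 Hz.
        print(round(doppler_shift_hz(440.0, 10.0), 1))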