[Ed. Note: Beginning this Sunday, June 5, 2016, the seventh New York City Electroacoustic Music Festival (NYCEMF) will present a total of 35 concerts over the course of ten days, beginning with a series of seven programs in Brooklyn at National Sawdust—as part of the New York Philharmonic’s Biennial—from June 5 to 7 and then a marathon of 28 events over the course of seven days—June 13-19—at the Abrons Arts Center in Manhattan. A complete listing of all of the music that will be featured can be found on the NYCEMF website. To coincide with the latest iteration of the important annual musical immersion we’ve asked two composers who work extensively with electronics—Eric Chasalow and Alice Shields—to offer their thoughts on the current state of the art over the course of the next eight weeks. This week, Chasalow explains his preference for the term electroacoustic music to describe this broad field.—FJO]
When I interviewed Milton Babbitt in 1997 for The Video Archive of Electroacoustic Music, one of the things on my mind was the changing nomenclature for the work we did in the studio. As I began the interview, I casually said, “The main thing we are doing here, Milton, is talking about the history of electronic music. Everyone is calling it electroacoustic music now…” Before I could continue, he replied, “I don’t give a damn what we call it, as long as we both know what we are talking about, which is such a rarity these days.”
Many people, it turns out, do give a damn. I find the topic coming up with students almost every year, and it continues to be discussed even among composers who have been working in the field for decades. Old habits die hard, and twenty years later, I most often still catch myself calling it electronic music, even as that term has been appropriated by popular music. To clarify my own point of view, I find electroacoustic works best. It distinguishes what we do from “other” music, while attempting to encompass many modes of production and performance. The term has also come into wide and less controversial usage over the past twenty years (though not without some continued confusion). There are many other terms, though, that people still carefully choose, and the “what and why” of those choices is the main point of my following musings.
I wish I could say with confidence that I don’t give a damn, as Milton did, but the truth is much more nuanced than that. Babbitt and I shared a context that gave the term “electronic music” a meaning. Yet even then, our meaning—the Columbia-Princeton Electronic Music Center meaning—was different in critical ways from that of the Cologne Studio a decade or so before, elektronische Musik referring to the particular source material. In the late 1940s, Pierre Schaeffer’s new term, musique concrète, was more than a description of a new way of making music; it was a political stance, announcing something fundamentally new about music and its relationship to its material. I still hear people express that position from time to time, and I admire their conviction. Yet today, musique concrète has little cultural currency, and we tend to use it more casually, if at all. I like to align my own work historically by describing it as “a kind of super musique concrète” or “hyper musique concrète,” employing a wide array of pre-recorded material—“samples” as most people would call them now. (The baggage associated with that term bears getting back to one day.) I feel this helps me communicate effectively about what I do and why, connecting to a meaningful history, even though my philosophy is quite different from Schaeffer’s. Most of us now would be tossed out of the studio by the true believers, so specific was the original meaning of musique concrète.
Among colleagues, this conversation purports to be about discovering some logical endpoint. In a paper presented at the Electroacoustic Music Studies Network (EMS) conference in Beijing in 2006, Leigh Landy writes, “Terminology: We don’t all have to agree, but the current state of affairs is embarrassing.” I share Landy’s desire to find a common nomenclature across our discipline; that is, insofar as one agrees that composers of electroacoustic music constitute a discipline. It may be that within a professional organization such as the EMS Network, we might make progress on that front. However, the music world is huge and wonderfully messy, and we have plenty of evidence that agreement in terminology beyond a small cohort of academics is just not going to happen. The terminology changes as the music changes, and arguments about what defines a new kind of music require new language. For better or worse, no one is asking academics what to call things.
Outside the world of so-called art music, the term electronic music encompasses a mind-boggling array of subgenres and sub-subgenres, defined by anything from the city where a style started (e.g., Chicago house) to tempo (dubstep sits around 138 to 142 bpm) to a specific synthesizer sound (trap is defined by the use of Roland TR-808 kick drum sounds). This parsing of subgenres under one overarching definition of electronic music has solidified and moved into the mainstream of popular music. The conversation I am writing about runs in parallel, largely unknown and irrelevant to that worldview. We inhabit different spaces, even if we identify with an overlapping set of pioneers (think Bob Moog). Not that there aren’t composers who know and care about both worlds. Many of us do make music in multiple genres. Attempts at creating a history that encompasses both the popular and non-, however, have been more confusing than enlightening. To get an idea of what seems to me a strange historical conflation, watch the film Modulations from 1998, a flawed yet entertaining attempt to clarify these relationships.
Rather than thinking about the terminology as something to resolve, we might accept the evolving taxonomy for what it reveals. If we consider just a little bit of the history of each term, interesting questions arise. The terms tape music, computer music, fixed media, and the like derive from more or less specific modes of production. Vladimir Ussachevsky, one of the co-founders of the Columbia-Princeton Electronic Music Center, used the term tape music for recorded pieces intended to be played in the concert hall, and wrote about this choice in Perspectives of New Music. Is this term still meaningful now that the medium of tape no longer exists? Even after forty years of making pieces that seek to seamlessly combine the sounds of traditional instruments with non-instrumental sounds, I only recently—and with ambivalence—have returned to using the term instrument and tape. Many of us understand that in this usage, tape is a metaphor. (Renewed interest in analog tape is a topic for another set of musings.) Organised sound, sonic art, and sonic arts—all terms primarily of UK origin—focus on the central role of sound itself and generalize further by excluding the word music; acousmatic music, a French term, has, according to Leigh Landy, largely replaced musique concrète.
The term sound art places the work in a different community altogether—that of the gallery and museum rather than the concert hall. Who gets to call themselves a sound artist is a significant matter, since unlike music—which is duplicated and stolen worldwide across all genres—fine art, including sound art, is a commodity that must be purchased for considerable sums and may be owned. Sound artists are mostly self-identified and usually come from the visual art world. Those who come from music are often aware that they are negotiating a tricky crossing from one tribe to another. I think of examples of musicians such as Alvin Lucier, Paula Matthusen, and Stephen Vitiello, each of whom leads this double life to varying degrees.
There is a fundamental truth that explains what is at the heart of the terminology issue, one I have hinted at several times. Terminology is a signifier of identity, of who we are and where we belong. We all have stories about which community has felt welcoming and which has not. These are narratives of exclusion and inclusion. Think uptown vs. downtown, serial vs. tonal, and other journalistic, less than perfect dichotomies—each lacks nuance, and yet none is without meaning. It is not surprising that these forces are at play in the electroacoustic music community as well. My interview with Bebe Barron had a particularly poignant moment when she described how she and Louie were never accepted by the composers who worked in university studios rather than in homegrown ones like hers. As friendly as we had become, to her I clearly represented the “other,” and in telling the story she vented about it. “You guys!” she said, still exasperated after almost fifty years.
In thinking about the politics of identity in electroacoustic music, I am reminded of part of my own story. The term computer music has a particular historical resonance for me, albeit one that has faded. As soon as Max Matthews and Joan Miller created a way to use general-purpose computers to synthesize sounds, composers began to refer to music made in that way as computer music, distinguishing it from earlier work. This is a strange distinction if one considers that computer music languages are modeled on the classic analog studio. In fact, Max Matthews worked in the studio at Columbia, and in several ways many of the pioneers were part of one community. In his Archive interview, Max tells of early experiments being of interest—not yet musically, but as a possible future—to Babbitt, Ussachevsky, and Edgard Varèse, none of whom went on to use the computer in music making. One might argue that Varèse would have embraced the technology if he had lived longer. Ussachevsky made an attempt. Babbitt, who could have used computers, chose not to and stuck to the RCA Mark II (an analog machine). Of course, Babbitt would remain virtually self-segregated in an exclusive club around an idiosyncratic and remote machine. In fact, were we to construct an ethnography of electroacoustic music, we would find distinct enclaves of computer composers and of others who worked mostly in the analog studio. I was clearly among the latter. I attempted to use computers to make a piece, taking a class in MUSIC360 at Columbia and spending time with a PDP-11 at Bell Labs (my father worked there and made the introduction to Matthews), but no music emerged from those efforts. I retreated to the analog studio, where I knew I could get some pieces made.
Splicing analog tape to make pieces required total commitment and many hours, too. Even so, for me the difficulties of coding and actually getting a musical result were, at that time, insurmountable. There were those who wrote code and those who did not; I did not. We formed into distinct subcommunities, even at a place like Columbia, where we eventually worked up the hall from one another. In the early years of professionalization, I did not even consider attending some conferences. How could I participate in the International Computer Music Conference when my work did not use computers? As computers became smaller, ubiquitous, and more user-friendly, the populations converged and all of this became irrelevant. But there is no doubt that it was a very real cultural distinction at that time. The question of where and how to keep working when confronted with these shifting distinctions is something we all must navigate. I would go so far as to say that the way we face these social challenges may be artistically fruitful, so long as we continue to work with persistence and a good dose of confidence.
A sampling of excerpts from the Video Archive of Electroacoustic Music may be found here.
Eric Chasalow is a composer known for both electroacoustic music and music for traditional instruments. He is co-curator of The Video Archive of Electroacoustic Music, an oral history project chronicling pioneers of electronic music, and is president of the board of the Composers Conference, a summer institute for young composers, which is now in its 73rd year. Eric is the Irving G. Fine Professor of Music and the Dean of the Graduate School of Arts and Sciences at Brandeis University.