
The Voice in the Machine


By Kelly Hiser

Electronic sound has been part of American musical life for over a century. As early as 1907, audiences in New York attended concerts featuring the massive early synthesizer, the Telharmonium. Just over two decades later, tens of thousands of listeners heard the theremin in concerts and radio broadcasts, around the same time that thousands of organists began playing the Hammond Organ in churches across the country.

Music historians tend to use these three instruments as examples of early technologies that presaged—but were not part of—electronic music history, rarely mentioning the communities, traditions, practices, and meanings that coalesced around these instruments and their sonorities. To historians, the instruments and their popular practices simply weren’t revolutionary enough to merit inclusion. Their argument: early electronic instruments “did nothing to change the nature of musical composition or performance.” These instruments may have mattered little to composers like John Cage and Karlheinz Stockhausen, but their sounds and performance practices resonated with performers and audiences across the U.S.

A common thread runs through the early reception histories of these instruments: their emotional impact. Audiences heard their electronic sounds as deeply expressive, even human. In 1906 critics raved about the Telharmonium’s “delicacy of expression”; one Literary Digest writer claimed it was “as sensitive to moods and emotions as a living thing.” Two decades later, writers described the theremin’s sonority as “clear, singing, almost mournful” around the same time that Black Pentecostal worshippers celebrated the voice-like qualities of the Hammond Organ.

Today, electronic musical sounds have become so pervasive that we hardly notice them. We take their ubiquity for granted: one is hard-pressed to find much commentary on their impact or meaning. While historians often explain electronic music’s popularity as the outgrowth of the “pioneering” experimentalism of avant-garde composers such as John Cage or Stockhausen, no real evidence backs this up.

Electronic musical sound is essentially a “black box”: a technology so universally accepted that it is difficult to discern the processes that led up to its establishment. And yet occasionally, a new technology emerges that creates controversy, causing the black box to fall open and inviting us to examine why electronic musical sound seems as compelling today as it was more than a century ago.

“As tidy as a golf green”: a new(er) electronic sound and its haters

Enter Auto-Tune. In 1997 Antares Audio Technologies put Auto-Tune on the market as a Pro Tools plug-in meant to correct poorly intonated vocals. Auto-Tune’s fixes were meant to be imperceptible to listeners, but within a year artists began using the tool in audible ways for expressive purposes. Cher’s 1998 hit “Believe” was the first high-profile instance. Studio engineers achieved the effect in this song and countless others since by setting Auto-Tune to make pitch adjustments rapidly or even instantaneously (many artists now sing with Auto-Tune even before the production process begins). The results change not just the pitch but the timbre of the singer’s voice, rendering it machine-like and digital. Artists as varied as Kanye West, Kesha, and Bon Iver adopted the technique. T-Pain built his career on it, crafting a distinctive vocal sound that dominated the airwaves in the late aughts.
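
To make that mechanism concrete, here is a minimal, hypothetical sketch (in Python) of the pitch-quantization logic described above. It operates on a frame-by-frame pitch track rather than on audio, assumes equal temperament with A4 = 440 Hz, and is not Antares’ actual algorithm; the function names and the retune_time_s parameter are invented for illustration. Setting the retune time to zero snaps every frame to the nearest semitone, the abrupt correction that produces the machine-like timbre; real Auto-Tune then resynthesizes the voice at the corrected pitch, which this sketch does not attempt.

    import math

    A4 = 440.0  # reference pitch: equal temperament, A4 = 440 Hz (an assumption of this sketch)

    def nearest_semitone(freq_hz):
        """Snap a frequency to the nearest note of the 12-tone equal-tempered scale."""
        n = round(12 * math.log2(freq_hz / A4))
        return A4 * 2 ** (n / 12)

    def retune(detected_hz, retune_time_s=0.0, frame_s=0.01):
        """Pull a frame-by-frame pitch track toward the nearest semitone.

        retune_time_s = 0 applies the full correction on every frame (the stepped,
        machine-like setting); larger values correct gradually and read as
        ordinary, inaudible pitch touch-up.
        """
        alpha = 1.0 if retune_time_s == 0 else min(1.0, frame_s / retune_time_s)
        corrected = []
        current = detected_hz[0]
        for f in detected_hz:
            target = nearest_semitone(f)
            current = current + alpha * (target - current)
            corrected.append(current)
        return corrected

    # A slow upward slide of about two semitones from A3 (220 Hz): with an
    # instantaneous retune, the smooth glide collapses into discrete pitch steps.
    glide = [220 * 2 ** (i / 600) for i in range(100)]
    print([round(f, 1) for f in retune(glide, retune_time_s=0.0)][:12])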

The growing prevalence of Auto-Tune precipitated a backlash among critics and musicians that peaked with T-Pain’s popularity and has not fully subsided. Most critics ground their objections to Auto-Tune in notions of authenticity and skill, but they frequently lace their attacks with the kind of identity politics I’ve traced in the histories of earlier instruments like the theremin.

Robert Everett-Green, writing for The Globe and Mail in 2006, complained that Green Day’s recent use of Auto-Tune made punk seem “as tidy as a golf green,” and worried that as “dead-centre pitch” became the new norm, “a lot of popular music’s expressive capacities may wither away.” In a 2006 Pitchfork interview, Neko Case denied that artists used Auto-Tune as an expressive tool, declaring that its purpose was “so you don’t have to know how to sing. That shit sounds like shit! It’s like that taste in diet soda, I can taste it—and it makes me sick.” Jay-Z’s 2009 “D.O.A. (Death of Auto-Tune)” admonished artists to “put your skirt back down, grow a set man,” and “get back to rap, you T-Paining too much.”

For these detractors, and many more like them, Auto-Tune wasn’t simply the hallmark of an artistic poseur; it threatened to destroy the political, racial, gendered, and socio-economic identities of the music it inhabited and the musicians who used it. It neutered rap, turned punk’s anti-authoritarian stance on its head, and diseased everything it touched.

“Digital soul, for digital beings”

Yet some argued that rather than sap music of its authenticity, Auto-Tune honed and complicated the expressivity of the voices it inflected. While many attributed Kanye West’s sustained use of Auto-Tune on the 2008 album 808s and Heartbreak to his poor singing, Oliver Wang wrote that the result was “a melancholy, intimate and decidedly quirky effort.” According to Wang, Kanye’s “ghostly, mechanical vocals enhance the album’s already despondent atmosphere,” even if the “inhuman” qualities of those vocals rendered it a “frigid, passionless despair.” Musicologist James Gordon Williams argued that T-Pain uses Auto-Tune to trouble “the binary between racially authentic sound and technologically manipulated sound,” and in so doing creates an inimitable personal voice.

The theme of expressivity that recurs in the histories of the Telharmonium through Auto-Tune raises an inevitable question: Why? Why has electronic musical sonority—across time, instruments, and performers—sounded so human to so many? Is it because such sounds remind us that our lives would be completely dismantled without technology? That technology is an inextricable part of the human condition? When an Auto-Tuned voice sounds melancholic, is it because such reminders trigger feelings of dependency or inadequacy?

In his history of Auto-Tune for Pitchfork, Simon Reynolds posited that Auto-Tune is so compelling to modern listeners because its “sparkle suits the feel of our time”:

It makes absolute sense that Auto-Tuned singing—bodily breath transubstantiated into beyond-human data—is how desire, heartbreak, and the rest of the emotions sound today. Digital soul, for digital beings, leading digital lives.

Our immersion in digitality may explain the allure of the Auto-Tuned voice, but our inclination to hear the human in the machine is far from new. Technology is as old as humanity. Perhaps we have always seen—and heard—ourselves in our tools.

Beyond revolution

While Pitchfork grapples with thorny questions about technology and art, academic electronic music histories sorely lack nuanced approaches to the impact of technology on musical life. Scholars tend to treat the adoption of new musical technologies as points of rupture and revolution. In doing so, they obscure the ways musicians and listeners use technology toward more traditional ends, like expression and entertainment. This is not to say that technology does not change us, or what we do, or how we do it. Anyone who reads the news in 2019 is constantly reminded that technology shapes our lives and our world in myriad ways. It is crucial, though, that we not twist this fact into a totalizing concept of technological change, in which new tools sweep away existing values and activities. All too often, fixation on what we see as revolutionary obscures the work and impact of marginalized people.

When we pay attention to non-revolutionary popular electronic musical practices, we can begin to better understand why those practices and the sounds they produce command such lasting popularity. Mainstream electronic music historians would have us believe that electronic music owes its current popularity to the boundary-breaking of avant-garde composers. But from the Telharmonium to Auto-Tune, it seems there were never barriers to overcome: audiences and performers embraced electronic musical sound from the start. To the thousands of people who first experienced them, and to listeners today, electronic sounds have ultimately mattered not because they were pioneering or innovative, but because they performed emotional, expressive, and cultural work that resonated with audiences. The instruments, techniques, and sonorities were new, but the ultimate ends—expression, communication, pleasure—were as old as music itself.