Trevor Hunter: When I first picked up this book, I read the phrases “aural architecture” and “MIT Press” and expected to find nothing but equations, diagrams, and footnotes inside. Instead, the topic is approached from a variety of different directions in a clear and direct fashion. Obviously this “phenomenon of auditory spatial awareness” is of great interest to you both, and has been for quite some time. What was the genesis of your interest in the subject?
Barry Blesser: Both of us arrived at the phenomenology of auditory spatial awareness through very different paths over a 35-year interval. As husband and wife co-authors, we have been synthesizing a common viewpoint even though we started from polar opposites: hard science and engineering versus an interdisciplinary concept of space and culture. The foundation for the book was actually being formed during thousands of dinnertime conversations, since we both shared a strong interest in each other’s perspective. Neither of us ever lost our child-like curiosity. The whole is worth more than the sum of its parts. While Linda did her graduate work in interdisciplinary studies, looking back, I see clearly that I, too, had an interdisciplinary mentality. My PhD thesis was interdisciplinary and somewhat revolutionary within the narrow confines of an academic setting.
Linda-Ruth Salter: My doctoral work, back in the golden age of environmental design, was in the interdisciplinary field of environmental psychology. I was fascinated with the interactions between people and the spaces they inhabited. In particular, I enjoyed looking at the symbolic and experiential meanings of various spatial designs. Having spent several years living in different countries, particularly Japan, I had come to appreciate the importance of cultural relativism. Ideas that we think of as universal truths are actually nothing more than a reflection of our particular culture. Concepts of space in Japan, and how people relate to each other in space, are different from those in other cultures. When we were done with the book, it was clear that many versions of “absolute truth,” especially in cognitive science, were actually cultural expressions of a particular group of people with a particular set of values. My PhD thesis, “Sanctified Space and Urban Land Use in Boston,” gave me a deep appreciation for the complexity of how we experience space. While I was not at first aware of the importance of sound in experiencing a space, my 35-year association with Barry taught me its importance.
In developing teaching tools for environmental psychology, I put together an environmental audit that focused on determining whether or not a space supports or interferes with a visitor’s goals for being in that space. For example, if an individual in a library is there for studying, the organization and design of the library must be helpful for achieving that goal. If, on the other hand, an individual is in the library for social purposes, is the library design helpful or obstructive for achieving that goal, and again in what ways? A design that supports studying would probably not successfully support socializing or hearing music.
This kind of functional use-audit considers many aspects of a target space to determine the success or failure of a particular environmental design in helping users achieve their goals. Spatial designs are experienced through all the senses. I focused on the sensory aspects of space because these components were usually not considered by designers. It gradually became clear that one of the most important sensory spatial components was sound. To stay with the example of a library, a quiet, non-reverberant space would be preferred if the goal is studying, and a lively, highly reverberant space would be preferred for listening to music. However, rarely were the sound qualities of a space consciously included in its design.
TH: What was the bridge between those discussions and early research activities and this book?
BB: I can trace the beginning of the book to a research fantasy that I entertained in the late 1960s while I was a graduate student at MIT: building a portable concert hall of the quality of Boston’s Symphony Hall. It was not possible then. The fantasy remained a soft-focus goal throughout my 40-year career, but it never disappeared. When I developed the world’s first commercial digital spatial simulator, the EMT 250, in 1976, it was clear that I was moving closer to that fantasy. Nevertheless, my career remained focused on audio engineering, with an emphasis on creating spatial experiences in music within the confines of the recording studio, simulating a real space.
At the age of almost 60, I decided to write a paper that would put my career into a larger perspective. In 2001, The Journal of the Audio Engineering Society published my paper, “An Interdisciplinary Synthesis of Reverberation Viewpoints.” Even though it was very long, I had the feeling that the story was actually much larger than what I had written. I sent a book proposal based on the paper to MIT Press and they contracted me to write a book. I had no idea what the book would be about. Not only did I not know the answers, but I also had no idea about the questions. I simply wanted to write a book that could put my life into perspective. It was not initially written for any particular reader. And so began a research project that lasted five years: find all those disciplines and research fields that could contribute to our understanding of the phenomenology of auditory spatial awareness. It was only in the fourth year of the project that I realized that we lacked a common theme that would tie all the insights from dozens of disciplines into a coherent picture. And so the concept of aural architecture was born out of the struggle to make sense out of thousands of man-years of research and insights. For the first time in four years, and after some half-dozen draft manuscripts, it all made sense. As a concept, aural architecture was not obvious. We never found anyone else who had formulated the parallel to visual architecture.
TH: Yet despite all this highly specialized research, your target audience seems to be the general reader.
LRS: When Barry began to write about his experiences in the world of digital audio, he shared his writings with me. I consistently asked him to explain the point of his writing. In what way would this discussion interest the average reader? By consistently looking for cogent answers to this broader question, we arrived at a viewpoint that had direct relevance to all of us who appreciate the importance of sound. Fusing a multiplicity of disciplines into a consistent picture only took us five years! We now have a workable vocabulary that describes many of the issues that I first identified as being important in spatial experience almost 30 years ago. While the book project began as Barry’s intellectual adventure story, it evolved to become our collective vision. The book could not have been written by either of us alone.
BB: There was also a secondary goal: to change the modern world’s underappreciation of sound. In order to get our message to a wide audience, the book was structured so that the reader did not need expertise in any particular field. As you observed, there are no equations, and there is no assumption that any prior knowledge is required to understand the ideas. We were not writing for our peers in a particular discipline; we were writing for the wider audience: people with an intellectual curiosity about music, sound, space, and architecture. Human experience is something that we all share. Formal science is not necessarily the best way to understand such a phenomenon.
LRS: The best form of environmental audit is participant observation: participating in, experiencing, and observing the experiences of others—in other words, examining the phenomenon of being in that space. Being in a space is a multi-faceted experience, and differs from person to person, time to time, goal to goal. Hence I realized the primacy of including a phenomenological perspective in evaluating and understanding a space. When we are in a space, we are experiencing it; we don’t parse it into tiny pieces and measure it. Researchers in a particular discipline may segment and study components of experience, but they have no motivation for fusing those insights with those of other disciplines. Neither Barry nor I enjoyed the highly fragmented nature of modern academic research. For us it was too sterile.
BB: Since neither of us belonged to a single discipline, nor were we locked into a rigid institution that required peer approval, we could explore any ideas that we thought relevant. Every assumption was open to reexamination. Even in the context of spatial reverberation in music, we took a fresh look at the questions. While acoustic scientists and perceptual psychologists had already accumulated a large body of insight into the acoustics of concert halls, their views were too narrow and limited.
Space in music was actually an historic accident of concert halls, which originated as a place to avoid rain and wind. Not every culture used enclosed spaces for its music. Concert halls were a solution to an amalgam of issues that could all be separated using 21st-century technology. A concert hall: (a) provides a place for everyone to sit together if they want to share a common experience, (b) protects the musicians and audience from distracting street noises and rain, (c) produces temporal spreading of notes that would otherwise have sharp onset and decay, such as those of a clarinet, and (d) envelops the listeners in a reverberation that itself provides an aural stimulant. By separating the experience of a real space from the experience of spatial attributes, one arrives at the idea of “spatiality,” which is the experience of spatial attributes without there being an actual space.
In the prevailing culture of the 1970s, the goal was to replicate the acoustics of a concert hall. From the perspective of modern music, performers and composers do not have to be restricted to something real, and most modern electronic music takes advantage of sounds that could not be created by vibrating strings and cavities. Why should musical space be restricted to a concert hall? Musicians need musical spatiality but not a particular reverberation corresponding to a particular seat in a particular concert hall. Aural architecture liberates musical artists to treat spatiality as an artistic component of their music.
On a final note, by creating the concept of aural architecture, we were able to create the language bridges that connect the other disciplines. It all made sense when we were done. But neither of us understood the framework while we were active in our respective professional and research activities. The ideas in the book were simply not obvious to us even though we had thought about them for decades. We were biased by the hidden assumptions in our respective professions. As one gets older, one is more willing to take the risk of deviating from the conventional wisdom and paradigms of one’s colleagues. On the other hand, we tried to reconcile their contributions without being limited by them. The sum is always greater than the parts, and the parts were contributed by thousands of others, from both formal science and folk wisdom.
TH: You state at the end of Chapter 5 that “virtual spaces for music are no longer related to social spaces for people.” Could you speak further on that? Do you see any social consequences from this phenomenon?
BB: After finishing the research for the book, it became clear that attitudes towards all forms of aural space are the result of social and cultural forces that are often unrecognized. Virtual spaces are a perfect example of a cultural shift that was enabled by advances in technology in combination with a rapidly evolving change in our social system.
Originally, physical spaces and social spaces were essentially the same. Social spaces were, by definition, also physical spaces. Virtual spaces split all of the properties of classical spaces into independent components: (a) performance space is where the music originates, perhaps in a recording studio, (b) listening space is where the audience hears the music, perhaps on an iPod or surround-sound system in the living room, (c) reverberation is added as a musical element in the sound mixing studio, even though its acoustic properties could not be the result of a physical process in a physical space, (d) the reproduction process, which is a key element in the experience of spatiality, is now individualized and dependent on the selected technology, (e) individuals listen in isolation when and where they like, (f) it is no longer possible for such music to be played in a concert hall if the goal is to make it sound the same as the recorded version; pop singers sometimes lip-sync to recorded versions when on stage as a result.
The social consequences are not subtle. Music has become much more a private experience than a manifestation of social cohesion. Moreover, even in a social setting, the music is sufficiently loud that listeners are functionally deaf to the social sounds of friends. In fact, the music produces aural saturation. Listeners only exist in the musical space; side conversations and emotional signaling are no longer possible. Listeners are isolated. Just as ballroom dancing gave way to independent gyrations, music moved from a shared experience to one that is highly individualized. Total immersion, whether a cause or result of social isolation, is widespread in many manifestations of our modern culture. Actually, we have a bimodal split. Globalization, the internet, email, and text messages connect all of us to thousands of individuals (expanding social connections), but at the same time, those connections are emotionally weak and without the personal intimacy of living in a small community based on communicating with body language and tone of voice. Music is no different. I doubt that there is a clear cause-and-effect relationship.
Increasing loudness is a hallmark of this same virtualization. Acoustic music had natural limits regarding the amount of sound energy that could be created. Electronic amplification and earbuds have no such limits. Rather than stress the dangers to the auditory system from loud music, one can ask the reverse question: what is the payoff to raising the sound level? This question has perhaps a half-dozen answers that depend on the individual. Nevertheless, there is a payoff. In one study, there was the suggestion that intensity changes your brain in ways that parallel drug abuse. Some students showed clinical symptoms similar to drug withdrawal when deprived of loud music. The intensity changes the brain, as well as the emotional state of the listener. In fact, NYC began the process of making walking with earbuds illegal after three people were killed crossing the street. They were in their virtual musical space, but the trucks were in their physical space. Loud virtual music is like a science fiction “transporter” bringing the listener to other worlds. Similarly, virtual spaces are a key element in video games, which are also space transporters.
The answer to the question “where are you?” is no longer obvious. We have a physical space, virtual space, visual space, aural space, tactile space, olfactory space, and so on—none of which have to be consistent with any of the others. Each has multiple types of spatiality: social, navigational, symbolic, aesthetic and musical. Without thinking about it, our culture changed ideas that had been assumed to be static and intrinsic.
LRS: There is a mutually impacting, circular interaction that constantly occurs between technology and society. The components of a total social system include technology, the values held by members of the society, and various pieces of the social infrastructure that uphold the society. A change in any one of these components reverberates within the other components. It is meaningless to ask which came first, because the influence of change is taken up so quickly in all the components, and as Neil Postman pointed out, the changes are total. Postman gives as his example putting a drop of red dye into a beaker of clear water; the result is not a drop of red dye sitting at the bottom of the beaker—it is a beaker now colored red through and through. Systemic change cannot be predicted, and therefore cannot be controlled. It can only be observed and responded to with additional changes, which in turn produce other changes.
It doesn’t really matter where we begin to look at this complex amalgam of change and response, so let’s look at the technological change represented by digitization of the music signal. We can follow the repercussions of this change into the manufacturing infrastructure, where automation (itself built on additional technological changes) produced globalized production and cheap universal access to music, a change in economics. That, in turn, produced changes in listening patterns, which were then expressed as changes in the values of users and in the ways musicians produce music.
The end result is an individual listening to new kinds of sounds projected directly into the ear via an iPod and an earbud, thus creating aural privacy completely divorced from physical surroundings. To complete the isolation from the here-and-now, the music itself is spaceless: it is created by individual musicians working independently in a sound studio at different times, bolstered by audio effects created by an electronic machine that is not tied to any real spatial experiences. This continuous round-robin of change stops only when one piece drops away: when the technology again changes, or the values of the users change, or the social infrastructure no longer supports these events.
Today, we are still exploring the implications of all these repercussive changes in sound. They have occurred so quickly, and are so thoroughgoing, that individuals do not even realize they are participating in these changes. We could bemoan all these changes—”why, back in my day”—but that gets tired quickly because it ignores the excitement that accompanies change. It also ignores the fact that these changes are irreversible: there is no going back, nor is there any desire to go back. There are possibilities galore available in each and every change that occurs, and opportunities for even more changes, in an almost endless cycle. Where artists and technologists choose to jump on, or off, this merry-go-round of change is up to them. In addition, we must realize the power of the users, who themselves can control change to some degree by jumping on or off the change ride.
We can list only some of the changes we see today; most are hidden to us. The most important thing is to avoid assigning a value to these changes, because valences are changing daily. What is perceived as a negative today becomes a positive tomorrow as the change is more fully integrated into society, and the new possibilities made available are explored and expanded. Humans are endlessly resilient and resourceful, and thus the societies we create are also. As an old-timer I may regret the hearing loss brought about by too-loud music, the isolation of the individual from her social and physical surroundings, the exhausting demand placed on the brain to process simultaneous auditory signals coming from both an earbud and the immediate environment, and the loss of control we experience when sounds intrude uncontrollably into our minds, producing a kind of mind-garbage. But it is really a lot more interesting and rewarding to catalogue and observe these changes, and their impact, and their experience, and the new changes that result.
TH: You stated there is an under-appreciation of sound; even further, in the book you say there is a devaluing of all other senses in this visual-centric culture. This, as Linda points out, affects how we think of spatial designs, but not necessarily how we actually experience them. In practice, how limiting is this? How important is it to think of space in terms of all the senses?
LRS: Very important! Each of our senses produces a different experience of a space. When we let any one sense become dominant, inevitably we experience the decline of functioning in the other senses. Life becomes mighty boring. That would be like eating the same food all the time, or listening to the same music all the time, or smelling the same flower all the time, or having the temperature at the same level all the time, or walking along the same path all the time.
Evolution is about expanding possibilities, not limiting them. Humans are multi-sensory so that we can respond more effectively to changes in our environment. As we discussed above, changes are inevitable and the pace of change is accelerating. I am not prepared to throw out any one of my senses. Humans are pleasure-seeking animals, and I want all the pleasures evolution has given me to enjoy, and to use effectively in mastering my environment. That is, mastering my environment before the next change in any area of culture creates the next experiential tsunami.
Technology is change; engineers are never content to leave things as they are. So, change becomes both danger and opportunity, and it is the artists who are in the best position to take advantage of these changes. Artists, by definition, explore and exploit new opportunities for expression and experience. In the area of values, they are both change agents and the recipients of changes. They abhor boundaries as being too limiting on their creativity, and thus, together with technologists, are the radical elements in society. Artists are eager to include all the senses, partly because as artists they pay attention to the experience of all the senses, and partly because they are always looking for the biggest and fullest experience.
These days the visual has gained dominance, and we act as if that is all right with us. Perhaps one interpretation of our willingness to cede spatial design to those with a visual dominance is that we have taken back control of our spatial experiences via control of the sound we hear in those spaces. Perhaps we are saying: Mess around with how a space looks, we don’t mind, because actually we aren’t really here—we are in our own space created via our ears. We are listening to our own sounds; we have aurally tuned into our own choice of place and have thus successfully tuned out the visually dominant designer’s choice of place. There are many places that I would be thrilled to be able to remove myself from, such as a subway waiting platform, or the checkout line of a crowded supermarket, or a school cafeteria at feeding time. Perhaps we are not limiting our use of all our senses; we are still fighting the issue of who will be in charge of determining which sense will be dominant, and technology has given us more weapons to use.
The modern music world is well-integrated into other aspects of society. Music today is very closely connected to and comments upon daily experiences. It is also widely distributed among members of society. Today, music is integral to personal identity, cross-group communication, self-expression, and comfort. Young people now claim they could not function without their music. As we pay increased attention to our aural experiences in space, we find music is increasingly a companion to our experiences in space, whether that space is real or virtual. With the proliferation of personal listening devices and the easy availability of musical sounds on these devices, it will be increasingly difficult to isolate musical experience from spatial experience.
BB: My favorite answer to this question is the example of an elegant restaurant. Consider being invited to dinner at such a restaurant with the president of your company. Because the relationship is somewhat formal, the natural social distance is perhaps three feet. If the acoustics in the restaurant are sufficiently corrosive, the acoustic arena may be so small that communicating requires a distance of only one foot, which corresponds to that of an intimate relationship. You are trapped into the choice of being functionally deaf at the natural distance or breaking the social taboo of being too close.
My next favorite answer is very personal. Our family designed the aural architecture of our home using very simple methods. We removed the doors from all the common rooms on the first floor of our house, which allows everyone in the family to make a connection to all the other social events. Yet to keep the noise level relatively low, we used extensive sound absorption in the form of carpets and furnishings. As a result, we have an environment that matches our family’s value system: social connection without noise. But on the second floor, with its very thick walls and well-fitted solid doors, we have aural privacy. Other families might choose a different design based on their values.
In some older cultures, hearing was the primary sense modality, touch was second, and vision only third. Sound provides an intimate connection for many reasons: the auditory system is wired deep in our cortex, we have no earlids to shut off sound, we respond to sounds even when we sleep, sound provides a sense of the interior of the source (be it emotions or construction), and sound flows through cracks and crevices to connect us with the events in our environment. These are properties of sound and hearing, but a culture (or individual) may or may not value such properties.
Consider the progression of communications. At one time, all important conversations were done in person, where the voice conveyed far more than just information. In fact, sound communication is often more a vehicle for emotions than facts. The telephone provides a way to communicate at a distance, preserving some of the attributes of person-to-person interactions. The cell phone is far worse than the old land line in terms of communicating nuances; one often cannot even recognize who is calling. Text messages are such a primitive version of communications that the emoticon was invented to explicitly signal the real meaning of the message. As a culture, we have moved from the high intimacy of direct connection to the sanitized equivalent of visual text. It may well be that this shift is desirable because it allows the recipient to reduce the intensity of communications.
We may be inundated with text spam, but we are not yet inundated with aural messages. There are exceptions, of course. Public spaces such as supermarkets and airports sell the soundscape to advertisers, who know that you cannot shut off their messages if you are physically present, except of course to wear earbuds and create functional deafness. Designing the soundscape, which includes aural architecture, may well be rediscovered by those who seek to monetize it. There is a new technology yet to come: narrow beam loudspeakers that can be embedded in vending machines. If one adds a tracking system, such a system becomes auditory stalking. Sound is a great way to capture headspace because it is on 24/7.
To go back to your question, each sense has special properties that contribute to our sense of where we are. But with the splitting of the senses, there may no longer be a unified sense of place.