The desire for musical expression runs deep across human culture; although specific styles vary, music is often described as a universal language. It is tempting to surmise that one of the earliest applications of human toolmaking, after hunting, shelter, defense, and general survival, was to create expressive sound, which developed into what we know and love as music. As toolmaking evolved into technology over the last few centuries, inventors and musicians have been driven to apply new concepts and ideas to improving musical instruments or to creating entirely new means of controlling and generating musical sound. The classic acoustic instruments, such as the strings, horns, woodwinds, and percussion of the modern orchestra and their counterparts in the non-Western world, have been with us for centuries and have thus settled into what many consider near-optimal designs, yielding only slowly to gradual change and improvement. For hundreds of years, the detailed construction of prized acoustic instruments remained a mysterious art, and only recently have their structural, acoustic, and material properties been understood in enough detail for new contenders to emerge.
Electronic music, in contrast, has no such legacy. The field has existed for less than a century, giving electronic instruments far less time to mature. Even more significantly, technology is developing so quickly that new sound synthesis methods and capabilities rapidly augment and displace those of only a few years before. The design of appropriate musical interfaces is therefore in a continual state of revolution, always driven by new methods of sound generation that enable (and occasionally require) expression and control over new degrees of freedom.
Although many crucial innovations (and several of the most vital innovators, studios, and composers) in electronic music have hailed from Europe and other parts of the world, North America has held a key position in pioneering the development of electronic instruments and musical interfaces. Technology enables new modes of musical expression, and as America achieved many of the major milestones in electronics and engineering throughout the last century, their musical applications found fertile ground here.
This century has witnessed the development of electronic musical instruments, from their inception as an outgrowth of telegraph and radio through the modern musical applications of computers, which are beginning to alter our conceptions of musician, audience, and performance. Although the distinctions between them are starting to evaporate, electronic sound generators can be broadly classed into two types: music synthesizers, which generate their timbre directly via an algorithm or a set of hardware or software rules, and samplers or wavetable synthesizers, which play back and process waveforms stored in some kind of memory. With the notable exception of the theremin (the most famous non-contact controller), all of the earliest electronic musical instruments were controlled primarily by a keyboard, frequently the standard 12-tone (chromatic) layout we know from the acoustic piano.
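The distinction between the two classes of sound generator can be sketched in a few lines of Python. This is only an illustrative toy (the function names and the nearest-sample table lookup are my own simplifications, not any particular instrument's method): a synthesizer computes each sample from a formula, while a wavetable player steps through a single stored cycle at a rate set by the desired pitch.

```python
import math

SAMPLE_RATE = 44100  # samples per second

def synthesize_sine(freq, n_samples):
    """Direct synthesis: compute every output sample from an algorithm."""
    return [math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            for n in range(n_samples)]

def make_wavetable(size=1024):
    """Store one cycle of a waveform in memory."""
    return [math.sin(2 * math.pi * i / size) for i in range(size)]

def play_wavetable(table, freq, n_samples):
    """Wavetable playback: step through the stored cycle at a rate
    proportional to the desired pitch (nearest-sample lookup,
    no interpolation)."""
    phase = 0.0
    step = freq * len(table) / SAMPLE_RATE
    out = []
    for _ in range(n_samples):
        out.append(table[int(phase) % len(table)])
        phase += step
    return out
```

With a reasonably large table, the two approaches produce nearly identical tones; the trade-off is memory for stored waveforms versus computation for on-the-fly synthesis, which is why samplers became attractive as memory grew cheap.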
Although interesting experiments with alternative controllers are sprinkled throughout the history of electronic music, most current electronic instruments remain keyboard-dominated. Because it allows essentially any interface to be used with any synthesis device, the MIDI standard has recently encouraged the development of other types of musical controllers, as sketched below.
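The decoupling that MIDI provides comes from its simple message format: a controller of any physical form emits the same compact byte messages, which any synthesizer can interpret. For example, a Note On message in the MIDI 1.0 specification is three bytes, a status byte (0x90 plus the channel number in the low nibble) followed by a 7-bit key number (middle C is 60) and a 7-bit velocity:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI 1.0 Note On message.

    channel: 0-15, note: 0-127 (60 = middle C), velocity: 0-127.
    """
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

# A baton, a glove, or a keyboard can all emit the same bytes:
msg = note_on(0, 60, 100)  # middle C on channel 1, moderately loud
```

Because the message says nothing about how the note was produced or how it should sound, the same controller can drive a piano sample one moment and a physical model the next.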
Drum interfaces, which give percussionists access to the world of electronic sound, are related to keyboards in that they essentially measure contact and impact velocity. Stringed instruments are highly expressive and complex acoustic devices that have followed a long and difficult path into the world of electronic music controllers. The popularity of the guitar in modern music has given it considerable priority for assimilation into the world of the synthesizer, and a look at the history of the guitar controller aptly reflects the evolution of signal-processing technology. Of course, electric guitars were primarily responsible for ushering in the multiplicity of effects devices and audio processors that delightfully twisted and warped the sound of many an electrified instrument over the last decades, opening new worlds of expression long before digital synthesizers and MIDI appeared. Orchestral stringed instruments, such as the violin and cello, have not been spared from electronic assimilation either, although capturing prompt and precise musical gesture on these instruments remains technically challenging. Wind interfaces, being monophonic by nature, have been around since the days of the analog synthesizer, but now find fresh applications in driving expressive, multi-parameter digital synthesis schemes based on physical models.
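To make the idea of physical-model synthesis concrete, here is a minimal sketch of one well-known example, the Karplus-Strong plucked-string algorithm (chosen by me as an illustration; the article does not single out any particular model). A burst of noise, standing in for the pluck, circulates through a delay line whose length sets the pitch, while a simple two-point average mimics the string's energy loss:

```python
import random

def karplus_strong(freq, duration, sample_rate=44100):
    """Karplus-Strong plucked string: a noise burst circulates in a
    delay line (length sets pitch); averaging adjacent samples acts
    as the damping that makes the tone decay like a real string."""
    n = int(sample_rate / freq)                     # delay-line length
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(duration * sample_rate)):
        out.append(buf[i % n])
        # low-pass feedback: average the current and next sample
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

pluck = karplus_strong(440.0, 1.0)  # one second of a plucked A440
```

The appeal for wind and string controllers is that such models expose physically meaningful parameters (excitation strength, damping, delay length) that map naturally onto breath pressure, bow speed, or fingering.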
Different kinds of abstract gesture interfaces have likewise been developed for both high-level conducting and intimate performance. These include batons and hand-held trackers, non-contact interfaces that track the body through the air, sensors that measure activity in “smart rooms” or other responsive environments, and interfaces worn as active clothing. Such gesture interfaces are generally completely abstracted from any direct sound generation; all musical response must therefore come through a mapping algorithm programmed into a computer that assigns sonic events to perceived motion or detected physical events.
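The mapping layer can be as simple as a pair of scaling rules. As a purely hypothetical sketch (the function and its ranges are my own, not any system described here), a tracker reporting a normalized hand position could assign horizontal position to pitch and height to loudness:

```python
def map_gesture(x, y):
    """Hypothetical mapping for a hand tracker reporting x, y in [0, 1]:
    horizontal position -> MIDI note number spanning four octaves,
    vertical position   -> loudness (MIDI velocity 0-127)."""
    note = 36 + round(x * 48)       # C2 up to C6
    velocity = round(y * 127)
    return note, velocity
```

All of the musical intelligence lives in this layer: the same raw sensor data can trigger single notes, steer a physical model, or conduct an entire prerecorded score, depending entirely on the mapping the designer programs.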
The electronic music field is extremely broad, and designers of all sorts, from basement hackers through university researchers and engineers at large electronics and music companies, have built all kinds of innovative and fascinating devices for generating and interacting with electronic music. Thus, at the outset, I admit that it is not possible to do justice to all of the worthwhile accomplishments in this area within the confines of a single article, and I apologize for those that are missing.