American Innovations in Electronic Musical Instruments

Written By

Joseph A. Paradiso

Musical Mapping and Interactive Music

Throughout most of the history of electronic music, the interaction end of instrument design could be classed loosely as a branch of ergonomics. Over the last 15 years, electronic instruments have become digital, and within the next decade or so, their functions will probably be totally absorbed into what general-purpose computers will become. Thus, for all practical purposes, musical interface research has merged with the broader field of human-computer interface. Already, it is not uncommon to see several papers on musical controllers presented at mainstream computer-human-interface conferences such as ACM SIGCHI. This merger has two basic frontiers. At one end are interfaces for virtuoso performers, who practice and become adept at manipulating the subtle nuances of sound from a particular instrument. At the other end, the power of the computer can be exploited to map basic gesture into complex sound generation, allowing even nonmusicians to conduct, initiate, and to some extent control a dense musical stream. While the former efforts will push the application of noninvasive, precision sensing technologies in very demanding real-time user interfaces, the latter relies more on pattern recognition, algorithmic composition, and artificial intelligence.

Interposing a computer in the loop between physical action and musical response allows essentially any imaginable sonic response to a given set of actions; this is termed “mapping”. As digital musical interfaces are so recent, there is no clear set of rules that governs appropriate mappings, although (arguably) some sense of causality should be maintained so that performers perceive a level of deterministic feedback to their gestures. Likewise, there is considerable debate surrounding the ground rules of a digital performance. For example, when a violinist plays, the audience has a reasonable idea of what it will hear in response, but when any possible sonic event can result from any gesture on an unfamiliar digital interface, the performing artist risks losing the audience. It is not entirely trivial for modern composers working in this genre to maintain the excitement inherent in watching a trained musician push an instrument to the edge of its capability; in most venues, audiences will expect avenues through which they can feel the performer’s tension and sweat, so to speak. Again, this topic is being argued from many sides as the boundaries blur between performer and audience and between linear compositions and interactive media.
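
To make the idea of mapping concrete, the following is a minimal sketch, in Python, of how a single gesture dimension might be routed to several synthesis parameters at once. The function and parameter names are illustrative assumptions, not drawn from any system described in this article.

    # A minimal sketch of musical mapping: a single normalized gesture
    # value (say, hand height from a sensor, 0.0-1.0) fans out to
    # several synthesis parameters. All names here are hypothetical.

    def map_gesture(height: float) -> dict:
        """Map one normalized gesture value to synthesis parameters."""
        height = max(0.0, min(1.0, height))      # clamp to the valid range
        return {
            "pitch": 48 + int(height * 24),      # two octaves of MIDI notes
            "brightness": height ** 2,           # filter opens nonlinearly
            "loudness": 0.2 + 0.8 * height,      # gesture never fully mutes
        }

    print(map_gesture(0.75))
    # -> {'pitch': 66, 'brightness': 0.5625, 'loudness': 0.8}

A mapping like this preserves the causality argued for above (higher hand, higher pitch); replacing the direct assignments with a generative process would weaken that perceived link between gesture and sound.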

Musical mapping is a very important component of all modern electronic music systems. On any given synthesizer, there are a variety of options (often presets on a small LCD screen) for interpreting a MIDI input stream in different ways. The available choices are usually quite limited, however, hence many interesting hardware and software packages have been developed for this purpose; these are indeed required if one wants to go beyond “hit a key and hear the note” or use alternative musical input devices. The first MIDI mappers were all done in hardware, with an embedded microprocessor interpreting a MIDI stream and outputting additional or modified MIDI events, depending on what behavior was programmed into the device. Examples include the multimodal performance mapper made by the author for jazz keyboardist Lyle Mays and several devices made by Oberheim during the late 1980s, such as the Strummer, which converted keyboard events into multiple triggers to simulate guitar strumming, and the Cyclone, which embellished played notes with others generated through a programmable set of musical rules, thereby augmenting the player’s technique. MIDI mappers rapidly migrated out of hardware and into software applications, such as the currently distributed Flex from Cesium Sound, the Lick Machine from STEIM in the Netherlands, and the classic “M” and “Jam Factory” packages by David Zicarelli, Joel Chadabe, and associates at Intelligent Music.
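
As a rough software sketch of the kind of behavior a hardware mapper like the Strummer implemented (Oberheim’s actual algorithm is not reproduced here; the chord shape and timing values below are invented for illustration), a single incoming note-on can be expanded into a time-staggered chord:

    # Sketch of a Strummer-like MIDI mapper: each incoming note-on is
    # expanded into a strummed chord, with each chord tone delayed a
    # little to imitate a pick dragging across guitar strings.
    # The chord shape and timing are illustrative, not Oberheim's.

    CHORD = [0, 4, 7, 12]      # major triad plus octave, in semitones
    STRUM_GAP = 0.015          # seconds between successive "strings"

    def strum(note: int, velocity: int, time: float) -> list:
        """Expand one note-on into a list of (time, note, velocity) events."""
        return [
            (time + i * STRUM_GAP,         # staggered onsets
             note + interval,              # transposed chord tones
             max(1, velocity - 8 * i))     # later strings a bit softer
            for i, interval in enumerate(CHORD)
        ]

    for event in strum(note=60, velocity=100, time=0.0):
        print(event)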

Other environments are either programming languages themselves or extensions to established languages; i.e., a set of tools is provided for the user to script MIDI behavior. The best-known system in this area is Opcode’s MAX, developed by Miller Puckette at IRCAM. MAX is a graphical dataflow programming language, in which the programmer moves programmable icons representing functional modules around the screen and connects control lines from one to another (much like the patchcords of the old modular synthesizers) to define event flow. MAX, originally built only for MIDI control, has been extended into sound synthesis with David Zicarelli’s MSP package, which uses the PowerPC processor of a Macintosh computer to generate real-time digital audio under MAX control. Miller Puckette has likewise continued work in this area, contributing to MSP and producing a new programming environment called “Pure Data” (or Pd), which has been ported to a variety of platforms. (It is available for download from his website.) The Kyma package by Symbolic Sound is also a powerful graphical programming environment that allows specification of both synthesis algorithms and control functions. Another graphical interface, Interactor, has been developed for interactive dance by Mark Coniglio and CalArts composer Morton Subotnick. A graphical mapping package that has been developed for PC users is Building Blocks by AuReality in Belgium.
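
The dataflow idea behind MAX and Pd can be mimicked in a few lines of ordinary code; the toy classes below are purely illustrative (real patches are built graphically, and none of these names come from MAX itself):

    # Toy illustration of the dataflow model behind MAX and Pd:
    # modules are processing objects, and "patchcords" are explicit
    # connections along which events propagate.

    class Module:
        def __init__(self, func):
            self.func = func
            self.outlets = []            # downstream connections

        def connect(self, other):        # draw a "patchcord"
            self.outlets.append(other)

        def send(self, value):           # an event arrives at the inlet
            result = self.func(value)
            for target in self.outlets:  # pass the result downstream
                target.send(result)

    transpose = Module(lambda note: note + 12)            # up an octave
    monitor = Module(lambda note: print("note:", note))   # end of chain

    transpose.connect(monitor)
    transpose.send(60)    # prints "note: 72"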

Other packages are extensions to standard programming languages such as C, C++, or Lisp. These include the CMU MIDI Toolkit by Roger Dannenberg, and ROGUS and HyperLisp, both from the MIT Media Lab.

From American Innovations in Electronic Musical Instruments
by Joseph A. Paradiso
© 1999 NewMusicBox