Can I Have the Recipe for Your String Quartets?

As promised, I want to take some time this week to investigate what the verb “to compose” is really used for; specifically, I’d like to speculate about how “composing at the computer”—in its strictest form, with only notation software—differs from “composing away from the computer” with only a pencil. It hardly needs pointing out that composing is a complex process that can be prosecuted in many ways; these ways often comprise a number of methods and techniques that are by no means mutually exclusive. You can compose at the piano and at the computer—or, for that matter, at the Korg Triton and in front of the manuscript pad. You can sketch a score by hand, then input it into Sibelius for formatting; you can generate material in OpenMusic, view it in Sibelius, then prepare it meticulously with pen and paper. You can study the imperfections in a blank sheet by holding it up to the light, or you can data-mine the human genome in MATLAB. You can transcribe the tune in your head while strolling through the woods, or you can write a program on your laptop while flying across the Atlantic that will generate a similar tune to suit your tastes.

My own way of composing is a sticky, gooey affair that includes graph paper, scratch paper, spreadsheets, microtonal software synths, Finale, and usually several months of “think time” vital to the piece’s gestation. I tell people I compose “at the computer,” but the computer is really just where I commit the notes. It’s easy to confuse writing notes with composing; if there’s an elaborate architecture of system and intuition that governs what each note will be, however, it doesn’t much matter how the black dots are made manifest. Technology—piano and iMac alike—allows us to audition the notes as we write them, but only the most naïve composer would confuse this audition with the sound of an actual performance. Obviously, the importance of MIDI playback and piano reductions varies significantly from one composer’s creative process to the next. It’s a big part of what I do, but one has to be ever-vigilant; it represents only one dimension of a multidimensional phenomenon.

Ultimately, “composing” is a gerund that means something different for every composer. “Composing at the computer” is a gerund phrase with almost as many shades. Given the plurality of means available to composers today, to decry “composing at the piano” or “composing at the computer” without addressing a specific, individual way of working would necessarily involve a reductive and conflatory label—and, probably, the imposition of a procrustean workbench in its place.

NewMusicBox provides a space for those engaged with new music to communicate their experiences and ideas in their own words. Articles and commentary posted here reflect the viewpoints of their individual authors; their appearance on NewMusicBox does not imply endorsement by New Music USA.

8 thoughts on “Can I Have the Recipe for Your String Quartets?”

  1. david toub

    I used to compose at a piano with score paper in front of me. Now I compose at a synthesizer, where I usually start off improvising, write down a few snippets or capture it into Reason’s sequencer (or the synth’s onboard sequencer), and eventually develop it in Finale on my computer. What I like about this method is that it is no different from how I used to compose (which always involved improvisation), but I can play it back and record the output with ease, so I get to listen to multiple takes along the way on my iPod and make adjustments as I go.

    I don’t think there’s anything wrong with composing at a computer or keyboard, although I must confess I am not gifted enough to compose just at a computer and not involve a piano or synth keyboard.

  2. Chris Becker

    In the past year, I’ve begun pieces using Ableton Live and a keyboard controller, creating templates that are to be cued and manipulated live alongside musicians playing composed and improvised parts. In rehearsals, I do further editing (often removing sounds entirely) in order to allow room for the other musicians and their respective instruments. So my compositional process doesn’t subscribe to a timeline. It’s a more fluid, dub-like approach.

    Sometimes I notate parts, but more often than not the musicians and I improvise and shape the piece accordingly. I’m a great believer that improvisation is just another means of composing. In performance, the energy level is high since no one is really restricted to a page of notes. Each player’s part (including mine, which sits on the computer and within the Ableton session) floats freely like a mobile of sounds.

    There is a compositional gesture happening, though, as so much prep work goes into the Ableton sessions. I don’t improvise or play freely with sound the way a trumpet player can, for example…I have to set up some kind of a composition and then play with the prescribed material.

    The dynamic range of expression with the laptop computer is – in my opinion – seriously lacking when you compare it to a voice or even an electric guitar (an instrument that was/is criticized similarly). However, creating an ensemble of mixed instrumentation and techniques has – for me and my ears – resulted in some really expansive and emotionally engaging music. I can’t imagine doing a gig alone with my laptop. I’d rather play vinyl records.

    Ableton has allowed me to introduce a performance aspect to my composed work, which in turn shapes the ideas I come up with alone in the studio. It’s a very flexible program – one of those tools created for DJs and popular musics that so-called “new music” artists (like Robert Ashley) have appropriated for their own creative endeavors.

  3. JJeffers

    I have recently been taking my keyboard in and out of the house for various things, and more than once I’ve found myself sitting and writing with nothing physical there for me to play.

    I used to do it a lot when I very first started writing music: just pick a key signature and start putting notes in. I find now that working without a keyboard makes the music that goes in a little more interesting, but sometimes harder to play or awkward, like I’m writing for an automaton instead of people. I like it, but I hesitate to set it in front of anyone.

    My point would be that a keyboard there to play helps the music stay grounded and connected to the idea before it, whereas “unattached” the music seems to be a bit more wild and unwieldy. In my own observation anyway.

  4. Dennis Bathory-Kitsz

    Or are you suggesting that we just describe our process?

    I’ve tried a few times. Perhaps the best was an essay originally from 1989, I Don’t Know How — Or Why. And that’s still pretty accurate.

    I don’t use a musical keyboard. I never played one, and never wanted to. Beyond my lack of desire, coming through the avant-garde era as a young composer, I felt no need to. It did nothing for performance art (unless you burned one), nothing for the microtonal imagination, nothing but create an impediment to developing something new. My first synth (“Killer”) had a keyboard, but its knobs were the real deal for me.

    Through the early 1980s, I wrote my own software, so the relatively restricted commercial Midi-based tools of late in that decade were pretty stunted. With the exception of the unknowns (chance, algorithmic operations, improvisation), I’ve had more than four decades to develop my compositional skills anyway.

    That means that most of the sketching and drafting are mental rather than physical activities, except where I might just forget something (which happened often last year when I was working on perhaps a dozen pieces at once).

    The nastiest business is composing ‘at’ the notation software. Though I do a lot of electronic pieces for which computer tools are a godsend, notation software is miserable stuff. It’s programmed as if the entire 20th century never existed; I wrote about some of it in October 2006 in my blog. It’s just dreadfully imagined stuff, denying almost all creative fluidity.

    I mentioned recently that I compose most of my notated works directly into Finale. That’s not because Finale is good for doing that, but because I’ve got fifteen-plus years of experience in using the program from a computer keyboard. And even so, I have to draft some stuff on paper and render it to Finale’s contorted approach. Sometimes I have to work in programs (such as image analysis or fractals) that output only Midi sequences, and then import that material into Finale’s imbecilic Midi-to-notation ‘feature’. Yuck.

    Sometimes pieces are just written in words. The WAAM creation 99 Events is such a piece.

    I’m still not sure if there was a question in there, Colin. I guess my answer could have been simpler: I use whatever works to create a situation or document where ideas can be resolved into performance or into a sonic artifact.


  5. pgblu

    This is slightly off-topic, but does anyone out there know the source/meaning of the Cunningham-Cage title “Root of an Unfocus” ?

    Related remotely to Dennis’s essay, which I recommend listening to while you read it — kind of a neat experience.

  6. GalenHBrown

    “Technology—piano and iMac alike—allows us to audition the notes as we write them, but only the most naïve composer would confuse this audition with the sound of an actual performance.”

    Except, of course, in cases where the synthesized sound is the performance.

    Plus, as technology improves, the MIDI mockups of music ultimately intended to be played on live instruments will get closer and closer to sounding like the real thing, until the synthesized “audition” will be indistinguishable from a live performance. Obviously we have a long way to go, but it’s coming. The state of the art in 1997 was vastly inferior to the state of the art in 2007.

  7. Colin Holter

    I don’t know. . . Even if a recorded synthesized rendition of a piece of instrumental music is someday indistinguishable from a recorded studio performance of same by a real player, that’s not quite the same. You would have to make a robot that looks and sounds exactly like a real person and can interpret the music with as much apparent conviction on an actual stage for me to buy in. As the authenticity gap between us and our simulacra narrows, I feel like the assumptions reinforcing that one-dimensional continuum model have to be scrutinized. This is something I’m willing to be superstitious about. Your mileage may vary.

