Student Learning in the Music School

Given that each music professor achieves his or her expertise in a highly idiomatic way and must impart that expertise in a highly idiomatic way, how does one measure learning across a cohort of music school students?

By Paul Mathews

Two Foundational Questions

".5" by rebecca anne on Flickr

As a final task for my master’s degree, I sat for an oral exam.  Before entering graduate school, I had been an ambivalent student and so I prepared to be questioned with the zealous fervor of a recent convert.  My study paid off, and I answered each question with aplomb. After thirty minutes, the questions came more slowly and the follow-up questions stopped altogether.  It began to feel like that experience I would have after a class that was too long or a lesson that was too short:  when time passes slowly, it is time to leave.  My mentors must have experienced something similar, and smiles and nods became a tacit acknowledgement:  we were done.  However, as I rose to leave, one of the professors stayed the group with the phrase, “I have just one more question.”   Convinced that I had already aced the exam, I sat down and raised one eyebrow in a gesture that I fully intended to mean, “Bring it, Pops.”

“How do you teach someone to compose music?”

Nothing in Dahlhaus or Adorno had prepared me for this question, nor had I scribbled the answer among the staves in one of my dozens of spiral-bound sketchbooks. The university archives do not record whatever answer I may have stammered that day, and it was probably irrelevant to the grading: I think the professor was simply putting me in my place.

I suspect my introduction to college teaching is not unlike the experience of many professors in America, particularly in the arts.  Prior to being asked this question at the end of my degree program, little had been said about teaching.  Indeed, my orientation for teaching a fifty-student section of Music Fundamentals was as follows:  “You can use this desk. Here’s the book we use. Don’t screw this up.”

I muddled through, improved, and eventually thrived in teaching.  Many of us do.  There is much to be said for professors who pursue an idiomatic and self-directed path in the teaching profession.

More recently, after years of teaching and light administrative duties, I was confounded by another related, yet broader question.  I was again seated at a long conference table surrounded by august professors, but this time, I was posing the question I could not answer.  Raising my weary eyes from pages of transcripts and audition scores, I asked:

“While it seems laudable to be concerned with how to teach someone to compose, shouldn’t the question rather be, how can I be sure that my students have learned anything at all?”

We now arrive gently, by way of meandering anecdote, at the topic at hand: understanding student learning in a music school.  Given that each music professor achieves his or her expertise in a highly idiomatic way and must impart that expertise in a highly idiomatic way, how does one measure learning across a cohort of music school students?

 

A Short History of Accreditation

Any teacher at an American school will, at some point in his or her career, wrestle with models of assessing student learning in a manner that can be reported to accreditors.  In some cases, colleges may have overlapping accrediting obligations.  For example, my institution is accredited by the state, a regional accrediting agency, and the agency specific to the discipline.  In what follows, I will not speak to the specific requirements of any one accreditor but rather to one common characteristic: Congress directs the Secretary of Education to require that accreditors use “an appropriate measure or measures of student achievement” [20 U.S.C. § 1099b].

Accreditation is a means of quality assurance in higher education.  The term quality assurance comes from heavy industry by way of 17th-century shipbuilding.  In those endeavors, the government, as investor, wants a full accounting of any monies invested in a schooner. Or a student.

When government gives or lends money to schools or students, it buys oversight of the school’s finances and mission-critical operations.  The contractual nature of the government-school relationship can be discerned in an 1824 letter from Lord Burghersh, the founder of the Royal Academy of Music, to Lord Liverpool, then Prime Minister of Great Britain:

To effect these objects we require an increase to our funds … I am aware that if it is granted, the Government would have a right to look into, and alter and correct, if it thought necessary, the laws and regulations by which the Institution is at present governed, as well as to claim a control over it, or, in the exercise of its discretion, to leave that control as it at present exists, in the hands of the Directors. But these are minor details…

In addition to playing and composing, Lord Burghersh was a lifelong soldier and diplomat.  He certainly understood that money from the crown came with the expectation of quality assurance.

Unlike most other countries, the United States cannot take a direct role in accreditation, as the framers of the Constitution left oversight of higher education to individual states.   A cursory review of our nation’s forefathers indicates that one would be hard pressed to find a good applicant for graduate school among them: most did not complete college.  They saw little value in the “old course” curriculum modeled on the English universities.

With the onset of the Industrial Revolution, however, it became clear that higher education was vital to the progress of the nation. Many of those “drop-outs” who wrote the Constitution were instrumental in founding new colleges with new curricula. The young nation faced problems that were not adequately addressed by teaching the scions of wealthy planters to translate odes from Greek to Latin.  In 1862, Congress passed the Morrill Act [7 U.S.C. §§ 301–309], granting the states land and monies to endow universities that would teach the agricultural and mechanical arts.  Yet, even as some states were seceding from the Union, Congress was reluctant to exercise direct control over, or supply direct support for, higher education.

In 1944, Congress passed the GI Bill [Pub. L. No. 78-346, 58 Stat. 284] to subsidize college education for service members returning from World War II.  The federal government poured money into colleges and universities but relied on states to ensure the legitimate operations of the institutions receiving the funds.  As student loan debt soared after the Korean Conflict, the nation wrestled with civil rights issues that were not adequately addressed at the state level. Consider: Governor George Wallace made his ceremonial stand against integration by trying to prevent two African American students from enrolling at the University of Alabama.

Governor George Wallace stands defiant at the University of Alabama

The Johnson Administration sought to address a number of these issues with the Higher Education Act of 1965 [Pub. L. No. 89-329].  The HEA has provided funding for two generations of college students, but it has also brought more oversight of colleges.  Since the federal government could not directly participate in overseeing school operations, it required accreditation by regional agencies recognized by the federal government.

For years, this arrangement, a kind of cooperative federalism, was agreeable to all parties.  The accrediting agencies, which had existed since the late 19th century, continued their self-directed efforts and reported their findings to the government.  However, the nature of this relationship changed with the surge of student loans: the federal government now provides over $150 billion in financial aid to students each year.  The concern for propriety has caused what Judith S. Eaton, president of the Council for Higher Education Accreditation, called an accidental transformation of accreditation. The federal government cannot tell schools what it wants for its money, but it can tell accreditors what it expects of an accredited school.

I sometimes like to think of Lord Burghersh, writing from Florence in the wig and finery befitting a royal embassy.  His letter, no doubt closed with an insignia in sealing wax, traveled across Europe by ship and rider before reaching the Palace of Westminster.  There, similarly dressed men scratched their similar wigs and considered whether or not Parliament should advise the King to increase funding to the Royal Academy.  It strikes me that, wigs notwithstanding, their situation was not unlike the one in which we find ourselves today.

Lord Burghersh

And that gives me pause. Lord Burghersh’s request was denied.

 

Obligation and Opportunity

The need for student financial aid ensures the continued role of accreditation in higher education, as college tuition and expenses are considerable in the context of the average family income.  The good people of Bonn sent Beethoven to Vienna.  Scaled to our economy, one doubts they could have put enough funds in escrow to merit an F-1 visa.

Musician-professors recognize the importance of financial aid. After all, musicians have a long history of accommodating those who have paid the piper, dropped a dime in the jukebox, or commissioned the requiem by masked courier, &c. Accommodation, however, can shade into mere compliance: the chore can be accomplished and the purse claimed with only a show of due diligence.

This is unfortunate. The faculty of a music school is not interested in a path of least resistance: indeed, that faculty courts rigor and rote learning.  And yet, despite a single-minded focus on recruiting the best students and improving them at all costs, faculty are invariably resistant to engaging in any process to evaluate teaching and learning.  I believe this resistance comes from fundamental misunderstandings about assessment.  The means of assessing student learning are actually quite familiar to a trained musician, but the standards and mechanisms are expressed in language that comes from the study of education.  When engaging a community of musicians, a successful first step is often to translate the pidgin of accreditors into the patois of musicians.

 

Four Questions

In essence, evaluating student learning consists of four questions:

1. How does the school define its objectives for students?
2. What are the instructional means the school provides for students to meet those objectives?
3. How does the school determine whether or not the students are meeting those learning objectives?
4. What does the school do to either improve student achievement of those learning objectives or to maintain demonstrated success?

Let us examine these four questions in some detail.

 

The First Question: The Obligatory Mission Statement

The first question asks how a school defines its learning objectives.  These objectives tend to get conflated with a mission statement, and indeed accreditors often examine the mission statement of a school as the top-level enumeration of goals.  More broadly, the learning objectives should inform each requirement of a program, from the mission statement, to the department, to the faculty, to the course, to a grade at the end of the semester.

In my experience, faculty tend to dismiss the importance of the first question. (Objective? Duh! Musical awesomeness. Next question.)  We will indeed move on, but we will have occasion to return.

 

The Second Question: Division of Labor, Multiplied

The second question asks what the school does to help students achieve the learning objectives.  Long before it is ever asked by accreditors, this question tends to generate long, sometimes contentious discussions about what students need to take, or need to study, or should learn.  Indeed, this was the question I was asked at my exam: How do you teach someone?

The role and relationship of subjects in a curriculum: these are weighty matters. These discussions invariably touch on the things we value as musicians; the things that brought us to the music school in the first place; the things whose preservation will keep many of us from leaving.

Ultimately an institution’s curriculum will be idiomatic and reflect the values of the faculty.  Curriculum is proprietary like a trade secret (only not profitable and not a secret).  In general, accreditors are less interested in the decisions we make than in the discussions that lead to those decisions. They want the quality assurance of knowing that qualified faculty are discussing the curricula and making considered decisions.

Having finally determined what our alumni should be able to do and how we will teach them to do it, we arrive at the crux of the matter: determining whether or not the students are learning anything at all.

 

The First Two Questions As Impetus for a Cancrizans Curriculum

When devising a new course, a discussion may sound something like this: “Well, I’m going to begin with a lecture on this.  Then I’ll probably talk about a little of that.  And of course, it’s important for me to cover that other thing.”  In other words, the original approach to the class considers content: the this, that, and other thing of the class.  And that is appropriate.  However, after some discussion of the content and its proper order, it is important to reconsider the verbs: just because we lecture, talk, and cover content, there’s no guarantee that it gets learned.

Robert Erickson, the composer who founded the music department at UCSD, observed that “teaching is not the transferring of information from teacher to student.  Information gets imparted, but it may be the least of the things that go on in a classroom.”

Like Erickson, I acknowledge that classroom time used for structured interpretation of music should be sufficiently flexible to accommodate extraordinary digressions, which may prove useful in unpredictable ways.  To preserve that time, it may prove more useful to push the content out of the discussion space and into assignments.  Said briefly, one can teach with assignments and assess the learning in real time during classroom discussions.

Ultimately, it may be irrelevant to consider what students will know when they complete a class. It is far more useful to ask what students will be able to do when they complete a class.  Once we determine what the summative assessment will be—be it an exam, a paper, or a fugue—we can determine the formative assessments—the smaller tasks to assign—that will build the skill set required for the summative assessment.

This model of curricula is typically described with the phrase (or battle cry) assessment drives curriculum.  In recent years, this view of curricula has taken on negative connotations, especially when said with fewer syllables and considerably less swagger:  teaching to the test.

The idea of teaching to the specification of a standardized test is repugnant to faculty.  It robs them of the agency of determining the final objective of their teaching.  But the larger idea—the idea of assessment driving curriculum—is still a viable model for a music school.  Faculty regain ownership of the process when they control the assessment.

Nobody objects to teaching to the test when they make the test.  And at a music school, we’ve already got the test. Our test will not require a no. 2 pencil, but it may require formal wear.

 

The Third Question:  Unstandardized Testing

Performance is the objective of most undergraduate study.  Performance is also a summative assessment of musical accomplishment.  We rehearse (= learning) to perform (= demonstrating learning).  We even use the expression “learning a score.”  Similarly, an assessment of an employee in almost any other field is called a “performance review.”  And yet despite these intermingling connotations, many overlook the relationship between a public performance and a student assessment.

Take, for instance, the term “recital.” The word conjures up a mix of associations for musicians that may be lost outside of a music school.  If one’s experience with music instruction ended before college, it might be a surprise to learn that an undergraduate’s senior recital is not a matter of Andy Hardy shouting, “Hey, kids: let’s put on a show!”

Consider the dimensions of an undergraduate senior recital, as they apply to an assessment of student learning:

A recital is an assessment.  It follows a period of individual instruction called a “lesson.” It validates student learning by making a public demonstration of the behavioral objectives in what is called a “performance.”

A recital is a requirement for most undergraduate degrees—what other educational models would call a capstone project.  Most graduate programs require proof of an undergraduate recital before allowing students to matriculate.

The recital is typically a for-credit requirement with a grade.  In such cases, the recital is more than an assessment: it is, in essence, a class.   Since the recital is typically graded by more than one professor, the recital is the rare case of a class that has one student and multiple instructors.

The recital typically happens in the last two years of undergraduate study.  Prior to the recital, students typically play year-end assessments for multiple faculty members, called a “jury.”

Thus, after the weekly formative assessment in 1:1 instruction in a lesson, the student demonstrates the learning in the summative assessment of a jury. Juries, in turn, become formative assessments in pursuit of the ultimate summative assessment and capstone project, the recital.

Nothing in this list is unduly contorted from the procedures we use at my institution: procedures that have, in fact, been in place for decades.  I have merely reworded and recontextualized our basic principles—principles we tend to hold dear—into the language of accreditation.  When considered in the vernacular of assessment, music schools are well-positioned to demonstrate student learning. What other discipline affords 1:1 instruction for every student or a public demonstration of student learning?

Following the logic of this argument, it becomes clear how so much of what happens in a music school plays a role in benchmarking student aptitude or measuring student achievement.

For example, my institution just matriculated 105 new undergraduates for the fall semester.

Each played an audition seven months ago to be admitted (data point), submitted transcripts and standardized test results, and took on-site diagnostic exams in music theory and ear-training (four more data points).

Most re-took exams in music theory and ear training on arrival (data point and placement).

All had an individual audition with a member of the Keyboard Studies faculty (data point and placement).

All took an audition for ensembles (data point and placement).

The result of this battery of tests and auditions is a surprisingly flexible and individualized learning plan for students in one of 28 concentrations for the Bachelor of Music degree.  Two years ago, a poll of 111 undergraduates revealed that only 12% of students began their first semester without any of the following: 1) advanced placement through on-site skills testing, AP testing, and/or transfer credits; 2) remedial coursework to take in addition to (or in place of) the stipulated first-semester classes; or 3) some combination of advanced placement and remedial coursework.
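For the data-minded, the arithmetic of that poll is easy to model. What follows is a minimal sketch in Python—the record type, field names, and the tidy 13/98 split are my own illustrative assumptions, not our registrar’s actual system—showing how such placement data reduces to the single figure quoted above.

    from dataclasses import dataclass

    # Hypothetical placement record for one matriculating student;
    # the two flags mirror categories 1) and 2) above.
    @dataclass
    class PlacementRecord:
        advanced_placement: bool = False   # on-site skills tests, AP tests, transfer credit
        remedial_coursework: bool = False  # taken in addition to (or in place of)
                                           # the stipulated first-semester classes

    def stipulated_start_share(records: list[PlacementRecord]) -> float:
        """Fraction of students who began with neither advanced placement
        nor remedial coursework--that is, the stipulated first semester only."""
        plain = [r for r in records
                 if not (r.advanced_placement or r.remedial_coursework)]
        return len(plain) / len(records)

    # An illustrative cohort matching the poll: 111 students, roughly 13 of
    # whom (12%) began with the stipulated classes and no adjustments.
    cohort = [PlacementRecord() for _ in range(13)]
    cohort += [PlacementRecord(advanced_placement=True) for _ in range(98)]
    print(f"{stipulated_start_share(cohort):.0%}")   # -> 12%

The point is not the code but the shape of the data: once each placement decision is recorded as a flag, the poll result above falls out of a one-line filter.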

Data?  We are awash in data.

 

The Fourth Question, or Sempre Da Capo

If we have diligently worked through the first three questions, we arrive at the fourth:  now what?  What do we make of all this data we have collected?  The correct next step is to assess the assessment by revisiting each of the four questions.

Returning to that first question, we might remark that musical awesomeness seemed kind of funny at the time.  However, it may be more appropriate to add practical dimensions to what we might imagine for our students.  Is it okay if our students are musically awesome sales associates at Abercrombie and Fitch?  If that is okay, must we fly a top-tier bassoonist in from Düsseldorf to achieve that outcome?

Naturally, any changes to the mission will necessitate changes to the curriculum.  Any changes to the curriculum will necessitate changes to the means of assessment.  Finally, when that next batch of data arrives, we take the repeat back to the da capo and play it all again, with more ornamentation.

This constant state of revision is a healthy process and predates the heightened inquiry of accrediting agencies.  But there is a difference.  In the past, institutions—especially music conservatories—were shaped by strong leaders who could make changes by fiat.  Whatever the logic and propriety of the L&M program adopted at Juilliard in the 1940s or the Third-Stream program adopted at the New England Conservatory in the 1960s, one thing is very clear: both William Schuman and Gunther Schuller pursued their curricular designs with very little oversight and no significant data to predict success.

Now that we are managing a wealth of data, it’s fair to ask: are we making better decisions?

 

Data and the False Sense of Security

According to Dika Newlin, when Schoenberg was teaching at UCLA, he used the expression Rhabarberkontrapunkt to describe the various lines in Stravinsky’s music.  When asked, Schoenberg—perhaps harkening back to his days in Berlin cabaret—would explain, “Extras in the German theatre would yell ‘Rhabarber‘ (Rhubarb) over and over again to give the effect of a large crowd. ‘Rhabarber counterpoint’ is equally noisy, busy, and empty of content.”

We live in a world that is increasingly saturated with data. Corporations that previously paid vast sums for research now seem to pay greater sums for people who can make sense of the sprawling data sets.  Much of that data—like so much ornamentation on the musical surface—is merely flash. Rhabarberkontrapunkt. At some level, data becomes merely the bling-bling of higher-ed administrivia.

That’s cause for concern.  Information, regardless of its utility, always seems pregnant with possibility.  Increasingly, business schools teach future MBAs to collect data the way law schools teach future lawyers to ask questions: with at least some idea of the answer and its consequences. Indeed, the danger of collecting the wrong information has already been fashioned into a pithy epigram in leadership circles: be careful what you measure, because what you measure is what you’ll manage.

We have measured the learning of student musicians.  But are we stuck managing only the learning that is demonstrated in juries and final projects?

A sense of the inflexibility and routine of some curricula is captured in this excerpt from an interview with the recording artist St. Vincent.  When asked why she dropped out of music school, she answered:

I think that with music school and art school, or school in any form, there has to be some system of grading and measurement. The things they can teach you is quantifiable. While all that is good and has its place, at some point you have to learn all you can and then forget everything that you learned in order to actually start making music.

I think a lot of people, if they’re not careful, can err on the side of the quantifiable and approach it like an athlete. Run that little bit faster, do that little bit more and think you’re being more successful. But the truth is that a lot of times it’s not necessarily about merely being the best athlete, it’s about attempting a new sport.

I couldn’t agree more.  However, a tendency to err on the side of the quantifiable is not a necessary byproduct of assessment and accreditation.  Rather, I would argue that institutions that unduly prize the results of complicated systems of assessment suffer from a want of innovation.

A curriculum seems rigid.  Indeed, the very word derives from a Latin word for “race track.”  A system of assessment seems inflexible and has the taint of governmental bureaucracy.  But the system does not impose inflexibility onto freedom: rather, it heightens whatever inflexibility has been built into the institution.  If what we require of our students is that they play a new sport, then we should have sufficient data on those students to demonstrate how that new sport is integral to achieving the objectives of the mission statement.  We must make the data work for us, not as ornament but as a plastic substance suitable for developing variation.  A bit of “inside the box” thinking should make it possible to reconfigure the answers to questions two and three until a new box can be devised.  In short, our curricular structures are a box of our own making and subject to our customizations.

 

Final Thoughts

Recently I had the opportunity to speak to some college students about a piece I composed.  When asked about the pacing of a certain passage, I froze.  I knew the answer, but I wasn’t sure if I should answer with absolute candor.

The truth?  Often when I consider the pacing of a passage, I think about James Brown.  I think about how he slowly asks the band if they are ready for the next section. I think about how he telegraphs uncertainty about whether even the audience is ready.  And I think about how that makes a simple prolongation of a subdominant seem like the answer to all the world’s problems. Like James Brown has made the IV chord as bright as the sun. Is that an appropriate measure of musical pacing?  Could such an idiomatic experience be meaningful to a student?  By what index could one gauge the newness of Papa’s bag?

I still have no idea how to teach someone to compose music.  Some days I’m not so sure I can do it myself.  Fortunately, that is not my job.  I work at an institution of artists-teachers, and I trust my colleagues to do this work and do it well.

However, I can be sure that the students at my institution are learning.  We matriculate students with a lot of data. We set clear benchmarks.  We communicate the work in the classroom—the ongoing assessments—across departments.  We test and assess and give feedback.  And we strive constantly to revise our programs.

In short, we have embraced student learning assessment.  We don’t talk about a concordance of the formative assessment in lessons and ensembles and the summative assessment of juries.  We don’t marvel that the aural competencies of ear-training resonate in the semi-structured learning laboratory that is the chamber ensemble.  All that stuff is for the reports we file for our parent university, the State, and the Department of Education.  It’s all true, but it’s not exactly grateful reading.  Inside the 19th-century walls of my institution, the concerns of student learning assessment get distilled into a single question that is more germane to our mission:

Are we doing everything we can to make the students as musical as possible?

That is the thing we want to measure and manage. And that is what the assessment of student learning asks of us.

 

Resources:

Evolving Bibliography on Accreditation

Music School Histories

Recent Periodical Literature

***

Paul Mathews

Paul Mathews is the Associate Dean for Academic Affairs at the Peabody Conservatory of the Johns Hopkins University in Baltimore, Maryland, where he has taught as part of the Music Theory Faculty since 1998.   He is the author/editor of Orchestration: An Anthology of Writings (Routledge, 2006) and co-author, with Phyllis Bryn-Julson, of the book Inside Pierrot lunaire (Scarecrow Press, 2009).   Most recently, his opera Piecing it Apart was premiered by the Figaro Project.