Thursday, February 17, 2011

Gladwellizing the college rankings

The US News college rankings get the Gladwell treatment in the current New Yorker (subscription required). The ease of parodying Gladwell is usually a good index of how easily he can be dismissed, but this time I think he actually sheds light on a phenomenon, rather than just using the phenomenon as an occasion for doing his shtick. The upshot of the piece can be expressed very succinctly: the US News rankings are bunk.

And not just these rankings, but any ranking system that tries to be both homogeneous (ranking any and all members of the genus "college or university" regardless of species-level differences) and comprehensive (evaluating each one according to its total educational package, rather than on one aspect of it). Just as Car and Driver's comprehensive metric breaks down when you try to judge an $80,000 Lotus sports car and a $25,000 Hyundai minivan according to the same formula, so does US News produce such incongruous juxtapositions as Penn State and Yeshiva being right next to each other in the ranking of national universities, as if you could go to either one and have approximately the same educational experience. They're both good schools, after all.

Gladwell rightly observes that
There's no direct way to measure the quality of an institution--how well a college manages to inform, inspire, and challenge its students. So the US News algorithm relies instead on proxies for quality--and the proxies for educational quality turn out to be flimsy at best.
More about those proxies in a minute. You can't meaningfully compare Penn State and Yeshiva because they are only superficially trying to do the same thing. If Yeshiva started graduating students with the same values as the typical Nittany Lion, it would have utterly failed its stakeholders and mission. Likewise, a college chartered to educate "the sons of coalminers" is not set up to rake in $50,000 per head in tuition or build the massive endowment it takes to run an elite liberal arts college. To the extent that my college remains true to this founding mission, it will remain in its modest position in the status system that US News perpetuates: a system in which colleges are evaluated on the extent to which they accomplish Yale's mission. But King's might accomplish something else, providing a sort of social utility that Yale is not situated to provide.

We regularly evaluate other things relative to their aims; why not colleges? As Roger Ebert explains in his review of "Booty Call," a film he awarded three stars (out of four),
To evaluate this movie, I find myself falling back on my time-tested generic approach. First, I determine what the movie is trying to do, and what it promises its audiences they will see. Then, I evaluate how successful it is, and whether audiences will indeed see the movie they've been promised and enjoy it.
Ebert gave "The Fighter" two and a half stars. Is "The Fighter" (currently up for several Academy Awards) truly inferior to "Booty Call" (nominated for no Oscars in 1998)? Is it a worse piece of filmmaking? We might want to say no, but we can't discount that "The Fighter" is trying to be "Rocky" or "Raging Bull," reaching for immortality. "Booty Call" has, shall we say, no such aspirations. Ebert praised it for being everything its producers and its audience hoped it would be.

Given that colleges are presumably trying to cultivate students' faculties of judgment, it is not completely appropriate to say that they succeed when they give their "audience" exactly what they want. I hope, in fact, that colleges might be saved from what many prospective students want. But it's clear that colleges have differing educational missions and should be judged accordingly. At least, to the extent that we can judge their accomplishments at all.

While wines can be evaluated according to one or two variables and movies judged by what the viewer can plainly see, "educational quality" can at best be determined only impressionistically. So we have to look at proxies for quality. The single biggest factor in the US News formula is "undergraduate academic reputation." Which is established largely by ... the US News rankings themselves. To anyone inclined to be suspicious of systems of privilege, this feedback loop, engineered to reinforce status, is infuriating. (I'll say more about this in a future post.)

College presidents may lament the overemphasis on the US News rankings, but they also dutifully fill out the US News survey and await the results with the anxiety that accompanies any review of one's job performance. Could they--would they want to--break free of this codependency?

To short-circuit the rankings, college presidents would have to cooperate. But they are not wired to do so; they see other colleges as competitors, not collaborators in an educational mission. Princeton gets worried when Harvard makes an innovation like dispensing with loans. East Jahunga College worries that West Jahunga State will drop its foreign-language requirement. This massive prisoner's dilemma/Mexican standoff ensures that the system remains untouched, thereby boosting demand for anti-anxiety medication for teenagers, their parents, and college professors whose sense of self-worth is tied to the status of the place where they teach.

Thursday, February 10, 2011

Fear, loathing, and trembling

Søren Kierkegaard wouldn't think much of an academic who was granted a promotion. Here's what he says about assistant professors:
How ludicrous an assistant professor is! We all laugh when a Mad Meyer tugs at a huge boulder which he believes is money - but the assistant professor goes around proudly, proud of his knowledge, and no one laughs. And yet that is just as ludicrous - to be proud of the knowledge by which a man dupes himself eternally.
Yes, you assistant professor, of all the loathsome inhumans the most loathsome, you may very well manage to say the same thing as the religious person has said, perhaps in even more beautiful language, you may also manage to reap worldly advantages with your shrewdness, yes, even honor and esteem such as the authentically religious person never won in this life - but you are duped eternally.
He's not much higher on associate professors:
Secured in life, they live in their thoughts; they have a permanent position and secure prospects in a well-organized state; they have centuries or indeed millennia between themselves and the earthquakes of existence... Their task in life is to judge the great men and to judge them according to the outcome. Such conduct toward the great betrays a curious mixture of arrogance and wretchedness - arrogance because they feel called to pass judgment, wretchedness because they do not feel their lives are even remotely related to those of the great.
Kierkegaard is usually both witty and withering in his criticism, but these strike me as pretty standard attacks on academics' vanity and isolation from a mythical land known as "the real world." If you've made it to associate professor, then you've faced this and worse. In fact, given the propensity for self-loathing among academics, most of us have said far worse things about our own profession.

Thursday, February 3, 2011

Only 16% of success is showing up

As a teacher of a humanities discipline, I’m not inclined to analyze things statistically. Wild claims, thinly supported? Now that’s more my style. But at the end of last semester, I realized that I had a convenient cache of data for helping figure out whether doing the reading really does contribute strongly to getting better grades in my classes – something I often tell my students, though I doubt many give these words much heed.

In both of my courses (each of which had two sections), I administered ten in-class quizzes. Most were unannounced. The quizzes generally tested basic knowledge of the reading assignments due on the day of the quiz. My ideal quiz is one that is extremely easy for a student who did the reading and extremely difficult for a student who didn’t. If the quizzes are at least reasonably well-designed, then I should be able to tell who read and who didn’t just by looking at the students’ grades on the quizzes.

The quizzes also serve as a way to spot-check attendance. Even though my classes are not huge (20-25 per section), I think that taking attendance every day wastes time and smacks a bit too much of high school. (It also requires better record-keeping than I’m really capable of.)

With the quiz data, then, I can tell how closely correlated attendance and reading are with overall performance in class. For each student, I calculated their grade on all written assignments other than quizzes, as I wanted to make quiz performance a truly independent variable. I also excluded class participation, which is partly determined by attendance. Correlating this grade with attendance was simple, as I could just tally up the number of quizzes a student took and compare it with their grade on all other assignments. Here’s what the graph of that looks like:
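
A note on the arithmetic before the graph: the calculation is just a correlation between the number of quizzes a student took (the attendance proxy) and that student's grade on everything else. Here is a minimal sketch in Python of how one might run the same check; the file name and column names are made up for illustration, since the actual gradebook isn't shown here. Squaring the correlation coefficient gives the share of variance in the non-quiz grade associated with attendance, which is presumably where the 16% in the title comes from.

```python
# Minimal sketch of the attendance-vs-grade correlation described above.
# The CSV file and column names ("quizzes_taken", "non_quiz_grade") are
# hypothetical stand-ins for the actual gradebook.
import pandas as pd
from scipy import stats

grades = pd.read_csv("gradebook.csv")  # one row per student

# Attendance proxy: how many of the ten quizzes each student was present to take.
attendance = grades["quizzes_taken"]

# Grade on all written assignments other than quizzes (participation excluded).
non_quiz_grade = grades["non_quiz_grade"]

r, p = stats.pearsonr(attendance, non_quiz_grade)
print(f"Pearson r = {r:.2f}")
print(f"r^2 = {r**2:.2f}  (share of variance in grade associated with attendance)")
print(f"p-value = {p:.3f}")
```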