Rankings, Damned Rankings, and The Nature of Things

Over at Concurring Opinions, Dan Solove has a thoughtful reaction to recent blogospheric angst over Brian Leiter’s citation impact studies of law faculty.  Let me explain.  No, there is too much.  Let me sum up.

As faculty, we hate the rankings, and we’re utterly absorbed by them.

Why?  This portion of Dan’s post caught my eye:

[W]ould [the rankings] be better if they resulted in a list of esoteric scholars we’d never heard of? Brian’s citation rankings work, despite their tremendous flaws, because they come close to approximating our rough sense of things.

Let’s unpack that a bit.

To the first sentence:  “[W]ould [the rankings] be better if they resulted in a list of esoteric scholars we’d never heard of?”  Dan puts two rabbits in a single hat here, and understandably so.  “Esoteric scholars,” to begin with, “we’d never heard of” to finish it off.  As Dan observes, the rankings are what they purport to be: a rationalization of the fact that certain well-known scholars deserve their fame.  Would the rankings be better otherwise?  Obviously not; they would fail on their own terms.  It is perfectly plausible nonetheless to wish for rankings of different things.  For example, I’d like to read a list of “the best scholars of esoterica” — in non-standard categories (privacy, anyone?), or in no categories at all.  Or — and? — a list of “scholars we’ve never heard of, but should have.”  Few scholars seek obscurity; many have obscurity thrust upon them.  Perhaps a set of rankings and lists might emerge to change some of that.

There are all kinds of ways to cut and combine this stuff.  How about an annual list of the best articles published in specialty student-edited journals?  Somewhere, Eugene Volokh (I believe) collected a list of especially well-written (and/or influential) student notes.  Best articles on law written by scholars in other disciplines?  How about the best articles published in journals in other countries?  Not necessarily “the best”; how about “the most interesting” or “most provocative”?  I’m only scratching the surface here; I don’t mean to slight books or book chapters or peer-reviewed or interdisciplinary or empirical work.  And lists of “the best” imply lists of “the worst” (or “least interesting,” “most redundant,” and so forth).  Surely there is room at this banquet for more criticism of some of the truly bad writing and analysis that gets published in law journals.

And then the second sentence:  “Brian’s citation rankings work, despite their tremendous flaws, because they come close to approximating our rough sense of things.”  Because the citation studies come close to approximating our rough sense, the rankings “work” only in the narrow sense that they are persuasive in justifying our intuitions regarding important and less important intellectual categories, regarding measures of significance in the world, and regarding which scholars are “worthy” of note.  The rankings “work” as rationalizations of our pre-existing prestige-and-hierarchy meters.  That’s fine in the same sense that the U.S. News law school rankings and law review placements do much of the same thing.  It’s fine in the sense, as Leiter argues himself, that the impact studies measure only what they purport to measure, and in the related sense that there is no good reason to exempt legal academics from the scrutiny to which other academic disciplines are subject.  (Except, of course, for the possibility that legal academics are less susceptible than our scholarly cousins to the pernicious effects of hierarchy.  But as I argue here, if anything we are more so.)  Recognize, however, the obvious circularity here:  We are all anxious about our status in the world.  We resolve that anxiety by intuiting a great deal, but as lawyers we resist arguments from intuition unless and until we can express them formally.  The rankings close that circle.

In other words, it is far, far easier and less threatening to the status inquiry to measure our statistical “impact” on our colleagues than it is to ask them (or anyone else) about the quality of the work.  Our absorption in citation impact is the scholarly equivalent of a line of cars slowing to watch the aftermath of a spectacular car wreck on the other side of the highway.  We know we shouldn’t join that line; someone else’s accident doesn’t tell us anything useful about our own driving skill.  But we join it anyway.  On the one hand, “there but for the grace of God go I.”  And on the other hand, “that could never happen to me.”  Such is the natural order of things.