
Rankings, Damned Rankings, and The Nature of Things

Over at Concurring Opinions, Dan Solove has a thoughtful reaction to recent blogospheric angst over Brian Leiter’s citation impact studies of law faculty.  Let me explain.  No, there is too much.  Let me sum up.

As faculty, we hate the rankings, and we’re utterly absorbed by them.

Why?  This portion of Dan’s post caught my eye:

[W]ould [the rankings] be better if they resulted in a list of esoteric scholars we’d never heard of? Brian’s citation rankings work, despite their tremendous flaws, because they come close to approximating our rough sense of things.

Let’s unpack that a bit.

To the first sentence:  “[W]ould [the rankings] be better if they resulted in a list of esoteric scholars we’d never heard of?”  Dan puts two rabbits in a single hat here, and understandably so.  “Esoteric scholars,” to begin with, “we’d never heard of” to finish it off.  As Dan observes, the rankings are what they purport to be: a rationalization of the fact that certain well-known scholars deserve their fame.  Would the rankings be better otherwise?  Obviously not; they would fail on their own terms.  It is perfectly plausible nonetheless to wish for rankings of different things.  For example, I’d like to read a list of “the best scholars of esoterica” — in non-standard categories (privacy, anyone?), or in no categories at all.  Or — and? — a list of “scholars we’ve never heard of, but should have.”  Few scholars seek obscurity; many have obscurity thrust upon them.  Perhaps a set of rankings and lists might emerge to change some of that.

There are all kinds of ways to cut and combine this stuff.  How about an annual list of the best articles published in specialty student-edited journals?  Somewhere, Eugene Volokh (I believe) collected a list of especially well-written (and/or influential) student notes.  Best articles on law written by scholars in other disciplines?  How about the best articles published in journals in other countries?  Not necessarily “the best”; how about “the most interesting” or “most provocative”?  I’m only scratching the surface here; I don’t mean to slight books or book chapters or peer-reviewed or interdisciplinary or empirical work.  And lists of “the best” imply lists of “the worst” (or “least interesting,” “most redundant,” and so forth).  Surely there is room at this banquet for more criticism of some of the truly bad writing and analysis that gets published in law journals.

And then the second sentence:  “Brian’s citation rankings work, despite their tremendous flaws, because they come close to approximating our rough sense of things.”  Because the citation studies come close to approximating our rough sense, the rankings “work” only in the narrow sense that they are persuasive in justifying our intuitions regarding important and less important intellectual categories, regarding measures of significance in the world, and regarding which scholars are “worthy” of note.  The rankings “work” as rationalizations of our pre-existing prestige-and-hierarchy meters.  That’s fine in the same sense that the U.S. News law school rankings and law review placements do much the same thing.  It’s fine in the sense, as Leiter himself argues, that the impact studies measure only what they purport to measure, and in the related sense that there is no good reason to exempt legal academics from the scrutiny to which other academic disciplines are subject.  (Except, of course, for the possibility that legal academics are less susceptible than our scholarly cousins to the pernicious effects of hierarchy.  But as I argue here, if anything we are more so.)  Recognize, however, the obvious circularity here:  We are all anxious about our status in the world.  We resolve that anxiety by intuiting a great deal, but as lawyers we resist arguments from intuition unless and until we can express them formally.  The rankings close that circle.

In other words, it is far, far easier and less threatening to the status inquiry to measure our statistical “impact” on our colleagues than it is to ask them (or anyone else) about the quality of the work.  Our absorption in citation impact is the scholarly equivalent of a line of cars slowing to watch the aftermath of a spectacular car wreck on the other side of the highway.  We know we shouldn’t join that line; someone else’s accident doesn’t tell us anything useful about our own driving skill.  But we join it anyway.  On the one hand, “there but for the grace of God go I.”  And on the other hand, “that could never happen to me.”  Such is the natural order of things.

2 thoughts on “Rankings, Damned Rankings, and The Nature of Things”

  1. Thanks for the interesting post. This brought to mind a post you made a while ago about half-baked (I think you said it more nicely) presentations at the big IP conferences.

    One real benefit to those conferences, though, is that, at least in that area, the scholarly field is much more egalitarian – presentations are grouped by subject rather than “impact,” and as a result you have the opportunity to learn about some really great ideas and scholarship in areas of interest that you otherwise wouldn’t have seen (a few SSRN subscriptions don’t hurt, either). You also have the opportunity to present your scholarly ideas to others interested in the field, and to make new connections (in many senses of that word).

    Perhaps these benefits are worth the cost of hearing some incomplete ideas and whatever drawbacks there are to the mega-conference.

  2. Mike — we were at a conference a few years ago when I was just a young pup and I remember asking you (and Peggy Radin, actually) what your practices were about reading current law review articles. I was looking for a number per week and a subject matter breakdown. You both didn’t really answer. 🙂

    As you know, I think citation counting and ranking is silly at best and pernicious at worst. Sure, it might bear some relation to quality, but if you want to know who writes good work, you actually read the work and think about it carefully. You don’t count downloads, or citations, or inbound links, or engage in any such popularity contests.

    I think you’re right that rankings are fundamentally a distraction from other, fuller, conversations about what constitutes quality in legal writing, what people are reading and why, and (egads!) the substance of what people are writing about. It’s very simple to count numbers and that’s a big part of the problem.

    In a way, the whole thing seems like a hangover from the heyday of law and econ, when a reductive quantification of very complex phenomena was offered in ways that often overshadowed the proper object of study.
