Mule Day and Quantificationism

The NYT has an alarming piece today on the simplicity of the DHS algorithm for awarding anti-terror funding. Apparently, states were awarded funding based on their own counts of “terror targets.”

The National Asset Database, as it is known, is so flawed, the inspector general found, that as of January, Indiana, with 8,591 potential terrorist targets, had 50 percent more listed sites than New York (5,687) and more than twice as many as California (3,212), ranking the state the most target-rich place in the nation.

A Tennessee “Mule Day” celebration apparently got as much weight as the Empire State Building.
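
To see why this goes wrong, here is a minimal sketch of the kind of allocation the inspector general describes, assuming (purely hypothetically) that a grant pool is split in proportion to each state’s raw, self-reported target count. The state counts are the ones quoted above; the pool figure is invented for illustration, not an actual DHS number.

```python
# Hypothetical sketch: splitting a grant pool in proportion to raw,
# self-reported "target" counts, using the figures quoted above.
reported_targets = {
    "Indiana": 8_591,
    "New York": 5_687,
    "California": 3_212,
}

pool = 100_000_000  # illustrative grant pool, not an actual DHS figure
total = sum(reported_targets.values())

# Allocate to each state its share of the pool, proportional to its count.
for state, count in sorted(reported_targets.items(), key=lambda kv: -kv[1]):
    share = pool * count / total
    print(f"{state}: {count:,} targets -> ${share:,.0f}")

# Indiana, having counted the most "targets" (Mule Day included),
# comes out the most target-rich state -- and the best funded.
```

The flaw is immediate: because every entry counts equally, a state is rewarded for enumerating more sites, not for facing more risk.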

Surely we academics wouldn’t fall prey to such shallow measurements? Well, I won’t comment on the SSRN download debate here, but I’m astonished that even the sciences seem to go in for raw citation counts as measures of importance or relevance:

[I]mpact factors have assumed so much power, especially in the past five years, that they are starting to control the scientific enterprise. In Europe, Asia, and, increasingly, the United States, Mr. Garfield’s tool can play a crucial role in hiring, tenure decisions, and the awarding of grants.
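For readers unfamiliar with Garfield’s tool, the standard two-year impact factor is just a ratio: citations received in a given year to a journal’s articles from the previous two years, divided by the number of articles the journal published in those two years. A minimal sketch follows; the journal and its numbers are invented for illustration.

```python
def impact_factor(citations_to_prior_two_years: int,
                  articles_prior_two_years: int) -> float:
    """Standard two-year impact factor: citations received this year
    to articles from the previous two years, divided by the number of
    articles published in those two years."""
    return citations_to_prior_two_years / articles_prior_two_years

# Invented numbers: 1,200 citations in 2006 to the 400 articles
# a journal published across 2004 and 2005.
print(impact_factor(1_200, 400))  # 3.0
```

Note what the ratio cannot see: whether the citing papers praise or refute the work, whether a few blockbuster articles carry the whole journal, or whether a field simply cites more heavily than its neighbors.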

We need a name for the ideological exaltation of numerical measurements of quality, importance, or priority. Following Norton Juster’s delightful The Phantom Tollbooth, I nominate “quantificationism.” Juster satirized the pretensions of the Mathemagician, ruler of Digitopolis, in his children’s book. Let’s hope some Swift of our time can help us recognize the emptiness of so many conventional measures of success or importance.

2 thoughts on “Mule Day and Quantificationism”

  1. Frank,

So is your main concern (a) the desire to rank, (b) the desire to quantify value, (c) the reliance on poor proxies, or (d) all of the above? I suspect (d), but I wonder whether any one of the three is more alarming, or whether it depends on the context. With respect to the DHS algorithm, it obviously makes sense to prioritize and allocate funds according to each state’s risk/need, a component of which would include the number of expected targets, but the algorithm was flawed: a poor proxy, a shallow measurement. In your view, is the problem in academia the need to rank or the use of poor proxies?

    (Sorry for the delayed comment; I’ve been offline for a week. Thanks for the many excellent posts!)

    Brett

  2. That’s a nice breakdown of the issues. I think that there are different problems in different types of academia. Some are far more amenable to ranking than others. Some are more doomed than others to fall victim to “shallow proxies.”

In many sciences, I think the “impact factor” could be sufficiently refined to give individuals a sense of how important or insightful individual contributions have been. Of course, Galison’s work in the history of science might lead us to question the indispensability of any particular person there!

In the humanities, it’s much harder to distinguish “good” from “bad” work, especially because a) a lot of the best work bridges disciplinary boundaries, b) sharpening our sense of division and conflict about values is often just as important as reaching consensus on some “truth”, and c) a lot of it has an aesthetic dimension, and de gustibus non est disputandum.

The question for the legal academy is whether it will permit a healthy pluralism of scientific (perhaps scientistic) research paradigms and more humanistic approaches (including ELS, law & econ, law & humanities, doctrinal scholarship, postdisciplinary interventions, etc.), or whether it will so condition prestige and recognition on things like SSRN downloads, research grants, and quantification that only scholars who model our social science on the natural sciences will be thought of as important or worth reading.
