
Are You Being Evaluated Fairly?

Danielle Citron is giving a talk today at Harvard’s Berkman Center on “Open Code Governance”–the growing movement to render automated processes of judgment more transparent. Here are some interesting targets for reform that she mentioned in a pre-talk interview:

Systems reflect the biases of their programmers. For instance, Helen Nissenbaum studied an automated loan program that assigned negative values to applicants from certain locations, such as high-crime and low-income neighborhoods. We can imagine a graduate school’s automated system that dilutes the GPAs of applicants from designated community colleges, such as certain rural schools whose student bodies are disproportionately less affluent or drawn from particular minority groups. Because the source code for these systems is typically closed, no one can view the programmer’s instructions to the computer. The bias remains hidden from interested individuals.
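To see how such a bias can hide, consider a minimal Python sketch of a closed loan-scoring routine. Everything here is invented for illustration (the ZIP codes, weights, and approval threshold are assumptions of this sketch, not details from Nissenbaum’s study); the point is only that a single line of code can penalize geography while the applicant sees nothing but the verdict.

```python
# Hypothetical illustration of bias hidden inside closed scoring code.
# The ZIP codes, weights, and threshold below are invented for this
# sketch; they are not taken from any actual lender's system.

# Location penalties -- the table an applicant can never see.
ZIP_PENALTY = {
    "60612": -40,  # hypothetical "high-crime" designation
    "38106": -35,  # hypothetical "low-income" designation
}

def loan_score(income: float, credit_years: int, zip_code: str) -> int:
    """Compute an internal score; applicants see only approve/deny."""
    score = min(int(income / 1000), 100)   # capped income contribution
    score += credit_years * 5              # longer credit history helps
    score += ZIP_PENALTY.get(zip_code, 0)  # hidden geographic penalty
    return score

def decide(score: int, threshold: int = 120) -> str:
    return "approve" if score >= threshold else "deny"

# Two applicants identical in every disclosed respect:
print(decide(loan_score(85_000, 8, "60612")))  # deny    (85 + 40 - 40 = 85)
print(decide(loan_score(85_000, 8, "02138")))  # approve (85 + 40 = 125)
```

With the source open, the penalty table would be the first thing a reviewer questioned; with it closed, two outwardly identical applicants simply receive different verdicts.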

Citron’s work reminds me of a real asymmetry developing in contemporary life: between ever-eroding expectations of personal privacy and ever-expanding assertions of corporate (and government) secrecy. Individuals are increasingly expected to be “legible” to authorities (as James Scott’s Seeing Like a State would put it), while the state is putting security before transparency. Moreover, judgments vital to our reputation, like credit scores and admissions decisions, are made in a “black box” that only the most dogged investigative reporters have started to illuminate.

Now I am the first to admit that there are sometimes very good reasons for secrecy. Making the criteria behind the TSA’s “no-fly” list public would not be good for national security. Similarly, the IRS’s criteria for “flagged” returns should probably be kept out of the public eye. But when we move from “preventing lawbreaking” to competition for societal goods (like credit and schooling), doesn’t the need for secrecy dissipate substantially?

Though the automated systems Citron focuses on raise some of the greatest problems of legitimacy, the movement for openness should also find other targets. Philosopher/programmer Samir Chopra warns that we should not hold machines to higher standards than we hold humans (in comments on this post):

[Some skeptical of automated decisions say] “if we don’t know how a decision is made, we might want to limit the range of its effects” in the context of trying to draw a line “between algorithm-driven results and corporation or person-driven results”. I’m not sure there is such a principled distinction to be drawn. First, we assume too much knowledge on our part about the reasons why humans come to certain decisions. Very often human beings confabulate reasons for the decisions they make (there is a large psychological literature on this). And secondly, our own decisions are to a certain extent programmed, by societal expectations, education, culture, upbringing, language, gender, nationality and what have you. Where one kind of decision changes from the “purely algorithmic” to the “decidedly human” is not clear.

Perhaps one of the big challenges for administrative law, and for selection systems in general (be they human- or computer-powered), is to make the rationales for distinctions more transparent and compelling. If you want to be really scared by the arbitrariness of a human-driven decision system, look no further than Well-Founded Fear, a documentary on asylum adjudications at the INS.

2 thoughts on “Are You Being Evaluated Fairly?”

  1. Since the “black boxing” of discretionary decisionmaking long predates the arrival of computer systems, Danielle’s analysis may counsel *more* automated decisionmaking by bureaucracies (via “transparent” code) rather than less.

    Is that always good for society? My instinct is that it’s not, and that there may be cases other than national security that counsel caution in the drive for transparency. In processes of decisionmaking, distinctions that are “compelling” (or persuasive) and “transparent” raise two questions, rather than one.

  2. Absolutely. One can imagine a bureaucrat who says, “I always rule for the person with blue eyes”; totally transparent, totally unpersuasive.

    But I think that transparency is often necessary (if not sufficient) for persuasiveness. Following Mathews v. Eldridge, perhaps we can also say that a decision on a matter that is not of grave importance need not be all that persuasive.
