CFP: AALS Annual Meeting 2015 Program on Automated Decision-Making

The AALS Section on Internet & Computer Law and the Section on Defamation & Privacy invite paper proposals for our co-sponsored panel on Automated Decision-Making at the 2015 AALS Annual Meeting in Washington, DC. We will select one panelist through the CFP process. One- or two-page proposals, which will be reviewed blindly by members of the two executive committees, are due on or before June 1. Proposals must be for papers that have not yet been published as of the deadline, and the selected panelist will be required to submit a completed draft by November 1. Submissions from pre-tenure scholars are especially encouraged. To submit a proposal, please send it in Word or PDF format to Annemarie Bridy, abridy@uidaho.edu, with “AALS 2015 Annual Meeting Proposal” in the subject line.

Automated Decision-Making
AALS Section on Internet & Computer Law and Section on Defamation & Privacy
Joint Panel for AALS 2015 Annual Meeting

Proliferating sensors, affordable data storage, indiscriminate personal data collection, and increasingly robust predictive algorithms each raise issues related to privacy, security, and due process. Combined, however, these technological advancements have created a nearly insatiable appetite for data to improve organizational decision-making. That appetite reaches across domains including consumer lending, insurance, advertising, legal compliance, national security, and employment. Moreover, given the massive scale of databases and the wide range of decisions perceived to be amenable to data-driven analysis, decisions affecting individuals are increasingly automated.

Automated decision-making promises accuracy and efficiency. Organizations believe they can use it to avoid errors of human perception and subjective judgment. Staff resources can be redirected elsewhere when decisions become automated. Yet automated decision-making is also rife with peril. People tend to place unwarranted trust in decisions made by computers, even though bias is easily hard-wired into computer systems. The use of personal data to make extremely nuanced and particularized decisions raises a number of privacy concerns. Incorrect inputs risk correspondingly erroneous outputs. Automated decision-making could also have a disparate impact on vulnerable populations that are susceptible to certain kinds of influence or that find it difficult to fight back. Compounding this problem is the almost complete lack of meaningful transparency for those subjected to automated decisions. Individuals are left to guess whether any given organizational response might have been at least partially the result of automated decision-making.

Policy makers are struggling to respond to the legal, ethical, and normative challenges posed by automated decision-making. This panel will explore those challenges and will attempt to identify similarities and differences among the varied domains in which automated decision-making operates.

Annemarie Bridy, Chair, Internet & Computer Law (abridy@uidaho.edu)
Woody Hartzog, Chair, Defamation & Privacy (whartzog@samford.edu)