Clay Shirky has recently written “A Speculative Post on the Idea of Algorithmic Authority,” based on a talk at Yale’s recent conference on Journalism & The New Media Ecology. Shirky notes that “people trust new classes of aggregators and filters, whether Google or Twitter or Wikipedia (in its ‘breaking news’ mode),” and calls “this tendency algorithmic authority.” He then offers several reflections on the nature of the authority of aggregators, search engines, and peer-production.
After discussing several ways of gathering knowledge, Shirky observes that:
An authoritative source isn’t just a source you trust; it’s a source you and other members of your reference group trust together. This is the non-lawyer’s version of “due diligence”; it’s impossible to be right all the time, but it’s much better to be wrong on good authority than otherwise, because if you’re wrong on good authority, it’s not your fault.
The legal world has many parallel concerns beyond the due diligence example:
a) In the world of legal philosophy, authority is a central topic, often revisited. There are great edited collections on the topic, exploring how legal systems reconcile a commitment to reason-giving with the necessity of finality and submission to duly constituted authority.
b) Evidence law is deeply concerned with epistemology–how we become sufficiently certain of a fact or conclusion to deploy the coercive power of the state to, say, imprison someone.
c) Common law systems of precedent distinguish between precedential and merely persuasive authority, delineating exactly how powerful past holdings should be in governing the resolution of current disputes.
As I reflect on these legal ideas, I wish I’d incorporated more of them in my 2005 article Rankings, Reductionism, and Responsibility–where I conflated the ideas of authority and comprehensiveness, assuming that the most complete collection of data would also be the most authoritative one. I now question that assumption, for reasons I’ll get into in my next post.