“Oh, the humanity,” the now-trite Herbert Morrison radio call of the Hindenburg disaster, came to mind today as I read Jeffrey Rosen’s “The Web Means the End of Forgetting,” a NYTimes Magazine popularization of a theme heard in cyberlaw circles for some time: The Internet never forgets.
Rosen’s piece covers the right ground and the standard moves (anecdotes, a reference to Borges, the history of privacy, prescriptions from cyberlaw scholars, tech experts, and behavioral economists), but for my money he doesn’t really connect with the topic until he gets around to the point that the Internet’s inability to engage in collective forgetting is linked, perhaps inextricably, with the passing of the idea of collective — I’ll say “communal” — forgiveness. Rosen:
In addition to exposing less for the Web to forget, it might be helpful for us to explore new ways of living in a world that is slow to forgive. It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.'”
Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
I think that’s the key to what makes so many people uncomfortable with the persistence of Facebook data and search histories. We aren’t bothered so much by the fact that material doesn’t disappear; we are bothered by the fact that the persistence and disappearance of knowledge about us are no longer products of our communal experiences. The technology is, in this sense, fundamentally inhumane. Rosen didn’t cite or quote Neil Postman, but in Technopoly, retelling Plato’s parable of Socrates and the invention of writing, Postman anticipates The End of Forgetting. Writing, Postman’s paradigmatic new technology, was likewise fundamentally inhumane. It gave us — the community — the ability to recollect, but it deprived us of the ability to remember.
In the “solutions” department, Rosen is somewhat dismissive of comparisons to what he calls “self-governing communities like Wikipedia” or “the wisdom of the crowd”; he devotes more time to suggestions by Zittrain (“reputation bankruptcy”) and Ohm (antidiscrimination laws), to market-based private correctives (ReputationDefender), and code (the Mission Impossible-style “self-destructing data”). For a problem that is essentially collective and communal, these are all surprisingly individualistic approaches. I’ll predict that something like what really does happen in Wikipedia much of the time, as opposed to Rosen’s stereotyped presentation of Wikipedia, is likely to emerge in Facebook, and if not in Facebook, then as an adjunct to Facebook: community governance. In fact, if Facebook is smart, it will stop putting so much emphasis on hyper-elaborate privacy settings and policies, each change to which seems to elicit a massive chorus of criticism, and start investing in protocols and policies for collective management and monitoring by groups of Facebook users. Who knows? Maybe there’s a profit opportunity there.
While I’m at the “humanity” theme, I’ll note my strong negative reaction to Richard Thaler’s ineffective and ill-informed effort to analogize problems with soccer (football) refereeing to problems with regulating financial markets, which also appears in today’s Times. Thaler doesn’t know much about soccer, and it shows. He repeatedly refers to the “rules” of soccer and makes a half-hearted and ultimately incorrect effort to describe when a player should be whistled for being offside. I understand that he isn’t really trying to reform soccer officiating, but I couldn’t help thinking that he starts from a false premise: That the outcome of financial markets and the outcome of a soccer game share a common goal — that performance, not regulation, should dictate results.
The problem is that there is no shared vision in soccer of what constitutes “performance”; aside from the scoreline, “performance” in soccer is measured on essentially aesthetic — humane — grounds. If you know soccer, you know that in most competitive contexts, officials are subject to post-match assessments, as I assume they are in other sports. I don’t want to diminish the fact that the recent World Cup finals revealed something that every soccer fan and player already knew: referees are fallible, and their fallibility occasionally leads to missed calls that change the outcomes of games, even very important games. But the performance of a referee as measured in a referee assessment is not merely a question of whether all offenses visible to an objective observer were whistled by the referee, because there are often cases where a referee has the power — judgment, let us say — *not* to call a foul. The question, in other words, is the character of the referee’s judgment. An additional criterion during the assessment is the “flow” of the game: of the 90 minutes of a standard match, what percentage of that time involved the ball actually being in play? (60 minutes out of 90 is a decent benchmark.) And there is a significant degree of communal self-regulation on the field of a soccer match. The referee has power and judgment, but ultimately it is the players themselves who decide whether the game will be open and flowing (Germany v. Uruguay, for example) or cynical and tactical (Netherlands v. Spain).
All of those things could be changed, of course. Thaler, the economist, gives the impression that he wishes that soccer were more like what he believes basketball to be, with less judgment in the officiating and more “objective” criteria for regulating play. When it comes to financial markets, he may be right on; that’s not my area of expertise. And doubtless there are those who agree with him when it comes to soccer. For myself, I reacted strongly to his piece because I read it as of a piece with Rosen’s take on Facebook: Oh, the humanity!
I’m slowly coming to the conclusion that the assumption that we will stay static and perpetually annoyed as the Internet changes privacy practices is probably wrong. What will happen, I think, is that our notion of how significant it is that negative information about someone is available on the Internet will change once everyone has negative information about themselves on the Internet somewhere.
The fallacy in the above is ” … once everyone …”
There will always be inequalities, and people who are more reserved than others.
Moreover, even if some attitudes do change with time, that’s little comfort to people who are caused great pain along the way (“So your life is ruined – a century from now, this wouldn’t matter!”).
Right, nuance is hard in a blog comment. Once it’s common, is what I meant. There will always be hermits and other outliers.
Your second point is a very good one. We are in a moment of transition. Transitions are hard, anxiety-producing affairs, sometimes even catastrophic.
Re Seth’s comment about “people who are more reserved than others”. Are you assuming that the more reserved people will take greater measures to control their personal information online or that those people will feel more aggrieved than less reserved people when others post information about them online? Or both? I suspect that both are probably true.
I meant that “everyone” is not a realistic assumption – some people will be better at being “on guard” against potential embarrassment in the first place, at having the social power to get such incidents removed to a *significant* degree, at having the ability to retaliate to make sure others don’t bring them up, and so on.
There’s an impulse to predict the problem will just go away after a while, because of this conjectured equality. I’m saying the huge inequality in ability to guard one’s privacy and then to manage it will ensure this remains a problem (though changing in form over time).
Seth, I’ll see my assumption and raise you one of your own. Where is this power to shut down embarrassing information going to come from? It looks to me like you are assuming that economic and status disparities will somehow translate into control over information. But I frankly don’t see that happening to any significant extent. Hollywood celebrities are pretty high-status and wealthy; but that seems to give them zero ability to control damaging information about themselves.
Two fallacies there: 1) Arguing hares aren’t faster than tortoises because in a well-publicized case the latter once won a race over the former 2) Arguing therefore ants are the equal of both tortoises and hares, because they’re all animals. That is, you hear about the instances where an attempt to suppress information fails – but by definition, you’ll never hear about the examples where the high-status and wealthy *succeed* in suppressing information. Moreover, that one high-status and wealthy interest (celebrities) loses in a fight with some other powerful interest (e.g. tabloid press) hardly means status and wealth are not important factors.
So, you could tell me what the evidence is, but then you’d have to kill me?
More like the people who could tell you are “dead”, so they can’t speak.
Formally, I think the burden of proof is on you to do the hard work of establishing the proposition, by dealing with the obvious flaw that a cause célèbre isn’t a general case, rather than having it be assumed true unless _I_ do the hard work and come up with a statistical survey of information that’s been suppressed (which of course will then be taken as self-refuting, because if I could find it, it couldn’t have been really suppressed, right?)
It’s a bit like the death penalty argument that no truly innocent person would ever be executed, because they would be pardoned beforehand. If virtually nobody is ever pardoned, do you conclude that argument is false, or that it’s true and the justice system is perfect?
Burdens of proof? In a blog comment? I’m just trying to understand why you think that privacy norms won’t change. So far, most of what you’ve provided is hypothetical examples of bad arguments.
OK, to recap:
Well, it’s a question of what you mean by “change” – nothing is ever exactly the same over time, but “everyone” is a far-ranging assumption. My point is that I believe this idea is ill-considered: ” … once everyone has negative information about themselves on the Internet somewhere …”
My reply is that there will be persistent massive inequalities in the ability to find, publicize, and inflict damage from such negative information, and beforehand, significant parts of the population will be better at protecting themselves from creating such information in the first place. That is, it will not be any sort of overall (excluding “hermits and other outliers”) equality, but rather still a battle of various types of power.
I also find a subtext in such an “everyone” argument of roughly “Don’t worry about the powerful hurting the weak, because there will be no such thing in the future” (don’t mean to put words in your mouth here, just explaining my thinking).
The “change” will be in tactics and forms, but still profoundly affecting people.
I suppose this supports Seth’s position: Several scholars have noted recently with respect to online reputation management strategies that there is a disparity here in individuals’ respective abilities to protect their online reputations using services like ReputationDefender because of the cost of using those services. There are techniques to bury damaging information online (search engine optimization, astroturfing, etc.), but they are time-consuming and there is a bit of an art to them. So it is arguable that those who can afford to pay for a reputation management company to engage in these activities on their behalf will be privileged vis-à-vis those who have neither the money to hire such a service nor the time or technical wherewithal to protect their own online reputations in this way.
As usual in these sorts of debates I’m not positive we’re taking polar opposite positions. To lay my cards on the table, I am *extremely* skeptical that the activities of services like ReputationDefender will amount to more than a drop in the ocean, in terms of shaping future *general societal norms and expectations*. Even very rich 17- and 18-year-olds and their parents, I suspect, will not take the time and resources to employ such services (which are bound to be imperfect anyway) for the ordinary (within a standard deviation or two) indiscretions of teenagers, and in order to have an impact on general norms and expectations, I think the number of people effectively restricting the flow of negative information about themselves would have to be large, well beyond the very rich. But I suppose this is an empirical question, which means that my suspicions plus 2 dollars buy you a cup of coffee. I’m calling it in the air.