
Fast, Ruthless, and Under Control

Given the recent series of posts on the effects of technology on the brain, I thought I might share a little neuroscholarship. As Geert Lovink writes, “the neurological turn in recent web criticism exploits the obsession with anything related to the mind, brain and consciousness,” and I haven’t been immune. My recent foray into bioethics, Cognition-Enhancing Drugs: Can We Say No?, makes the case that the drugs we call “cognition-enhancing” may simply be creating new cognitive skills that make us better suited for some activities and less well adapted for others. It’s wrong to see such a change as either a straightforward advance or a deterioration in ability.

Could the same be said about internet use generally–or particular Web 2.0 technologies that are reshaping social life? I think so, if only because many new internet technologies aim to give (an illusion of) complete awareness of various fields (of friends, news stories, search terms, tweets (see TweetDeck), etc.). That elision between mastery and a sense of mastery is one key to understanding their appeal.


Increasingly competitive environments demand that individuals cultivate ever-larger networks and connectivity. Belief in the likely success of the endeavor is essential motivation for continuing it. If people feel overwhelmed by a surfeit of cyberstimuli (or scholarship), drugs are being developed that can give them (a sense of) increased mastery. The military appears to be at the cutting edge here:

The US Defense Advanced Research Projects Agency (DARPA) is . . . developing technology that can “regulate” emotions: “By linking directly into the senses and remotely monitoring a soldier’s performance, feelings of fear, shame or exhaustion could be removed. What was once achieved by issuing soldiers with amphetamines could now be done remotely with greater precision.” With such developments, the “eyes” and “ears” of the military would no longer be susceptible to human error and emotion, not least because computers are not at the mercy of bodily functions even while they do not function without the presence of humans.

Perhaps the new warriors will work better than automated stock-ordering technology. But the larger question is: why would we want them to? And why would we want to (feel that we) master ever-increasing digital streams of information?

In the first issue of the American Journal of Bioethics: Neuroscience, Paolo Benanti suggests some reasons in the essay “Neuroenhancement in Young People: Cultural Need or Medical Technology?” Benanti makes several interesting observations:

Post-human culture is asking science, technology, and medicine to build something new that is not solely human. . . . We live in a post-human society where mastery is the foremost issue at stake. . . . The telos of no telos of our time is, in fact, a telos of techne. . . .

[W]e can recognize three major levels of a culture: 1) artifacts, 2) espoused beliefs and values, and 3) basic underlying assumptions. . . . Rather than speak about a priori boundaries, restrictions, and prohibitions on the use of neurotechnology, we must instead look at neuroenhancements as artifacts of a culture. . . .

I think Benanti’s article is instructive because of the deep tension between the two block quotes above. If we do indeed live in a “telos of techne,” doesn’t that very fact collapse the distinction between artifacts and assumptions? As I have noted in earlier work, isn’t the technology in effect becoming the values? Or, in a subversion of Talcott Parsons’s proposed truce in the social sciences, isn’t the value studied by economics far more important than the values studied by sociologists and philosophers?

If one of Benanti’s points must cede ground to the other, my money is on “the telos of techne” to come out on top. Once the internet interpreted censorship as damage and routed around it; now its dominant social effect may be elevating the prominence of the nonstop tweets, updates, and rapid assessments of those best able to navigate the new digital reality. Deep reflections are routed around; rapid production and aggregation are rewarded. Of course, no one can be sure whether that’s the case: the algorithmic authorities behind much information-ordering technology are secret and proprietary. Nevertheless, those who want attention and influence know that their digital traces are being watched and weighed, and, as Paul Virilio puts it, “the watching gaze has long since ceased to be that of the artist or even the scientist, but belongs to the instruments of technological investigation, to the combined industrialization of perception and information.”

Jaron Lanier has warned that “technology criticism shouldn’t be left to the Luddites,” even if one risks objectivity by diving deep into new modes of communication. Lanier’s book You Are Not a Gadget worries about situations where “every element in the system–every computer, every person, every bit–comes to depend on relentlessly detailed adherence to a common standard, a common point of exchange” (15). The new neuroenhancement aims to take the mechanization of war all the way down to the level of the soldier, and to standardize affect with scientific precision in an already Taylorized service sector. (Perhaps we’ll evolve from concerns about a “Beauty Bias” to worries about a “Bright Mood Bias,” and Lanier’s follow-up will be “You Are Not a Gidget.”)

The “neural turn” in cybercriticism worries that internet consumption practices interact with neuroplasticity, turning a brain that could once objectively evaluate the twitterverse into a Pavlovianly demanding consumer of its treats. It boils down to a critique of habit, a plea for an internal Interrotron designed to keep the big picture and the impartial spectator in mind. It might lead to more humanely designed technology. But given that technology companies are the most trusted institutions in America, it’s more likely that critics will be written off as uncomprehending Cassandras.