
So What’s the Google End-Game?

Nicholas Carr is one of the leading commentators on internet culture, and his article “Is Google Making Us Stupid?” will shape discussion of the internet’s cognitive effects for a long time. Here’s one conclusion from the piece:

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people – or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” . . .

[T]heir easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

Google’s goals may seem a bit overambitious given AI’s failure to master the emotional dimensions of speech recognition and production. Nevertheless, the merger of man and machine remains a tempting trope for new-economy superstars, primarily because the very idea of “superhuman intelligence” can become a self-fulfilling prophecy.

What does it mean for a machine to be “smarter” than people? We can certainly grasp that a computer performs millions more computations per second than a human does. But number-crunching and chess-playing are only small aspects of human intelligence. Cashing out the “smarter than human” claim probably requires a comparative orientation to results: the machine would have to help some humans manipulate or beat other humans in competition. So our consideration of machine smarts may already be limited to areas of human competition (as opposed to cooperation) – a telling focus built into the very idea of comparison here.

For example, consider the “end of theory” (in which the most important scientific results come from brute number-crunching) that Long Tail author Chris Anderson predicts:

At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn’t pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right.

Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required. That’s why Google can translate languages without actually “knowing” them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content. . . .

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.
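To see how little semantics the “statistics of incoming links” approach requires, here is a minimal sketch of a PageRank-style power iteration in Python. The toy link graph, damping factor, and iteration count are illustrative assumptions of mine, not Google’s actual parameters; the point is only that a ranking emerges from raw link statistics, with no analysis of what any page says.

```python
# A PageRank-style power iteration over a toy link graph.
# The graph, damping factor, and iteration count are illustrative
# assumptions, not Google's actual parameters.

# Pages and the pages they link to (hypothetical toy data).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85  # the damping factor from the original PageRank paper
n = len(links)
rank = {page: 1.0 / n for page in links}

for _ in range(50):  # iterate until the scores settle
    new_rank = {page: (1.0 - damping) / n for page in links}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += damping * share
    rank = new_rank

# No semantic analysis anywhere above: "c" ends up ranked highest
# purely because the statistics of incoming links say so.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```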

In other words, humans are atoms, and the social sciences can be assimilated to the natural sciences. It’s hard to believe that this tired assumption, exhaustively debated and refuted in the 1960s and 70s, is au courant again. Social science is always already a tool for certain purposes. Whether we adopt an external perspective on the types of “behind the back” social coordination that computers can detect, or care more about understanding the lifeworld of social actors, depends entirely on what we’re trying to accomplish. Is it attempts at manipulation or conversations about meaning? Are we concerned about brute numbers on what most people do, or accounts of why they do it?

Anderson is right that new data-mining technology will be increasingly important to winning lawsuits and elections. But when politics becomes more a matter of finding out, say, “how many Democratic-leaning Asian Americans making more than $30,000 live in the Austin, Texas, television market” than of actual discussion of policy and its implications, democracy gets replaced with demographic slicing and dicing. Moreover, accounts of why elections are won and lost are notoriously slippery and weak, and it’s all too easy to imagine our science-challenged punditocracy celebrating some whiz-bang computer thingy as the new way to win (and thereby entrenching its importance in future elections, as others who don’t understand the underlying technology nevertheless buy in and ensure the spread of the discourses it generates).
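Mechanically, that kind of microtargeting is nothing more than a filter and a count over a voter file. A minimal sketch in Python, assuming a hypothetical CSV whose file name and columns (party_lean, ethnicity, income, tv_market) are stand-ins of my own, not any real dataset’s schema:

```python
import csv

# Hypothetical voter-file schema: the file name and the columns
# party_lean, ethnicity, income, and tv_market are illustrative
# assumptions, not a real dataset.
def count_segment(path):
    with open(path, newline="") as f:
        return sum(
            1
            for row in csv.DictReader(f)
            if row["party_lean"] == "D"
            and row["ethnicity"] == "Asian American"
            and float(row["income"]) > 30_000
            and row["tv_market"] == "Austin, TX"
        )

# print(count_segment("voter_file.csv"))  # e.g., the Austin query above
```

The triviality of the query is the point: nothing in it asks why these voters lean the way they do.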

So to return to Google: we should never forget that the success of the AI project depends in large part on its authors’ ability to convince us of the importance of its results. A society that obsesses over the top Google News results has made those results important – and we are ill-advised to assume the reverse (that the results are obsessed over because they are important) without some narrative account of why the algorithm is superior to, say, the “news judgment” of editors at traditional media. If personalized search ever evolves to the point where someone can type “what job should I look for” into their Gmail and get many relevant results, the searcher should not forget that their very idea of relevance has probably been shaped by repeated interactions with the interface and the results themselves. Lastly, as Carr points out (echoing Turkle and Birkerts), we should always remember that tools aren’t just adapting to better serve us – we are also adapting in order to better compete in the environment created by tools:

The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.

So what is the end game? Here’s a cri de coeur from Jaron Lanier:

There is a real chance that evolutionary psychology, artificial intelligence, Moore’s Law fetishizing, and the rest of the package, will catch on in a big way, as big as Freud or Marx did in their times. Or bigger, since these ideas might end up essentially built into the software that runs our society and our lives. If that happens, the ideology of cybernetic totalist intellectuals will be amplified from novelty into a force that could cause suffering for millions of people.

Lanier gives many examples to back up his claim about suffering, but perhaps a sense of numbness and powerlessness in the world he anticipates would be its biggest problem. As data-crunching supplants narrative in crucial areas of our lives, the world takes on increasing opacity. Inscrutable computations become our window on the world. It’s no wonder that at this cultural juncture, the wonderful film WALL-E brings back the HAL trope that inspired Carr’s article.

Cross-posted from Concurring Opinions.
Photo Credit: WALL-E.

1 thought on “So What’s the Google End-Game?”

  1. I don’t know, Frank. This seems a little doom and gloom to me.

    1) As to Carr, I kind of think that he is wrong about the ultimate effects of browsing culture, if there is such a thing. To the extent that there is a change in the way we read, we might end up with more foxes than hedgehogs, and the movement toward increased knowledge and social specialization over the last couple of centuries has had its downsides. Maybe we could use a few more browsing foxes. At least they’re better than TV zombies, right?

    2) As to Anderson, I think the science of extracting patterns from large data sets will probably change plenty of things in interesting ways — mostly in the business sector. Mining very large data sets does produce some interesting new forms of insight, even if journalists (especially those from Wired) do use that as a springboard to make some silly claims. Machines cannot, by definition, be smarter than people, but it does make a catchy headline to claim that they will be.
