
Seven Reasons to Doubt Competition in the General Search Engine Market

New books on Google by Randall Stross, Alexander Halavais, and Jeff Jarvis have been getting a good deal of media attention lately. I highly recommend the Stross and Halavais volumes because they recognize that the unique power of Google is likely to be lasting. Stross notes that the company may be using a million computers to index and map the web at this point. If he’s even within an order of magnitude of the real number (a strictly protected trade secret), that ought to give pause to any technolibertarian who thinks a “Google-killer” can be cooked up in some Silicon Valley garage.

A critical mass of factors makes it extremely difficult for any serious competitor to emerge in the search space:

1) Trade Secret Protections for the Search Engine Algorithm: Unlike patents, which the patent holder must disclose and which eventually expire, the trade secrets protecting a ranking algorithm may never enter the public domain.

2) Network Effects in Improving Search Responsiveness: The more searches an engine handles, the better able it is to sharpen and perfect its algorithm. Incumbents with large numbers of users enjoy substantial advantages over smaller entrants. For example, if a search engine finds that everyone in a given area picks the third result instead of the first on a given day, it can tailor results for that area to increase the salience of what was once merely the third result. (A rough sketch of this feedback loop appears after this list.)

3) Content Licensing Costs: A key to competition in the search market is having a comprehensive database of searchable materials. The ability to obtain exclusive legal rights over searchable materials, however, may substantially increase the cost of obtaining and displaying this data and the metadata needed to organize it. Uncertainty of costs can also be a factor; if the Google Book Search lawsuit settles, Google will have overcome that uncertainty, gaining an advantage over all other potential entrants in that search space.

4) Consumer Habit: Many searchers are accustomed to one or a few providers, use them habitually, and are reluctant to switch despite the existence of alternatives.

5) Personalized Search: As a user queries a search engine over time, he or she can train it to custom-tailor the results to his or her interests. For example, if a user habitually searches for recipes, the search engine may weight food sites more heavily than other sites when confronted with an ambiguous term (such as "cake," which could refer, inter alia, to a confection or to the rock band Cake). (A second sketch after the list illustrates this kind of personalization.)

6) Two-Sided Market Dynamics: Given Google’s market share of roughly 70% in search advertising, advertisers vastly prefer it to other, less trafficked alternatives. To see why, imagine you are looking for an internet dating service in a town with 100 people. Even if the service with 10 people signed up told you it had a wonderful new method of matching you with potential dates, wouldn’t you prefer the one with 70 members instead?

7) Non-portable AdSense Data: Harvard Business School Professor Ben Edelman has investigated another self-reinforcing aspect of Google’s market power: the non-portability of AdSense data, which makes it difficult for Google customers to apply what they have learned about their internet customers to ad campaigns designed for other search engines.
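
To make point 2 concrete, here is a minimal sketch of how click feedback might flow back into rankings. It is purely illustrative: the data structure, the regional grouping, and the blending weight are my own assumptions, not a description of Google's actual system.

```python
# Hypothetical illustration of the click-feedback loop in point 2. The
# data structure, regional grouping, and blending weight are assumptions
# made for this sketch, not a description of any real engine.
from collections import defaultdict

# (region, query, url) -> [clicks, impressions]
click_stats = defaultdict(lambda: [0, 0])

def record_impression(region, query, url, clicked):
    stats = click_stats[(region, query, url)]
    stats[1] += 1              # the result was shown
    if clicked:
        stats[0] += 1          # ...and the user picked it

def rerank(region, query, baseline_results):
    """Reorder the engine's default ranking by regional click-through rate."""
    def score(rank, url):
        clicks, impressions = click_stats[(region, query, url)]
        ctr = clicks / impressions if impressions else 0.0
        # Blend the default rank with the regional click signal; more
        # traffic means a less noisy estimate, which is the network effect.
        return ctr - 0.05 * rank
    ordered = sorted(enumerate(baseline_results),
                     key=lambda item: score(*item), reverse=True)
    return [url for _, url in ordered]
```

An engine with ten times the query volume fills each (region, query, url) cell ten times faster, so its click-through estimates stabilize far sooner; that is the advantage a small entrant cannot easily buy.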
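And a similarly hedged sketch of the personalization described in point 5: boosting results whose category matches a user's past clicks when a query such as "cake" is ambiguous. Again, the categories, profile, weights, and example domains are illustrative assumptions only.

```python
# Hypothetical illustration of point 5: a habitual recipe searcher sees
# food sites boosted for the ambiguous query "cake". Categories, weights,
# and example domains are invented for this sketch.
from collections import Counter

def update_profile(profile, clicked_category):
    profile[clicked_category] += 1     # learn from what the user clicks

def personalize(profile, results):
    """Reorder (url, category) pairs toward the user's dominant interests."""
    total = sum(profile.values()) or 1
    def score(rank, item):
        _, category = item
        affinity = profile[category] / total   # share of past clicks
        return affinity - 0.05 * rank          # blend with the default rank
    ordered = sorted(enumerate(results), key=lambda x: score(*x), reverse=True)
    return [url for _, (url, _) in ordered]

profile = Counter()
for _ in range(20):
    update_profile(profile, "food")            # twenty recipe clicks...
update_profile(profile, "music")               # ...and one band page

results = [("cake-the-band.example.com", "music"),
           ("chocolate-cake-recipes.example.com", "food")]
print(personalize(profile, results))           # the recipe site now comes first
```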

Nevertheless, the press is always looking for the next “Google Killer,” as this article shows:

[Google] has always fallen short of its most ambitious goal: getting a computer to answer a simple question, asked in natural language. Now, British scientist Stephen Wolfram claims that he may have the answer, and his new search engine could theoretically remake computing as dramatically as Google did a decade earlier.

However, the story quickly turns to a recharacterization of Wolfram Alpha as less a "killer" than a "complementer":

[N]ova Spivack, a leader in semantic Web technology, hung out with Wolfram for two hours, playing with the new search engine. And he walked away awfully impressed. “Where Google is a system for FINDING things that we as a civilization collectively publish, Wolfram Alpha is for COMPUTING answers to questions about what we as a civilization collectively know.”

Spivack claims, however, that Google and Wolfram Alpha will serve completely different functions online, and that users will ultimately use both in equal measure. “Wolfram Alpha, at its heart, is quite different from a brute force statistical search engine like Google,” Spivack argues. “And it is not going to replace Google — it is not a general search engine: You would probably not use Wolfram Alpha to shop for a new car, find blog posts about a topic, or to choose a resort for your honeymoon. It is not a system that will understand the nuances of what you consider to be the perfect romantic getaway, for example — there is still no substitute for manual human-guided search for that. Where it appears to excel is when you want facts about something, or when you need to compute a factual answer to some set of questions about factual data.”

Expect many more web startups along those lines, particularly in “vertical search” (i.e., search confined to particular topical spaces, such as gaming or B2B transactions). There’s good reason to think that specialty areas and functions will get their own search and computational organizers. But I find it hard to credit any prediction that a real Google rival will overcome all seven of the competition-impeding dynamics listed above.

PS: The seven reasons above have been developed in my work on search engine rankings and in work co-authored with Oren Bracha.