Fred von Lohmann posted that Google has changed its algorithm. Now “it’ll start generally downranking sites that receive a high volume of copyright infringement notices from copyright holders.” The Verge reports that:
because its existing copyright infringement reporting system generates a massive amount of data about which sites are most frequently reported; the company received and processed over 4.3 million URL removal requests in the past 30 days alone, more than all of 2009 combined. Importantly, Google says the search tweaks will not remove sites from search results entirely, just rank them lower in listings. Removal of a listing will still require a formal request under the existing copyright infringement reporting system, and Google is quick to point out that those unfairly targeted can still file counter-notices to get their content reinstated into search listings.
The data-driven basis makes sense to me. So what other areas could be monitored and adjusted? I disagree with the idea, which Danielle Citron and others have urged, that search engines should take on policing roles for certain speech. But this shift may open the door to more arguments for Google to be a gatekeeper and policer of content. Assuming enough data is available, Google, or any data-driven service, could make decisions to include or exclude entries (or shift rankings). Those moves already happen. But the difficult question will now be why a service acts on some issues but not others. James Grimmelmann has a work in progress on search and speech that gets into this question. I believe the algorithm issues still control. Nonetheless, by nodding to the copyright industry, Google may be opening the door to further calls for it to be the Internet's gatekeeper. Of course, if it takes on that role, others will attack Google for doing just that, from competition and other angles.