Netflix Prize Won?

It appears that the Netflix Prize has been won. 

The Netflix Prize is a $1 million prize offered by Netflix for the development of a “movie recommendation system” that is 10% better than Netflix’s own Cinematch system at predicting the movie preferences of Netflix subscribers.  [Older story from the NYTimes here.]  From the prize website, here are the rules:

Netflix is all about connecting people to the movies they love. To help customers find those movies, we’ve developed our world-class movie recommendation system: Cinematch℠. Its job is to predict whether someone will enjoy a movie based on how much they liked or disliked other movies. We use those predictions to make personal movie recommendations based on each customer’s unique tastes. And while Cinematch is doing pretty well, it can always be made better.

Now there are a lot of interesting alternative approaches to how Cinematch works that we haven’t tried. Some are described in the literature, some aren’t. We’re curious whether any of these can beat Cinematch by making better predictions. Because, frankly, if there is a much better approach it could make a big difference to our customers and our business.

So, we thought we’d make a contest out of finding the answer. It’s “easy” really. We provide you with a lot of anonymous rating data, and a prediction accuracy bar that is 10% better than what Cinematch can do on the same training data set. (Accuracy is a measurement of how closely predicted ratings of movies match subsequent actual ratings.) If you develop a system that we judge most beats that bar on the qualifying test set we provide, you get serious money and the bragging rights. But (and you knew there would be a catch, right?) only if you share your method with us and describe to the world how you did it and why it works.

Serious money demands a serious bar. We suspect the 10% improvement is pretty tough, but we also think there is a good chance it can be achieved. It may take months; it might take years. So to keep things interesting, in addition to the Grand Prize, we’re also offering a $50,000 Progress Prize each year the contest runs. It goes to the team whose system we judge shows the most improvement over the previous year’s best accuracy bar on the same qualifying test set. No improvement, no prize. And like the Grand Prize, to win you’ll need to share your method with us and describe it for the world.
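
(For concreteness: the “accuracy” Netflix refers to was measured as root mean squared error, RMSE, between predicted and actual star ratings, and the Grand Prize bar was an RMSE 10% lower than Cinematch’s on the qualifying set. Below is a minimal Python sketch of that comparison; the sample ratings and the Cinematch figure of 0.9514 are illustrative assumptions rather than numbers taken from the rules quoted above.)

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual star ratings."""
    assert len(predicted) == len(actual) and predicted
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted))

# Illustrative ratings on Netflix's 1-5 star scale (made up for this sketch).
actual    = [4, 3, 5, 2, 4, 1, 5, 3]
predicted = [3.8, 3.1, 4.5, 2.4, 3.9, 1.6, 4.7, 2.8]

cinematch_rmse = 0.9514                   # assumed baseline RMSE for Cinematch (illustrative)
grand_prize_bar = 0.90 * cinematch_rmse   # "10% better" = an RMSE at least 10% lower

score = rmse(predicted, actual)
print(f"submission RMSE: {score:.4f}  (bar to beat: {grand_prize_bar:.4f})")
print("beats the 10% bar" if score <= grand_prize_bar else "does not meet the bar")
```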

A prediction set meeting the 10% threshold was submitted last Friday, which triggers a final 30-day period during which competing prediction sets may be submitted for judging.

There are two things worth noting, though lack of time prevents much elaboration.  The first is the purposive crowd-sourcing of Netflix’s question:  how to improve its business model?  (As I read the rules, competitors agree to a non-exclusive license of their work to Netflix.  I haven’t found a more thorough explanation of what “non-exclusive” means in this context.)  This appears to be a context in which “the wisdom of crowds” may actually work.  The second is the specific set of conditions under which “the wisdom of crowds” has operated so well.  Not only is “the crowd” oriented toward solving a specific problem, but “the crowd” isn’t really an undifferentiated mass.  It’s a community of individual developers (and teams) that has been in regular and detailed conversation with itself, even during the competition.  Competitors have been sharing details of their work with each other via a “community” forum maintained by Netflix, and perhaps elsewhere.

Via Kottke