It appears that the Netflix Prize has been won.
The Netflix Prize is a $1 million prize being offered by Netflix for development of a “movie recommendation system” that is 10% better than Netflix’s own Cinematch system at predicting the movie preferences of Netflix subscribers. [Older story from the NYTimes here.] From the prize website, here are the rules:
Netflix is all about connecting people to the movies they love. To help customers find those movies, we’ve developed our world-class movie recommendation system: CinematchSM. Its job is to predict whether someone will enjoy a movie based on how much they liked or disliked other movies. We use those predictions to make personal movie recommendations based on each customer’s unique tastes. And while Cinematch is doing pretty well, it can always be made better.
Now there are a lot of interesting alternative approaches to how Cinematch works that we haven’t tried. Some are described in the literature, some aren’t. We’re curious whether any of these can beat Cinematch by making better predictions. Because, frankly, if there is a much better approach it could make a big difference to our customers and our business.
So, we thought we’d make a contest out of finding the answer. It’s “easy” really. We provide you with a lot of anonymous rating data, and a prediction accuracy bar that is 10% better than what Cinematch can do on the same training data set. (Accuracy is a measurement of how closely predicted ratings of movies match subsequent actual ratings.) If you develop a system that we judge most beats that bar on the qualifying test set we provide, you get serious money and the bragging rights. But (and you knew there would be a catch, right?) only if you share your method with us and describe to the world how you did it and why it works.
Serious money demands a serious bar. We suspect the 10% improvement is pretty tough, but we also think there is a good chance it can be achieved. It may take months; it might take years. So to keep things interesting, in addition to the Grand Prize, we’re also offering a $50,000 Progress Prize each year the contest runs. It goes to the team whose system we judge shows the most improvement over the previous year’s best accuracy bar on the same qualifying test set. No improvement, no prize. And like the Grand Prize, to win you’ll need to share your method with us and describe it for the world.
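For readers curious what “accuracy” means concretely: the contest scored submissions by root mean squared error (RMSE) between predicted and actual ratings, with the grand prize requiring an RMSE at least 10% below Cinematch’s on the qualifying set. A minimal sketch of the metric (the ratings below are made up for illustration):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual ratings."""
    assert len(predicted) == len(actual) and predicted
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

# Hypothetical ratings on Netflix's 1-5 star scale.
actual = [4, 3, 5, 2, 4]
predicted = [3.8, 3.1, 4.2, 2.5, 4.4]
print(rmse(predicted, actual))
```

Lower is better: a perfect predictor scores 0, and shaving 10% off Cinematch’s RMSE was the bar the winning team had to clear.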
A prediction set meeting the 10% threshold was submitted last Friday, which triggers a 30-day period during which other prediction sets are to be submitted for judging.
There are two things worth noting here, though lack of time prevents much elaboration. One is the purposive crowd-sourcing of Netflix’s question: how to improve its business model? (As I read the rules, competitors agree to a non-exclusive license of their work to Netflix. I haven’t found a more thorough explanation of what “non-exclusive” means in this context.) It appears that this is a context in which “the wisdom of crowds” may actually work. Two is the specific conditions under which “the wisdom of crowds” has operated well. Not only is “the crowd” oriented toward solving a specific problem, but “the crowd” isn’t really an undifferentiated mass. It’s a community of individual developers (and teams) that has been in regular and detailed conversation with itself, even during the competition. Competitors have been sharing details of their work with each other via a “community” forum, maintained by Netflix, and perhaps elsewhere.