Netflix Asked the Wrong Analytics Question

One of the legendary events in the history of analytics was the original Netflix Prize. It provides a terrific example of why, when developing analytic processes, we need to focus not only on theoretical results but also on pragmatically achievable results.

For those who aren’t familiar with the story, not quite 10 years ago, Netflix was having trouble achieving the desired improvement in its recommendation algorithms. There were smart people working on the problem, but progress had slowed as they exhausted the tricks and techniques they knew. As a result, Netflix decided to do something that was, at the time, novel and unexpected.

A SIMPLE ANALYTICS CHALLENGE

Netflix took a large piece of its data, anonymized it, and made it available on the web for anyone to access. The analytics challenge was simple: build an algorithm that beats the performance of the Netflix baseline process by at least 10%. Success would be judged by how the algorithm performed on a holdout sample that only Netflix had access to. Be the first to succeed and a $1 Million prize would come your way!
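
The contest scored submissions by root mean squared error (RMSE) on the hidden holdout ratings, so "beating the baseline by 10%" meant lowering RMSE by at least 10% relative to Netflix's own system. The short Python sketch below illustrates that evaluation rule; the function names are illustrative, and the sample numbers are the widely reported test-set figures, not anything taken from the contest's actual scoring code.

    import math

    def rmse(predicted, actual):
        # Root mean squared error between predicted and actual ratings.
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

    def beats_baseline(submission_rmse, baseline_rmse, required_improvement=0.10):
        # True if the submission lowers the baseline RMSE by at least the required fraction.
        improvement = (baseline_rmse - submission_rmse) / baseline_rmse
        return improvement >= required_improvement

    # Tiny illustrative example of the metric itself.
    print(rmse([3.5, 4.0, 2.0], [4, 4, 3]))  # ~0.65

    # Widely reported figures: baseline (Cinematch) ~0.9525, winning entry ~0.8567 on the test set.
    print(beats_baseline(submission_rmse=0.8567, baseline_rmse=0.9525))  # True (~10.06% improvement)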

Many individuals and teams stepped up to the challenge. Over the following months and years, teams experimented and compared results. Performance kept improving until one fateful day when, in a dramatic finish straight out of a movie, two teams submitted winning entries just minutes apart! Netflix paid out the prize money and the contest became a legend. The concept was so well received that analytics contests are now commonplace. It all sounds great, right? Netflix had the new algorithm driving value in short order, didn’t it? Unfortunately, the answer is “no”!

AN UNEXPECTED OUTCOME

Netflix asked contestants to beat its baseline algorithm’s accuracy by 10%. However, a very important criterion was left out. It turns out that the winning solution was quite complex and didn’t scale well. As a result, Netflix stated that the “additional accuracy gains that we measured did not seem to justify the engineering effort needed to bring them into a production environment.” Wow! While the story leading up to the prize is legendary, very few people are aware of how it really ended. Accounts of the algorithm’s failure in practice can be found at Forbes and at Wired.

The failure stemmed directly from the fact that Netflix omitted a critical constraint from the contest: namely, that the winning solution had to beat the existing algorithm by 10% using a process that would scale, at a reasonable cost, to the level Netflix required. What the $1 Million bought Netflix was a terrific theoretical solution that couldn’t meet the needs of the enterprise in practice. I have built processes with this same problem in the past.

I am not suggesting that the Netflix Prize was a waste of money. After all, Netflix got well over $1 Million worth of labor for free, learned a lot about how different approaches performed, and achieved a major public relations coup that is still paying dividends today. Netflix may well be satisfied with the results even though the solution wasn’t implemented, but that doesn’t negate the fact that the initial goal of the contest was not achieved.

BE SURE YOU ASK THE RIGHT QUESTION

The moral of the story is that in today’s world of huge data sources and complex analytics, we can’t live in the realm of the theoretical. We must also apply pragmatic constraints to our analytic processes from the outset. An elegant solution isn’t any good if it can’t be implemented at scale. As the analytics revolution continues to progress, we must account for the operational, organizational, and cultural realities that will also be present as we deploy sophisticated analytics within our business processes.

The uncomfortable truth is that we must start to optimize for the results we can actually achieve rather than the results we can theoretically achieve. These two standards can be quite divergent in the world of big data. Instead of focusing solely on the power of the analytics themselves, require that the process meet other, more pragmatic goals as well. If your solution won’t scale, or your corporate systems can’t make use of the results, or your people refuse to use the results, then all the theoretical value in the world will yield nothing in practice.

Which would you prefer to bet your career and your company’s future on:

  1. A theoretically awesome analytic process that will only meet a small fraction of its potential due to other constraints?

    OR

  2. A theoretically inferior solution that will meet its full potential and surpass the performance of the theoretically better solution in the real world?

You should choose the latter every time. Whenever you are looking to build a new and innovative analytic process, make sure that the challenge you pose to the analytics professionals assigned to it includes the practical constraints that must be accounted for.