The Netflix Prize, Occam’s Razor and PMML

I just finished reading an excellent posting on the Netflix Tech Blog by Xavier Amatriain and Justin Basilico. Entitled “Netflix Recommendations: Beyond the 5 stars (Part 1)”, it gives a very nice account of the actual application of the 107 algorithms submitted as the winning solution to the Netflix Prize.

I recall reading about the prize when the winning team was finally announced. I basically asked myself: “How on earth will Netflix implement all these algorithms and put them to work?” The obvious answer was “they won’t”. Too complicated, too time-consuming.

As pointed out by Xavier and Justin, in the end two algorithms (SVD and Restricted Boltzmann Machines) were selected, based on performance, and eventually made it into production, where they are busy making recommendations.
In their posting, Xavier and Justin put it clearly:

“We evaluated some of the new methods offline but the additional accuracy gains that we measured did not seem to justify the engineering effort needed to bring them into a production environment.”

On top of that, they go on to mention that the focus of the recommendation algorithms has changed as the business has shifted to online streaming. With that, the input data has changed considerably, and so has customer behavior. Basically, the proposed solution (an intricate combination of 107 algorithms) no longer applies, since the rules of the game have changed. That’s the very nature of business.

You may be wondering about Occam’s razor at this point. How does it relate to the Netflix Prize? Simple: the razor “asserts that one should proceed to simpler theories until simplicity can be traded for greater explanatory power. The simplest available theory need not be the most accurate” (Wikipedia).

Obviously, the razor does not apply to a contest in which participants are battling each other for the prize. The more accurate, the better, right? But how about real life? We know that every predictive solution has a cost: it takes time to implement and deploy.

But can we somehow create a data mining contest in which Occam’s razor is taken into account? Is that possible? I believe the answer is “yes”, and it involves the use of open standards. For example, if the proposed solutions were to be delivered in PMML (the Predictive Model Markup Language) format, they could be put to work immediately. Also, since PMML is supported by all the top data mining tools, re-creating the same solution using existing software would not cost an arm and a leg. And a solution represented in PMML can easily be understood: given that PMML is XML-based, it contains a verbose but accurate representation of all model details. There is no need for an extra document to explain the solution; the same file can be used both for explaining it and for deploying it.
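
To make that concrete, here is a minimal sketch (not from the original post) of such a round trip in Python, assuming the open-source sklearn2pmml and pypmml packages are available; the model, file name, and field names are purely illustrative:

```python
# A minimal sketch: train a simple model, export it to PMML, and score the
# exported file with a PMML consumer. Assumes the open-source packages
# scikit-learn, sklearn2pmml and pypmml are installed (the two PMML tools
# also require a Java runtime). Model, file name and field names are
# illustrative only.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

iris = load_iris()
X = pd.DataFrame(iris.data, columns=["sepal_length", "sepal_width",
                                     "petal_length", "petal_width"])
y = pd.Series(iris.target, name="species")

# Train the model inside a PMML-aware pipeline and export it as XML.
pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(X, y)
sklearn2pmml(pipeline, "iris_tree.pmml")

# Any PMML-compliant scoring engine can now deploy the very same file.
from pypmml import Model

model = Model.load("iris_tree.pmml")
print(model.predict({"sepal_length": 5.1, "sepal_width": 3.5,
                     "petal_length": 1.4, "petal_width": 0.2}))
```

The exported file is itself a plain XML document: its DataDictionary lists the input fields and its model element spells out the scoring logic, so the same artifact that drives the scoring engine also documents the solution.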

Finally, given the agility introduced by the standard itself, predictive solutions can easily be adjusted to new business requirements. In fact, they can even be used to drive new business opportunities. And that’s a prize worth winning.

