The Ascent of Ranking Algorithms

Algorithmic ranking is on the rise. Everywhere I turn, something or other is being ranked analytically.

Ranking web pages based on relevance, pioneered by Google’s PageRank, may be the best-known example of algorithmic ranking.
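
For the curious, here is a minimal sketch of the idea behind PageRank: power iteration over a link graph. The three-page web and damping factor below are purely illustrative; this is nowhere near Google’s production system.

```python
# A minimal PageRank sketch: power iteration on a tiny, made-up link graph.
damping = 0.85

# adjacency: page -> list of pages it links to (invented for illustration)
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform rank

for _ in range(50):  # iterate until the ranks stabilize
    new_rank = {}
    for p in pages:
        # rank flowing into p from every page q that links to it
        inflow = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * inflow
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # pages, best first
```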

Also ubiquitous are ranking algorithms inside recommender systems. Given an individual’s behavior (browsing history, rating history, purchase history, and so on), the idea is to rank the huge universe of things out there (e.g., books, movies, music) based on likely appeal to the individual and show the top-rankers. If you are an Amazon or Netflix customer, you have doubtless been on the receiving end of these ranked recommendations for books and movies you may find of interest. Plenty of complex and occasionally elegant math goes into quantifying and predicting “likely appeal” (the Netflix Prize-winning approach, for example).
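
To make “likely appeal” concrete, here is a toy sketch of one common flavor, item-item collaborative filtering. The rating matrix is invented, and the Netflix Prize winner actually blended many far more sophisticated models; this only shows the basic shape of the computation.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items); 0 = unrated.
# All numbers are made up for illustration.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
], dtype=float)

def item_similarity(ratings):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(ratings, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    X = ratings / norms
    return X.T @ X  # item-item similarity matrix

S = item_similarity(R)

def recommend(user, k=2):
    scores = R[user] @ S                  # predicted appeal per item
    scores[R[user] > 0] = -np.inf         # drop items already rated
    return np.argsort(scores)[::-1][:k]   # top-ranked unseen items

print(recommend(0))  # items ranked by likely appeal to user 0
```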

Despite its age, recommendation ranking is far from mature, and different flavors of recommender systems are popping up every day. Just last week, BusinessWeek had a story on The Filter, a new recommendation ranking system that is allegedly leaving the other approaches in the dust (aside: one of the founders of The Filter is Peter Gabriel, legendary musician and member of Genesis, one of my favorite rock bands).

So far, I have listed “old” examples of ranking: web pages, books, movies, and music. But recently, I came across something new: SpotRank.

Skyhook Wireless, the company that provides location information to Apple devices (when you fire up Google Maps on your iPhone, your exact location is pinpointed using a combination of GPS data and Skyhook’s Wi-Fi database – details), announced SpotRank a few months ago.

By tracking the number of “location hits” their servers receive from Apple devices, Skyhook can determine which spots are popular and when they are popular. They capture this in the form of a popularity score and, as the name suggests, SpotRank ranks locations by their popularity score.
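
Skyhook has not published its scoring details, but a stripped-down version of the idea might look like the sketch below. The location names, hours, and the use of raw hit counts as the popularity score are all assumptions for illustration.

```python
from collections import Counter

# Hypothetical stream of (location, hour-of-day) hits, as a
# Skyhook-like server might log them; data are invented.
hits = [
    ("coffee_shop", 9), ("coffee_shop", 9), ("park", 14),
    ("coffee_shop", 10), ("bar", 21), ("bar", 22), ("bar", 21),
]

# Popularity per (spot, hour): raw hit counts stand in for the score.
popularity = Counter(hits)

def top_spots(hour, n=3):
    """Rank spots by popularity score for a given hour of day."""
    ranked = [(spot, c) for (spot, h), c in popularity.items() if h == hour]
    return sorted(ranked, key=lambda x: -x[1])[:n]

print(top_spots(21))  # most popular spots at 9 pm
```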

Next time you are in a strange part of town, have time to kill and are looking for popular spots, maybe SpotRank can help you (at least if you like hanging out with Apple fans).

Now that places are being ranked, what’s next? Ranking people?

It is already being done. Heard of UserRank?

UserRank was created by Next Jump, a NYC-based company that runs employee discount and reward programs for 90,000 corporations, organizations, and affinity groups. Next Jump connects 28,000 retailers and manufacturers to the more than 100 million consumers who work at the companies in its network, typically getting the merchants to offer deep discounts.

Next Jump calculates a UserRank for every one of the 100 million consumers in its database. In the company’s own words:

The more a user shops on our network, the higher their UserRank™ will be. Users with high UserRank™ are more likely to spend and are typically your best customers.

Next Jump creates value by letting retailers and merchants use UserRank in offer targeting. For instance, an offer can be targeted only at consumers with a minimum UserRank.
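
Next Jump’s actual formula is proprietary, but the targeting mechanics reduce to a simple filter. In this hypothetical sketch, purchase counts stand in for the real UserRank score; the field names and scoring rule are assumptions, not Next Jump’s system.

```python
# Hypothetical offer targeting on a UserRank-style score.
users = [
    {"id": 1, "purchases_90d": 12},
    {"id": 2, "purchases_90d": 1},
    {"id": 3, "purchases_90d": 30},
]

# Stand-in score: the more a user shops on the network, the higher the rank.
for u in users:
    u["user_rank"] = u["purchases_90d"]

def eligible(users, min_rank):
    """Target an offer only at consumers above a minimum UserRank."""
    return [u["id"] for u in users if u["user_rank"] >= min_rank]

print(eligible(users, min_rank=10))  # -> [1, 3]
```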

I wonder what my UserRank is.

My final example is from the field of drug discovery. In a recent article, MIT News describes fascinating work done by researchers at MIT and Harvard on applying ranking algorithms to this area.

The drug development process typically starts with identifying a molecule that’s associated with a disease. Depending on the disease, this “target” molecule either needs to be suppressed or promoted. A drug that’s successful in treating the disease is a chemical (which, of course, is just another molecule) that suppresses or promotes the target molecule without causing bad side-effects.

How is such a drug found? Over the years, researchers have amassed a large catalog of chemicals that can help suppress or promote target molecules. From this library, drug developers find the most promising ones to use as drug candidates for further testing and clinical trials. Unfortunately,

the majority of drug candidates fail — they prove to be either toxic or ineffective — in clinical trials, sometimes after hundreds of millions of dollars have been spent on them. (For every new drug that gets approved by the U.S. Food and Drug Administration, pharmaceutical companies have spent about $1 billion on research and development.) So selecting a good group of candidates at the outset is critical.

This sounds like a ranking problem: given a target molecule, rank the chemicals in the database according to their likelihood of being a viable drug for that target.

The drug companies weren’t slow to recognize this, of course. They have been using machine-learning algorithms since the 1990s with some success. However, the MIT-Harvard researchers showed that a

rudimentary ranking algorithm can predict drugs’ success more reliably than the algorithms currently in use.

What was the key idea?

At a general level, the new algorithm and its predecessors work in the same way. First, they’re fed data about successful and unsuccessful drug candidates. Then they try out a large variety of mathematical functions, each of which produces a numerical score for each drug candidate. Finally, they select the function whose scores most accurately predict the candidates’ actual success and failure.

The difference lies in how the algorithms measure accuracy of prediction. When older algorithms evaluate functions, they look at each score separately and ask whether it reflects the drug candidate’s success or failure. The MIT researchers’ algorithm, however, looks at scores in pairs, and asks whether the function got their order right.

(italics mine)

Rather than scoring each drug candidate in isolation and then ranking them all, the key idea was to build pairwise ordering into the objective used to select the scoring function itself.
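
Here is a toy sketch of that contrast, assuming a one-feature scoring function and a logistic loss in both cases (the researchers’ actual features and loss functions may differ): the pointwise objective checks each score against its label in isolation, while the pairwise objective checks every success/failure pair for correct ordering.

```python
import numpy as np

# x: one feature per drug candidate; y: 1 = succeeded, 0 = failed.
# Data are invented; real features would be chemical descriptors.
x = np.array([0.2, 0.5, 0.9, 0.4, 0.8])
y = np.array([0,   0,   1,   0,   1  ])

def score(w, x):
    return w * x  # simplest possible scoring function

# Pointwise: does each score, on its own, match success or failure?
def pointwise_loss(w):
    p = 1 / (1 + np.exp(-score(w, x)))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Pairwise (the MIT-Harvard idea): for every (success, failure) pair,
# penalize the function when it ranks the failure above the success.
def pairwise_loss(w):
    s = score(w, x)
    losses = [np.log(1 + np.exp(-(s[i] - s[j])))
              for i in np.where(y == 1)[0]
              for j in np.where(y == 0)[0]]
    return np.mean(losses)

# "Try out a large variety of functions": crude grid search over w,
# selecting the function that best fits each objective.
ws = np.linspace(-5, 5, 201)
print("pointwise picks w =", ws[np.argmin([pointwise_loss(w) for w in ws])])
print("pairwise  picks w =", ws[np.argmin([pairwise_loss(w) for w in ws])])
```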

As the data deluge gets larger and larger, finding the information most relevant to one’s needs (be they mundane needs like shopping or profound needs like drug discovery) gets harder and harder. Perhaps this is why we are seeing ranking algorithms everywhere.

Have you seen any interesting examples of algorithmic ranking at work? Please share in the comments.
