The dangers of scores in decision making

Copyright © 2009 James Taylor. Visit the original article at The dangers of scores in decision making.

Last week I responded to some concerns raised about the dark side of analytics, and this prompted a very thoughtful comment from Will Dwinnell, who said:

My fear is that much of the nuance about what a predictive model is really saying about airline passenger THX1138 is lost, and the security guard at the gate just sees that the poor passenger has been rated as “83” (out of 100) by “the system.” Non-technical people tend to simplify things like this.

And this made me think of an example I came across just recently: the BMI, or Body Mass Index. For those of you who are overweight, you will have been told, I am sure, that your BMI is “too high.” Yet this article on NPR (found via Evidence Soup and @merigruber) points out that the BMI is a completely bogus measure for an individual. Designed (though “designed” is a generous way to describe the hacking it took) as a measure for a population, it has limited meaning for an individual: someone who is obese is very likely to have a high BMI, but someone with a high BMI may or may not be obese. Despite these obvious and clearly described flaws, the BMI has become institutionalized by insurance companies, government agencies and even doctors.
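
To make this concrete, here is a minimal sketch of the score itself. The formula (weight in kilograms divided by height in meters squared) and the standard WHO cutoffs are real; the athlete is a hypothetical illustration of how a measure that describes a population can mislabel an individual:

```python
# A minimal sketch of the BMI calculation and the standard WHO categories.
# The athlete below is hypothetical, invented purely for illustration.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def who_category(score: float) -> str:
    """Apply the standard WHO cutoffs to a BMI score."""
    if score < 18.5:
        return "underweight"
    if score < 25:
        return "normal"
    if score < 30:
        return "overweight"
    return "obese"

# A hypothetical muscular athlete: 100 kg at 1.80 m is mostly lean mass,
# yet the score alone labels them "obese" (BMI is about 30.9).
athlete_bmi = bmi(100, 1.80)
print(f"BMI {athlete_bmi:.1f} -> {who_category(athlete_bmi)}")
```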

So Will’s concern is a very real one: a “score,” no matter how well designed or well intentioned, can and will be misused by those who don’t understand it. Equally, of course, decisions that don’t use analytics have problems too. People’s snap judgments based on how someone looks can be inaccurate, with things like how people dress, the color of their skin and so on all overriding more valuable information. A score does not suffer from these problems; indeed, way back when, FICO ran an ad campaign for credit scoring under the title “Good credit does not always wear a suit and tie.”

So, as with all things, the art is in striking a balance. I also feel strongly that this is a reason for automating the decision, not just the score. Then, instead of the security guard in Will’s scenario making a potentially invalid use of the score, she gets a recommended decision (to search or not to search someone) based on the score and on rules carefully designed to use the score correctly. And while bias and error can still make it into the rules, those rules are documented, auditable and probably the result of several people collaborating, so problems are less likely.
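
To show the difference, here is a minimal sketch of a rules layer sitting on top of the score. Everything in it is invented for illustration (the thresholds, the rule names, the watch-list flag); the point is that the guard receives a recommended action plus the documented rule behind it, rather than a bare number to misinterpret:

```python
# A minimal, hypothetical sketch of automating the decision rather than
# exposing the raw score. The thresholds and rules are invented for
# illustration; the point is that each rule is explicit and auditable.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # the recommended action handed to the guard
    rule: str     # which documented rule fired, for auditing

def screening_decision(score: int, on_watch_list: bool) -> Decision:
    """Turn a 0-100 risk score into a recommended action via explicit rules."""
    if on_watch_list:
        return Decision("search", "R1: watch-list match is always searched")
    if score >= 90:
        return Decision("search", "R2: score 90+ is searched")
    if score >= 80:
        # A score like 83 triggers secondary questions, not an automatic search.
        return Decision("secondary questions", "R3: score 80-89 gets questions only")
    return Decision("no search", "R4: score below 80 passes")

# Will's passenger, rated 83 "by the system": the guard sees a decision
# and the rule behind it, not a bare number.
print(screening_decision(83, on_watch_list=False))
```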


Link to original post
