What is a good classification accuracy in data mining?

April 11, 2010

What a good question! Or rather, what a bad question. Asked this way, it implies an answer valid for any data mining problem, which is of course impossible. Still, a data miner may well ask it, since the classification rate is one way of measuring the quality of a data mining algorithm: you can estimate how good your decision tree or neural network is from its classification rate on a held-out test set. My point in this article is that a "good" classification percentage depends on the application in which data mining is used.
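To make the measurement concrete, here is a minimal sketch of estimating a classifier's test-set classification rate. It uses toy synthetic data and a simple nearest-neighbour classifier in place of a decision tree or neural network, purely for illustration:

```python
import random

def nearest_neighbour(train, x):
    """Predict the label of the closest training point (1-NN)."""
    return min(train, key=lambda point: (point[0] - x) ** 2)[1]

random.seed(0)
# Toy 1-D data: class 0 clusters around 0.0, class 1 around 1.0.
data = [(random.gauss(c, 0.25), c) for c in (0, 1) for _ in range(50)]
random.shuffle(data)

# Hold out part of the data as a test set, train on the rest.
train, test = data[:70], data[70:]

correct = sum(nearest_neighbour(train, x) == y for x, y in test)
rate = correct / len(test)
print(f"classification rate on the test set: {rate:.0%}")
```

The classifier never sees the test points during "training", so the rate it prints is an estimate of how well it generalizes, and that is the number whose "good" threshold varies by domain.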

Let me explain with a few examples from my own experience. A friend of mine works in face recognition. According to him, an algorithm (machine learning, in his case) fits the problem well when it reaches a classification accuracy above, say, 97%. This may be true, but only in his domain, face recognition. There, machine learning is applied to pictures to recognize faces, and there is no outside effect or variable influencing the output (the class you predict) that is not already present in the pixels of the picture. A very high classification accuracy can therefore be reached. Don't get me wrong: I'm not saying face recognition is an easy task, only that with the correct algorithm and the right data preparation, a very high classification rate is attainable.

Let's take another application: predicting whether a user will click on a given ad. That is the application I'm currently working on in the FinWEB project. Here, most of my models reach a classification accuracy of around 70%. Is that bad? Given the application domain, not really. When predicting whether a user will click on an ad, we don't have all possible information at our disposal, only data describing the user's behavior within a given time frame; we don't have the contents of the user's brain in a database. With so many influencing factors outside our data, reaching a classification percentage of 70% is quite satisfying.
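One way to judge whether a figure like 70% is satisfying is to compare it against the majority-class baseline, i.e. the accuracy of always predicting the most common class. A hypothetical sketch with imaginary click labels (these numbers are made up for illustration, not from the FinWEB project):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Imaginary labels: 1 = clicked, 0 = did not click.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_model = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # hypothetical model predictions

# Baseline: always predict the most frequent class in the data.
majority = max(set(y_true), key=y_true.count)
baseline = accuracy(y_true, [majority] * len(y_true))

print(accuracy(y_true, y_model), baseline)  # 0.7 vs 0.6
```

A 70% model beating a 60% baseline has learned something real; the same 70% against a 90% baseline would be worthless. The raw percentage alone tells you neither.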

Finally, consider data mining in finance. Applying data mining to the problem of stock picking, I obtained classification accuracies in the 55-60% range. That may look like a poor result, but it isn't. Consider all the factors that can influence a stock's price: even with hundreds of input parameters, we capture only a tiny fraction of the information that could move the price. This is very far from the face recognition case, where every pixel is given.

My point in this post was to show that there is no definitive answer to this question, which is, in fact, not a good one: classification accuracy mainly depends on the application domain. Feel free to share your own experiences by commenting on this post!

