When sharing isn’t a good idea

TimManns
Last updated: 2009/11/24 at 9:01 AM

Ensemble models seem to be all the buzz at the moment. The Netflix Prize was won by a conglomerate of various models and approaches that each excelled on subsets of the data.

A number of data miners have presented findings based upon simple ensembles that use the mean prediction of a number of models. I was surprised that some form of weighting isn't commonly used, and that a simple mean average of multiple models could yield such an improvement in global predictive power. It kinda reminds me of the Gestalt theory phrase "the whole is greater than the sum of the parts." It's got me thinking: when is it best not to share predictive power? What if one model is the best? There are also a ton of considerations regarding scalability and the trade-off between additional processing, added business value, and practicality (don't mention random forests to me…), but we'll pretend those don't exist for the purpose of this discussion 🙂
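
To show roughly what I mean by a simple mean ensemble versus a weighted one, here is a minimal sketch using a couple of generic scikit-learn classifiers on synthetic data; the models, features, and weights are purely illustrative, not anything from the Netflix entries.

```python
# Minimal sketch: unweighted mean vs. weighted average of model scores.
# Models, data, and weights here are hypothetical/illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [
    LogisticRegression(max_iter=1000).fit(X_train, y_train),
    GradientBoostingClassifier().fit(X_train, y_train),
]

# Simple ensemble: unweighted mean of each model's predicted probability.
scores = np.column_stack([m.predict_proba(X_test)[:, 1] for m in models])
mean_ensemble = scores.mean(axis=1)

# Weighted alternative: in practice the weights would come from each
# model's performance on a validation set; these values are made up.
weights = np.array([0.4, 0.6])
weighted_ensemble = scores @ weights / weights.sum()
```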

So this has got me thinking: do ensembles work best in situations where there are clearly different sub-populations of customers? For example, Netflix is in the retail space, with many customers that rent the same popular blockbuster movies and a moderate number of customers that rent rarer (or far more diverse, i.e. long-tail) movies. I haven't looked at the Netflix data, so I'm guessing that most customers don't have hundreds of transactions, which means generalising the correct behaviour of the masses to specific customers is important; Netflix data on any specific customer could be quite scant (in terms of rentals/transactions). In other industries such as telecom there are parallels: customers can also be differentiated by the nature of their communication (voice calls, SMS, data consumption, etc.), just like types of movies. Telecom is mostly about quantity, though (customer X used to make a lot of calls, etc.). More importantly, there is a huge amount of data about each customer, often with many hundreds of transactions per customer, so there is relatively less reliance upon the supporting behaviour of the masses (although it helps a lot) to understand any specific customer.

Following this logic, I'm thinking that ensembles are great at reducing the error of incorrectly applying insights derived from the generalised masses to those weirdos that rent obscure sci-fi movies! Combining models that each explain a sub-population very well makes sense, but what if you don't have many sub-populations (or can identify and model their behaviour with one model)?

But you may shout, "Hey, what about the KDD Cup?" Yes, the recent KDD Cup challenge (anonymised, featureless telecom data from Orange) was also won by an ensemble of over a thousand models created by IBM Research. I'd like to have had some information about what the hundreds of columns represented; that might have helped me better understand the Orange data and build more insightful, better-performing models. Aren't ensemble models used in this way simply a brute-force approach to over-learn the data? I'd also really like to know how the performance of the winning entry tracks over the subsequent months for Orange.

Well, I haven't had a lot of success using ensemble models on the telecom data I work with, and I'm hoping that is more a reflection of the data than any ineptitude on my part. I've tried simply building multiple models on the entire dataset and averaging the scores, but this doesn't generate much additional improvement (granted, on already good models, and I already combine K-means and Neural Nets on the whole base). In my free time I'm just starting to try splitting the entire customer base into dozens of small sub-populations, building a Neural Net model on each, then combining the results to see if that yields an improvement. It'll take a while. A rough sketch of the idea is below.
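
For what it's worth, here is a minimal sketch of that segment-then-model experiment: K-means to carve out sub-populations, one small neural net per segment, and each customer scored by the model for its own segment. The synthetic data, segment count, and network settings are placeholder assumptions, not my actual telecom setup.

```python
# Hypothetical sketch: segment customers with K-means, fit one small
# neural net per segment, then route each customer's score through the
# model trained on its own segment. Data and settings are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10000, n_features=30, random_state=1)

n_segments = 12  # "dozens of small sub-populations"
segments = KMeans(n_clusters=n_segments, n_init=10,
                  random_state=1).fit_predict(X)

segment_models = {}
for seg in range(n_segments):
    mask = segments == seg
    # Skip segments too small (or too one-sided) to train on; in practice
    # these customers would fall back to a global model.
    if mask.sum() < 50 or len(np.unique(y[mask])) < 2:
        continue
    segment_models[seg] = MLPClassifier(hidden_layer_sizes=(16,),
                                        max_iter=500).fit(X[mask], y[mask])

# Combine: each customer gets the score from their own segment's model.
scores = np.full(len(X), np.nan)
for seg, model in segment_models.items():
    mask = segments == seg
    scores[mask] = model.predict_proba(X[mask])[:, 1]
```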

Thoughts?

Link to original post
