SmartData Collective
© 2008-25 SmartData Collective. All Rights Reserved.
Predictive Analytics

PAW: Predictive modeling and today’s growing data challenges

James Taylor
6 Min Read


Live from Predictive Analytics World

Matt Kramer of Acxiom and Jun Zhong of Wells Fargo discussed some of the challenges that data presents in the context of predictive models. Matt began by discussing some of the reasons for modeling: reducing costs, avoiding simplistic decisioning, predicting attrition, optimizing marketing spend, and so on. Predictive models help by ranking customers based on probabilities.

Creating a suitable modeling sample requires enough records (10,000-15,000) that are recent enough to be relevant (especially when times are changing fast). Acxiom's data shows that 1,200 or more instances of the outcome you are trying to predict yields a robust model. Focusing on mature, complete data is important, but this has to be balanced against timeliness. Appending internal or external data can make a big difference to model quality. These samples must be checked carefully for problems: sample bias can be damaging, so applying a model only to data that looks like the sample is key.
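The sample-size guidance above can be expressed as a simple pre-modeling check. The thresholds (10,000 records, 1,200 positive instances) are the figures quoted in the talk; the function and field names are illustrative, not from any real library:

```python
# Sketch of the sample-adequacy checks described above. The thresholds are
# the figures quoted in the talk; everything else is a made-up illustration.

def check_modeling_sample(records, target_field, min_records=10_000, min_positives=1_200):
    """Return a list of warnings about a proposed modeling sample."""
    warnings = []
    if len(records) < min_records:
        warnings.append(f"only {len(records)} records; want >= {min_records}")
    positives = sum(1 for r in records if r[target_field])
    if positives < min_positives:
        warnings.append(f"only {positives} positive instances; want >= {min_positives}")
    return warnings

# Tiny illustration with synthetic records (1 in 10 churned):
sample = [{"churned": i % 10 == 0} for i in range(5_000)]
print(check_modeling_sample(sample, "churned"))
```

A real version would also check data recency and completeness, which the talk flags as equally important but which depend on the fields available.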

Jun discussed some data-related marketing issues. Wells Fargo's two main challenges are to recognize likely purchasers, and then to recognize which likely purchasers will be influenced by an action or treatment – who is proactive (will buy anyway) and who is reactive (will buy only in response to the treatment).

Propensity to Purchase models predict who is likely to buy, allowing offers to be made only to those likely buyers. A second model, Propensity to Influence, predicts how likely someone is to be influenced by a specific promotion.

To develop this second model you need both a treatment group and a control group so you can see what response you get. This lets you find the buyers in each group and then see what kinds of people did not buy in the control group but did buy in the treatment group – these are the people who purchased only because of the treatment. From this you can build the propensity-to-influence model. Building these models requires all the usual data cleansing, transformation, and initial and ongoing validation.
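The treatment-versus-control comparison described above amounts to an uplift calculation: within each customer segment, subtract the control group's response rate from the treatment group's. A minimal sketch with synthetic data (the segment labels and rates are made up for illustration):

```python
# Minimal sketch of the treatment-vs-control logic behind a
# propensity-to-influence model: per segment, uplift is the treatment
# group's response rate minus the control group's. High-uplift segments
# contain the "reactive" buyers the talk describes.

def response_rate(group):
    return sum(1 for c in group if c["bought"]) / len(group)

def uplift_by_segment(customers):
    """customers: list of dicts with 'segment', 'treated' (bool), 'bought' (bool)."""
    uplift = {}
    for seg in {c["segment"] for c in customers}:
        treated = [c for c in customers if c["segment"] == seg and c["treated"]]
        control = [c for c in customers if c["segment"] == seg and not c["treated"]]
        uplift[seg] = response_rate(treated) - response_rate(control)
    return uplift

# Synthetic segment "A": 30% buy with treatment, 10% without,
# so the uplift is 0.30 - 0.10 = 0.20 (up to float rounding).
customers = (
    [{"segment": "A", "treated": True,  "bought": b} for b in [True] * 30 + [False] * 70]
    + [{"segment": "A", "treated": False, "bought": b} for b in [True] * 10 + [False] * 90]
)
print(uplift_by_segment(customers))
```

An actual propensity-to-influence model would then fit a classifier to predict this incremental response at the individual level, with all the cleansing and validation steps the talk mentions.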

Matt came back to talk about some challenges he sees: the ability to demonstrate incremental value, and the need to persuade business users that modeling is necessary – that simply specifying rules explicitly would not be as useful. There are also growing restrictions on the use of certain data as a result of legal concerns.

  • As other speakers have noted, it is really important to have clean control groups so that comparisons are both real and believable. If you can’t show real business benefit in terms of total value versus total cost, then the “lift” of the model doesn’t matter. It can be hard for companies to hold people out to keep a really clean control group – marketers want to market to everyone.
  • Criteria – rules – tend to be easier to understand and implement, so showing the value of the model is critical. Models tend to produce better results and generally do not exclude whole groups; instead they rank individuals based on weighted attributes. Matt made these sound like an either/or choice, but of course they are not – they can and should be used in conjunction. Models can help make decision criteria better.
  • Restrictions on the use of certain attributes are designed to prevent discrimination. There is not much you can do about this except keep working on the modeling to find other ways to build the model. In one example he gave, removing 55% of the attributes dropped the lift from 3x to 1.2x; new attributes and more careful segmentation rebuilt the lift somewhat, but not completely.
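The lift figures quoted above (3x dropping to 1.2x) are typically computed as the response rate among the model's top-scored customers divided by the overall response rate. A quick sketch, using synthetic scores and outcomes:

```python
# Sketch of the "lift" metric quoted above: the response rate in the
# top-scored fraction of customers divided by the overall response rate.
# A lift of 3x means the model's top decile responds at three times the
# base rate. Scores and outcomes here are synthetic and illustrative.

def lift_at(scored, fraction=0.10):
    """scored: list of (model_score, responded) pairs; responded is 0 or 1."""
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    top = ranked[: max(1, int(len(ranked) * fraction))]
    top_resp = sum(r for _, r in top)
    total_resp = sum(r for _, r in scored)
    # rate_top / rate_overall, arranged to stay exact for integer counts
    return (top_resp * len(scored)) / (len(top) * total_resp)

# 100 customers: the 10 highest-scored include 6 responders, and there
# are 20 responders overall, so lift = 0.6 / 0.2 = 3.0
scored = [(100 - i, 1 if (i < 6 or 10 <= i < 24) else 0) for i in range(100)]
print(lift_at(scored))  # -> 3.0
```

This also shows why lift alone is not enough, per the first bullet: a 3x lift on a tiny top decile may still not cover campaign costs in total-value terms.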

More posts and a white paper on predictive analytics and decision management are available at decisionmanagementsolutions.com/paw.

Tagged: banking, data, data mining, PAW, predictive analytics, Predictive Analytics World, propensity models