Budgeting Time on a Modeling Project

Dean Abbott
3 Min Read

Within the time allotted for any empirical modeling project, the analyst must decide how to allocate time for various aspects of the process.  As is the case with any finite resource, more time spent on this means less time spent on that.  I suspect that many modelers enjoy the actual modeling part of the job most.  It is easy to try “one more” algorithm: Already tried logistic regression and a neural network?  Try CART next.
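
To get a sense of how little friction "one more" algorithm adds once the data are prepared, here is a rough sketch using scikit-learn on synthetic data; the estimators, settings, and data below are illustrative assumptions, not anything from the original project.

    # Sketch only: synthetic data, stand-in settings.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier  # CART-style tree
    from sklearn.model_selection import cross_val_score

    # Stand-in for an already-prepared feature matrix and target.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    candidates = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "neural network": MLPClassifier(max_iter=1000),
        "CART": DecisionTreeClassifier(max_depth=5),
    }

    # Swapping in the next algorithm is a one-line change, which is
    # exactly why it is so tempting to keep doing it.
    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name:20s} mean AUC = {scores.mean():.3f}")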

Of course, more time spent on the modeling part means less time spent on other things.  An important consideration for optimizing model performance, then, is: which tasks deserve more time, and which less?

Experimenting with modeling algorithms at the end of a project will no doubt produce some improvements, and it is not argued here that such efforts be dropped.  However, work done earlier in the project establishes an upper limit on model performance.  I suggest emphasizing data clean-up (especially missing value imputation) and creative design of new features (ratios of raw features, etc.) as being much more likely to make the model’s job easier and produce better performance.

Consider how difficult it is for a simple two-input model to discern “healthy” versus “unhealthy” when given only the raw input variables height and weight.  Such a model must establish a dividing line between healthy and unhealthy weights separately for each height.  When the analyst instead uses the ratio of weight to height, the problem becomes much simpler.  Note that the commonly used BMI (body mass index) is only slightly more complicated than this ratio, and would likely perform even better.  Crossing categorical variables is another way to simplify the problem for the model.  Though we call this process “machine learning”, it is a pragmatic matter to make the job as easy as possible for the machine.
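
As a small illustration of both tricks (ratio features and crossed categoricals), here is a sketch on synthetic data; every column name and value below is a made-up example, not something from the post.

    # Sketch only: synthetic data, hypothetical column names.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "height_m": rng.normal(1.7, 0.1, 1000),
        "weight_kg": rng.normal(70.0, 12.0, 1000),
    })

    # Ratio of raw features: one column now carries what the model would
    # otherwise have to learn separately for each height.
    df["weight_per_height"] = df["weight_kg"] / df["height_m"]

    # BMI is only slightly more complicated (weight divided by height squared).
    df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

    # Crossing two categorical variables into one feature, so the model
    # does not have to discover the interaction on its own.
    df["region"] = rng.choice(["north", "south"], 1000)
    df["channel"] = rng.choice(["web", "store"], 1000)
    df["region_x_channel"] = df["region"] + "_" + df["channel"]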

The same is true for handling missing values.  Simple global substitution using the non-missing mean or median is a start, but think about the spike that creates in the variable’s distribution.  Doing this over multiple variables creates a number of strange artifacts in the multivariate distribution.  Spending the time and energy to fill in those missing values in a smarter way (possibly by building a small model) cleans up the data dramatically for the downstream modeling process.
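
A brief sketch of that contrast, using scikit-learn's SimpleImputer for global mean substitution and IterativeImputer as one possible "small model" imputer; the data are synthetic and the choice of imputer is an assumption, not the author's prescription.

    # Sketch only: synthetic data with one partially missing column.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import SimpleImputer, IterativeImputer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    X[:, 2] += 0.8 * X[:, 0]                # give column 2 structure to exploit
    X[rng.random(1000) < 0.2, 2] = np.nan   # knock out 20% of column 2

    # Global mean substitution: every missing entry becomes the same number,
    # producing a spike in that variable's distribution.
    X_mean = SimpleImputer(strategy="mean").fit_transform(X)

    # Model-based imputation: each missing value is predicted from the
    # non-missing columns, preserving more of the multivariate shape.
    X_model = IterativeImputer(random_state=0).fit_transform(X)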

— Post by Will Dwinnell