
Terabytes of trees

DavidMSmith

I saw a very interesting talk hosted by the SF Bay ACM last night. Google engineer Josh Herbach talked about the platform he’d implemented to build boosted and bagged trees on very large data sets using MapReduce. (A longer version of the talk will be presented at VLDB 2009 later this month.) The data is distributed amongst many machines in GFS (the Google File System): Google AdWords data, with information on each user of Google Search and each click they have made, can run to terabytes and take three days to build a single predictive tree.

The algorithm is quite elegant: after an initialization phase to identify candidate cut-points for continuous predictors and the values of categorical variables, the Map step assigns each new chunk of data to a node of the growing tree and calculates a deviance score for a number of candidate splits. The Reduce step then selects the best split from the candidates evaluated across the distributed nodes. The process repeats to build a single tree or (as is actually done in practice) a number of bagged and/or boosted trees. One interesting wrinkle: for implementation reasons, the bagged trees use sampling without replacement rather than with replacement (as bagging is usually defined). Given the amount of data, I’m not sure this makes any practical difference, though. Interestingly, he did compare the results to heavily sampling the data and building the tree in-memory in R (all of his charts were done in R, too). He was quite adamant that using all of the data is “worth it” compared to sampling, and with Google’s business model of monetizing the long tail, I can believe it.
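
To make the Map/Reduce division concrete, here is a minimal R sketch of the idea for a single continuous predictor, using squared-error deviance. The function names and the sufficient-statistics bookkeeping are my own simplification, not the implementation Josh described: the Map side emits per-chunk counts, sums and sums of squares for each candidate cut, and the Reduce side pools them and picks the cut with the lowest combined deviance.

# Illustrative sketch only; not the talk's actual code.
map_split_stats <- function(chunk, predictor, candidate_cuts) {
  # For each candidate cut, emit count, sum, and sum of squares of the
  # response on each side of the split, computed on this chunk alone.
  t(sapply(candidate_cuts, function(cut) {
    left  <- chunk$y[chunk[[predictor]] <= cut]
    right <- chunk$y[chunk[[predictor]] >  cut]
    c(nL = length(left),  sL = sum(left),  ssL = sum(left^2),
      nR = length(right), sR = sum(right), ssR = sum(right^2))
  }))
}

reduce_best_split <- function(stats_list, candidate_cuts) {
  # Element-wise sum of the per-chunk statistic matrices, then the pooled
  # within-child sum of squares (ss - s^2/n) for every candidate cut.
  s   <- Reduce(`+`, stats_list)
  dev <- (s[, "ssL"] - s[, "sL"]^2 / s[, "nL"]) +
         (s[, "ssR"] - s[, "sR"]^2 / s[, "nR"])
  list(cut = candidate_cuts[which.min(dev)], deviance = min(dev, na.rm = TRUE))
}
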
Josh mentioned that all of the techniques he’d implemented could also be built on Hadoop, the open-source MapReduce framework. This got me thinking that some interesting out-of-memory techniques could be implemented in R via Rhipe, using R statistics functions for the Map operations and R data aggregation for the Reduce functions. Hmm, I feel a new project coming on…
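
As a rough sketch of what that decomposition might look like, the snippet below uses plain lapply and the functions above to stand in for the distributed Map and Reduce; a real Rhipe job would dispatch the same pieces over Hadoop. All names and the toy data are hypothetical.

grow_node <- function(chunks, predictor, candidate_cuts) {
  # Map phase: score candidate splits on each data chunk independently.
  stats_list <- lapply(chunks, map_split_stats,
                       predictor = predictor, candidate_cuts = candidate_cuts)
  # Reduce phase: pool the chunk-level statistics and pick the best split.
  reduce_best_split(stats_list, candidate_cuts)
}

# Toy example: simulated data split into four "machines".
set.seed(1)
d <- data.frame(x = runif(10000))
d$y <- ifelse(d$x > 0.6, 2, 0) + rnorm(10000, sd = 0.3)
chunks <- split(d, rep(1:4, length.out = nrow(d)))
grow_node(chunks, predictor = "x", candidate_cuts = seq(0.1, 0.9, by = 0.1))
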

SF Bay ACM: PLANET: Massively Parallel Learning of Tree Ensembles with MapReduce

Link to original post

TAGGED: Hadoop, MapReduce, R