Terabytes of trees

I saw a very interesting talk hosted by the SF Bay ACM last night. Google engineer Josh Herbach talked about the platform he’d implemented to build boosted and bagged trees on very large data sets using MapReduce. (A longer version of the talk will be presented at VLDB 2009 later this month.) The data is distributed amongst many machines in GFS (the Google File System): Google AdWords data, with information on each user of Google Search and each click they have made, can run to terabytes, and building a predictive tree on it can take three days.

The algorithm is quite elegant: after an initialization phase to identify candidate cut-points for continuous predictors and candidate values of categorical variables, the Map step assigns each new chunk of data to a node of the growing tree and then calculates a deviance score for a number of candidate splits. The Reduce step then selects the best split from the candidates evaluated across the distributed nodes. The process repeats to create a single tree or (as is actually used in practice) a number of bagged and/or boosted trees. One interesting wrinkle: for implementation reasons, the bagged trees use sampling without replacement rather than with replacement (as bagging is usually defined). Given the amount of data, I’m not sure this makes any practical difference, though. Interestingly, he did compare the results to heavily sampling the data and building the tree in-memory in R (all of his charts were done in R, too). He was quite adamant that using all of the data is “worth it” compared to sampling, and with Google’s business model of monetizing the long tail, I can believe it.
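
To make that Map/Reduce split search concrete, here’s a minimal in-memory sketch in base R of the pattern as I understood it from the talk. This is not Josh’s code: the toy data, the chunking scheme and the binomial deviance function are all my own illustrative assumptions. Each “map” call scores the candidate cut-points on one chunk of data, and the “reduce” step aggregates those scores and picks the best split:

```r
# Toy data standing in for the real terabyte-scale, GFS-resident inputs
set.seed(1)
dat    <- data.frame(x = runif(10000), y = rbinom(10000, 1, 0.3))
chunks <- split(dat, rep(1:4, length.out = nrow(dat)))      # stand-in for data chunks
cuts   <- quantile(dat$x, probs = seq(0.1, 0.9, by = 0.1))  # candidate cut-points (initialization phase)

# Binomial deviance of a 0/1 response vector
dev01 <- function(y) {
  if (length(y) == 0) return(0)
  p <- mean(y)
  if (p == 0 || p == 1) return(0)
  -2 * sum(y * log(p) + (1 - y) * log(1 - p))
}

# "Map": score every candidate split on one chunk (deviance reduction)
map_step <- function(chunk) {
  sapply(cuts, function(cp) {
    dev01(chunk$y) -
      (dev01(chunk$y[chunk$x <= cp]) + dev01(chunk$y[chunk$x > cp]))
  })
}

# "Reduce": aggregate the per-chunk scores and pick the best split
scores   <- Reduce(`+`, lapply(chunks, map_step))
best_cut <- cuts[which.max(scores)]
best_cut
```

In the real system each chunk would live on a different machine and the aggregation would happen in the MapReduce shuffle/reduce phase rather than inside a single R session; repeating the same pattern at each new leaf grows the tree level by level.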
Josh mentioned that all of the techniques he’d implemented could also be implemented with Hadoop, the open-source MapReduce framework. This got me thinking that some interesting out-of-memory techniques could be implemented in R via Rhipe, using R statistics functions for the Map operations and R data aggregation for the Reduce functions. Hmm, I feel a new project coming on…
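
As a sketch of what that might look like (not actual Rhipe code), here’s a plain-R stand-in for the pattern: the Map function computes per-chunk sufficient statistics with ordinary R functions, and the Reduce function aggregates them, so the full data never has to sit in memory at once. The data_chunks directory, the one-CSV-per-chunk layout and the clicks column are hypothetical; with Rhipe the two functions would be shipped out to Hadoop instead of being run locally through lapply() and Reduce():

```r
# Hypothetical layout: one CSV file per data chunk in a local directory
chunk_files <- list.files("data_chunks", pattern = "\\.csv$", full.names = TRUE)

# "Map": per-chunk sufficient statistics, computed with ordinary R functions
map_fn <- function(f) {
  x <- read.csv(f)$clicks   # 'clicks' is an illustrative column name
  c(n = length(x), sum = sum(x), sumsq = sum(x^2))
}

# "Reduce": element-wise aggregation of the per-chunk statistics
reduce_fn <- function(a, b) a + b

stats <- Reduce(reduce_fn, lapply(chunk_files, map_fn))
overall_mean <- stats[["sum"]] / stats[["n"]]
overall_var  <- stats[["sumsq"]] / stats[["n"]] - overall_mean^2
```

The same shape would extend to the tree case: the Map function would emit split-scoring statistics for its chunk, and the Reduce function would combine them to choose the best split.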
