What’s behind your Tree?


Judging by the number of target customer selection projects I do, direct mail appears to be a very popular communication and marketing channel amongst retailers.

Almost every time, I use a combination of RFM, decision tree and logistic regression techniques for sorting, profiling and/or scoring customers (hopefully I can post a separate, more detailed blog on this).

The best thing about a decision tree is that it makes very few assumptions about the data, unlike, say, logistic regression. Another plus is that everyone can understand it! Depending on the software you use, a number of different tree algorithms are available, the most common being CHAID, CART and C5.
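
To make the interpretability point concrete, here is a minimal sketch in Python, fitting scikit-learn's DecisionTreeClassifier (an optimised CART-style implementation) on entirely made-up campaign data and printing the resulting rules:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical RFM-style predictors: recency (days since last purchase),
# frequency (purchases per year) and monetary value (average spend).
X = np.column_stack([
    rng.integers(1, 365, n),   # recency
    rng.integers(1, 50, n),    # frequency
    rng.uniform(5, 500, n),    # monetary
])
# Made-up response rule: recent, frequent customers tend to respond.
y = ((X[:, 0] < 90) & (X[:, 1] > 10)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# The fitted tree prints as a set of plain if/then rules.
print(export_text(tree, feature_names=["recency", "frequency", "monetary"]))
```

The printed rules read like plain if/then statements, which is exactly why non-technical stakeholders find trees so easy to digest.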

CART can handle only binary splits (each split produces two child nodes). It uses a measure of impurity called Gini for splitting the nodes. This is a measure of dispersion that depends on the distribution of the outcome variable: its values range from 0 (best) up towards 1 (worst). You get a 0 when all records of a node fall under a single category level (e.g. all 10,000 customers in a terminal node are responders). That is a purely theoretical example, by the way!
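
The Gini calculation itself is just one line. Here is a quick sketch of the standard formula, 1 minus the sum of squared class proportions (my own illustration of the measure, not any particular package's internals):

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity of a node: 1 - sum(p_i^2) over the class proportions."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

# A pure node (all 10,000 customers are responders) scores 0, the best case.
print(gini_impurity(["responder"] * 10_000))              # 0.0
# A perfectly mixed two-class node scores 0.5, the worst case for a binary target.
print(gini_impurity(["responder", "non-responder"] * 5))  # 0.5
```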

In C5, splits are based on the information gain ratio. C5 prunes the tree by examining the error rate at each node and assuming that the true error rate is actually substantially worse: if N records arrive at a node and E of them are classified incorrectly, the observed error rate at that node is E/N.

Information gain can also be simply defined as:

Information Gain = Information(parent node) − Information(after splitting on a particular variable)
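
A minimal sketch of that definition, using entropy as the information measure (my own illustration of the idea, not C5's exact internals):

```python
import math
from collections import Counter

def entropy(labels):
    """Information (entropy, in bits) of a set of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    """Information(parent) minus the size-weighted information after the split."""
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

def gain_ratio(parent, children):
    """C5 normalises the gain by the split information, which penalises
    splits that scatter records across many small branches."""
    n = len(parent)
    split_info = -sum(len(ch) / n * math.log2(len(ch) / n) for ch in children)
    return information_gain(parent, children) / split_info

# Toy split: a perfectly mixed parent node split into two much purer children.
parent = ["R"] * 6 + ["N"] * 6
children = [["R"] * 5 + ["N"], ["R"] + ["N"] * 5]
print(information_gain(parent, children))  # ~0.35 bits gained
print(gain_ratio(parent, children))        # same here, since the split is 50/50
```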

CHAID is an efficient decision tree technique based on the chi-square test of independence between two categorical fields. CHAID makes use of the chi-square test in several ways: first to merge categories that do not have significantly different effects on the target variable, then to choose the best split, and finally to decide whether it is worth performing any additional splits on a node.
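
Here is a minimal sketch of that underlying test on a made-up contingency table of age band versus mailing response (using scipy; real CHAID implementations wrap this basic test in the merge/split/stop machinery described above):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Made-up contingency table from a hypothetical mailing:
# rows are age bands, columns are (responder, non-responder) counts.
observed = np.array([
    [120, 880],  # 18-30
    [200, 800],  # 31-45
    [205, 795],  # 46-60
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"overall: chi-square = {chi2:.1f}, p-value = {p_value:.4f}")

# CHAID-style merging: 31-45 and 46-60 respond almost identically, so the
# pairwise test finds no significant difference and CHAID would merge them
# into one category before choosing the split.
chi2_pair, p_pair, _, _ = chi2_contingency(observed[1:])
print(f"31-45 vs 46-60: p-value = {p_pair:.4f}")  # large p => merge
```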

CHAID and C5 can handle multiway splits, unlike CART. And as far as my own experience goes, I prefer CHAID over C5, as C5 tends to produce very bushy trees.

