Preventing Customer Churn with Text Analytics
3 Ways You Can Improve Your Lost Customer Analysis
Lapsed Customers, Customer Churn, Customer Attrition, Customer Defection, Lost Customers, Non-Renewals: whatever you call them, this kind of customer research is becoming more relevant everywhere, and we are seeing more and more companies turning to text analytics to better answer how to retain more customers longer. Why are they turning to text analytics? Because no structured survey data predicts customer behavior as well as actual voice-of-customer text comments do!
Today’s post will highlight 3 mistakes we often see being made in this kind of research.
1. Most customer loss/churn analysis is done on the customers who leave, in isolation from the customers who stay. That's understandable, since it would make little sense to ask a customer who is still with you a survey question such as "Why have you stopped buying from us?" But customer churn analysis can be much more powerful if you are able to compare customers who are still with you to those who have left. There are a couple of ways to do this:
- Whether or not you conduct a separate lapsed customer survey among those who are no longer purchasing, also consider doing a separate post-hoc analysis of your customer satisfaction survey data. It doesn't have to be current. Just take a time period of, say, the last 6-9 months and analyze the comment data from those customers who have left vs. those who are still with you. What did the two groups say differently just before the lapsed customers left? Can these results be used to predict who is likely to churn ahead of time? The answer is very likely yes, and in many cases you can do something about it!
- Whenever possible, text questions should be asked of all customers, not just a subgroup such as the leavers. Here, sampling as well as how you ask the questions both come into play.
Consider expanding your sampling frame to include not just customers who are no longer purchasing from you, but also customers who are still purchasing from you: both those purchasing more (especially these) and those purchasing less. What you really want to understand, after all, is what is driving purchasing. Who gives a damn if they claim they are more or less likely to recommend you? Promoter and detractor analysis is overhyped!
You may also consider casting an even wider sampling net than just past and current customers. Why not use a panel sample provider and try to include some competitors' customers as well? You will need to draw the line somewhere for scope and budget, but you get the idea. The survey should be short and concise and should have the text questions up front, starting very broad (top-of-mind, unaided) and then probing.
Begin with a question such as "Q. How, if at all, has your purchasing of Category X changed over the last couple of months?" and/or "Q. You indicated your purchasing of Category X has changed. Why? (Please be as specific as possible.)" Or perhaps even better: "Q. How, if at all, has your purchasing of Category X changed over the past couple of months? If it has not changed, please also explain why it hasn't changed. (Please be as specific as possible.)" As you can see, almost anyone can answer these questions, no matter how much or how little they have purchased. This is exactly what is needed for predictive text analytics! Having only leavers' data will be insufficient!
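To make the leavers-vs-stayers comparison above concrete, here is a minimal sketch in Python of one simple way it could be done (this is an illustration with made-up example comments, not OdinText's actual methodology): it surfaces the terms that appear disproportionately often in leavers' open-end comments relative to stayers'.

```python
import re

def term_rates(comments):
    """Fraction of comments in which each word appears at least once."""
    rates = {}
    for text in comments:
        for word in set(re.findall(r"[a-z]+", text.lower())):
            rates[word] = rates.get(word, 0) + 1
    return {w: c / len(comments) for w, c in rates.items()}

def distinguishing_terms(leavers, stayers, top_n=3):
    """Terms over-represented in leaver comments vs. stayer comments."""
    leave_rates = term_rates(leavers)
    stay_rates = term_rates(stayers)
    # Difference in appearance rate; positive = more typical of leavers.
    diffs = {w: r - stay_rates.get(w, 0.0) for w, r in leave_rates.items()}
    # Sort by difference (descending), breaking ties alphabetically.
    return sorted(diffs, key=lambda w: (-diffs[w], w))[:top_n]

# Hypothetical open-end comments from a satisfaction survey.
leavers = [
    "shipping was slow and support never answered",
    "slow shipping, switched to a competitor",
    "support was rude and shipping slow",
]
stayers = [
    "great prices and fast shipping",
    "love the loyalty program and prices",
    "easy ordering, good prices",
]

print(distinguishing_terms(leavers, stayers))
# -> ['slow', 'shipping', 'support']
```

In practice you would run this kind of comparison on the real comment fields from your satisfaction survey, split by whether the customer subsequently lapsed; the terms that rise to the top become candidate early-warning signals.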
2. Include other structured (real behavior) data in the analysis. Some researchers analyze their survey data in isolation. Mixed data usually adds predictive power, especially if it's real behavior data from your CRM database, and not just stated/recalled behavior from your survey. In either case, the key to unlocking meaning and predictability is likely to come from the unstructured comment data. Nothing else does a better job of explaining why customers behave the way they do.
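As a hedged illustration of mixing the two data types (the field names, risk terms, and flag logic below are all hypothetical, not a real CRM schema or scoring model), combining a simple text-derived signal with actual purchase behavior might look like this:

```python
# Hypothetical churn-risk terms one might learn from a leavers-vs-stayers
# comparison of survey comments; illustrative only.
RISK_TERMS = {"slow", "expensive", "switched", "rude"}

def text_risk(comment):
    """1 if the open-end comment contains any churn-associated term."""
    words = set(comment.lower().split())
    return int(bool(words & RISK_TERMS))

def churn_features(customers):
    """Combine real CRM behavior (purchase trend) with the text signal."""
    rows = []
    for c in customers:
        recent, prior = c["purchases_last_90d"], c["purchases_prior_90d"]
        declining = int(recent < prior)   # behavior from the CRM database
        risk = text_risk(c["comment"])    # signal from survey text
        rows.append({"id": c["id"], "declining": declining,
                     "text_risk": risk, "flag": declining and risk})
    return rows

customers = [
    {"id": 1, "purchases_last_90d": 2, "purchases_prior_90d": 6,
     "comment": "Delivery has gotten slow lately"},
    {"id": 2, "purchases_last_90d": 5, "purchases_prior_90d": 4,
     "comment": "Happy with the service"},
]

for row in churn_features(customers):
    print(row)
```

The point of the sketch is the join itself: a customer whose purchases are declining *and* whose comments contain churn-associated language is a much stronger retention target than either signal alone would suggest.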
3. PLEASE, PLEASE resist the urge to start your leaver survey with a structured question asking a battery of "check all that apply" reasons for leaving/shopping less. Your various pre-defined reasons, even if you include an "Other (specify) _____", will have several negative effects on your data quality.
First, customers will often forget their primary reason for their change in purchase frequency, incorrectly assuming that you are most interested in the reasons you have pre-identified. Second, there will be no way for you to tell which of the several reasons they check is truly the most important to them. Third, some customers will repeat themselves in the "Other (specify)" box, while others will decide not to answer it at all since they have already checked so many of your boxes. Either way, you've just destroyed your best chance of accurately understanding why your customers' purchasing has changed!
There are many other ways to improve your insights in lapsed customer survey research by asking fewer yet better comment questions in the right order. I hope the above tips have given you some things to consider. We're happy to share additional tips if you like, and we often find that as customers begin using OdinText, their use of survey data, both structured and unstructured, improves greatly, along with their understanding of their customers.
Tom H. C. Anderson founded Anderson Analytics in 2005 as the first full-service online market research firm to leverage data and text mining with other online research techniques. The firm's patent-pending text analytics platform is called OdinText. Since founding the company, Tom and his team have won several awards for their innovative methodologies and groundbreaking work.