Optimizing customer service levels with predictive analytics


Richard Boire gave a presentation on predictive analytics in customer service at the Canadian Automobile Association (CAA). Organizations that successfully adopt analytics are willing and able to change; Richard’s focus, therefore, is on tools and techniques that help create the engagement needed to drive adoption.

CAA is the Canadian equivalent of the US AAA, providing insurance, travel and emergency roadside assistance. For one part of CAA, improving roadside assistance has become a corporate imperative. Scores on a monthly satisfaction survey sent to members who used the service in one CAA club had been declining steadily, dropping to 78% against a target of 84% and falling 20% below the average of other clubs. Rather than trying to guess what might help and yelling at people – frankly the most common response to declining customer satisfaction scores – they decided to look at the data.

The basic process here is that a customer has a problem and speaks to a Customer Service Rep, who then organizes and orders the service. The service is delivered to the customer and then the feedback survey is sent.

Richard’s process for improving this is to first carefully define the objective, then consider the analytic data environment, work out which techniques to use and finally deploy the result. In the first step, various CAA stakeholders were interviewed and it became clear that a service event has three stages – pre-event, during the event and post-event. Potentially CAA could do something to improve satisfaction at each stage, and this became the objective: take action to reduce dissatisfaction at every stage, in particular by identifying those customers at high risk of being dissatisfied.

The next stage – what Richard called a data audit – focused on the available data to see what could be used to build the models needed. This reports on missing data, the amount of data, the values used in each field, frequency distributions and so on. To build the analytic file for the pre-event stage they combined member data with census/Statistics Canada data. Once the event happened the file could also include event and call data. Finally, after the event, they also had the survey data. Overall there were 400 variables pre-event, 500 during the event and 550 post-event.
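
To make the data audit idea concrete, here is a minimal sketch in Python (not the tooling Richard’s team actually used). The file name and columns are assumptions, but the report covers the same checks – missing data, amount of data, field values and frequency distributions.

```python
# Minimal data-audit sketch: for each field in the analytic file, report
# missing values, data type, number of distinct values and the most common
# values (a crude frequency distribution). File and columns are hypothetical.
import pandas as pd

df = pd.read_csv("pre_event_analytic_file.csv")  # member + census variables (assumed file)

audit_rows = []
for col in df.columns:
    audit_rows.append({
        "field": col,
        "dtype": str(df[col].dtype),
        "rows": len(df),
        "missing": int(df[col].isna().sum()),
        "distinct": int(df[col].nunique(dropna=True)),
        "top_values": df[col].value_counts(dropna=True).head(3).to_dict(),
    })

audit = pd.DataFrame(audit_rows)
print(audit.to_string(index=False))
```

A report like this is what lets the team decide, before any modelling, which of the hundreds of candidate variables are complete and varied enough to be worth using.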

As they moved into the data mining stage they started doing some analysis, such as correlation analysis to see what correlates with dissatisfaction. They also presented data for business users, like a report showing how different values in the survey correspond to overall dissatisfaction rates – a graphical representation of the correlation in the data. For this project they used CHAID to build decision trees alongside stepwise logistic regression. Multiple models of both types were applied at each stage and produced similar results. In this case, for instance, satisfaction with the time estimate (how long the member would have to wait) was a huge driver of dissatisfaction. The final solution had 12 key variables in the during-event phase, such as age, and 3 key variables prior to the event: age, total roadside services and postal area (in terms of proportion of immigrants). While the pre-event phase had the weakest models, they still represented something that could be acted on. The models were validated against hold-out samples; the post-event model performed best (with the top 3 deciles including 70% of members) but the others still gave useful results.
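
As a rough illustration of that validation step, the sketch below fits a dissatisfaction model on a training sample, scores a hold-out sample and checks how many dissatisfied members fall into the top three deciles of predicted risk. It is a sketch only: the file and column names are assumed, and since CHAID is not available in scikit-learn, plain logistic regression stands in for the CHAID and stepwise models the team actually used.

```python
# Fit a dissatisfaction model, score a hold-out sample and run a simple
# decile (gains) check. File name and feature names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("during_event_analytic_file.csv")  # assumed analytic file
features = ["age", "wait_estimate_minutes", "total_roadside_services"]  # assumed columns
X, y = df[features], df["dissatisfied"]  # 1 = dissatisfied on the survey

X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score the hold-out sample and bucket members into 10 risk deciles.
hold = pd.DataFrame({
    "score": model.predict_proba(X_hold)[:, 1],
    "dissatisfied": y_hold.values,
})
hold["decile"] = pd.qcut(hold["score"], 10, labels=False, duplicates="drop")

# Share of all dissatisfied members captured by the 3 highest-risk deciles.
top3 = hold[hold["decile"] >= 7]
capture_rate = top3["dissatisfied"].sum() / hold["dissatisfied"].sum()
print(f"Top 3 deciles capture {capture_rate:.0%} of dissatisfied members")
```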

The models were used to train call center representatives to spot the predictors of dissatisfaction. They saw the importance of the estimated time of arrival, so they also mapped processes for making better estimates into the call handling. Finally, they have created a process to use this math- and science-based approach to improve continuously. The results included reduced relay calls, more proactive estimated time of arrival updates and fewer members who were gone when the service vehicle arrived. This first stage can be described as operationalizing the learnings from the data mining. Next up is a project to embed the scores themselves into the representatives’ screens and combine this with different strategies – rules – that work on the various segments and are driven by the models.
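
One possible shape for that next project – a model score on the representative’s screen combined with segment rules – might look like the sketch below. The thresholds, segment flag and suggested actions are purely illustrative assumptions, not CAA’s actual rules.

```python
# Sketch of score-plus-rules deployment: combine a dissatisfaction risk
# score with a simple segment rule to suggest a next step for the rep.
# Thresholds, segments and actions are illustrative assumptions.
def recommended_action(risk_score: float, first_time_caller: bool) -> str:
    """Map a dissatisfaction risk score and a segment flag to a suggested action."""
    if risk_score >= 0.7:
        # Highest-risk members get the most proactive treatment.
        return "Offer proactive ETA updates and schedule a follow-up call"
    if risk_score >= 0.4 and first_time_caller:
        return "Walk through the wait-time estimate and confirm contact details"
    return "Standard handling"

print(recommended_action(0.82, first_time_caller=False))
print(recommended_action(0.45, first_time_caller=True))
```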

The solution reduced dissatisfaction 30-35%, moved the club into the top tier and saved them $200,000 annually for a 300% ROI. A nice example of deploying modeling results through organizational change.
