Factual Logic: Why Numbers Count
The funny thing about Decision Management technologies is that we obsess over automation but, in many instances, projects lose steam before we get a chance to close the loop. As a result, we hope that we are automating the right business logic, but we do not know for sure until much, much later. We have often seen cases where factual feedback only came weeks after deployment!
No feedback loop?
In some cases, business rules are derived from data, but we tend to call those predictive models or segmentation logic rather than business rules. Business rules are, generally speaking, heavily judgmental. As an expert, you want to codify how you make decisions. Although we are used to measuring and monitoring the quality of predictive models, we tend not to do the same for business rules. Why? I think it has to do with:
- Tact: Why would you challenge an expert? He/she knows better than anyone else!
- Development context: Data tends to be absent from the environment where we develop the business rules, so data-driven testing requires extra effort
- Lack of time & resources: Getting systems wired to make those measurements is an expense that is not mandatory for the system to function; with project delays and other surprises along the way, this is often the first capability to be cut
I would argue, though, that fact-based validation is vital to the success of many, if not all, decisioning projects. It is common wisdom that Business Intelligence is vital to strategic decision-making. Would you trust a management team to make the right decisions in the absence of dashboards, based only on intuition or rumors? Of course not… So why wouldn't we look for the same insight into our operational decisions?
Where no feedback led this insurance company
I may have mentioned this story before, but it really stuck in my mind as I watched the chain of events invariably lead to project failure. I will not name the insurance company, to avoid any embarrassment. I had a chance to meet with the lead architect, a great guy and very capable. He led the Automated Underwriting project beautifully and implemented all the business rules that came from the team of underwriters without a hiccup. The application was deployed to the first series of states on time. The systems always deployed on Friday night to allow for a slow ramp-up over the weekend and, per company policy, the team was on call over the weekend. Within hours, the team was called on site to fix the mess.

The business rules, which did exactly what the underwriters wanted, did not achieve the business objectives they were looking for! Taken in isolation, each rule made the accept / decline decision it was supposed to, but more often than not the transactions ended up in the refer pile for manual review. There may have been a genuine lack of trust from the underwriters, who did not want, consciously or not, to release control to the automated system. I do not intend any blame here. It is human nature, especially for experts, to keep some oversight of the decisions. If you do not pay attention to the numbers, though, you may end up, like them, referring too many transactions and, as a result, flooding the underwriting team with applications to review manually.
The good news is that, thanks to BRMS technology, they were able to fix the system in a matter of hours. Had they encoded the logic as spaghetti code, the necessary changes would have taken months. The project was not the complete failure it could have been, but this first deployment was.
My take-away from this real-life experience is that we need to measure early, measure often, and measure continuously.
Let me clarify here that Business Performance testing and monitoring is different from test-case testing. In this project, they had individual test cases created specifically for each captured rule. The application did what it was supposed to do. It was the rate of automation that failed them: one KPI was off, and it had disastrous consequences for the project.
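As a rough illustration of what such a KPI check might look like (the rule thresholds, field names, and target rate here are all hypothetical, not the insurer's actual logic), a simple script could replay historical applications through the rule set and measure the automation rate, the share of decisions that end as accept or decline rather than refer:

```python
# Hypothetical sketch: replay historical applications through a toy rule set
# and measure the automation-rate KPI (automated decisions vs. manual "refer").
from collections import Counter

def decide(application):
    """Toy underwriting rules: each rule is individually correct,
    yet together they may send too many cases to manual review."""
    if application["risk_score"] < 0.2:
        return "accept"
    if application["risk_score"] > 0.8:
        return "decline"
    return "refer"  # everything in between goes to an underwriter

def automation_rate(applications):
    """Return (share of automated decisions, tally of all outcomes)."""
    outcomes = Counter(decide(app) for app in applications)
    automated = outcomes["accept"] + outcomes["decline"]
    return automated / sum(outcomes.values()), outcomes

# Illustrative historical transactions
historical = [{"risk_score": s}
              for s in (0.1, 0.15, 0.5, 0.55, 0.6, 0.7, 0.9, 0.3)]

rate, outcomes = automation_rate(historical)
print(f"automation rate: {rate:.0%}, outcomes: {dict(outcomes)}")

# KPI gate before deployment: flag when too much lands in the refer pile
if rate < 0.6:  # hypothetical target
    print("WARNING: automation-rate KPI missed; review rule thresholds")
```

Run against this sample, the rules refer five of the eight applications, so the check fires long before a Friday-night deployment would, which is exactly the kind of early signal the team was missing.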
By having greater insight into the business performance of decisioning applications at rule-authoring time, in addition to the measurements after deployment of course, the team would have detected early that the application was going to miss the KPI they were hoping for. They could have fixed the system well before it hit production. Everybody could have enjoyed a nice, peaceful weekend on deployment day.
Another side effect of having visibility into Business Performance is that you can build trust with your business users. If they are skeptical about decision automation and have a hard time letting go of control, they may gain reassurance along the way as they see the effect of their decisioning logic applied to historical transactions. They might even find opportunities for additional improvement while diving into those performance reports. Business Intelligence dashboards, combined with their expertise, make a very powerful combination. The business users I have worked with at Sparkling Logic have been seduced by the ability to explore the business impact of policy changes while they craft their decisioning logic. To quote one of them, it is "awesome".
So do not be afraid to show your business users what the numbers are… Instead of fearing that you will offend them, consider the extra power you are giving them. And they want it!