Integrating Quality Assurance into Your CRM Operations

The quality industry distinguishes between quality assurance and quality control. The former represents a set of processes designed to assure a quality result, while the latter involves evaluation and testing of the end result to see if quality specifications have been met. Arguably, the better an organization's quality management software, the more successful its quality assurance will be and the less need there will be for quality control (though you can never avoid it altogether).

The challenge with an enterprise business application is that quality is not solely derived from processes. In fact, the automation of processes within an application can give a false sense of accomplishment. All the processes in the world won’t deliver a quality result when the data upon which those processes depend has no inherent assurance of quality. The problem is exacerbated within CRM operations, since the data that drives CRM often originates outside the system or is created by members of your organization whose first priority may not be data quality, as was noted in a recent blog posting.

To address this, we’ve focused on integrating data quality capabilities within the CRM environment, so that they can be executed more naturally – more natively – by CRM users as part of their daily operations. We recently extended that capability with the announcement of version 2.0 of our integrated Trillium for Microsoft Dynamics CRM offering.

For CRM administrators, key operations like the batch import of new records can automatically invoke data quality functionality as part of the import process. When new records are identified as potential duplicates, they are flagged before a duplicate record is generated. Administrators can then selectively decide whether each record is indeed a duplicate. As administrators become comfortable with these data quality capabilities in duplicate detection and merging, they can choose to delegate the match/merge operation so that it happens automatically. Given the sophistication of Trillium’s underlying detection algorithms (with their ability to span multiple fields simultaneously), that comfort often comes quite quickly. A rough sketch of this kind of batch-import hook is shown below.
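To make the flow concrete, here is a minimal Python sketch of a batch import that flags potential duplicates for administrator review rather than creating them outright. The `Contact` record, the `match_score()` helper, the `import_batch()` function and the 0.85 threshold are all hypothetical, illustrative stand-ins, not Trillium's actual matching API, which spans multiple fields with far more sophisticated logic.

```python
# Illustrative sketch only: a batch import that holds suspected duplicates
# for admin review instead of inserting them. Not Trillium's actual API.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Contact:
    name: str
    email: str
    company: str


def match_score(a: Contact, b: Contact) -> float:
    """Crude stand-in for multi-field fuzzy matching across name, email and company."""
    pairs = [(a.name, b.name), (a.email, b.email), (a.company, b.company)]
    return sum(SequenceMatcher(None, x.lower(), y.lower()).ratio() for x, y in pairs) / len(pairs)


def import_batch(new_records, existing, threshold=0.85):
    """Split an import batch into clean inserts and flagged potential duplicates."""
    to_insert, flagged = [], []
    for record in new_records:
        candidates = [e for e in existing if match_score(record, e) >= threshold]
        if candidates:
            flagged.append((record, candidates))  # held for administrator review
        else:
            to_insert.append(record)
    return to_insert, flagged
```

In this sketch, delegating the match/merge operation would simply mean merging the flagged list automatically instead of queueing it for review, mirroring the progression described above.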

Similarly, we support “in-line” data quality operations that can be invoked as part of the creation of a new lead, contact or opportunity record by an individual CRM end-user. For example, when a sales rep creates a new record, the system will identify whether that record already exists, avoiding the generation of duplicates. Trillium’s cleansing and matching logic overcomes fat-finger errors, the entry of data in the wrong fields and other typical sources of bad data within the system. By pre-empting the generation of incomplete or duplicate records before they can inflict damage on your operations, you improve day-to-day activity, as well as the forecasting and analytics that are fundamentally dependent on good source data.
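As a companion sketch, here is what an “in-line” check at record creation could look like: cleanse the input, look for an existing match, and hand the existing record back to the sales rep before a duplicate is saved. The `cleanse()` rules, the `create_lead()` function, and the `crm` object with its `find_match()` and `insert()` methods are hypothetical placeholders, not the actual Trillium for Microsoft Dynamics CRM interface.

```python
# Illustrative sketch only: an in-line duplicate check at lead creation.
# The crm object and its methods are assumed, not a real API.
import re


def cleanse(raw: dict) -> dict:
    """Normalize obvious entry errors before matching (illustrative rules only)."""
    email = raw.get("email", "").strip().lower()
    phone = re.sub(r"\D", "", raw.get("phone", ""))       # keep digits only
    name = " ".join(raw.get("name", "").split()).title()  # collapse stray whitespace
    # A common wrong-field error: an email address typed into the phone field.
    if not email and "@" in raw.get("phone", ""):
        email, phone = raw["phone"].strip().lower(), ""
    return {"name": name, "email": email, "phone": phone}


def create_lead(raw: dict, crm) -> dict:
    """Create a lead only if no existing record matches the cleansed input."""
    record = cleanse(raw)
    existing = crm.find_match(record)    # assumed fuzzy, multi-field lookup
    if existing is not None:
        return existing                  # reuse the record instead of duplicating it
    return crm.insert("lead", record)
```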

And when those bad records do slip through, you’ll end up with no quality assurance, but you’ll have a lot of quality control to do.

by Chris Martins, Product Marketing Manager, Trillium Software
