My take on why ETL has not always kept up with the integration workload



Why are companies reaching the limits of the workhorse ETL? Here are some thoughts that occurred to me as I was snowblowing 20+ inches of snow today. I’ll be discussing this further in the Thursday DM Radio broadcast.

The main reason why ETL is not keeping up is that demand is significantly increasing:

  • The number of data sources, data volumes, update frequencies, and the need for data consistency (cleansing, conforming, MDM, CDI, PIM, etc.) are all on the rise.
  • Data architectures and workflows have become more extensive and sophisticated to meet business needs.
  • Companies are using ETL for more than data warehousing.
  • ETL is no longer just batch-oriented nightly loads; it is data integration using various data transports and supporting various levels of data currency.

ETL and data integration tool capabilities have expanded, and many techniques that have been around forever still work today, yet people aren’t taking advantage of these solutions. There are several issues:

First, there is still too much ETL hand-coding. Although most large corporations use ETL tools to load the data warehouse, departments or functions within the same companies often hand-code data marts, OLAP cubes or other databases used for business intelligence and analytics. In addition, many midsize and smaller firms hand-code because they think all ETL tools are expensive.

(They’re wrong. There are plenty of cost-effective and capable ETL tools in the marketplace today.)

Second, people don’t use ETL tools properly. Many ETL developers either use their ETL tools as if they were hand-coding, or they simply use ETL tools to run hand-coded SQL scripts or stored procedures. This approach uses none of the ETL tool’s capabilities and ignores ETL best practices. The result: less efficient, less productive data integration.

Third, people don’t understand data integration. Even when ETL developers do attempt to use ETL tool capabilities, they often do not really understand data integration processes. If they have any training, it is only tool-specific; they haven’t learned basic data integration processes. In this scenario ETL productivity and performance suffer. The real solution would involve learning the skills necessary to leverage the tools more effectively.

Fourth, people don’t know ETL’s secrets. There are many “old” tried-and-true techniques and utilities that greatly improve the performance and productivity of ETL processes. But ETL developers don’t know they exist – probably because they’re not sexy enough to be touted in industry literature. The classic example is extracting data from a source system into a file and then cleansing, transforming and sorting the file prior to loading it into a data warehouse. Although this technique is old-school, it’s a very cost-effective and productive way to significantly improve load performance.
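To make the idea concrete, here is a minimal sketch of the extract-to-file approach: dump the source rows to a flat file, then cleanse and pre-sort that file before handing it to the warehouse’s bulk loader. The table, column and file names are hypothetical, and a real job would use the database’s native unload and bulk-load utilities rather than plain Python I/O.

```python
import csv

def extract_to_file(rows, path):
    """Dump source rows to a flat file so cleansing happens off the database."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "name", "region"])  # hypothetical columns
        writer.writerows(rows)

def cleanse_and_sort(in_path, out_path):
    """Cleanse and sort the file before loading: trim fields, drop rows missing
    the key, and pre-sort on the warehouse key so the bulk load appends in order."""
    with open(in_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = [
            {k: v.strip() for k, v in row.items()}
            for row in reader
            if row["customer_id"]              # drop rows with no key
        ]
    rows.sort(key=lambda r: r["customer_id"])  # sort prior to load
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["customer_id", "name", "region"])
        writer.writeheader()
        writer.writerows(rows)

# The sorted, cleansed file would then be passed to a bulk loader
# (COPY, bcp, sqlldr, etc.) instead of row-by-row INSERTs.
```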

Some steps in the right direction

BI vendors have built many of the best practices in data warehousing, such as various slowly changing dimension (SCD) and change data capture (CDC) techniques, into their data integration products as ETL transforms. These transforms significantly improve developer productivity and system performance, and are quite cost-effective.
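As an illustration of the kind of logic these transforms package up, here is a minimal sketch of a Type 2 SCD update: expire the current dimension row when a tracked attribute changes and insert a new current version. The field names and the in-memory “dimension” list are illustrative only; an ETL tool’s SCD transform does this against the warehouse tables for you.

```python
from datetime import date

def apply_scd2(dimension, incoming, today=None):
    """Type 2 SCD: expire the current row on change and add a new current row."""
    today = today or date.today()
    for new in incoming:
        current = next(
            (r for r in dimension
             if r["customer_id"] == new["customer_id"] and r["is_current"]),
            None,
        )
        if current is None:
            # Brand-new dimension member: insert as the current version.
            dimension.append({**new, "valid_from": today,
                              "valid_to": None, "is_current": True})
        elif current["region"] != new["region"]:
            # Tracked attribute changed: close out the old row, add a new version.
            current["valid_to"] = today
            current["is_current"] = False
            dimension.append({**new, "valid_from": today,
                              "valid_to": None, "is_current": True})
    return dimension
```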

Another positive note is that, finally, there are many advances in ETL processing that can significantly improve performance. You’ve heard about the benefits of in-memory analytics. In-memory ETL processing speeds up referential integrity checking and lookups, and improves overall load time.
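A minimal sketch of what in-memory lookup means in practice: the dimension keys are cached in memory once, so each fact row can be validated and assigned its surrogate key without a database round trip per row. The names and the surrogate-key column are assumptions for illustration.

```python
def build_lookup(dimension_rows):
    """Cache natural-key -> surrogate-key pairs in memory, once per load."""
    return {row["customer_id"]: row["customer_sk"] for row in dimension_rows}

def transform_facts(fact_rows, customer_lookup):
    """Resolve surrogate keys and flag referential-integrity failures in memory."""
    loaded, rejected = [], []
    for fact in fact_rows:
        sk = customer_lookup.get(fact["customer_id"])
        if sk is None:
            rejected.append(fact)                  # no matching dimension row
        else:
            loaded.append({**fact, "customer_sk": sk})
    return loaded, rejected
```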

There are many other capabilities that can likewise improve ETL performance. I will be discussing these points and others during the broadcast.
