Big Data Ingestion… or Indigestion?


With the exception of those special few with iron stomachs, any rapid ingestion of a wide variety of rich, exotic foods can cause a lot of discomfort for quite a while. The same is true of data ingested into Hadoop. Organizations are piling high volumes of data from diverse, disparate sources into Hadoop at rapid speed, but their inability to find value amid all of that information is causing a fair amount of distress and uneasiness among both business and technical leaders.

One way to ease the pain that follows data ingestion into Hadoop is to apply rigorous, scalable data quality methodologies to your Hadoop environment, ensuring that data is reliable for downstream use in business applications. In my last Big Data blog, I talked about how traditional principles of data quality and data governance are a necessity for Big Data and Hadoop. However, given all of the new data sources and applications associated with Big Data, newer approaches to data quality and governance are necessary as well. Here are a few considerations to help you more easily digest all of your business information in a Big Data environment:

A critical first step is to forgo data quality processing during the ingestion phase. Traditionally, data quality is applied during migration, as data is loaded into data warehouses and relational databases. Given the volume of information associated with Big Data, it is no longer operationally efficient to apply data quality checks while data is being ingested into Hadoop. The time and cost of record-by-record processing at ingestion would hinder your efforts and detract from the processing performance and efficiency that are core to Hadoop's value.

Next, Hadoop adopters should shift data quality processing to after data has been ingested into Hadoop. A recent TDWI Best Practices Report for Hadoop found that the most prevalent data quality strategy among Hadoop adopters is to “ingest data immediately into Hadoop, and improve it later as needed,” as opposed to improving data before it enters Hadoop. Native data quality processing ensures business rules are applied across all records on all nodes of your cluster, enabling the accurate, real-time analytics and business processes that Big Data is meant to support. It also ensures that external, third-party reference data undergoes the same data quality processing as data sets ingested from internal sources.
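
To make that idea concrete, here is a minimal sketch of what native, post-ingestion data quality processing can look like on a Spark-on-Hadoop cluster. The HDFS paths, column names, and rules are hypothetical and purely illustrative; a production deployment would typically rely on a dedicated data quality engine rather than hand-written checks.

```python
# A minimal sketch of post-ingestion data quality checks, assuming a
# Spark-on-Hadoop cluster and a hypothetical customer dataset already
# landed in HDFS. Paths, column names, and rules are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("post-ingestion-dq").getOrCreate()

# Read the raw data exactly as it was ingested -- no cleansing on load.
customers = spark.read.parquet("hdfs:///landing/customers/")

# Business rules run in parallel across every record on every node.
checked = customers.withColumn(
    "dq_issues",
    F.concat_ws(
        ";",
        F.when(
            F.col("email").isNull()
            | ~F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
            F.lit("invalid_email"),
        ),
        F.when(F.col("customer_id").isNull(), F.lit("missing_id")),
        F.when(F.length(F.trim(F.col("postal_code"))) == 0, F.lit("missing_postal_code")),
    ),
)

# Route clean and suspect records to separate zones for downstream use.
checked.filter(F.col("dq_issues") == "").write.mode("overwrite").parquet(
    "hdfs:///curated/customers/"
)
checked.filter(F.col("dq_issues") != "").write.mode("overwrite").parquet(
    "hdfs:///quarantine/customers/"
)
```

Because the rules are expressed as distributed transformations, they scale with the cluster instead of becoming a record-by-record bottleneck at ingestion time.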

With so many types of data entering Hadoop, it’s also important to recognize that data quality has to be tailored to different data types; traditional data quality rules may no longer apply to all of them. For example, unstructured data may have more value in its raw, unedited form, and certain data inaccuracies or “errors” might provide useful information about an inefficient process or product. But if you’re like most organizations, leveraging Big Data to learn more about your customers, both accuracy and data linking are critical to extracting value from a wide array of data points, as the sketch below suggests. One of the key benefits of data quality for Big Data is the consolidation of disparate data points into a single, clearer version of the truth that helps you build a better customer experience, stronger customer interactions, and more targeted marketing campaigns.
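
As a rough illustration of that linking step, the sketch below assumes the same Spark-on-Hadoop environment and two hypothetical curated datasets, matching records on a normalized email key and rolling them up into a single customer view. Real matching and survivorship logic is considerably more sophisticated; this simply shows where the consolidation happens.

```python
# An illustrative sketch of linking disparate customer data points into a
# single view, assuming hypothetical CRM and web-activity datasets already
# curated in Hadoop. Matching on a normalized email key is a deliberately
# simple stand-in for a full matching and survivorship engine.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-linking").getOrCreate()

crm = spark.read.parquet("hdfs:///curated/customers/")
web = spark.read.parquet("hdfs:///curated/web_activity/")

# Standardize the matching key on both sides before joining.
crm_keyed = crm.withColumn("match_key", F.lower(F.trim(F.col("email"))))
web_keyed = web.withColumn("match_key", F.lower(F.trim(F.col("email"))))

# Link the sources and roll web behavior up to one row per customer.
single_view = (
    crm_keyed.join(web_keyed, "match_key", "left")
    .groupBy("customer_id", "match_key")
    .agg(
        F.count("page_url").alias("pages_viewed"),
        F.max("visit_ts").alias("last_seen"),
    )
)

single_view.write.mode("overwrite").parquet("hdfs:///analytics/customer_360/")
```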

By applying traditional data quality concepts to the new nuances and intricacies of Big Data, you can keep Hadoop healthy and minimize the downstream impacts of dirty data.

by Denise Laforgia, Product Marketing Manager, Trillium Software
