How Your Hadoop Distribution Could Lose Your Data Forever

Many businesses depend on Apache Hadoop to run mission-critical applications, and even more use it to drive crucial business decisions. In both cases, safeguarding data is a high priority.


Relational database users have long depended on foundational protection techniques such as data replication and snapshots. Today, both are also implemented in standard Hadoop architectures, but how they are applied, and how effective they are, varies across distributions.

Replication

As a baseline measure of data protection, Hadoop replicates your data three times by default. This protects against the system failures that are inevitable when processing data on clusters of commodity hardware. Because your data is replicated, Hadoop can continue functioning after the failure of an entire node; the JobTracker simply reassigns the computation to a different node.

Your platform should automate the replication process. For optimal protection, it should replicate Hadoop's file chunks, table regions, and metadata, and at least one replica should be placed on a different rack.
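As a rough illustration, the replication factor can be inspected and adjusted through the standard Hadoop FileSystem API. The paths and replication values below are assumptions chosen for the example, not settings from this article:

// Minimal sketch using the standard Hadoop FileSystem API; the path and the
// replication values shown here are illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default block replication; 3 is Hadoop's out-of-the-box value.
        conf.setInt("dfs.replication", 3);

        FileSystem fs = FileSystem.get(conf);

        // Raise the replication factor for an especially important file.
        Path critical = new Path("/data/critical/events.log"); // hypothetical path
        fs.setReplication(critical, (short) 5);

        System.out.println("Replication for " + critical + " is now "
                + fs.getFileStatus(critical).getReplication());
        fs.close();
    }
}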

Replication is a necessary tool for enhancing Hadoop's availability, but it is not meant to act as a failsafe against data loss. Data can be lost in ways that replication cannot resolve. One example is an error within a Hadoop application: a corrupt application can destroy every replica of your data as it attempts to process it, touching each copy and wiping out all three (illustrated in Figure 1 below).

Figure 1: A corrupt application deleting all three replicas of the data


User error is another common cause of data loss that replication cannot undo. Users can accidentally delete or overwrite data without realizing it. In some cases, administrators can use Hadoop's trash feature to recover, but trash gives you only a limited window in which to restore the data, and only if you catch the mistake in the first place.
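For illustration, here is a minimal sketch of the trash feature using the standard Hadoop Java API. The retention interval and paths are assumptions chosen for the example:

// A hedged sketch of Hadoop's trash feature from the Java API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep deleted files recoverable for 24 hours (value is in minutes).
        // If fs.trash.interval is 0, trash is disabled and deletes are permanent.
        conf.setLong("fs.trash.interval", 1440);

        FileSystem fs = FileSystem.get(conf);
        Path doomed = new Path("/data/reports/q3.csv"); // hypothetical path

        // Move the file into the current user's .Trash directory instead of
        // deleting it outright; returns false if trash is disabled.
        boolean moved = Trash.moveToAppropriateTrash(fs, doomed, conf);
        System.out.println(moved ? "Moved to trash" : "Trash disabled; nothing moved");

        // Recovery is a plain rename out of .Trash while the interval lasts.
        Path inTrash = new Path(fs.getHomeDirectory(), ".Trash/Current/data/reports/q3.csv");
        if (fs.exists(inTrash)) {
            fs.rename(inTrash, doomed);
        }
        fs.close();
    }
}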


These are just two examples showing why it is a misconception that replication alone will always protect your data.


Snapshots


Snapshots should be a complete point-in-time capture of your storage system. They are useful in both the storage and compute layers of Hadoop.


Hadoop's default HDFS snapshot system is a common offering across distributions, but it lacks several key capabilities that you should not live without. Let's walk through some of the problems with HDFS snapshots and the alternatives you can find in a truly enterprise-grade Hadoop distribution:


True Point-In-Time

HDFS snapshots are promoted as a point-in-time recovery system. In reality, they only record changes accurately for files that have been closed. If you depend on an automated snapshot-based recovery system, you have no guarantee that your data was captured in a consistent state. A distribution that offers an enterprise snapshot system with true point-in-time consistency, by contrast, can deliver accurate recovery: the snapshot captures every file and table as it existed when the snapshot was taken, whether open or closed.
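For reference, a basic HDFS snapshot is taken like this through the standard API; the directory and snapshot name are hypothetical, and the open-file caveat above still applies:

// Minimal sketch of taking an HDFS snapshot via the standard Hadoop API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SnapshotExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path("/data/warehouse"); // hypothetical directory

        // An administrator must first mark the directory as snapshottable
        // (equivalent to: hdfs dfsadmin -allowSnapshot /data/warehouse).
        if (fs instanceof DistributedFileSystem) {
            ((DistributedFileSystem) fs).allowSnapshot(dir);
        }

        // Take the snapshot; files still open for write may only be captured
        // up to the last length the NameNode knows about.
        Path snapshot = fs.createSnapshot(dir, "nightly-2024-01-01");
        System.out.println("Snapshot created at " + snapshot);
        fs.close();
    }
}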


Supports All Applications

Many Hadoop applications are not built to support HDFS snapshots, which means you have to make those applications snapshot-aware yourself. You do this through the HDFS API, pushing up-to-date file length information to the NameNode (SuperSync/SuperFlush).


It is difficult to make these applications behave correctly without overwhelming the NameNode. Worse, applications cannot modify files while a snapshot is being created if the integrity of the data is to be preserved. Your Hadoop distribution should have a snapshot system that supports all applications by default.
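As a rough sketch of what making a writer snapshot-aware involves, the standard HDFS stream calls hflush() and hsync() push buffered data out of the client. The path below is hypothetical, and the comments note where HDFS-specific length updates come in:

// Hedged sketch of a "snapshot-aware" writer flushing its output so a
// snapshot taken at that moment reflects the data written so far.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SnapshotAwareWriter {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path logFile = new Path("/data/app/stream.log"); // hypothetical path

        try (FSDataOutputStream out = fs.create(logFile)) {
            out.writeBytes("event-1\n");

            // hflush() pushes buffered data to the DataNodes so new readers see it;
            // hsync() additionally forces it to disk. On HDFS-specific streams, a
            // sync that also updates the file length recorded at the NameNode is
            // what keeps a snapshot's view of an open file current.
            out.hflush();
            out.hsync();

            out.writeBytes("event-2\n");
        }
        fs.close();
    }
}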


No Data Duplication

To be efficient, your snapshot system should not duplicate your data; it should share the same storage as your live information. This eliminates the impact snapshots would otherwise have on performance and scalability. Only one Hadoop distribution boasts a snapshot system that can capture a one-petabyte cluster in seconds, precisely because it has eliminated the duplication of data.


Snapshots are a fantastic defense against user and application errors. However, it is critical that you choose a distribution that supports the most comprehensive snapshot capabilities. Here's a chart comparing MapR snapshots with those included in other HDFS-based distributions:



Data protection techniques like replication and snapshots should be your first line of defense. Evaluate the needs of your business and find a distribution that can ensure the highest degree of protection.
