© 2008-25 SmartData Collective. All Rights Reserved.
How Your Hadoop Distribution Could Lose Your Data Forever

Michele Nemschoff

Many businesses depend on Apache Hadoop to run mission-critical applications. Even more use it to drive crucial business decisions. In both cases, safeguarding data is a high priority.


Relational database users have long depended on foundational protection techniques such as data replication and snapshots. Today, both are also implemented in standard Hadoop architectures. However, their implementation and effectiveness vary across distributions.

Replication


As a baseline measure of data protection, Hadoop replicates your data three times by default. This protects against system failures that are inevitable when working with cluster data processing on commodity hardware. By having your data replicated, Hadoop can continue functioning after the failure of an entire node. The JobTracker will simply reassign the computation to a different node.

Your platform should automate the replication process. For optimal protection it should replicate Hadoop’s file chunks, table regions and metadata. At least one of the replications should also be sent to a different rack.
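As a point of reference, the baseline replication factor in stock HDFS is controlled by the `dfs.replication` property in `hdfs-site.xml`. This is a minimal sketch with illustrative values, not a complete configuration:

```xml
<!-- hdfs-site.xml: baseline replication setting (illustrative value) -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value> <!-- each file block is stored on three DataNodes -->
  </property>
</configuration>
```

Rack-aware replica placement additionally requires that the cluster know its network topology, typically via a topology script configured with `net.topology.script.file.name` in `core-site.xml`.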

Replication is a necessary tool to enhance Hadoop's functionality, but it is not meant to act as a failsafe against data loss. Data loss can occur in many ways that replication cannot resolve. One example is an error within a Hadoop application. A corrupt application can destroy all data replicas while it attempts to process the computation: it will access each replica in turn and destroy all three (illustrated in Figure 1 below).

Figure 1: Corrupt application deleting all three replications of data


User errors are another common cause of data loss that replication may not undo. Users can accidentally delete or replace data without realizing it. In some cases, administrators can use Hadoop's trash feature to remedy this. Unfortunately, the trash gives you a relatively small window in which to restore your data; that is, assuming you catch the user error in the first place.
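For context, the size of that restore window in stock Hadoop is set by `fs.trash.interval` in `core-site.xml`. The value below is illustrative; a value of 0 disables the trash feature entirely:

```xml
<!-- core-site.xml: enable the HDFS trash feature (illustrative value) -->
<configuration>
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value> <!-- minutes a deleted file is kept in .Trash (here: 24 hours) -->
  </property>
</configuration>
```

With trash enabled, files removed with `hdfs dfs -rm` are moved to the user's `.Trash` directory rather than deleted immediately; note that deletes issued with `-skipTrash` bypass this protection.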


These are just a couple of examples showing why it is a misconception to assume that replication will always protect your precious data.


Snapshots


Snapshots should be a complete point-in-time capture of your storage system. They are useful in both the storage and compute layers of Hadoop.


Hadoop’s default HDFS snapshot system is a common offering among distributions, but it lacks several key capabilities that you should not live without. Let’s go through some of the problems found in the HDFS snapshot system and look at alternative solutions that you can find in a truly enterprise-grade Hadoop distribution:


True Point-In-Time

HDFS snapshots are promoted as a point-in-time recovery system. In actuality, HDFS snapshots will only record changes accurately in files you have closed. If you are depending on an automated snapshot backup recovery system, you have no guarantee that your data was captured in a consistent state. By contrast, a distribution that offers an enterprise snapshot system with true point-in-time consistency can deliver accurate recovery: the snapshot captures all files and tables as they were at the moment the snapshot was taken, regardless of whether they are open or closed.
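For context, creating and restoring a stock HDFS snapshot looks roughly like this. The paths and snapshot names are illustrative, and the commands assume a running cluster with the appropriate admin rights:

```shell
# Allow snapshots on a directory (admin operation), then take one
hdfs dfsadmin -allowSnapshot /data/warehouse
hdfs dfs -createSnapshot /data/warehouse nightly-backup

# Snapshots appear under a read-only .snapshot subdirectory;
# restore a file by copying it back out
hdfs dfs -cp /data/warehouse/.snapshot/nightly-backup/orders.csv /data/warehouse/
```

The key caveat discussed above applies throughout: files still open for writing at snapshot time are not guaranteed to be captured at a consistent length.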


Supports All Applications

Many Hadoop applications are not built to support HDFS snapshots. This means that you will have to make many of your applications snapshot-aware. You do this by accessing the HDFS API to send up-to-date file length information to the NameNode (SuperSync/SuperFlush).


It is difficult to make these applications work correctly without overwhelming the NameNode. Worse, applications cannot modify files while a snapshot is being created if the integrity of the data is to be ensured. Your Hadoop distribution should have a snapshot system that supports all applications by default.


No Data Duplication

To increase efficiency, your snapshot system shouldn’t duplicate your data; it should share the same storage as your live information. This eliminates any impact your snapshot system could have on performance and scalability. Only one Hadoop distribution boasts a snapshot system that can capture a one-petabyte cluster in seconds, precisely because it has eliminated the duplication of data.


Snapshots are a fantastic solution in your defense against user and application errors. However, it is critical that you choose a distribution that supports the most comprehensive snapshot capabilities. Here’s a chart comparing MapR snapshots to those included with other HDFS distributions:


[Chart: MapR vs. HDFS snapshot comparison]


Data protection techniques like replication and snapshots should be your first line of defense. Evaluate the needs of your business and find a distribution that can ensure the highest degree of protection.
