Big Data New Age: Hadoop vs Spark

October 23, 2017

Over the past few years, Data Science has matured, and with that maturity the need for a different approach to data and its sheer scale has matured as well. Hadoop has outperformed the newcomer Spark in a number of business applications, but Spark, thanks to its speed and ease of use, has earned its own place in big data. This article examines a common set of attributes of each platform, including fault tolerance, performance, cost, ease of use, security, compatibility, and data processing.

Comparing Hadoop and Spark is difficult because they are similar in many ways yet do not overlap in others. For example, Spark has no file management of its own, so it must rely on HDFS (the Hadoop Distributed File System) or another storage layer. Since both are really data processing engines, the wiser comparison is Hadoop MapReduce versus Spark.

The most important thing to remember is that using Hadoop and Spark is not an either/or scenario: they are not mutually exclusive, and neither is necessarily a drop-in replacement for the other. The two are compatible with each other, which makes the pair an extremely powerful solution for a variety of big data applications.

Performance of Hadoop vs Spark

Spark is fast compared to MapReduce, and comparing the two is tricky because they perform processing differently. Spark is fast because it processes everything in memory. MapReduce uses batch processing and was never built for blinding speed; it was originally set up to continuously gather data from websites, with no requirement for that data in or near real time.
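The difference can be sketched in plain Python (illustrative only, not the real Hadoop or Spark APIs): a MapReduce-style job rereads its input from disk on every pass, while a Spark-style job loads the data into memory once and reuses it.

```python
import os
import tempfile

# Write a small data set to a file, standing in for HDFS.
path = os.path.join(tempfile.mkdtemp(), "numbers.txt")
with open(path, "w") as f:
    f.write("\n".join(str(n) for n in range(1, 101)))

def disk_pass():
    # MapReduce style: every pass rereads the input from disk.
    with open(path) as f:
        return sum(int(line) for line in f)

# Spark style: load once, cache in memory, reuse across passes.
with open(path) as f:
    cached = [int(line) for line in f]

def memory_pass():
    return sum(cached)

# Three iterative passes: the disk version does three full reads,
# while the in-memory version touched the disk only once.
totals_disk = [disk_pass() for _ in range(3)]
totals_mem = [memory_pass() for _ in range(3)]
assert totals_disk == totals_mem == [5050, 5050, 5050]
```

For a single pass the two are equivalent; the gap appears in iterative workloads (machine learning, graph algorithms) where the same data set is traversed many times.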

Ease of Use

Developers and users alike can use Spark's interactive mode to get immediate feedback on queries and other actions. MapReduce has no interactive mode, although add-ons make working with it easier for adopters.


Cost

MapReduce and Spark are both free, open-source software products, and both are designed to run on white-box server systems. The cost differences lie elsewhere: MapReduce uses standard amounts of memory because its processing is disk-based, which means a company has to purchase faster disks and a lot of disk space in order to run MapReduce.

Spark requires a lot of memory, but it can make do with a standard amount of disk running at standard speeds. Some users have complained about the cleanup of temporary files, which are kept for a week to speed up any repeat processing of the same data sets. The disk space used can be provided by SAN or NAS.
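That week-long retention matches the default application-data TTL of Spark's standalone cluster manager. The worker cleanup settings below (from Spark's standalone-mode configuration, set via SPARK_WORKER_OPTS on each worker) can shorten it; the values shown are illustrative, not recommendations:

```
# conf/spark-env.sh on each standalone worker
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=86400"
# cleanup.enabled  : turn on periodic cleanup (off by default)
# cleanup.interval : check for stale work dirs every 1800 s (30 min)
# appDataTtl       : delete per-application work dirs after one day
#                    instead of the default seven (604800 s)
```

Note these settings only affect stopped applications' work directories on standalone workers; cluster managers such as YARN handle cleanup their own way.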

Because of the large RAM requirement, Spark systems cost more per node. However, Spark's technology reduces the number of systems required, so you end up with significantly fewer systems that each cost more. Even with the additional RAM requirement, Spark reduces the cost per unit of computation.

Processing of Data

MapReduce is a batch processing engine that operates in sequential steps. Spark performs similar operations, but in a single step and in memory.
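As a rough sketch in plain Python (not the actual Hadoop or Spark APIs), the classic word count shows the shape of the two models: MapReduce materializes each phase (map, shuffle, reduce) before the next begins, while a Spark-style pipeline chains the same transformations in memory in one step.

```python
from collections import Counter, defaultdict

lines = ["big data", "big spark", "data data"]

# --- MapReduce style: explicit sequential phases ---
# Map phase: emit (word, 1) pairs; on a real Hadoop cluster this
# output would be written to disk before the next phase runs.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle phase: group the emitted values by key.
grouped = defaultdict(list)
for word, one in mapped:
    grouped[word].append(one)

# Reduce phase: sum the grouped values per key.
reduced = {word: sum(ones) for word, ones in grouped.items()}

# --- Spark style: the same logic chained in memory in one step ---
counts = Counter(word for line in lines for word in line.split())

assert reduced == dict(counts) == {"big": 2, "data": 3, "spark": 1}
```

Both routes produce identical counts; the difference is that the MapReduce version pauses at every phase boundary, which is exactly where Hadoop writes intermediate results to disk.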


Security

Hadoop supports Kerberos authentication, which is notoriously difficult to manage. Nevertheless, third-party vendors have enabled organizations to leverage Active Directory Kerberos and LDAP for authentication. Those same vendors also provide encryption for data in flight and data at rest.

Summary of Hadoop vs Spark  

Spark would be the default choice for any big data application, but MapReduce has made its way into the big data market for businesses that need huge data sets brought under control by commodity systems. MapReduce's low cost of operation can be weighed against Spark's agility, relative ease of use, and speed. There is a symbiotic relationship between Spark and Hadoop: Spark provides real-time, in-memory processing for the data sets that require it, while Hadoop provides features that Spark does not.