Google and Apache Hadoop: A Match Made in the Cloud
To the uninitiated, words like “Google” and “Hadoop” sound like the stuff of a futuristic make-believe world. Being that the MapReduce paper published by Google scientists Jeffrey Dean and Sanjay Ghemawat in 2004 inspired Hadoop, the coming together of Hadoop and Google is a match made in the cloud. And the partnership between MapR and Google to run MapR’s Enterprise Distribution for Hadoop on Google Compute Engine is anything but science fiction. Here’s a look at some of the major benefits of using Hadoop on Google Compute Engine.
Running Hadoop on Google Compute Engine leverages the power and efficiency of Google’s data centers to execute at scale and solve large problems. Utilizing the Google Cloud Platform, enterprises have the flexibility to expand or contract the cluster size on demand to provision precisely the amount of resources required to meet their data processing needs.
World-record speed and performance
With MapR’s Enterprise Distribution for Hadoop on Google Compute Engine, it’s possible to spin up well over a thousand servers in a matter of minutes and run scalable applications at blazing speeds. In fact, MapR ran Hadoop on Google Compute Engine and set a world record for MinuteSort, sorting 15 billion 100-byte records in just 60 seconds. The record run used 2,103 virtual instances, each with four virtual cores and a virtual disk.
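A quick back-of-the-envelope check puts those record numbers in perspective (the derived figures below are simple arithmetic on the record stats quoted above, not additional claims from MapR):

```python
# Back-of-the-envelope arithmetic on the MinuteSort record quoted above.
records = 15_000_000_000      # 15 billion records sorted
record_size = 100             # bytes per record
seconds = 60                  # MinuteSort window
instances = 2_103             # virtual instances used

total_bytes = records * record_size                # 1.5e12 bytes = 1.5 TB
aggregate_gb_per_s = total_bytes / seconds / 1e9   # cluster-wide throughput
per_instance_mb = total_bytes / instances / 1e6    # data handled per instance

print(f"{total_bytes / 1e12:.1f} TB sorted")       # 1.5 TB
print(f"{aggregate_gb_per_s:.0f} GB/s aggregate")  # 25 GB/s
print(f"{per_instance_mb:.0f} MB per instance")    # 713 MB
```

In other words, the cluster moved roughly 25 GB of data per second in aggregate, with each virtual instance responsible for sorting only about 713 MB in the minute.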
The Hadoop/Google virtualized cloud environment set the record using far fewer servers, disks, and cores than Yahoo used in setting the prior record. Put simply, Hadoop on Google Cloud Platform not only does more with less, it does so faster than the biggest and best on-premises Big Data platforms. This level of performance lets enterprises tackle large-scale workloads quickly and easily, gaining deeper business insights and a competitive advantage that drives higher ROI.
According to MapR CEO John Schroeder, who discussed Hadoop and Google Compute Engine at Google I/O, the physical hardware an enterprise would need to approximate what Yahoo used for its 62-second benchmark would conservatively cost $6 million to acquire and take several months to install. And those estimates, Schroeder explains, don’t factor in the electrical capacity needed to handle the server load, not to mention the 50-75 tons of air conditioning required to cool the data center. In contrast, Schroeder offers that running Hadoop on Google Compute Engine for the 54 seconds it took to set the new 1TB TeraSort benchmark cost a mere $16.
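The gap between those two figures is easy to quantify (a rough sketch using only the numbers Schroeder cites above; it counts hardware acquisition only and, as he notes, ignores electricity, cooling, and staffing, which would make the on-premises side even more expensive):

```python
# Rough break-even comparison using the figures quoted above.
hardware_cost = 6_000_000   # conservative on-premises acquisition cost, USD
cloud_run_cost = 16         # one 1TB TeraSort run on Compute Engine, USD

# Number of benchmark-scale runs before the hardware purchase alone
# breaks even against pay-per-use cloud pricing.
break_even_runs = hardware_cost // cloud_run_cost
print(break_even_runs)      # 375000 runs
```

On acquisition cost alone, an enterprise could run the benchmark workload 375,000 times in the cloud before matching the up-front hardware bill.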
Using Google as the cloud provider eliminates the large capital outlay for on-premises servers that must be swapped out for newer models every three years and may never run at full capacity. Enterprises pay Google only for the resources they actually use to meet their data processing demands, and the costs of running Enterprise Hadoop on Google Compute Engine are extremely reasonable compared to traditional infrastructure.
In short, if you’re looking for a flexible, fast, and cost-effective Big Data platform, MapR’s Hadoop distribution running on Google Compute Engine just might be the right solution for your business.