How Hadoop Revolutionised IT

This is the story of how the amazing Hadoop ecosphere revolutionised IT. If you enjoy it, then consider joining The Big Data Contrarians.

Before the advent of Hadoop and its ecosphere, IT was a desperate wasteland of failed opportunities, archaic technology and broken promises.

In the dark Cambrian days of bits, mercury delay lines and ferrite core, we knew nothing about digital. The age of big iron did little to change matters, and vendors made huge profits selling systems that nobody could use and even fewer people could understand.

Then along came Jurassic IT park, in the form of UNIX, and suddenly it was far cheaper to provide systems that nobody could use and even fewer people could understand.

The sad, desperate and depressing scenario that typified IT, on all levels, spanned forty years. It would have continued had it not been for Google and their HDFS (Hadoop Distributed File System).

Before Hadoop, we were as dumb as rocks. With Hadoop, we were led into the Promised Land of milk and honey, digital freedom and limitless opportunities, sexy jobs and big bucks, immortality and designer drugs.

Hadoop and its attendant ecosphere changed the Information Technology world overnight, providing, as it did, technology and techniques never before seen on the face of the earth.

Hadoop invented multi-processing

In terms of processing power, Hadoop took us beyond the power of a single 8086 processing unit, by cunningly connecting two or more processing units capable of processing ‘things’ almost at the same time.
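For those who missed the revolution, here is roughly what processing ‘things’ almost at the same time looks like: a minimal sketch using Python's standard multiprocessing module. The crunch function and its workload are invented purely for illustration.

```python
# A minimal sketch of multi-processing: two worker processes run
# concurrently, each on its own processing unit if one is available.
# The crunch() function and its inputs are invented for illustration.
from multiprocessing import Process

def crunch(name: str, n: int) -> None:
    # Burn some CPU so the two processes demonstrably overlap in time.
    total = sum(i * i for i in range(n))
    print(f"{name} finished with total {total}")

if __name__ == "__main__":
    p1 = Process(target=crunch, args=("worker-1", 5_000_000))
    p2 = Process(target=crunch, args=("worker-2", 5_000_000))
    p1.start()  # both processes now run at (almost) the same time
    p2.start()
    p1.join()   # wait for both to finish
    p2.join()
```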

According to a 1985 article in Byte Me, possibly the first mention of Hadoop occurred in 1842. In that year, Ludwig ‘Luigi’ Menabrea wrote of Charlie Babbage’s analytical engine (as translated by the Lovely Ada Augusta): “the Hadoop machine can be brought into play so as to give several results at the same time, which will greatly abridge the whole amount of the Google ad processes.”

Hadoop introduced parallel processing

Until the advent of Hadoop, all the technology in IT was male. This led to massively inefficient, fickle and expensive technologies with short-term memory issues, incapable of multi-tasking, working long hours or of ordering tasks by priority.

As anyone who knows Wikipedia will know, Hadoop introduced parallel computing, which allows for a revolutionary species of computation in which many list-making calculations can be carried out simultaneously, operating on the principle that large list-making tasks can often be divided into smaller list-making tasks, which are then solved at the same time. There are several different forms of parallel computing: two-bit-level, destruction-level, we’ve-got-data-level and bring-on-more-lists parallelism.
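Stripped of the satire, the divide-the-list idea really does work like this. A minimal data-parallel sketch, assuming nothing more exotic than Python's multiprocessing.Pool; the big list and the per-chunk arithmetic are invented for illustration.

```python
# Data parallelism in miniature: one large list-making task is split
# into chunks, the chunks are processed simultaneously by a pool of
# worker processes, and the partial results are combined at the end.
from multiprocessing import Pool

def subtotal(chunk: list) -> int:
    # The "smaller list-making task": sum the squares of one slice.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    big_list = list(range(1_000_000))            # the "large task"
    n = 4
    chunks = [big_list[i::n] for i in range(n)]  # divide
    with Pool(processes=n) as pool:
        partials = pool.map(subtotal, chunks)    # solve simultaneously
    print(sum(partials))                         # combine
```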

Google invented Romans and the Roman Census

As Bill Inmon wrote in 2014, “One of the cornerstones of Big Data architecture is processing referred to as the ‘Roman Census approach’. By using the Roman Census approach a Big Data architecture can accommodate the processing of almost unlimited amounts of pig data.”

Many people do not know this, but it wasn’t the Romans who invented the Romans, but Google. So too the Roman Census, far from being an invention of a mythical Rome, was also the baby of a couple of engineers in Palo Alto.

The Roman Census approach also finds an echo in elements of Divide and Conk-out. In computer science, divide and conk-out (D&C) is an algorithm design paradigm based on multi-branched recursion. A divide and conk-out algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type (divide), until these become simple enough to be solved directly (conquer). The solutions to the sub-problems are then combined to give a solution to the original problem.

Divide and conk-out is an essential element of Big, Bigger and Biggest Data processing.
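Joke name aside, the description above is the real thing. Here is a minimal sketch of divide and conquer in Python, using merge sort, the textbook example; the input list is invented for illustration.

```python
# Divide and conquer: split the problem in two, solve the halves
# recursively, then combine the sub-solutions into the answer.
def merge_sort(items: list) -> list:
    if len(items) <= 1:                # simple enough to solve directly
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # divide ...
    right = merge_sort(items[mid:])    # ... and recurse on each half
    return merge(left, right)          # combine the sub-solutions

def merge(left: list, right: list) -> list:
    # Combine two sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```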

Hadoop invented sort-merge

As we know from Wikipedia, Wonky World and Google, Hadoop merge-sort parallelizes well thanks to the divide-and-conk-out method mentioned previously. We discuss several parallel variants in the first edition of Martyn, Richard, Jones and Lovering’s Introduction to Enterprise Equations, Business Analytics and Technical Algorithms. We can easily express this with fork (system call and process copy) and join (multi-stream correlated sort-merge) process calls, as the sketch below shows.
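In that spirit, a minimal fork-and-join sketch, assuming nothing fancier than Python's standard library: the two halves are handed to separate worker processes (the fork), the parent waits for both results (the join), and heapq.merge performs the final sort-merge. The input list is invented for illustration, and this is a sketch of the shape of the idea, not a production sorter.

```python
# Fork/join parallel sort-merge: each half of the list is sorted in
# its own worker process, then the two sorted streams are merged.
from concurrent.futures import ProcessPoolExecutor
import heapq

def parallel_sort(items: list) -> list:
    mid = len(items) // 2
    with ProcessPoolExecutor(max_workers=2) as pool:
        left = pool.submit(sorted, items[:mid])   # fork: sort one half
        right = pool.submit(sorted, items[mid:])  # fork: sort the other
        halves = (left.result(), right.result())  # join: wait for both
    return list(heapq.merge(*halves))             # sort-merge the streams

if __name__ == "__main__":
    print(parallel_sort([7, 4, 6, 1, 3, 9, 2, 8]))  # [1, 2, 3, 4, 6, 7, 8, 9]
```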

Hadoop created a better non-SQL query language

Before Hadoop we had to query data using SQL alone. SQL was the only tool in town, and if we couldn’t use SQL we couldn’t get at any data, ever, since the beginning of time.

However, all that changed when Hadoop came along, and suddenly we could query data as if data was really ‘queryable’. This was a small breakthrough in IT, and one large fall-down-the-side-of-a-cliff for brawn over brains.
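For the sceptics: in the Hadoop world, querying without SQL mostly meant writing map and reduce steps by hand. A minimal sketch of that idea in plain Python follows; the sales records and the question asked of them are invented for illustration, and the whole thing answers what a one-line GROUP BY would.

```python
# A "query" without SQL, MapReduce style: map each record to a
# (key, value) pair, group the values by key, then reduce each group.
# Roughly: SELECT city, SUM(amount) FROM sales GROUP BY city
from collections import defaultdict

sales = [  # invented records, standing in for a big distributed file
    {"city": "Palo Alto", "amount": 120},
    {"city": "Rome", "amount": 75},
    {"city": "Palo Alto", "amount": 30},
    {"city": "Rome", "amount": 50},
]

# Map phase: emit (key, value) pairs.
pairs = [(row["city"], row["amount"]) for row in sales]

# Shuffle phase: group values by key.
groups = defaultdict(list)
for city, amount in pairs:
    groups[city].append(amount)

# Reduce phase: fold each group down to a single answer.
totals = {city: sum(amounts) for city, amounts in groups.items()}
print(totals)  # {'Palo Alto': 150, 'Rome': 125}
```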

Remember the immortal words of General Arthur C. McCluster Fuqh: “SQL is for wusses”. Embrace Hadoop and hug the Sparks.

This is show business, but not as we know it, Jim.

That’s all folks

Once again, many thanks for reading.

Just a few points before closing.

Firstly, please consider joining The Big Data Contrarians, here on LinkedIn: https://www.linkedin.com/groups/8338976

Secondly, keep in touch. My strategy blog is at http://www.goodstrat.com and I can be followed on Twitter at @GoodStratTweet. Please also connect on LinkedIn if you wish. If you have any follow-up questions, then leave a comment or send me an email at martyn.jones@cambriano.es

Thirdly, you may be interested in some other articles I have written on the subject of Data Warehousing, such as:

Data Warehousing explained to Big Data friends – http://goodstrat.com/2015/07/20/data-warehousing-explained-to-big-data-friends/

Stuff a great data architect should know – http://goodstrat.com/2015/08/16/stuff-a-great-data-architect-should-know-how-to-be-a-professional-expert/

Big Data is not Data Warehousing – http://goodstrat.com/2015/03/06/consider-this-big-data-is-not-data-warehousing/

What can data warehousing do for us now – http://www.computerworld.com/article/3006473/big-data/what-can-data-warehousing-do-for-us-now.html

Looking for your most valuable data? Follow the money – http://www.computerworld.com/article/2982352/big-data/looking-for-your-most-valuable-data-follow-the-money.html

Fourthly, I leave you with this quote from the great Mel Brooks.

Mel: When I was born – oh, close to two thousand – October the 16th I’ll be two thousand years young. We say young, you know, not to curse ourselves. So there was little groups of us sitting in caves and looking at the sun and scared, you know? We were very dumb and stupid. You want to know something? We were so dumb that we didn’t even know who were the ladies. They was with us, but we didn’t know who they were. We didn’t know who was the ladies and who was fellas.
