Truly Distributed Analytics

The growth and success of Hadoop is very interesting. It is emerging as a highly significant technology for the data scientist: a platform that can scale to accommodate data exploration across some of the largest datasets in existence today. Yahoo, I’m told, runs a 43,000-node Hadoop cluster. The mind boggles at the volume of data being crunched by this cluster and others like it. Hadoop is distributed. More specifically, it is a distributed system: a cluster of servers acting together to process a sequence of user-initiated jobs.
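
For the curious, here is a rough sketch of what one of those user-initiated jobs can look like using Hadoop Streaming, which lets you write the map and reduce steps as plain scripts. The word-count task and the two-file layout are illustrative assumptions on my part, not a description of any particular production job.

```python
#!/usr/bin/env python
# mapper.py -- one half of an illustrative Hadoop Streaming job.
# Reads raw text lines on stdin and emits (word, 1) pairs.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t1" % word)
```

```python
#!/usr/bin/env python
# reducer.py -- Hadoop delivers the mapper output sorted by key,
# so each word's counts can be summed in a single pass.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print("%s\t%d" % (current_word, count))
```

You would submit these through the hadoop-streaming jar, pointing the job’s input and output at paths inside the cluster’s own filesystem, and the cluster takes care of spreading the work across its nodes.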

While the system may be considered distributed, the data being analyzed is, for all intents and purposes, centralized. The data at the centre of an analysis job must be located within your cluster and directly accessible by your local applications. This means that as the volume of data under the microscope grows, the analytics platform must grow to accommodate the influx of information.

However, as data science expands, external data sources are becoming increasingly relevant to analytics. External data is data that relates to your business but is not produced within your organization. Examples include environmental data (weather), geographic data (maps, places, addresses, etc.), and shipping and delivery data. External data can provide insight into irregularity and opportunity within your own datasets that, without it, could be overlooked or misunderstood.

While I spoke about this the other day somewhat in jest, some silly but simple examples might be discovering that it is beneficial to increase advertising targeted at people in their 30s to 50s while “The O.C.” is on TV, or to boost the advertising of certain novels in regions where it is currently pouring down rain. These opportunities can’t be discovered until your data is combined with externally sourced data (television schedules, weather reports, and so on).
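
To make that less hand-wavy, here is a minimal sketch of the kind of join involved, assuming pandas; the sales and rainfall figures, column names, and the 5&nbsp;mm “rainy day” cut-off are all invented for illustration.

```python
import pandas as pd

# Hypothetical local sales data and an external weather feed,
# joined to test the "rain boosts novel sales" hunch.
sales = pd.DataFrame({
    "date":        ["2012-03-01", "2012-03-02", "2012-03-03"],
    "region":      ["north", "north", "north"],
    "novels_sold": [120, 340, 95],
})
weather = pd.DataFrame({
    "date":        ["2012-03-01", "2012-03-02", "2012-03-03"],
    "region":      ["north", "north", "north"],
    "rainfall_mm": [0.0, 18.5, 1.2],
})

merged = sales.merge(weather, on=["date", "region"])

# Compare average sales on rainy days versus dry days.
rainy = merged["rainfall_mm"] > 5.0
print(merged.groupby(rainy)["novels_sold"].mean())
```

The interesting part is not the three-line join itself but that neither dataset alone could have surfaced the pattern.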

External data at the moment tends to be quite small and discrete, so the current approach is to import it into the local analytics environment. Organizations such as Infochimps are doing a great job of organizing these external datasets and providing APIs for importing them into whatever localized analytics platform you are running. However, as the importance and volume of external data grow, I believe the cost of “importing” this data will grow too, and in certain cases the volume of external data may become significantly greater than that of the local data. Identifying which external data is relevant will also become a job for analytics itself.
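
In practice, that import step is often as simple as the following sketch. The endpoint URL and response shape here are hypothetical, invented for illustration; I am not describing Infochimps’ actual API.

```python
import json
import urllib.request

# Hypothetical external data API -- the URL and response format
# are placeholders, not a real service.
URL = "https://api.example.com/datasets/weather/daily?region=north"

with urllib.request.urlopen(URL) as resp:
    records = json.loads(resp.read())

# Land the external records next to the local data for analysis.
with open("weather_daily.json", "w") as out:
    json.dump(records, out)
```

This works fine while the external dataset fits comfortably on the wire and on local disk; the point of this post is what happens when it no longer does.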

While it is early days, one project I am very excited about focuses on how analytics can be distributed between systems and even organizations. Rather than centralizing large sets of data, the analytics jobs themselves span organizations and data centres, and, of course, do so while respecting the security and privacy expectations of all parties in the process.
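
To give a flavour of the idea, here is one way “shipping the job to the data” might look. The partner endpoint and the query format are entirely hypothetical; this is a sketch of the shape of the interaction, not the project’s actual protocol.

```python
import json
import urllib.request

def remote_aggregate(endpoint, query):
    """Send an aggregate query to a partner's endpoint and return
    only the summarized answer -- raw rows never leave their side."""
    body = json.dumps(query).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# The partner runs the count inside their own data centre; we only
# ever see the aggregate result.
result = remote_aggregate(
    "https://partner.example.com/analytics",
    {"metric": "count", "table": "shipments", "where": {"delayed": True}},
)
print(result)
```

Because only aggregates cross the boundary, each party keeps control of its raw data, which is precisely the security and privacy property such a scheme needs to preserve.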
