New Technology Is Not an Easy Button for Big Data

By Bill Franks

It is good to remember in today’s hype-filled big data world that there is no “easy” button for big data. In fact, in many ways, big data is quite difficult to deal with. Many organizations seem to be falling for the fallacy that simply implementing new tools or platforms will “automagically” solve their big data problems. Unfortunately, this isn’t the case.

For example, there is a common belief that MapReduce platforms such as Teradata Aster or Hadoop can tame big data in and of themselves. In reality, they don’t inherently enable new functionality or analytic logic. Rather, they allow you to scale certain kinds of functionality and analytic logic, which makes that logic far more powerful and widely applicable.
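To make that distinction concrete, here is a minimal word-count sketch (in Python, purely for illustration; the point is the same in Java or any other language). The mapper and reducer are ordinary functions that could have been written long before MapReduce existed; what a platform like Hadoop contributes is the machinery to run them across a cluster, not the analytic logic itself.

```python
from collections import defaultdict

# Ordinary analytic logic: nothing in these two functions requires
# a MapReduce platform, and both could have been written years ago.
def mapper(line):
    """Emit (word, 1) pairs for one line of text."""
    return [(word.lower(), 1) for word in line.split()]

def reducer(word, counts):
    """Combine all the counts emitted for a single word."""
    return word, sum(counts)

# A local, single-process stand-in for the shuffle and reduce phases.
# On a platform like Hadoop, the same mapper and reducer would run
# across a cluster: the framework supplies distribution, not logic.
def run_word_count(lines):
    grouped = defaultdict(list)
    for line in lines:
        for word, count in mapper(line):
            grouped[word].append(count)
    return dict(reducer(w, c) for w, c in grouped.items())

print(run_word_count(["big data is big", "data takes work"]))
# {'big': 2, 'data': 2, 'is': 1, 'takes': 1, 'work': 1}
```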

This is an important distinction – and one I want to explore in detail.

Many organizations seem to be thinking of MapReduce as a magic bullet or “easy” button for handling big data. Just set up a system, and your big data problems are solved, right? Wrong. Once the system is in place, it is still necessary to develop the analytic processes that run against it.  There really is no shortcut here. If you want great analytics, you’re going to have to build your processes just like you always have. Organizations that don’t understand this fact will be disappointed when they realize they aren’t instantly getting the value they expected from their investment.

As I said earlier, MapReduce doesn’t inherently enable new functionality. When you hear about MapReduce environments, you will quickly come to a discussion of leveraging languages such as Java or Python. It just so happens that these languages have been around for quite a while. They had strong followings before the concept of MapReduce came into existence. Most users of these languages have never used, and may never use, a MapReduce architecture as part of their work. However, they code away day to day, developing processes just like their big-data-focused counterparts.

What many people don’t take the time to think about is that whatever logic you develop today in Java to run in a MapReduce environment is something you could have written in Java years ago. The exact same code, the exact same output for a given piece of data. This is why I said that MapReduce doesn’t directly cause any new analytic logic to come into existence. Rather, MapReduce provides a highly scalable platform so that logic can be executed at a scale far surpassing what was possible in the past.

This last point is the value that MapReduce brings. Having a terrific facial recognition or text parsing algorithm doesn’t do much good if there is no way to scale the process to a big data environment. MapReduce provides that ability.  It lets organizations apply algorithms to a much wider base of problems and a much larger amount of data. It allows logic that wasn’t practical to build into your analytic processes to become practical.

This is no different from how parallel database platforms provide value. A massively parallel processing (MPP) database system runs SQL just like a non-MPP system does. An MPP system doesn’t enable new functionality in the absolute sense, but it does provide the ability to scale an SQL process. As a result, it enables far more value to be derived, and a much wider set of problems to be practically addressed, than a non-MPP architecture does.

In summary, we can expect MapReduce to continue to be a force behind the taming of big data. But the onus will still be on the organizations that use it to develop and implement the required analytic processes, just as they always have. Many analytics that were theoretically possible, but impractical, will now be within reach, and that will lead to a lot of value. The key is to understand what the architecture will do for you, and to not underestimate the effort required to use it correctly. It will take work to get the benefits. There is no “easy” button for big data.

To see a video version of this blog, visit my YouTube channel.

Originally published by the International Institute for Analytics

 

Bill Franks is Chief Analytics Officer for The International Institute For Analytics (IIA). Franks is also the author of Taming The Big Data Tidal Wave and The Analytics Revolution. His work has spanned clients in a variety of industries, ranging in size from Fortune 100 companies to small non-profit organizations. You can learn more at http://www.bill-franks.com.
