What’s in Store for Big Data Analytics in 2016
It’s that time of year again for predictions on all sorts of topics. Solid predictions are usually built by studying past and present trends and then projecting those trends into the coming year. Since I spend a lot of time studying trends in big data and analytics, here are my predictions for the upcoming year.
Big Data will Triumph over Global Troubles
While there were awesome use cases, big data in 2015 was still somewhat of a science experiment. This year there is hope for major breakthroughs in solving some of the world’s most challenging problems with big data. Organizations are already doing amazing things, but we’re just scratching the surface of what we can accomplish. I’ve had several conversations with clients who are looking to map the human genome and tackle diseases like cancer and Alzheimer’s by mapping the genes linked to them. I believe breakthroughs here are imminent, thanks to our growing ability to handle huge data volumes and perform ever-faster analytics.
But that’s not all. People are using big data science for transportation research, making planes, trains and automobiles smarter and more efficient. Non-profits are using big data to drive decisions about conservation and ecology. We have a real opportunity this year to make the world a better place with big data. Data is the new currency of scientific breakthroughs; the capability we now have to crunch through it with our algorithms is the disruptor.
Algorithms will be the New Edge
2016 is sure to be a year for using algorithms, specifically predictive analytics, to boost company revenue. Analyst firms like Gartner predict that differentiated algorithms alone will help corporations boost revenue by 5% to 10% in the near future. Algorithms will make the best use of the huge volumes of customer-generated data coming from our phones, devices and the Internet of Things to formulate more helpful, targeted offers for prospects and customers. New, younger companies will leverage predictive analytics to disrupt their markets and potentially unseat the established leaders. Beyond generating new revenue, predictive analytics can also improve power delivery and consumption, medical research and treatment, and other lofty human problems.
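To make "predictive analytics" concrete, here is a minimal, purely illustrative sketch: a least-squares model that predicts customer spend from past engagement. The data and field names are hypothetical, and a real targeted-offer system would use far richer features and models; this just shows the basic idea of fitting on history and predicting for a new customer.

```python
# Illustrative sketch only: ordinary least squares on hypothetical
# customer data. Not any vendor's actual algorithm.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical history: (site visits per month, dollars spent)
visits = [2, 5, 8, 12, 15]
spend = [20, 48, 85, 118, 149]

a, b = fit_line(visits, spend)
# Predict spend for a prospect with 10 visits per month.
predicted = a * 10 + b
print(round(predicted, 1))
```

The "differentiated" part in practice is rarely the fitting routine itself; it is the proprietary data it is trained on, which is why the customer-generated data streams above matter so much.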
It’s difficult to see whether the algorithms themselves will become a market, as some analysts say, or whether we will share most of our algorithms within our communities of data scientists. I think society will benefit more from an open source approach here, and the young minds who develop the algorithms will probably be more willing to take an open approach. Think about it: if you could predict Alzheimer’s disease with your algorithm, wouldn’t you want to share it with the world?
Hybrid Architectures will Rule in 2016
When it comes to big data analytics, companies are adopting a right-tool-for-the-job strategy. Daily analytics on proprietary data run on-premises, in data warehouses holding ever-growing data volumes. Small, short-lived projects are often deployed in the cloud, and Hadoop is often used to keep costs low for data that is important or needs to be farmed for mission-critical information. Finally, technologies like Spark, though still in their infancy, are starting to help with real-time, operational analytics.
It will be up to the vendors and the open source community to provide some consistency across these different deployment strategies. Information workers really won’t care where a workload is running, just that they can use their favorite visualization tools, SQL, R and Python. Sometimes these workloads run in their own environments, but vendors can reduce the work involved if, for example, you want to move a cloud project on-premises. By offering a consistent SQL dialect across these deployment architectures, they can spare you the headaches of a hybrid environment.
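The portability argument above can be sketched in a few lines. This toy example uses two in-memory SQLite databases purely as stand-ins for a cloud project and an on-premises warehouse; real hybrid stacks would use different drivers and engines, but the point it illustrates is that with a consistent SQL dialect, the query text itself moves between environments unchanged.

```python
import sqlite3

# One query string, reused verbatim against both "environments".
QUERY = ("SELECT region, SUM(amount) AS total "
         "FROM sales GROUP BY region ORDER BY region")

def load(conn, rows):
    """Create a tiny sales table and insert sample rows."""
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

cloud = sqlite3.connect(":memory:")    # stand-in for a cloud project
onprem = sqlite3.connect(":memory:")   # stand-in for the on-prem warehouse

load(cloud, [("east", 100.0), ("west", 250.0)])
load(onprem, [("east", 400.0), ("west", 50.0)])

# Same SQL, two environments -- no rewrite needed to move the workload.
for name, conn in [("cloud", cloud), ("on-prem", onprem)]:
    print(name, conn.execute(QUERY).fetchall())
```

When the dialect diverges between engines, it is exactly this query text that has to be rewritten during a migration, which is the hybrid-environment headache the vendors can help avoid.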
Open Source will Attain New Maturity
I’ve written many times about the hype around Hadoop and the maturity of the Hadoop platform compared to commercially available software. Let’s face it: many open source solutions for big data analytics were somewhat immature in 2015. As I mentioned in my last post, it’s a matter of taking software that is extremely useful and spending a few years overcoming its shortcomings and building out a complete platform for big data analytics. My prediction is that the Hadoop community will do exactly that, and we’ll see greater maturity in 2016. With greater maturity will come wider adoption.
That said, I have observed that the open source community tends to focus on the start and not the finish. For example, over the past few years, SQL users have heard about many flavors of SQL on Hadoop. Spark seems to be the latest and coolest project offering SQL analytics on big data, and it shows great promise. However, the shift seems to be toward new projects and away from making the existing ones work better.
Hewlett Packard Enterprise’s Role
I was inspired to write these predictions by a webinar I attended in which executives from Hewlett Packard Enterprise and industry influencers shared their vision for 2016. For more information, watch the replay video here. Hewlett Packard Enterprise (HPE) has a role to play in making these predictions come true. HPE’s vision starts with the understanding that data fuels the new style of business driving the idea economy. Data will distinguish the disruptors from the disrupted. Big data promises new customers, better experiences and new revenue streams, but every opportunity comes with challenges. The recipe for success is to continuously iterate on which questions to ask, which data to analyze and how to use the insights at all levels of your organization.