Two Talks on Data Science, Big Data and R


On Thursday next week (November 1), I’ll be giving a new webinar on the topic of Big Data, Data Science and R. Titled “The Rise of Data Science in the Age of Big Data Analytics: Why Data Distillation and Machine Learning Aren’t Enough”, it’s a provocative look at why data scientists cannot be replaced by technology, and why R is the ideal environment for building data science applications. Here’s the abstract:

Big Data is important because we want to use it to make sense of our world. It’s tempting to think there’s some “magic bullet” for analyzing big data, but simple “data distillation” often isn’t enough, and unsupervised machine-learning systems can be dangerous. (Like, bringing-down-the-entire-financial-system dangerous.) Data Science is the key to unlocking insight from Big Data: by combining computer science skills with statistical analysis and a deep understanding of the data and the problem, we can not only make better predictions, but also fill in gaps in our knowledge, and even find answers to questions we hadn’t thought to ask yet.

In this talk, David will:

  • Introduce the concept of Data Science, and give examples of where Data Science succeeds with Big Data … and where automated systems have failed.
  • Describe the Data Scientists’ Toolkit: the systems and technology components Data Scientists need to explore, analyze and create data apps from Big Data.
  • Share some thoughts about the future of Big Data Analytics, and the diverging use cases for computing grids, data appliances, and Hadoop clusters.
  • Discuss the skills needed to succeed.
  • Talk about the technology stack that a data scientist needs to be effective with Big Data, and describe emerging trends in the use of various data platforms for analytics: specifically, Hadoop for data storage and data “refinement”; data appliances for performance and production; and computing grids for data exploration and model development.

You can register for this free webinar at the Revolution Analytics website.

Also, if you’re attending the Strata / Hadoop World conference in New York this week, be sure to check out Thursday’s talk by Steve Yun from Allstate Insurance and Joe Rickert from Revolution Analytics, which will include some real-world benchmarks of doing big-data predictive modeling with Hadoop and Revolution R Enterprise.

Start Small Before Going Big

The availability of Hadoop and other big data technologies has made it possible to build models with more data than statisticians of even a decade ago would have thought possible. However, the best practices for effectively using massive amounts of data in the construction and evaluation of statistical models are still being invented. As is the case with most difficult problems: “If you’re not failing, you’re not trying hard enough”. The majority of ideas tried do not work. Best practices should include keeping failures small and inexpensive, quickly eliminating approaches that are unlikely to work out, and keeping track of these failures so they won’t be repeated. Every development environment should encourage trying multiple approaches to problem solving.
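To make the “fail fast and cheap” idea concrete, here is a minimal sketch (not code from the talk) of the kind of rapid iteration an interactive R session allows. The data frame, column names and candidate models below are hypothetical stand-ins for a sample drawn out of a much larger dataset.

```r
# Minimal sketch: iterate quickly on a small sample before committing to a
# full-scale cluster run. Base R only; all names here are hypothetical.
set.seed(42)

# Stand-in for a sample extracted from the full dataset (in practice, e.g.
# a random extract pulled down from HDFS).
n <- 10000
sample_df <- data.frame(
  claim_cost  = rgamma(n, shape = 2, scale = 500),
  driver_age  = sample(18:80, n, replace = TRUE),
  vehicle_age = sample(0:20, n, replace = TRUE)
)

# Try several candidate model forms cheaply; discard the failures fast.
candidates <- list(
  linear      = claim_cost ~ driver_age + vehicle_age,
  quadratic   = claim_cost ~ poly(driver_age, 2) + vehicle_age,
  interaction = claim_cost ~ driver_age * vehicle_age
)

fits <- lapply(candidates, function(f) {
  glm(f, data = sample_df, family = Gamma(link = "log"))
})

# Compare on a quick criterion (here AIC) and keep notes on what failed,
# before scaling the winner up to the full data.
sapply(fits, AIC)
```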

This talk presents a case study of statistical modeling in the insurance industry, and examines the trade-offs between working with all of the data in a Hadoop cluster (with its complex programming, significant set-up times and batch-like programming mentality) versus rapidly iterating through models on smaller data sets in a dynamic R environment, at the possible expense of model accuracy. We will examine the benefits and shortcomings of both approaches, with model accuracy, job execution time and overall project time among the performance measures. Technologies examined will include programming a Hadoop cluster from R using the RHadoop interface and the RevoScaleR package from Revolution Analytics.
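For readers unfamiliar with the two technologies named in the abstract, here is a hedged sketch of what each route looks like from the R prompt. It is not code from the talk: the file path, variable names and cluster connection settings are placeholders, though the functions themselves (rmr2’s to.dfs()/mapreduce()/keyval() and RevoScaleR’s RxHadoopMR()/rxLinMod()) are those packages’ standard entry points.

```r
## Route 1: RHadoop (rmr2) -- write map and reduce steps directly in R.
library(rmr2)
# rmr.options(backend = "local")  # handy for testing without a cluster

# Toy job: count values by their remainder mod 10.
ints <- to.dfs(1:1000)                       # push a small vector into HDFS
counts <- mapreduce(
  input  = ints,
  map    = function(k, v) keyval(v %% 10, 1),
  reduce = function(k, vv) keyval(k, sum(vv))
)
from.dfs(counts)                             # pull the result back into R

## Route 2: RevoScaleR -- point an external-memory algorithm at cluster data.
library(RevoScaleR)

# Compute context: connection settings for the Hadoop cluster go here
# (placeholder; details depend on the cluster configuration).
rxSetComputeContext(RxHadoopMR())

# Hypothetical claims file stored in HDFS.
claims <- RxTextData("/user/analyst/claims.csv",
                     fileSystem = RxHdfsFileSystem())

# Fit a linear model over the full data without bringing it into memory.
fit <- rxLinMod(claim_cost ~ driver_age + vehicle_age, data = claims)
summary(fit)
```

The contrast mirrors the talk’s theme: with rmr2 the computation is expressed as explicit map and reduce steps, while RevoScaleR keeps the familiar model-formula workflow and pushes the heavy lifting out to the cluster.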

You can find more details about this talk and the Hadoop World conference (of which Revolution Analytics is a proud sponsor) here.
