Two Talks on Data Science, Big Data and R

DavidMSmith

On Thursday next week (November 1), I’ll be giving a new webinar on Big Data, Data Science and R. Titled “The Rise of Data Science in the Age of Big Data Analytics: Why Data Distillation and Machine Learning Aren’t Enough”, it’s a provocative look at why data scientists cannot be replaced by technology, and why R is the ideal environment for building data science applications. Here’s the abstract:

Big Data is important because we want to use it to make sense of our world. It’s tempting to think there’s some “magic bullet” for analyzing big data, but simple “data distillation” often isn’t enough, and unsupervised machine-learning systems can be dangerous. (Like, bringing-down-the-entire-financial-system dangerous.) Data Science is the key to unlocking insight from Big Data: by combining computer science skills with statistical analysis and a deep understanding of the data and the problem, we can not only make better predictions, but also fill in gaps in our knowledge, and even find answers to questions we hadn’t thought to ask yet.

In this talk, David will:

  • Introduce the concept of Data Science, and give examples of where Data Science succeeds with Big Data … and where automated systems have failed.
  • Describe the Data Scientists’ Toolkit: the systems and technology components Data Scientists need to explore, analyze and create data apps from Big Data.
  • Share some thoughts about the future of Big Data Analytics, and the diverging use cases for computing grids, data appliances, and Hadoop clusters.
  • Discuss the skills needed to succeed.
  • Talk about the technology stack that a data scientist needs to be effective with Big Data, and describe emerging trends in the use of various data platforms for analytics: specifically, Hadoop for data storage and data “refinement”; data appliances for performance and production, and computing grids for data exploration and model development.

You can register for this free webinar at the Revolution Analytics website.

Also, if you’re attending the Strata / Hadoop World conference in New York this week, be sure to check out Thursday’s talk by Steve Yun from Allstate Insurance and Joe Rickert from Revolution Analytics, which will include some real-world benchmarks of doing big-data predictive modeling with Hadoop and Revolution R Enterprise.

Start Small Before Going Big

The availability of Hadoop and other big data technologies has made it possible to build models with more data than statisticians of even a decade ago would have thought possible. However, the best practices for effectively using massive amounts of data in the construction and evaluation of statistical models are still being invented. As is the case with most difficult, complex problems: “If you’re not failing, you’re not trying hard enough”. The majority of ideas tried do not work. Best practices should include keeping failures small and inexpensive, quickly eliminating approaches that are not likely to work out, and keeping track of these failures so they won’t be repeated. Every development environment should encourage trying multiple approaches to problem solving.

This talk presents a case study of statistical modeling in the insurance industry and examines the trade-offs between working with all of the data in a Hadoop cluster, dealing with complex programming, significant set-up times and a batch-like programming mentality, versus rapidly iterating through models on smaller data sets in a dynamic R environment at the possible expense of model accuracy. We will examine the benefits and shortcomings of both approaches and include model accuracy, job execution time and overall project time among the performance measures. Technologies examined will include programming a Hadoop cluster from R using the RHadoop interface and the RevoScaleR package from Revolution Analytics.
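The two workflows the talk contrasts can be sketched in a few lines of R. This is a hedged illustration, not code from the talk: the file names (`claims_sample.csv`, `claims.xdf`) and the variables are hypothetical stand-ins for the Allstate data, and `rxLinMod` is RevoScaleR’s analogue of base R’s `lm` that operates on chunked, on-disk data.

```r
# A hedged sketch of the two approaches contrasted above.
# File names and variable names are hypothetical.

# Approach 1: iterate rapidly in ordinary R on a smaller sample,
# trading some model accuracy for fast turnaround.
sample_df <- read.csv("claims_sample.csv")
fit_small <- lm(loss ~ age + vehicle_value, data = sample_df)

# Approach 2: fit on the full data set. RevoScaleR's rxLinMod
# processes an .xdf file in chunks, so the data never has to fit
# in memory (and the computation can be pushed to a Hadoop cluster).
library(RevoScaleR)
fit_big <- rxLinMod(loss ~ age + vehicle_value, data = "claims.xdf")
summary(fit_big)
```

The trade-off in the talk is exactly the one between these two snippets: the first gives fast, interactive iteration at the possible expense of accuracy; the second uses all of the data at the cost of set-up time and a more batch-like workflow.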

You can find more details about this talk and the Hadoop World conference (of which Revolution Analytics is a proud sponsor) here.

© 2008-25 SmartData Collective. All Rights Reserved.