Challenges of Working with Big Data: Beyond the 3Vs

Venky Ganti

Among the many challenges of working with big data, the 3Vs (Volume, Velocity, and Variety) have gotten a lot of attention. Googling the term yields many results worth reading. Almost all of them focus on the technological challenges of managing and processing big data. In this post, I would like to highlight a different set of issues that make working with big data challenging, even if the underlying infrastructure is admirably able to handle all three Vs.

At Google, I had the opportunity to work within an amazing engineering team, where I learnt various aspects of running services at scale as well as developing and launching compelling data products. I worked on the Dynamic Search Ads product, which automates AdWords campaign setup and optimization. Given an advertiser's website, our goal was to mine relevant keywords and, for each keyword, automatically create an advertisement (both the ad text and the landing page). I worked with data from a variety of sources, often to improve our product and sometimes to debug issues.
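To make the keyword-mining step concrete, here is a minimal sketch in Python. It is not the Dynamic Search Ads pipeline; it simply pulls a landing page and ranks terms by frequency, and the stopword list and scoring heuristic are placeholder assumptions.

```python
# A minimal sketch (not the actual Dynamic Search Ads pipeline) of mining
# candidate keywords from an advertiser's landing page. The stopword list
# and plain term-frequency scoring are illustrative assumptions.
import re
from collections import Counter

import requests
from bs4 import BeautifulSoup

STOPWORDS = {"the", "and", "for", "with", "your", "our", "from", "that", "this", "are"}

def mine_keywords(url: str, top_n: int = 10) -> list[str]:
    """Return the most frequent non-stopword terms found on the page."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    words = (w.lower() for w in re.findall(r"[A-Za-z]{3,}", text))
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

# Each mined keyword could then seed an ad: the keyword informs the ad text,
# and the most relevant page on the advertiser's site becomes the landing page.
if __name__ == "__main__":
    print(mine_keywords("https://example.com"))
```

A real pipeline would, of course, score candidates against far richer signals than raw term frequency, but the shape of the task is the same: turn a website into a ranked set of keywords that can each drive an ad.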

We all know that Google organizes the information on the web and enables users to quickly find what is relevant. But how do engineers at Google feel about working with that data?

On the upside, they feel empowered working with the rich data that Google collects from the huge amount of user activity on its properties. Google's data infrastructure ranks among the best out there; it is where many of the modern ideas for storing and processing "big data" originated. Combine that infrastructure with a high calibre of engineers, and a natural outcome is a massive number of information-rich derivative datasets.

On the downside, I think we could have been more effective and efficient at finding and understanding data. Let me articulate some of the issues that contributed to these inefficiencies.

  • How do I find data that I can use for my current purpose? How do I understand the contents of a dataset after I find something?
  • Who do I ask for more information about the data? Has someone else used this data for a purpose similar to mine?
  • How do I debug unexpected data issues? Can upstream data changes explain such issues?
  • How do I set garbage collection policies for data I generate periodically? (See the sketch after this list.)
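For the last question, here is a hypothetical sketch of a simple retention ("garbage collection") policy for date-partitioned outputs. The dt=YYYY-MM-DD directory layout, the base path, and the 30-day default are assumptions for illustration, not a description of Google's internal tooling.

```python
# A hypothetical retention policy: keep the last N daily partitions of a
# periodically generated dataset and delete anything older. The partition
# naming scheme and defaults are assumptions made for this sketch.
import shutil
from datetime import datetime, timedelta, timezone
from pathlib import Path

def apply_retention(base_dir: str, keep_days: int = 30) -> None:
    """Delete date-partitioned outputs older than keep_days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=keep_days)
    for partition in Path(base_dir).glob("dt=*"):
        try:
            day = datetime.strptime(partition.name, "dt=%Y-%m-%d").replace(tzinfo=timezone.utc)
        except ValueError:
            continue  # skip directories that don't follow the dt=YYYY-MM-DD scheme
        if day < cutoff:
            shutil.rmtree(partition)  # drop the whole expired partition

# Example (hypothetical path): apply_retention("/data/keyword_stats", keep_days=90)
```

Even a policy this simple forces the questions above to be answered explicitly: who owns the data, who depends on it downstream, and how long it stays useful.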

In a couple of posts following this one, I will share my experience with each of these questions, how they affected my efficiency, and how they raised the bar for working with new data.
