
What Are Accumulators? A Must-Know for Apache Spark

kingmesal
Last updated: 2016/02/27 at 1:00 PM
6 Min Read

Contents
  • Accumulators
  • Avoiding Mutable Variables
  • Use Case: Log Errors
  • Broadcast Variables
  • Conclusion

If you’ve been using Apache Spark, then you know how awesome the Resilient Distributed Dataset (RDD) is. This data structure is essential to Spark for both its speed and its reliability.

There are a couple of concepts that make Spark even faster and more reliable when run over large clusters: accumulators and broadcast variables.

Accumulators

What exactly are accumulators? They are simply variables meant to count something, hence the name. You create an accumulator with an initial value; if you’re counting from scratch, that value will typically be 0.
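As a quick illustration (a sketch only, assuming a live SparkContext named sc, as in the session below), the initial value is simply the argument you pass when creating the accumulator:

```scala
// Sketch only: assumes a live SparkContext `sc` (e.g. in the spark-shell).
val fromZero = sc.accumulator(0)     // start counting from scratch
val resumed  = sc.accumulator(1000)  // resume from a previously saved total
```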


Avoiding Mutable Variables

Why would you use accumulators instead of a normal variable? Like a lot of Spark installations, yours is probably running over large clusters, either in your data center or in some cloud provider’s machines.

One of the biggest uses for Spark is computing results across many nodes and then aggregating them.

Normal variables are mutable, which means it’s possible to modify them. Letting slave nodes change shared data, instead of merely computing results, can cause all kinds of problems and side effects, such as race conditions or code that is hard for programmers to reason about. Accumulators solve the problem by making it impossible for slave nodes to change the data. The slave nodes can’t even read the accumulator’s value; they just add their contributions and send them back to the master, which is the only node that can see and read the accumulated result.
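To make the difference concrete, here is a minimal sketch (assuming a running SparkContext named sc, as in the examples below) contrasting a plain local variable with an accumulator. Spark serializes a plain variable into each task’s closure, so every executor increments its own copy and the driver’s counter never changes:

```scala
// Sketch only: assumes a live SparkContext `sc`.
val data = sc.parallelize(1 to 100)

// A plain var is captured by value in each task's closure:
// each executor increments its own copy, so the driver's
// variable is never updated.
var plainCount = 0
data.foreach(x => plainCount += 1)
println(plainCount)        // still 0 on the driver

// An accumulator's per-task contributions are merged back
// on the master, so the total is correct.
val accCount = sc.accumulator(0)
data.foreach(x => accCount += 1)
println(accCount.value)    // 100
```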

Use Case: Log Errors

So what would you use an accumulator for? A good example is for counting items that occur in your data. Searching through text files is another good example. If you’re a sysadmin, you’re probably accustomed to using tools like grep to sift through your log files, looking for things like errors and possible security problems. You might even have created some custom scripts that can look for these errors.

If you manage a large data center, how are you going to look through every node, when there may be hundreds or even thousands of log files? Spark makes this possible, but how will you be sure that the error count is accurate and that no node somehow corrupted it along the way?

Accumulators will come to the rescue here.

We’ll get to see them in action. A similar example for counting in log files is in MapR’s Apache Spark cheat sheet.

Suppose we have a log file containing terms like “error,” “warning,” and “info.”

Here’s a contrived example, with just these terms in a text file, each on one line:

error
warning
info
trace
error
info
info

This log is named output.log and saved in the home directory. In real life, it could be a web server log, a system log, or any other kind of log with thousands of lines.

At the Scala prompt, we’ll define the accumulator that will count the number of errors and call it nErrors:

scala> val nErrors=sc.accumulator(0.0)

You’ll notice that it’s a floating-point number. We could just as easily have used 0 instead of 0.0, since there isn’t really such a thing as a fractional error; this is merely a way to show that you can use either integers or floating-point numbers for your accumulators.

Next, let’s import our log into Spark and convert it to an RDD:

scala> val logs = sc.textFile("/Users/ddelony/output.log")

Now we’ll look through the logs.

Each slave node will look through its lines, add 1 to the accumulator for every line containing “error,” and send the result back to the master, which adds them all up:

scala> logs.filter(_.contains("error")).foreach(x => nErrors += 1)

Now let’s see how many errors are in our log:

scala> nErrors.value
Result: Double = 2.0

We could easily modify this to count warnings and traces as well, and it works just the same on much longer files.
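For instance, here is a hedged sketch (same assumptions as above: a live sc and the logs RDD already defined) that counts errors and warnings in a single pass with two accumulators:

```scala
// Sketch only: assumes `sc` and the `logs` RDD defined above.
val nErr  = sc.accumulator(0)
val nWarn = sc.accumulator(0)

logs.foreach { line =>
  if (line.contains("error"))   nErr  += 1   // count error lines
  if (line.contains("warning")) nWarn += 1   // count warning lines
}

println(s"errors: ${nErr.value}, warnings: ${nWarn.value}")
```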

Broadcast Variables

A related idea in Spark is the broadcast variable. A broadcast variable, as the name suggests, is broadcast from the master node to its slaves. Rather than shipping a copy of the same data with every task, Spark sends the value to each node once and caches it there as a read-only variable, which avoids network bottlenecks when many tasks need the same data.

Broadcast variables are frequently used in mapping operations, for example to ship a lookup table to every node. You create one with sc.broadcast(), passing the value to share as the argument, much as you pass an initial value to sc.accumulator().
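As a sketch (again assuming a live sc; the severity table here is made up purely for illustration), a broadcast lookup map used inside a map operation might look like this:

```scala
// Sketch only: assumes a live SparkContext `sc`.
// Hypothetical severity table, shipped once to every node.
val severity = sc.broadcast(Map("error" -> 3, "warning" -> 2, "info" -> 1))

// Each task reads the cached copy via .value instead of
// having the map serialized into every closure.
val levels = sc.parallelize(Seq("error", "info", "warning"))
  .map(term => severity.value.getOrElse(term, 0))

println(levels.collect().mkString(","))  // e.g. 3,1,2
```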

Conclusion

Both accumulators and broadcast variables make advanced operations on large Apache Spark clusters faster, safer, and more reliable. Learn more about real-time security log analytics with Spark.

TAGGED: big data, data mining
© 2008-23 SmartData Collective. All Rights Reserved.