What Are Accumulators? A Must-Know for Apache Spark

kingmesal
Last updated: 2016/02/27 at 1:00 PM
6 Min Read
Contents

  • Accumulators
  • Avoiding Mutable Variables
  • Use Case: Log Errors
  • Broadcast Variables
  • Conclusion

If you've been using Apache Spark, then you know how awesome the Resilient Distributed Dataset (RDD) is. This data structure is essential to Spark for both its speed and its reliability.


There are a couple of concepts that make Spark even faster and more reliable when run over large clusters: accumulators and broadcast variables.

Accumulators

What exactly are accumulators? Accumulators are shared variables meant to add things up, most often counts, hence the name "accumulator." You create an accumulator with an initial value; if you're counting from scratch, that value is typically 0.

Avoiding Mutable Variables

Why would you use an accumulator instead of a normal variable? Like most Spark installations, yours is probably running on a large cluster, either in your own data center or on some cloud provider's machines.

One of the biggest uses for Spark is computing a result across many nodes and then aggregating those results in one place.

Normal variables are mutable, which means that any code can modify them. Letting slave nodes change shared data, instead of merely computing results, can cause all kinds of problems and side effects, such as race conditions, and it makes programs harder to reason about. Accumulators solve the problem by making it impossible for slave nodes to change the data: a slave can only add to an accumulator, and it can't even read the accumulator's value. Only the master, which gathers the additions from each slave, can see and read the accumulated result.
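
Here's a minimal sketch of that one-way flow, using the same Spark shell API as the rest of this article (the variable name counter is just for illustration):

scala> val counter = sc.accumulator(0)  // created on the master with an initial value
scala> sc.parallelize(1 to 100).foreach(x => counter += 1)  // slaves can only add to it
scala> counter.value  // only the master can read this; calling .value inside a task throws an exception
Result: Int = 100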


Use Case: Log Errors

So what would you use an accumulator for? A good example is counting items that occur in your data. Searching through text files is another. If you're a sysadmin, you're probably accustomed to using tools like grep to sift through your log files, looking for things like errors and possible security problems. You might even have created custom scripts that look for these errors.

But if you manage a large data center, how are you going to look through every node, when there may be hundreds or even thousands of log files? Spark makes this possible, but how can you be sure the error count is accurate and that some node didn't somehow create problems?

Accumulators come to the rescue here, and we'll see them in action below. A similar example for counting in log files appears in MapR's Apache Spark cheat sheet.

We have a log file that contains terms like "error," "warning," "info," etc.


Here's a contrived example, with just these terms in a text file, each on its own line:

error
warning
info
trace
error
info
info

This log is named output.log and saved in the home directory. In real life, it could be a web server log, a system log, or any other kind of log with thousands of lines.

At the Scala prompt, we’ll define the accumulator that will count the number of errors and call it nErrors:

scala> val nErrors = sc.accumulator(0.0)


You'll notice that the initial value is a floating-point number. We could just as easily have used 0 instead of 0.0, since there isn't really such a thing as a fractional error; this is merely a way to show that you can use either integers or floating-point numbers for your accumulators.
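
One caveat if you're on a newer version of Spark: since Spark 2.0, sc.accumulator is deprecated in favor of typed builders such as sc.longAccumulator and sc.doubleAccumulator, so the equivalent definition there would look like this:

scala> val nErrors = sc.doubleAccumulator("nErrors")
scala> nErrors.add(1.0)  // the newer API uses add() rather than +=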

Next, let’s import our log into Spark and convert it to an RDD:

scala> val logs = sc.textFile("/Users/ddelony/output.log")

Now we'll look through the logs. Each slave node will examine its lines, add 1 to the accumulator for every line that contains "error", and send its result back to the master, which adds them all up:


scala> logs.filter(_.contains("error")).foreach(x => nErrors += 1)

Now let’s see how many errors are in our log:

scala> nErrors.value
Result: Double = 2.0

We could easily modify this to look for warnings and traces as well, and it works just as well on much longer files.
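
For instance, here's one sketch of how that might look, counting errors and warnings in a single pass over the same file (the accumulator names here are my own):

scala> val errCount = sc.accumulator(0)
scala> val warnCount = sc.accumulator(0)
scala> logs.foreach { line =>
     |   if (line.contains("error")) errCount += 1
     |   if (line.contains("warning")) warnCount += 1
     | }
scala> (errCount.value, warnCount.value)
Result: (Int, Int) = (2,1)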

Broadcast Variables

A related idea in Spark is the broadcast variable. A broadcast variable, as the name suggests, is broadcast from the master node to its slaves, which cache a read-only copy locally. Broadcast variables avoid a network bottleneck when many tasks need the same piece of data: instead of shipping a copy with every task, Spark sends the value to each node once, where the slaves can quickly access it while computing the results they send back to the master.

Broadcast variables are frequently used for mapping operations, such as looking up values in a shared table. You create one with the sc.broadcast() command, passing the value to broadcast as the argument, much as you pass an initial value when creating an accumulator.
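
For example, here's a minimal sketch that broadcasts a small severity lookup table and uses it in a map over the same log RDD (the table and its values are invented for illustration):

scala> val severity = sc.broadcast(Map("error" -> 3, "warning" -> 2, "info" -> 1))
scala> logs.map(line => severity.value.getOrElse(line, 0)).collect()
Result: Array[Int] = Array(3, 2, 1, 0, 3, 1, 1)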

Conclusion

Both accumulators and broadcast variables can make advanced operations on large Apache Spark clusters faster, safer, and more reliable. Learn more about real-time security log analytics with Spark.

TAGGED: big data, data mining