A Look at SparkSQL

kingmesal

If you’ve been reading about Apache Spark, you might be worried about whether you have to relearn all of your skills for using it to interact with databases. With Apache Spark, whether you’re a DBA or a developer, you’ll be able to interact with Apache Spark in the way you’re used to—while solving real problems.

What Is SparkSQL?

SparkSQL, as the name suggests, is a way to work with Apache Spark using the SQL language. Apache Spark makes it easy to run complex queries across many nodes, something that’s rather difficult with conventional RDBMSs like MySQL.

Unlike a NoSQL database, you don’t have to learn a new query language or database model: you get NoSQL’s scalability and ease of running over a cluster while keeping the familiar SQL query model. You can import a number of different data formats into SparkSQL, such as Parquet files and JSON data, as well as RDDs (the native data format of Apache Spark).
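
For a concrete feel, here’s a minimal sketch of loading those formats, assuming a Spark 1.x-era SQLContext (parquetFile and jsonFile were the loaders of that vintage) and made-up file paths:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("Loaders")) // hypothetical app name
val sqlContext = new SQLContext(sc)
// Each loader returns a SchemaRDD that can be queried with SQL.
val parquetData = sqlContext.parquetFile("/data/events.parquet") // columnar Parquet file
val jsonData = sqlContext.jsonFile("/data/events.json")          // one JSON object per line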

SparkSQL allows for both interactive and batch operations. You can take advantage of Spark’s speed, running queries in real time. Spark is so fast partly because of lazy evaluation, which means that queries won’t actually be computed until you need some kind of output.
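
A small sketch makes the laziness visible, assuming an existing SparkContext named sc (the spark-shell provides one) and a made-up log file:

// Transformations like filter only record what to do; nothing runs yet.
val lines = sc.textFile("/data/app.log")
val errors = lines.filter(_.contains("ERROR"))
// Only an action such as count() actually triggers the computation.
println(errors.count())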

By using a REPL (i.e., an interactive shell), you can explore your data using SparkSQL in real time. You can work in either Scala (Spark’s native language) or Python.

If you haven’t noticed, Spark draws on a lot of functional programming concepts from languages like Haskell and Lisp: lazy evaluation, immutable data structures, and an interactive REPL. These concepts aren’t exactly new, as Lisp dates back to the late ‘50s.

SchemaRDD

SchemaRDD is a special RDD, or Resilient Distributed Dataset. RDDs are central to understanding Apache Spark. RDDs are immutable data structures, which means that you can’t change them. Operations on RDDs simply return new RDDs. This allows for a degree of safety when dealing with RDDs.

Spark records every operation applied to an RDD, known as a transformation, in a lineage. In case of a failure, Spark can reconstruct the lost data by replaying the lineage.
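
Here’s a small sketch of a lineage in action, again assuming an existing SparkContext named sc; toDebugString prints the chain of transformations Spark would replay to rebuild lost data:

val numbers = sc.parallelize(1 to 100) // base RDD
val doubled = numbers.map(_ * 2)       // a transformation: returns a new RDD
val evens = doubled.filter(_ % 4 == 0) // another transformation, another new RDD
println(evens.toDebugString)           // prints the recorded lineage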

RDDs are also kept in memory, or at least in as much memory as possible. This gives Spark an extra speed boost.

SchemaRDD is a special RDD that works similarly to a SQL table. You can import your data from a text file into a SchemaRDD.

Queries

You can import your data from text files and then work on it using SQL queries such as SELECT, JOIN, and more.

Spark provides two contexts for queries: SQLContext and HiveContext. The former provides a simple SQL parser, while HiveContext supports the richer HiveQL dialect and gives you access to existing Hive tables for more powerful queries.
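
Side by side, the two contexts look like this, assuming Spark 1.x (HiveContext lives in the separate spark-hive module):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("Contexts")) // hypothetical app name
val sqlContext = new SQLContext(sc)   // basic built-in SQL parser
val hiveContext = new HiveContext(sc) // richer HiveQL dialect, plus Hive tables and UDFs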

Use Case: Customers

You’re probably itching to see all this stuff in action. Let’s borrow an example from MapR’s Apache Spark reference card.

Let’s pretend we run a clothing store in the Dallas, Texas, area, and we want to know a little more about our customers. We have a plain text database showing customer name, age, gender, and address, where the values are separated by a “|”:

John Smith|38|M|201 East Heading Way #2203,Irving, TX,75063
Liana Dole|22|F|1023 West Feeder Rd, Plano,TX,75093
Craig Wolf|34|M|75942 Border Trail,Fort Worth,TX,75108
John Ledger|28|M|203 Galaxy Way,Paris, TX,75461
Joe Graham|40|M|5023 Silicon Rd,London,TX,76

Using Scala, we’ll define a schema:

case class Customer(name: String, age: Int, gender: String, address: String)

Next, we’ll import our plain text file and make a SQLContext:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sparkConf = new SparkConf().setAppName("Customers")
val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)
// Implicit conversion that turns an RDD of case classes into a SchemaRDD.
import sqlContext.createSchemaRDD

val r = sc.textFile("/Users/jim/temp/customers.txt")
val records = r.map(_.split('|')) // split each line on the "|" delimiter
val c = records.map(r => Customer(r(0), r(1).trim.toInt, r(2), r(3)))
c.registerAsTable("customers")

Suppose management has decided that they’re going to start targeting millennial males as a lucrative market. We might start by looking through our database by age and gender:

sqlContext.sql("select * from customers where gender = 'M' and age < 30").collect().foreach(println)

Here’s the result:

[John Ledger,28,M,203 Galaxy Way,Paris, TX,75461]

It looks like we’re going to have to do a little work to attract more of these kinds of customers.

Conclusion

For a more in-depth introduction to Spark, read Getting Started with Spark: From Inception to Production, a free interactive eBook by James A. Scott.
