Spark Tutorial @ DAO
download slides: training.databricks.com/workshop/su_dao.pdf
Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Welcome + Getting Started
Everyone will receive a username/password for one of the Databricks Cloud shards. Use your laptop and browser to login there.
We find that cloud-based notebooks are a simple way to get started using Apache Spark – as the motto “Making Big Data Simple” states.
Please create and run a variety of notebooks on your account throughout the tutorial. These accounts will remain open long enough for you to export your work.
Getting Started: Step 1
Open your assigned shard in a browser window, then click on the navigation menu in the top/left corner:
Getting Started: Step 2
The next column to the right shows folders; scroll down and click on databricks_guide
Getting Started: Step 3
Scroll to open the 01 Quick Start notebook, then follow the discussion about using key features:
Getting Started: Step 4
See /databricks-guide/01 Quick Start
Key Features:
• Workspace / Folder / Notebook
• Code Cells, run/edit/move/comment
• Markdown
• Results
• Import/Export
Getting Started: Step 5
Click on the Workspace menu and create your own folder (pick a name):
Getting Started: Step 6
Navigate to /_SparkCamp/00.pre-flight-check and hover on its drop-down menu, on the right side:
Getting Started: Step 7
Then create a clone of this notebook in the folder that you just created:
Getting Started: Step 8
Attach your cluster – same as your username:
Getting Started: Step 9
Now let’s get started with the coding exercise!
We’ll define an initial Spark app in three lines of code:
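(The notebook's exact three lines aren't reproduced here; a comparable minimal sketch in Scala, assuming nothing beyond the automatically created sc:)

// a hypothetical three-line starter app: distribute a local collection,
// filter it in parallel, then pull a count back to the driver
val data = sc.parallelize(1 to 1000)
val evens = data.filter(_ % 2 == 0)
evens.count()  // => 500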
Getting Started: Coding Exercise
Getting Started: Extra Bonus!!
See also the /learning_spark_book
for all of its code examples in notebooks:
How Spark runs on a Cluster
(diagram: a Driver coordinating three Workers, each reading a data block – block 1, block 2, block 3 – and holding a cache partition – cache 1, cache 2, cache 3)
Clone and run /_SparkCamp/01.log_example in your folder:
Spark Deconstructed: Log Mining Example
# load error messages from a log into memory
# then interactively search for patterns

# base RDD
lines = sc.textFile("/mnt/paco/intro/error_log.txt") \
  .map(lambda x: x.split("\t"))

# transformed RDDs
errors = lines.filter(lambda x: x[0] == "ERROR")
messages = errors.map(lambda x: x[1])

# persistence
messages.cache()

# action 1
messages.filter(lambda x: x.find("mysql") > -1).count()

# action 2
messages.filter(lambda x: x.find("php") > -1).count()
Spark Deconstructed: Log Mining Example
Note that we can examine the operator graph for a transformed RDD, for example:

x = messages.filter(lambda x: x.find("mysql") > -1)
print(x.toDebugString())

(2) PythonRDD[772] at RDD at PythonRDD.scala:43 []
 |  PythonRDD[219] at RDD at PythonRDD.scala:43 []
 |  error_log.txt MappedRDD[218] at NativeMethodAccessorImpl.java:-2 []
 |  error_log.txt HadoopRDD[217] at NativeMethodAccessorImpl.java:-2 []
Spark Deconstructed: Log Mining Example
We start with Spark running on a cluster… submitting code to be evaluated on it.
The following slides step through the same code listing, highlighting one part at a time:

(diagram: the Driver submits the transformations and actions to three Workers)
(diagram: each Worker reads its HDFS block – block 1, block 2, block 3)
(diagram: for action 1, each Worker processes its block and caches the data – cache 1, cache 2, cache 3)
(diagram: for action 2, each Worker processes directly from its cache)
Looking at the RDD transformations and actions from another perspective…
(diagram: RDD → transformations → RDD → action → value)

# base RDD
lines = sc.textFile("/mnt/paco/intro/error_log.txt") \
  .map(lambda x: x.split("\t"))

# transformed RDDs
errors = lines.filter(lambda x: x[0] == "ERROR")
messages = errors.map(lambda x: x[1])

# persistence
messages.cache()

# action 1
messages.filter(lambda x: x.find("mysql") > -1).count()

# action 2
messages.filter(lambda x: x.find("php") > -1).count()
Spark Deconstructed: Log Mining Example
(diagram: the base RDD)

# base RDD
lines = sc.textFile("/mnt/paco/intro/error_log.txt") \
  .map(lambda x: x.split("\t"))
Spark Deconstructed: Log Mining Example
(diagram: RDD → transformations → RDD)

# transformed RDDs
errors = lines.filter(lambda x: x[0] == "ERROR")
messages = errors.map(lambda x: x[1])

# persistence
messages.cache()
Spark Deconstructed: Log Mining Example
(diagram: RDD → transformations → RDD → action → value)

# action 1
messages.filter(lambda x: x.find("mysql") > -1).count()
Spark Deconstructed: Log Mining Example
Ex #3: WC, Joins, Shuffles
(diagram: a multi-stage operator graph – RDDs A and B feed map() in stage 1, C and D feed map() in stage 2, and stage 3 performs the join() into E; one partition is cached)
Coding Exercise: WordCount
void map (String doc_id, String text):
  for each word w in segment(text):
    emit(w, "1");

void reduce (String word, Iterator group):
  int count = 0;
  for each pc in group:
    count += Int(pc);
  emit(word, String(count));
Definition:
count how often each word appears in a collection of text documents
This simple program provides a good test case for parallel processing, since it:
• requires a minimal amount of code
• demonstrates use of both symbolic and numeric values
• isn’t many steps away from search indexing
• serves as a “Hello World” for Big Data apps
A distributed computing framework that can run WordCount efficiently in parallel at scale can likely handle much larger and more interesting compute problems.
WordCount in 3 lines of Spark
WordCount in 50+ lines of Java MR
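(The slides' code images aren't reproduced here; a minimal Scala sketch of the three-line Spark version – the input path is a placeholder:)

// read a text file, split into words, count each word
val f = sc.textFile("README.md")
val wc = f.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
wc.collect()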
Coding Exercise: WordCount
Clone and run /_SparkCamp/02.wc_example in your folder:
Coding Exercise: WordCount
Clone and run /_SparkCamp/03.join_example in your folder:
Coding Exercise: Join
(diagram: the same multi-stage operator graph – stages 1 and 2 run map(), stage 3 runs join() into E, with one cached partition)
Coding Exercise: Join and its Operator Graph
DBC Essentials
(diagrams: evaluation, optimization, representation; a typical workflow circa 2010 – ETL into cluster/cloud, then visualize and report on the data; an ML pipeline – data prep, features, train/test sets, learners and parameters, unsupervised learning, explore, models, evaluate, optimize, scoring against production data; use cases and data pipelines producing actionable results, decisions, and feedback)
DBC Essentials: What is Databricks Cloud?
Databricks Platform
Databricks Workspace
Also see FAQ for more details…
DBC Essentials: What is Databricks Cloud?
key concepts
Shard – an instance of Databricks Workspace
Cluster – a Spark cluster (multiple per shard)
Notebook – a list of markdown, executable commands, and results
Dashboard – a flexible space to create operational visualizations
Also see FAQ for more details…
DBC Essentials: Notebooks
• Series of commands (think shell++)
• Each notebook has a language type, chosen at notebook creation:
• Python + SQL
• Scala + SQL
• SQL only
• Command output captured in notebook
• Commands can be…
• edited, reordered, rerun, exported, cloned, imported, etc.
DBC Essentials: Clusters
• Open source Spark clusters hosted in the cloud
• Access the Spark UI
• Attach and detach notebooks to clusters
NB: our training shards use 7 GB cluster configurations
DBC Essentials: Team, State, Collaboration, Elastic Resources
(diagram: team members log in via browsers to a shard in the cloud; notebook state lives in the shard, notebooks can be attached to or detached from Spark clusters, and work can be imported/exported as local copies)
DBC Essentials: Team, State, Collaboration, Elastic Resources
Excellent collaboration properties, based on the use of:
• comments
• cloning
• decoupled state of notebooks vs. clusters
• relative independence of code blocks within a notebook
DBC Essentials: Feedback
Other feedback, suggestions, etc.?
http://feedback.databricks.com/
http://forums.databricks.com/
UserVoice login in top/right corner…
How to “Think Notebooks”
How to “think” in terms of leveraging notebooks, based on Computational Thinking:
Think Notebooks:
“The way we depict space has a great deal to do with how we behave in it.” – David Hockney
“The impact of computing extends far beyond science… affecting all aspects of our lives. To flourish in today's world, everyone needs computational thinking.” – CMU
Computing now ranks alongside the proverbial Reading, Writing, and Arithmetic…
Center for Computational Thinking @ CMU http://www.cs.cmu.edu/~CompThink/
Exploring Computational Thinking @ Google https://www.google.com/edu/computational-thinking/
Think Notebooks: Computational Thinking
Computational Thinking provides a structured way of conceptualizing the problem…
In effect, developing notes for yourself and your team
These in turn can become the basis for team process, software requirements, etc.
In other words, conceptualize how to leverage computing resources at scale to build high-ROI apps for Big Data
Think Notebooks: Computational Thinking
The general approach, in four parts:
• Decomposition: decompose a complex problem into smaller solvable problems
• Pattern Recognition: identify when a known approach can be leveraged
• Abstraction: abstract from those patterns into generalizations as strategies
• Algorithm Design: articulate strategies as algorithms, i.e. as general recipes for how to handle complex problems
Think Notebooks: Computational Thinking
How to “think” in terms of leveraging notebooks, by the numbers:
1. create a new notebook
2. copy the assignment description as markdown
3. split it into separate code cells
4. for each step, write your code under the markdown
5. run each step and verify your results
Think Notebooks:
Let’s assemble the pieces of the previous few code examples, using two files:
/mnt/paco/intro/CHANGES.txt
/mnt/paco/intro/README.md
1. create RDDs to filter each line for the keyword Spark
2. perform a WordCount on each, i.e., so the results are (K, V) pairs of (keyword, count)
3. join the two RDDs
4. how many instances of Spark are there in each file?
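(If you get stuck, one possible shape for a solution – a sketch only, not the reference answer; it assumes both files are readable via sc.textFile:)

// helper: filter lines containing "Spark", then count words per keyword
def wc(path: String) = sc.textFile(path)
  .filter(_.contains("Spark"))
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)

// join the two (keyword, count) RDDs, then look up the "Spark" entry
val joined = wc("/mnt/paco/intro/CHANGES.txt").join(wc("/mnt/paco/intro/README.md"))
joined.filter { case (word, _) => word == "Spark" }.collect()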
Coding Exercises: Workflow assignment
Tour of Spark API
(diagram: a Driver Program with its SparkContext connects through a Cluster Manager to Worker Nodes, each running an Executor with a cache and tasks)
The essentials of the Spark API in both Scala and Python…
/_SparkCamp/05.scala_api
/_SparkCamp/05.python_api

Let's start with the basic concepts, which are covered in much more detail in the docs:
spark.apache.org/docs/latest/scala-programming-guide.html
Spark Essentials:
The first thing that a Spark program does is create a SparkContext object, which tells Spark how to access a cluster
In the shell for either Scala or Python, this is the sc variable, which is created automatically
Other programs must use a constructor to instantiate a new SparkContext
Then in turn SparkContext gets used to create other variables
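(A minimal sketch of the standalone-app case, assuming Spark 1.x; the app name and master URL are placeholders – see the table of master values below:)

import org.apache.spark.{SparkConf, SparkContext}

// build a configuration, then instantiate the context from it
val conf = new SparkConf()
  .setAppName("MyApp")      // hypothetical app name
  .setMaster("local[4]")    // run locally with 4 worker threads
val sc = new SparkContext(conf)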
Spark Essentials: SparkContext
Scala:
sc
res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@6ad51e9c

Python:
sc
Out[1]: <__main__.RemoteContext at 0x7ff0bfb18a10>
The master parameter for a SparkContext determines which cluster to use
Spark Essentials: Master
master – description
local – run Spark locally with one worker thread (no parallelism)
local[K] – run Spark locally with K worker threads (ideally set to # cores)
spark://HOST:PORT – connect to a Spark standalone cluster; PORT depends on config (7077 by default)
mesos://HOST:PORT – connect to a Mesos cluster; PORT depends on config (5050 by default)
(diagram: Driver Program with SparkContext → Cluster Manager → Worker Nodes, each with an Executor, cache, and tasks)
spark.apache.org/docs/latest/cluster-overview.html
Spark Essentials: Master
(diagram: the same cluster overview)
The driver performs the following:
1. connects to a cluster manager to allocate resources across applications
2. acquires executors on cluster nodes – processes that run compute tasks and cache data
3. sends app code to the executors
4. sends tasks for the executors to run
Spark Essentials: Clusters
Resilient Distributed Datasets (RDD) are the primary abstraction in Spark – a fault-tolerant collection of elements that can be operated on in parallel
There are currently two types:
• parallelized collections – take an existing Scala collection and run functions on it in parallel
• Hadoop datasets – run functions on each record of a file in Hadoop distributed file system or any other storage system supported by Hadoop
Spark Essentials: RDD
• two types of operations on RDDs: transformations and actions
• transformations are lazy (not computed immediately)
• the transformed RDD gets recomputed when an action is run on it (default)
• however, an RDD can be persisted into storage in memory or disk
Spark Essentials: RDD
Scala:
val data = Array(1, 2, 3, 4, 5)
data: Array[Int] = Array(1, 2, 3, 4, 5)

val distData = sc.parallelize(data)
distData: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[24970]

Python:
data = [1, 2, 3, 4, 5]
data
Out[2]: [1, 2, 3, 4, 5]

distData = sc.parallelize(data)
distData
Out[3]: ParallelCollectionRDD[24864] at parallelize at PythonRDD.scala:364
Spark can create RDDs from any file stored in HDFS or other storage systems supported by Hadoop, e.g., local file system, Amazon S3, Hypertable, HBase, etc.
Spark supports text files, SequenceFiles, and any other Hadoop InputFormat, and can also take a directory or a glob (e.g. /data/201404*)
Spark Essentials: RDD
(diagram: RDD → transformations → RDD → action → value)
val distFile = sqlContext.table("readme")#distFile: org.apache.spark.sql.SchemaRDD = #SchemaRDD[24971] at RDD at SchemaRDD.scala:108
Spark Essentials: RDD
distFile = sqlContext.table("readme").map(lambda x: x[0])#distFile#Out[11]: PythonRDD[24920] at RDD at PythonRDD.scala:43
Scala:
Python:
64
Transformations create a new dataset from an existing one
All transformations in Spark are lazy: they do not compute their results right away – instead they remember the transformations applied to some base dataset. This lets Spark:
• optimize the required calculations
• recover from lost data partitions
Spark Essentials: Transformations
Spark Essentials: Transformations
transformation – description
map(func) – return a new distributed dataset formed by passing each element of the source through a function func
filter(func) – return a new dataset formed by selecting those elements of the source on which func returns true
flatMap(func) – similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item)
sample(withReplacement, fraction, seed) – sample a fraction fraction of the data, with or without replacement, using a given random number generator seed
union(otherDataset) – return a new dataset that contains the union of the elements in the source dataset and the argument
distinct([numTasks]) – return a new dataset that contains the distinct elements of the source dataset
Spark Essentials: Transformations
transformation – description
groupByKey([numTasks]) – when called on a dataset of (K, V) pairs, returns a dataset of (K, Seq[V]) pairs
reduceByKey(func, [numTasks]) – when called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function
sortByKey([ascending], [numTasks]) – when called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the boolean ascending argument
join(otherDataset, [numTasks]) – when called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key
cogroup(otherDataset, [numTasks]) – when called on datasets of type (K, V) and (K, W), returns a dataset of (K, Seq[V], Seq[W]) tuples – also called groupWith
cartesian(otherDataset) – when called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements)
val distFile = sqlContext.table("readme").map(_(0).asInstanceOf[String])#distFile.map(l => l.split(" ")).collect()#distFile.flatMap(l => l.split(" ")).collect()
Spark Essentials: Transformations
distFile = sqlContext.table("readme").map(lambda x: x[0])#distFile.map(lambda x: x.split(' ')).collect()#distFile.flatMap(lambda x: x.split(' ')).collect()
Scala:
Python:distFile is a collection of lines
68
Spark Essentials: Transformations
The same example again – note the closures passed to map() and flatMap() in both the Scala and Python versions.

Looking at the output, how would you compare results for map() vs. flatMap()?
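(A small Scala sketch of the difference, using a local collection rather than the readme table:)

val lines = sc.parallelize(Seq("hello world", "goodbye world"))
lines.map(_.split(" ")).collect()
// => Array(Array(hello, world), Array(goodbye, world))  – one array per line
lines.flatMap(_.split(" ")).collect()
// => Array(hello, world, goodbye, world)                – flattened into words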
Spark Essentials: Actions
action – description
reduce(func) – aggregate the elements of the dataset using a function func (which takes two arguments and returns one); func should be commutative and associative so that it can be computed correctly in parallel
collect() – return all the elements of the dataset as an array at the driver program – usually useful after a filter or other operation that returns a sufficiently small subset of the data
count() – return the number of elements in the dataset
first() – return the first element of the dataset – similar to take(1)
take(n) – return an array with the first n elements of the dataset – currently not executed in parallel; instead the driver program computes all the elements
takeSample(withReplacement, num, seed) – return an array with a random sample of num elements of the dataset, with or without replacement, using the given random number generator seed
Spark Essentials: Actions
action – description
saveAsTextFile(path) – write the elements of the dataset as a text file (or set of text files) in a given directory in the local filesystem, HDFS or any other Hadoop-supported file system; Spark will call toString on each element to convert it to a line of text in the file
saveAsSequenceFile(path) – write the elements of the dataset as a Hadoop SequenceFile in a given path in the local filesystem, HDFS or any other Hadoop-supported file system; only available on RDDs of key-value pairs that either implement Hadoop's Writable interface or are implicitly convertible to Writable (Spark includes conversions for basic types like Int, Double, String, etc.)
countByKey() – only available on RDDs of type (K, V); returns a Map of (K, Int) pairs with the count of each key
foreach(func) – run a function func on each element of the dataset – usually done for side effects such as updating an accumulator variable or interacting with external storage systems
Spark Essentials: Actions
Scala:
val f = sqlContext.table("readme").map(_(0).asInstanceOf[String])
val words = f.flatMap(l => l.split(" ")).map(word => (word, 1))
words.reduceByKey(_ + _).collect.foreach(println)

Python:
from operator import add
f = sqlContext.table("readme").map(lambda x: x[0])
words = f.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1))
words.reduceByKey(add).collect()
Spark can persist (or cache) a dataset in memory across operations
spark.apache.org/docs/latest/programming-guide.html#rdd-persistence
Each node stores in memory any slices of it that it computes and reuses them in other actions on that dataset – often making future actions more than 10x faster
The cache is fault-tolerant: if any partition of an RDD is lost, it will automatically be recomputed using the transformations that originally created it
Spark Essentials: Persistence
Spark Essentials: Persistence
storage level – description
MEMORY_ONLY – store RDD as deserialized Java objects in the JVM; if the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they're needed (this is the default level)
MEMORY_AND_DISK – store RDD as deserialized Java objects in the JVM; if the RDD does not fit in memory, store the partitions that don't fit on disk, and read them from there when they're needed
MEMORY_ONLY_SER – store RDD as serialized Java objects (one byte array per partition); generally more space-efficient than deserialized objects, especially when using a fast serializer, but more CPU-intensive to read
MEMORY_AND_DISK_SER – similar to MEMORY_ONLY_SER, but spill partitions that don't fit in memory to disk instead of recomputing them on the fly each time they're needed
DISK_ONLY – store the RDD partitions only on disk
MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc. – same as the levels above, but replicate each partition on two cluster nodes
OFF_HEAP (experimental) – store RDD in serialized format in Tachyon
Spark Essentials: Persistence
Scala:
val f = sqlContext.table("readme").map(_(0).asInstanceOf[String])
val words = f.flatMap(l => l.split(" ")).map(word => (word, 1)).cache()
words.reduceByKey(_ + _).collect.foreach(println)

Python:
from operator import add
f = sqlContext.table("readme").map(lambda x: x[0])
w = f.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).cache()
w.reduceByKey(add).collect()
Broadcast variables let the programmer keep a read-only variable cached on each machine rather than shipping a copy of it with tasks
For example, to give every node a copy of a large input dataset efficiently
Spark also attempts to distribute broadcast variables using efficient broadcast algorithms to reduce communication cost
Spark Essentials: Broadcast Variables
Scala:
val broadcastVar = sc.broadcast(Array(1, 2, 3))
broadcastVar.value
res10: Array[Int] = Array(1, 2, 3)

Python:
broadcastVar = sc.broadcast(list(range(1, 4)))
broadcastVar.value
Out[15]: [1, 2, 3]
Accumulators are variables that can only be “added” to through an associative operation
Used to implement counters and sums, efficiently in parallel
Spark natively supports accumulators of numeric value types and standard mutable collections, and programmers can extend for new types
Only the driver program can read an accumulator’s value, not the tasks
Spark Essentials: Accumulators
Scala:
val accum = sc.accumulator(0)
sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)

accum.value
res11: Int = 10

Python:
accum = sc.accumulator(0)
rdd = sc.parallelize([1, 2, 3, 4])
def f(x):
    global accum
    accum += x

rdd.foreach(f)

accum.value
Out[16]: 10
The same example again – note that accum.value is read back on the driver side; tasks on the workers can only add to the accumulator.
For a deep-dive about broadcast variables and accumulator usage in Spark, see also:
Advanced Spark Features Matei Zaharia, Jun 2012 ampcamp.berkeley.edu/wp-content/uploads/2012/06/matei-zaharia-amp-camp-2012-advanced-spark.pdf
Spark Essentials: Broadcast Variables and Accumulators
Spark Essentials: (K, V) pairs
Scala:
val pair = (a, b)
pair._1 // => a
pair._2 // => b

Python:
pair = (a, b)
pair[0] # => a
pair[1] # => b
Spark Essentials: API Details
For more details about the Scala API:
spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package
For more details about the Python API:
spark.apache.org/docs/latest/api/python/
Spark SQL
blurs the lines between RDDs and relational tables
spark.apache.org/docs/latest/sql-programming-guide.html

Intermix SQL commands to query external data, along with complex analytics, in a single app:
• allows SQL extensions based on MLlib
• provides the “heavy lifting” for ETL in DBC
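(A minimal Scala sketch of mixing SQL with RDD operations, assuming a table named "readme" is already registered, as in the earlier examples:)

// the SQL query returns a SchemaRDD, which supports ordinary RDD operations
val rows = sqlContext.sql("SELECT * FROM readme")
val counts = rows.map(_(0).asInstanceOf[String])
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.take(10).foreach(println)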
Spark SQL: Data Workflows
Spark SQL: Manipulating Structured Data Using Spark Michael Armbrust, Reynold Xin databricks.com/blog/2014/03/26/Spark-SQL-manipulating-structured-data-using-Spark.html
The Spark SQL Optimizer and External Data Sources API Michael Armbrust youtu.be/GQSNJAzxOr8
What's coming for Spark in 2015 Patrick Wendell youtu.be/YWppYPWznSQ
Introducing DataFrames in Spark for Large Scale Data Science Reynold Xin, Michael Armbrust, Davies Liu databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html
Spark SQL: Data Workflows
Parquet is a columnar format, supported by many different Big Data frameworks
http://parquet.io/
Spark SQL supports read/write of Parquet files, automatically preserving the schema of the original data (HUGE benefits)
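(A sketch of a Parquet round trip with the Spark 1.x API, assuming the "readme" table from earlier and a writable placeholder path:)

val readme = sqlContext.table("readme")
readme.saveAsParquetFile("/tmp/readme.parquet")          // schema is preserved in the files
val back = sqlContext.parquetFile("/tmp/readme.parquet") // read it back, schema and all
back.registerTempTable("readme_parquet")
sqlContext.sql("SELECT COUNT(*) FROM readme_parquet").collect()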
See also:
Efficient Data Storage for Analytics with Parquet 2.0 Julien Le Dem @Twitter slideshare.net/julienledem/th-210pledem
Spark SQL: Data Workflows – Parquet
Demo of /_SparkCamp/demo_sql_scala by the instructor:
Spark SQL: SQL Demo
Next, we'll review the following sections in the Databricks Guide:
/databricks_guide/Importing Data
/databricks_guide/Databricks File System

Key Topics:
• JSON, CSV, Parquet
• S3, Hive, Redshift
• DBFS, dbutils
Spark SQL: Using DBFS
Visualization
For any SQL query, you can show the results as a table, or generate a plot from it with a single click…
Visualization: Built-in Plots
Several of the plot types have additional options to customize the graphs they generate…
Visualization: Plot Options
For example, series groupings can be used to help organize bar charts…
Visualization: Series Groupings
See /databricks-guide/05 Visualizations for details about built-in visualizations and extensions…
Visualization: Reference Guide
The display() command:
• programmatic access to visualizations
• pass a SchemaRDD to print as an HTML table
• pass a Scala list to print as an HTML table
• call without arguments to display matplotlib figures
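(A minimal sketch, assuming the "readme" table from the SQL examples:)

// renders the SchemaRDD as an interactive table/plot in the notebook
display(sqlContext.table("readme"))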
Visualization: Using display()
The displayHTML() command:
• render any arbitrary HTML/JavaScript
• include JavaScript libraries (advanced feature)
• paste in D3 examples to get a sense for this…
Visualization: Using displayHTML()
Clone the entire folder /_SparkCamp/Viz D3 into your folder and run its notebooks:
Demo: D3 Visualization
Clone and run /_SparkCamp/07.sql_visualization in your folder:
Coding Exercise: SQL + Visualization
Training Survey
We appreciate your feedback about the DBC workshop. Please let us know how best to improve this material:
http://goo.gl/forms/oiA7YeO7VH
Also, if you'd like to sign up for our monthly newsletter: go.databricks.com/newsletter-sign-up
GraphX examples
(diagram: a small directed graph – nodes 0 through 3, with edge costs 1, 1, 2, 3, and 4)
GraphX:
spark.apache.org/docs/latest/graphx-programming-guide.html

Key Points:
• graph-parallel systems
• importance of workflows
• optimizations
PowerGraph: Distributed Graph-Parallel Computation on Natural Graphs J. Gonzalez, Y. Low, H. Gu, D. Bickson, C. Guestrin graphlab.org/files/osdi2012-gonzalez-low-gu-bickson-guestrin.pdf
Pregel: Large-scale graph computing at Google Grzegorz Czajkowski, et al. googleresearch.blogspot.com/2009/06/large-scale-graph-computing-at-google.html
GraphX: Unified Graph Analytics on Spark Ankur Dave, Databricks databricks-training.s3.amazonaws.com/slides/graphx@sparksummit_2014-07.pdf
Advanced Exercises: GraphX databricks-training.s3.amazonaws.com/graph-analytics-with-graphx.html
GraphX: Further Reading…
// http://spark.apache.org/docs/latest/graphx-programming-guide.html

import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD

case class Peep(name: String, age: Int)

val nodeArray = Array(
  (1L, Peep("Kim", 23)), (2L, Peep("Pat", 31)),
  (3L, Peep("Chris", 52)), (4L, Peep("Kelly", 39)),
  (5L, Peep("Leslie", 45))
  )
val edgeArray = Array(
  Edge(2L, 1L, 7), Edge(2L, 4L, 2),
  Edge(3L, 2L, 4), Edge(3L, 5L, 3),
  Edge(4L, 1L, 1), Edge(5L, 3L, 9)
  )

val nodeRDD: RDD[(Long, Peep)] = sc.parallelize(nodeArray)
val edgeRDD: RDD[Edge[Int]] = sc.parallelize(edgeArray)
val g: Graph[Peep, Int] = Graph(nodeRDD, edgeRDD)

val results = g.triplets.filter(t => t.attr > 7)

for (triplet <- results.collect) {
  println(s"${triplet.srcAttr.name} loves ${triplet.dstAttr.name}")
}
GraphX: Example – simple traversals
GraphX: Example – routing problems
(diagram: the same graph – nodes 0 through 3 with edge costs)
What is the cost to reach node 0 from any other node in the graph? This is a common use case for graph algorithms, e.g., Dijkstra
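(A sketch of single-source shortest paths using GraphX's Pregel operator – it reuses the graph g from the previous example and picks node 1 as a hypothetical source; to match the slide's question you could run the same thing on g.reverse with node 0 as the source:)

import org.apache.spark.graphx._

val sourceId: VertexId = 1L
// every vertex starts "infinitely" far away, except the source
val initialGraph = g.mapVertices((id, _) =>
  if (id == sourceId) 0.0 else Double.PositiveInfinity)

val sssp = initialGraph.pregel(Double.PositiveInfinity)(
  (id, dist, newDist) => math.min(dist, newDist),            // keep the best distance seen
  triplet => {                                               // propose a shorter path to the neighbor
    if (triplet.srcAttr + triplet.attr < triplet.dstAttr)
      Iterator((triplet.dstId, triplet.srcAttr + triplet.attr))
    else
      Iterator.empty
  },
  (a, b) => math.min(a, b)                                   // combine competing messages
)

println(sssp.vertices.collect.mkString("\n"))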
Clone and run /_SparkCamp/08.graphx in your folder:
GraphX: Coding Exercise
Further Resources + Q&A
Spark Developer Certification
• go.databricks.com/spark-certified-developer
• defined by Spark experts @Databricks
• assessed by O’Reilly Media
• establishes the bar for Spark expertise
• 40 multiple-choice questions, 90 minutes
• mostly structured as choices among code blocks
• expect some Python, Java, Scala, SQL
• understand theory of operation
• identify best practices
• recognize code that is more parallel, less memory constrained
Overall, you need to write Spark apps in practice
Developer Certification: Overview
community:
spark.apache.org/community.html
events worldwide: goo.gl/2YqJZK
YouTube channel: goo.gl/N5Hx3h
video + preso archives: spark-summit.org
resources: databricks.com/spark/developer-resources
workshops: databricks.com/spark/training
MOOCs:
Anthony Joseph, UC Berkeley – begins Jun 2015 edx.org/course/uc-berkeleyx/uc-berkeleyx-cs100-1x-introduction-big-6181
Ameet Talwalkar, UCLA – begins Jun 2015 edx.org/course/uc-berkeleyx/uc-berkeleyx-cs190-1x-scalable-machine-6066
Resources: Spark Packages
Looking for other libraries and features? There are a variety of third-party packages available at:
http://spark-packages.org/
confs:
Big Data Tech Con – Boston, Apr 26-28 bigdatatechcon.com
Strata EU – London, May 5-7 strataconf.com/big-data-conference-uk-2015
GOTO Chicago – Chicago, May 11-14 gotocon.com/chicago-2015
Spark Summit 2015 – SF, Jun 15-17 spark-summit.org
books+videos:
Fast Data Processing with Spark – Holden Karau, Packt (2013) shop.oreilly.com/product/9781782167068.do
Spark in Action – Chris Fregly, Manning (2015) sparkinaction.com/
Learning Spark – Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia, O'Reilly (2015) shop.oreilly.com/product/0636920028512.do
Intro to Apache Spark – Paco Nathan, O'Reilly (2015) shop.oreilly.com/product/0636920036807.do
Advanced Analytics with Spark – Sandy Ryza, Uri Laserson, Sean Owen, Josh Wills, O'Reilly (2014) shop.oreilly.com/product/0636920035091.do