Can’t We All Just Get Along?
Sandy Ryza

Introductions

• Software engineer at Cloudera
• MapReduce, YARN, resource management
• Hadoop committer


Bringing Computation to the Data

• Users want to:
  • ETL a dataset with Pig and MapReduce
  • Fit a model to it with Spark
  • Have BI tools query it with Impala
• The same set of machines that hold the data must also host these frameworks


Cluster Resource Management

• Hadoop brings generalized computation to big data
• More processing frameworks: MapReduce, Impala, Spark
• Some workloads are more important than others
• A cluster has finite resources: limited CPU, memory, disk, and network bandwidth
• How do we make sure each workload gets the resources it deserves?


Introduction

This talk is about our vision for Spark on Hadoop: we see it as a first-class data processing framework alongside MapReduce and Impala. Our goal is to have it run seamlessly with them. Most of the talk is about how it can do this already; the rest is about what we need in the future to make it better.


How We See It

[Diagram: Impala, MapReduce, and Spark running side by side on top of HDFS]


How They Want to See It

[Diagram: the cluster divided by share, Engineering at 50%, Finance at 30%, and Marketing at 20%, with each group running its own Spark, MapReduce, and Impala workloads over shared HDFS]


Central Resource Management

[Diagram: Impala, MapReduce, and Spark scheduled by YARN on top of HDFS]


YARN

• Resource manager and scheduler for Hadoop
• A “container” is a process scheduled on the cluster with a resource allocation (amount of memory in MB, number of cores)
• Each container belongs to an “application”
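The container abstraction above can be sketched as a small model: a container pairs a resource allocation (memory in MB, CPU cores) with the application it belongs to, and a node hands out containers until its capacity runs out. This is an illustrative sketch, not the real org.apache.hadoop.yarn API; all class and field names here are made up.

```java
import java.util.ArrayList;
import java.util.List;

public class ContainerModel {
    // A container: a resource allocation owned by an application.
    record Container(String applicationId, int memoryMb, int vcores) {}

    // A node grants containers as long as it has capacity left.
    static class Node {
        int freeMemoryMb;
        int freeVcores;
        final List<Container> running = new ArrayList<>();

        Node(int memoryMb, int vcores) {
            this.freeMemoryMb = memoryMb;
            this.freeVcores = vcores;
        }

        boolean allocate(Container c) {
            if (c.memoryMb() > freeMemoryMb || c.vcores() > freeVcores) {
                return false; // not enough room on this node
            }
            freeMemoryMb -= c.memoryMb();
            freeVcores -= c.vcores();
            running.add(c);
            return true;
        }
    }

    public static void main(String[] args) {
        Node node = new Node(8192, 8);
        // First request fits; the second exceeds remaining memory.
        System.out.println(node.allocate(new Container("app_0001", 4096, 4)));
        System.out.println(node.allocate(new Container("app_0002", 6144, 2)));
        System.out.println("free MB: " + node.freeMemoryMb);
    }
}
```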


YARN Application Masters

• Each YARN app has an “Application Master” (AM) process running on the cluster
• The AM is responsible for requesting containers from YARN
• AM creation latency is much higher than resource acquisition


YARN

[Diagram: a ResourceManager coordinating two NodeManagers; containers on the nodes host the Application Master, a Map Task, and a Reduce Task; a JobHistoryServer and a Client sit alongside]


YARN Queues

• Cluster resources are allocated to “queues”
• Each application belongs to a queue
• Queues may contain subqueues

Root: Mem Capacity 12 GB, CPU Capacity 24 cores
Marketing: Fair Share Mem 4 GB, Fair Share CPU 8 cores
R&D: Fair Share Mem 4 GB, Fair Share CPU 8 cores
Sales: Fair Share Mem 4 GB, Fair Share CPU 8 cores
Jim’s Team: Fair Share Mem 2 GB, Fair Share CPU 4 cores
Bob’s Team: Fair Share Mem 2 GB, Fair Share CPU 4 cores
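The fair-share numbers on this slide can be reproduced with a simple rule: assuming equal weights, each queue's fair share is its parent's share split evenly among siblings. The sketch below assumes Jim's and Bob's teams are subqueues of one of the three departments (the slide does not say which); the function and its shape are illustrative, not a real scheduler API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FairShares {
    // Computes memory fair shares (MB) for a two-level queue tree,
    // splitting each parent's share evenly among its children.
    static Map<String, Integer> shares(int rootMemMb, String[] children,
                                       String parentOfSub, String[] subqueues) {
        Map<String, Integer> out = new LinkedHashMap<>();
        int childShare = rootMemMb / children.length;
        for (String c : children) {
            out.put(c, childShare);
        }
        int subShare = childShare / subqueues.length;
        for (String s : subqueues) {
            out.put(parentOfSub + "." + s, subShare);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> s = shares(12 * 1024,
            new String[]{"Marketing", "R&D", "Sales"},
            "R&D", new String[]{"Jim's Team", "Bob's Team"});
        // Each department gets 4 GB; each team gets 2 GB, as on the slide.
        System.out.println(s);
    }
}
```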


YARN app models

• Application master (AM) per job
  • Simplest model, suited to batch
  • Used by MapReduce
• Application master per session
  • Runs multiple jobs on behalf of the same user
  • Recently added in Tez
• Long-lived
  • Permanently on, waits around for jobs to come in
  • Used by Impala


Spark Usage Modes

Mode         Long-Lived   Multiple Users
Batch        No           No
Interactive  Yes          No
Server       Yes          Yes


Spark on YARN

• Developed at Yahoo
• Application Master per SparkContext
• Container per Spark executor
• Currently useful for Spark batch jobs
• Requests all resources up front
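"Requests all resources up front" means the total ask is fixed at submission time and held whether or not work is running. A rough sketch of the arithmetic, assuming the historical Spark-on-YARN default of a per-executor memory overhead of max(384 MB, 10% of executor memory); treat these exact numbers and the function names as assumptions, not Spark's API:

```java
public class UpfrontRequest {
    // Per-container memory: executor memory plus an overhead cushion,
    // assumed here to default to max(384 MB, 10% of executor memory).
    static int containerMemoryMb(int executorMemoryMb) {
        int overhead = Math.max(384, executorMemoryMb / 10);
        return executorMemoryMb + overhead;
    }

    public static void main(String[] args) {
        int numExecutors = 10;
        int executorMemoryMb = 4096;
        int total = numExecutors * containerMemoryMb(executorMemoryMb);
        // The whole amount is requested when the app starts.
        System.out.println("MB requested up front: " + total);
    }
}
```

With 10 executors of 4 GB each, the app asks YARN for roughly 44 GB at startup and keeps it for its whole lifetime, which is exactly what the long-lived goals on the next slide try to avoid.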


Long-Lived Goals

• Hold on to only a few resources when we’re not running work
• Use much of the cluster (over fair share) when others aren’t using it
• Give back resources gracefully when preempted
• Get resources quickly when we need them


Mesos Fine-Grained Mode

• Allocate static chunks of memory at Spark app start time
• Schedule CPU dynamically when running tasks


Long-Lived Approach

• A YARN application master per Spark application (SparkContext)
• One executor per application per node
• One YARN container per executor


Long-Lived: YARN work

• YARN-1197: resizable containers
• YARN-896: long-lived YARN


Long-Lived: Spark Work

• YARN fine-grained mode
• Changes to support adjusting resources in the Spark AM
• Memory?


The Memory Problem

• We want memory allocations to be preemptible while the application keeps running
• RDDs are stored in JVM memory
• JVMs don’t give back memory


The Memory Solutions

• Rewrite Spark in C++
• Off-heap cache
  • Hold RDDs in executor processes in off-heap byte buffers
  • These can be freed and returned to the OS
• Tachyon
  • Executor processes don’t hold RDDs
  • Store data in Tachyon
  • Punts the off-heap problem to Tachyon
  • Has other advantages, like not losing data when an executor crashes
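The off-heap idea can be illustrated with the JVM's direct byte buffers: `ByteBuffer.allocateDirect` puts the bytes in native memory outside the garbage-collected heap, so the memory can be returned to the OS when the buffer is reclaimed. This is only a minimal sketch of the mechanism; a real cache would manage serialization and lifetime far more carefully.

```java
import java.nio.ByteBuffer;

public class OffHeapSketch {
    public static void main(String[] args) {
        // Allocate 1 MB of native (off-heap) memory.
        ByteBuffer block = ByteBuffer.allocateDirect(1024 * 1024);
        System.out.println("direct: " + block.isDirect());

        // Serialize fields into the buffer instead of keeping a
        // deserialized object on the JVM heap.
        block.putInt(42);
        block.putLong(123L);
        block.flip(); // switch from writing to reading

        System.out.println(block.getInt() + " " + block.getLong());

        // "Freeing": drop the reference; the native memory goes back to
        // the OS when the buffer is collected, unlike heap memory, which
        // the JVM typically keeps for itself.
        block = null;
    }
}
```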


Multiple User Challenges

• A single Spark application wants to run work on behalf of multiple parties
• Applications are typically billed to a single queue
• We’d want to bill jobs to different queues

TODO: picture here


Multiple Users with Impala

• Impala has same exact problem• Solution: Llama (Low Latency Application MAster)

• Adapter between YARN and Impala• Runs multiple AMs in a single process• Submits resource requests on behalf of relevant AM
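The adapter idea can be sketched as a multiplexer: one process hosts several logical AMs, and each resource request is submitted against the queue of the party it is for, rather than against one queue for the whole application. All names below are hypothetical; this is not the real Llama API.

```java
import java.util.HashMap;
import java.util.Map;

public class AmMultiplexer {
    // A resource request tagged with the queue it should be billed to.
    record Request(String queue, int memoryMb) {}

    private final Map<String, String> userToQueue = new HashMap<>();

    // Register a logical AM serving the given user against a queue.
    void register(String user, String queue) {
        userToQueue.put(user, queue);
    }

    // Submit on behalf of whichever logical AM serves this user, so the
    // request is billed to that user's queue.
    Request submit(String user, int memoryMb) {
        String queue = userToQueue.get(user);
        if (queue == null) {
            throw new IllegalArgumentException("unknown user: " + user);
        }
        return new Request(queue, memoryMb);
    }

    public static void main(String[] args) {
        AmMultiplexer mux = new AmMultiplexer();
        mux.register("jim", "root.finance");
        mux.register("bob", "root.marketing");
        System.out.println(mux.submit("jim", 2048).queue()); // billed to finance
        System.out.println(mux.submit("bob", 1024).queue()); // billed to marketing
    }
}
```

The point of the single process is the AM-creation latency mentioned earlier: spinning up a new AM per user would be slow, while routing through already-running logical AMs is not.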