
MapReduce Online

Tyson Condie and Neil Conway
UC Berkeley

Joint work with Peter Alvaro, Rusty Sears, Khaled Elmeleegy (Yahoo! Research), and Joe Hellerstein


MapReduce Programming Model

• Programmers think in a data-centric fashion
  – Apply transformations to data sets (a minimal example follows)

• The MR framework handles the Hard Stuff:
  – Fault tolerance
  – Distributed execution, scheduling, concurrency
  – Coordination
  – Network communication
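
To make the programming model concrete, here is a minimal word-count sketch in the classic org.apache.hadoop.mapred API (the generation of Hadoop that HOP extends). The API calls are standard Hadoop; the tokenization details are illustrative, not taken from the talk.

// Word count: the map transformation emits (word, 1) per token; the
// framework groups by key; the reduce transformation sums per-word counts.
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCount {
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    public void map(LongWritable offset, Text line,
                    OutputCollector<Text, IntWritable> out, Reporter reporter)
        throws IOException {
      // The transformation: emit (word, 1) for every token in the record.
      for (String word : line.toString().split("\\s+")) {
        if (!word.isEmpty()) out.collect(new Text(word), ONE);
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text word, Iterator<IntWritable> counts,
                       OutputCollector<Text, IntWritable> out, Reporter reporter)
        throws IOException {
      // The framework has already grouped records by key; sum the counts.
      int sum = 0;
      while (counts.hasNext()) sum += counts.next().get();
      out.collect(word, new IntWritable(sum));
    }
  }
}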


MapReduce System Model

• Designed for batch-oriented computations over large data sets
  – Each operator runs to completion before producing any output
  – Operator output is written to stable storage
    • Map output to local disk, reduce output to HDFS

• Simple, elegant fault tolerance model: operator restart
  – Critical for large clusters


Life Beyond Batch Processing

• Can we apply the MR programming model outside batch processing?

• Two domains of interest:
  1. Interactive data analysis
     • Enabled by high-level MR query languages, e.g. Hive, Pig, Jaql
     • Batch processing is a poor fit
  2. Continuous analysis of data streams
     • Batch processing adds massive latency
     • Requires saving and reloading analysis state


MapReduce Online

• Pipeline data between operators as it is produced
  – Decouple computation schedule (logical) from data transfer schedule (physical)

• Hadoop Online Prototype (HOP): Hadoop with pipelining support
  – Preserving the Hadoop interfaces and APIs
  – Challenge: retain elegant fault tolerance model

• Enables approximate answers and stream processing
  – Can also reduce the response times of jobs


Outline

1. Hadoop Background
2. HOP Architecture
3. Online Aggregation
4. Stream Processing with MapReduce
5. Future Work and Conclusion


Hadoop Architecture

• Hadoop MapReduce
  – Single master node, many worker nodes
  – Client submits a job to the master node (see the submission sketch below)
  – Master splits each job into tasks (map/reduce), and assigns tasks to worker nodes

• Hadoop Distributed File System (HDFS)
  – Single name node, many data nodes
  – Files stored as large, fixed-size (e.g. 64MB) blocks
  – HDFS typically holds map input and reduce output
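
A hedged sketch of the client-side submission step, using the classic JobConf/JobClient interface. The calls are standard Hadoop; the specific settings, and the reuse of the WordCount classes sketched earlier, are illustrative.

// Submitting a job to the master. JobClient.runJob blocks until the master
// reports completion; the master splits the job into tasks and assigns them.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class Submit {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");
    conf.setMapperClass(WordCount.Map.class);
    conf.setReducerClass(WordCount.Reduce.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setNumReduceTasks(4);                                // user-defined
    FileInputFormat.setInputPaths(conf, new Path(args[0]));   // map input (HDFS)
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));  // reduce output (HDFS)
    JobClient.runJob(conf);
  }
}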


Job Scheduling

• One map task for each block of the input file
  – Applies user-defined map function to each record in the block
  – Record = <key, value>

• User-defined number of reduce tasks
  – Each reduce task is assigned a set of record groups
    • Record group = all records with same key
  – For each group, apply user-defined reduce function to the record values in that group

• Reduce tasks read from every map task
  – Each read returns the record groups for that reduce task (see the partitioning sketch below)
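
Which reduce task a record group lands on is decided by a partition function over the key. The sketch below reproduces the behavior of Hadoop's default hash partitioner, shown here for illustration: the hash is made non-negative, then taken modulo the number of reduce tasks, so all records with the same key reach the same reducer.

// Hadoop's default partitioning rule, in the classic mapred API.
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class HashPartitioner<K, V> implements Partitioner<K, V> {
  public void configure(JobConf job) {}  // no configuration needed

  public int getPartition(K key, V value, int numReduceTasks) {
    // Mask off the sign bit, then bucket by reduce task count.
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}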


Dataflow in Hadoop

• Map tasks write their output to local disk
  – Output available only after the map task has completed

• Reduce tasks write their output to HDFS
  – Once a job is finished, the next job’s map tasks can be scheduled, and will read their input from HDFS

• Therefore, fault tolerance is simple: re-run tasks on failure
  – No consumers see partial operator output

[Diagram sequence — Dataflow in Hadoop:
 1. The client submits a job; the master schedules map and reduce tasks onto worker nodes.
 2. Map tasks read their input blocks (Block 1, Block 2, …) from HDFS.
 3. Each map task writes its output to the local file system, then reports “finished” and the output location to the master, which forwards the location to the reduce tasks.
 4. Reduce tasks fetch their portions of the map outputs from the maps’ local file systems via HTTP GET.
 5. Reduce tasks write the final answer to HDFS.]

Hadoop Online Prototype (HOP)


Hadoop Online Prototype

• HOP supports pipelining within and between MapReduce jobs: push rather than pull
  – Preserve simple fault tolerance scheme
  – Improved job completion time (better cluster utilization)
  – Improved detection and handling of stragglers

• MapReduce programming model unchanged
  – Clients supply the same job parameters

• Hadoop client interface backward compatible
  – No changes required to existing clients
    • E.g., Pig, Hive, Sawzall, Jaql
  – Extended to take a series of jobs


Pipelining Batch Size

• Initial design: pipeline eagerly (for each row)
  – Prevents use of combiner
  – Moves more sorting work to mapper
  – Map function can block on network I/O

• Revised design: map writes into buffer
  – Spill thread: sort & combine buffer, spill to disk
  – Send thread: pipeline spill files => reducers

• Simple adaptive algorithm (sketched below)
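
A hedged sketch of the revised design's division of labor. The thread structure follows the slide, but all names and the batching heuristic are assumptions, not HOP source: a spill thread drains the map-side buffer (sort, combine, spill), and a send thread pipelines finished spills; if the send thread falls behind, the backlog is merged into larger batches before sending, which is one plausible reading of the "simple adaptive algorithm".

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class PipelinedMapOutput {
  private final BlockingQueue<File> pendingSpills = new LinkedBlockingQueue<>();

  // Spill thread: called whenever the map-side buffer fills up.
  void onBufferFull(RecordBuffer buffer) {
    buffer.sort();     // sort by (partition, key)
    buffer.combine();  // apply the combiner, if one is defined
    pendingSpills.add(buffer.writeToDisk());
  }

  // Send thread: push spills to reducers as fast as the network allows.
  void sendLoop() throws InterruptedException {
    while (true) {
      File spill = pendingSpills.take();
      if (!pendingSpills.isEmpty()) {
        // Network is the bottleneck: adaptively merge the backlog so
        // reducers receive fewer, larger transfers.
        spill = mergeWithBacklog(spill);
      }
      pushToReducers(spill);
    }
  }

  private File mergeWithBacklog(File first) {
    List<File> batch = new ArrayList<>();
    batch.add(first);
    pendingSpills.drainTo(batch);  // take the whole backlog
    return mergeSpills(batch);
  }

  // Hypothetical helpers, assumed for the sketch.
  private File mergeSpills(List<File> runs) { /* merge-sort the runs */ return runs.get(0); }
  private void pushToReducers(File spill) { /* one push per reduce partition */ }
}

// Hypothetical buffer type assumed by the sketch.
class RecordBuffer {
  void sort() {}
  void combine() {}
  File writeToDisk() { return new File("spill"); }
}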


Fault Tolerance

• Fault tolerance in MR is simple and elegant
  – Simply recompute on failure, no state recovery

• Initial design for pipelining FT:
  – Reduce treats in-progress map output as tentative

• Revised design:
  – Pipelining maps periodically checkpoint output
  – Reducers can consume output <= checkpoint
  – Bonus: improved speculative execution

Dataflow in HOP

[Diagram: the master’s schedule message to reduce tasks now includes map task locations; each reduce task sends a pipeline request directly to the map tasks, which push their output to the reducers as it is produced.]


Online Aggregation

• Traditional MR: poor UI for data analysis

• Pipelining means that data is available at consumers “early”
  – Can be used to compute and refine an approximate answer
  – Often sufficient for interactive data analysis, developing new MapReduce jobs, ...

• Within a single job: periodically invoke reduce function at each reduce task on available data

• Between jobs: periodically send a “snapshot” to consumer jobs


Intra-Job Online Aggregation

• Approximate answers published to HDFS by each reduce task
  – Based on job progress: e.g. 10%, 20%, …

• Challenge: providing statistically meaningful approximations
  – How close is an approximation to the final answer?
  – How do you avoid biased samples?

• Challenge: reduce functions are opaque
  – Ideally, computing the 20% approximation should reuse results of the 10% approximation
  – Either use combiners (see the sketch below), or HOP does redundant work
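
One way combiners enable that reuse is to keep the running aggregate in a mergeable, algebraic form. A minimal sketch under that assumption (illustrative, not HOP code): an average represented as a (sum, count) pair, so the partial state behind a 10% snapshot can be merged with new data to produce the 20% snapshot instead of recomputing from scratch.

// A mergeable partial aggregate for computing an average.
class PartialAverage {
  double sum;
  long count;

  // Combiner step: merging is associative and commutative, so partial
  // aggregates can be folded together in any order.
  void merge(PartialAverage other) {
    sum += other.sum;
    count += other.count;
  }

  void add(double value) {
    sum += value;
    count += 1;
  }

  // Final step: turn the running state into the current estimate.
  double estimate() {
    return count == 0 ? 0.0 : sum / count;
  }
}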

Online Aggregation in HOP

[Diagram: map tasks read input blocks (Block 1, Block 2) from HDFS; as the job progresses, reduce tasks periodically write snapshot answers to HDFS.]

Inter-Job Online Aggregation

[Diagram: Job 1 reducers pipeline their output directly to Job 2 mappers; Job 2 writes its answer to HDFS.]


Inter-Job Online Aggregation

• Like intra-job OA, but approximate answers are pipelined to map tasks of next job
  – Requires co-scheduling a sequence of jobs

• Consumer job computes an approximation
  – Can be used to feed an arbitrary chain of consumer jobs with approximate answers

• Challenge: how to avoid redundant work
  – Output of reduce for 10% progress vs. for 20%


Example Scenario

• Top K most frequent words in a 5.5GB Wikipedia corpus (implemented as 2 MR jobs)

• 60-node EC2 cluster


Stream Processing

• MapReduce is often applied to streams of data that arrive continuously
  – Click streams, network traffic, web crawl data, …

• Traditional approach: buffer, batch process
  1. Poor latency
  2. Analysis state must be reloaded for each batch

• Instead, run MR jobs continuously, and analyze data as it arrives


Why?

• Why use MapReduce for stream processing?
  1. Many existing MR use cases are a good fit
  2. Ability to run user-defined code
     • Machine learning, graph analysis, unstructured data
  3. Massive scale + low-latency analysis
  4. Use existing MapReduce tools and libraries


Stream Processing with HOP

• Map and reduce tasks run continuously

• Reduce function divides stream into windows
  – “Every 30 seconds, compute the 1, 5, and 15 minute average network utilization; trigger an alert if …”
  – Window management done by user (reduce); a sketch follows
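
A hedged sketch of user-managed windowing inside a continuously running reduce task, reusing the PartialAverage sketch from the online aggregation discussion. The record shape, method names, and window-closing trigger are illustrative assumptions, not HOP code.

// Bucket incoming (timestamp, utilization) records into 30-second windows
// and emit an average per window. Window bookkeeping lives in user code,
// as the slide says.
import java.util.HashMap;
import java.util.Map;

class WindowedAverageReducer {
  private static final long WINDOW_MS = 30_000;
  private final Map<Long, PartialAverage> windows = new HashMap<>();

  // Called for each record pushed down the pipeline.
  void reduce(long timestampMs, double utilization) {
    long windowId = timestampMs / WINDOW_MS;
    windows.computeIfAbsent(windowId, id -> new PartialAverage())
           .add(utilization);
  }

  // Called when a window closes: emit its average, discard its state.
  void closeWindow(long windowId) {
    PartialAverage p = windows.remove(windowId);
    if (p != null) {
      emit(windowId * WINDOW_MS, p.estimate());
    }
  }

  private void emit(long windowStartMs, double avg) {
    System.out.printf("window @%d: avg utilization %.3f%n", windowStartMs, avg);
  }
}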


Stream Processing Challenges

1. How to store stream input?
   – HDFS is not ideal

2. Fault tolerance for long-running tasks
   – Operator restart increasingly expensive

3. Elastic scale-up / scale-down during MR job


#1: Storing Stream Input

• Current approach: colocate map task and data producer
  – Apply map function, partition => reduce task
  – Fault tolerance: fate share
  – “Pushdown” predicates and scalar transforms
  – Total order = single reduce task

• User-defined code at data producer = bad?
  – Fault-tolerant “buffer” (map task), coordination


#2: Fault Tolerance for Streams

• Operator restart for long-running reduces: too expensive

• Hence, window-oriented fault tolerance
  – Reducers label windows with IDs
  – Mappers use window IDs to garbage collect spills (sketched below)

• Probably need fault-tolerant Job Tracker and HDFS Name Node
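
A hedged sketch of the window-ID garbage collection idea: spill files are tagged with the window they belong to, and once a reducer acknowledges window w, no spill at or below w can ever be re-requested, so the mapper deletes them. All names here are illustrative assumptions.

import java.io.File;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class SpillGarbageCollector {
  // window ID -> spill files for that window, kept sorted by window ID.
  private final TreeMap<Long, List<File>> spillsByWindow = new TreeMap<>();

  void recordSpill(long windowId, File spill) {
    spillsByWindow.computeIfAbsent(windowId, id -> new ArrayList<>()).add(spill);
  }

  // The reducer has acknowledged every window up to and including ackedWindow.
  void onWindowAcked(long ackedWindow) {
    Iterator<Map.Entry<Long, List<File>>> it =
        spillsByWindow.headMap(ackedWindow, true).entrySet().iterator();
    while (it.hasNext()) {
      for (File spill : it.next().getValue()) {
        spill.delete();  // this spill can no longer be needed for recovery
      }
      it.remove();
    }
  }
}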


#3: Intra-Job Elasticity

• Peak load != average load
  – Increasingly important as job duration grows

• Solution: consistent hashing over reduce key space
  – Job Tracker manages reduce key => task mapping (a minimal sketch follows)

• Useful for regular Hadoop as well
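
A minimal sketch of consistent hashing over the reduce key space, shown as a standard hash-ring illustration rather than HOP code. Each reduce task owns several points on a ring; a key routes to the first task clockwise from its hash, so adding or removing a task remaps only the keys adjacent to its ring positions, which is what makes mid-job scale-up/scale-down cheap.

import java.util.SortedMap;
import java.util.TreeMap;

class ReduceKeyRing {
  private final TreeMap<Integer, Integer> ring = new TreeMap<>();  // hash point -> task id
  private static final int VIRTUAL_NODES = 16;  // points per task, smooths the load

  void addTask(int taskId) {
    for (int i = 0; i < VIRTUAL_NODES; i++) {
      ring.put(hash(taskId + ":" + i), taskId);
    }
  }

  void removeTask(int taskId) {
    for (int i = 0; i < VIRTUAL_NODES; i++) {
      ring.remove(hash(taskId + ":" + i));
    }
  }

  // Route a reduce key to the task owning the next ring point clockwise.
  int taskForKey(Object key) {
    int h = hash(String.valueOf(key));
    SortedMap<Integer, Integer> tail = ring.tailMap(h);
    return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
  }

  private static int hash(String s) {
    return s.hashCode() & Integer.MAX_VALUE;  // toy hash; a real ring would use MD5/Murmur
  }
}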


Other HOP Benefits

• Shorter job completion time via improved cluster utilization: reduce work starts early
  – Important for high-priority jobs, interactive jobs

• Adaptive load management
  – Better detection and handling of “straggler” tasks
  – Elastic scale-up/scale-down: better pre-emption
  – Decouple unit of data transfer from unit of scheduling
    • E.g. Yahoo! Petasort: 15GB/map task


Sort Performance: Blocking

• 60-node EC2 cluster, 5.5GB input file
• 40 map tasks, 59 reduce tasks


Sort Performance: Pipelining

• Job completion time: 927 seconds (blocking) vs. 610 seconds (pipelining)


Future Work

1. Basic pipelining
   – Performance analysis at scale (e.g. PetaSort)
   – Job scheduling is much harder

2. Online Aggregation
   – Statistically-robust estimation
   – Better UI for approximate results

3. Stream Processing
   – Develop into a full-fledged stream processing engine
   – Stream support for high-level query languages
   – Online machine learning


Thanks!

Questions?

Source code and technical report: http://code.google.com/p/hop/

Contact: [email protected]


Map Task Execution

1. Map phase
   – Read the assigned input split from HDFS
     • Split = file block by default
   – Parse input into records (key/value pairs)
   – Apply map function to each record
     • Returns zero or more new records

2. Commit phase
   – Register the final output with the slave node
     • Stored in the local filesystem as a file
     • Sorted first by bucket number, then by key (see the comparator sketch below)
   – Inform master node of its completion
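
A minimal sketch of that sort order, assumed for illustration: sorting first by bucket (reduce partition) number and then by key means each reducer's portion of the output file is one contiguous, key-sorted range.

import java.util.Comparator;

class MapOutputRecord {
  final int bucket;   // reduce task this record is destined for
  final String key;
  final String value;
  MapOutputRecord(int bucket, String key, String value) {
    this.bucket = bucket; this.key = key; this.value = value;
  }

  // Sort by bucket number first, then by key within each bucket.
  static final Comparator<MapOutputRecord> SORT_ORDER =
      Comparator.<MapOutputRecord>comparingInt(r -> r.bucket)
                .thenComparing(r -> r.key);
}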


Reduce Task Execution

1. Shuffle phase
   – Fetch input data from all map tasks
     • The portion corresponding to the reduce task’s bucket

2. Sort phase
   – Merge-sort *all* map outputs into a single run

3. Reduce phase
   – Apply user reduce function to the merged run
     • Arguments: key and corresponding list of values
   – Write output to a temp file in HDFS
     • Atomic rename when finished


Design Implications

1. Fault Tolerance
   – Tasks that fail are simply restarted
   – No further steps required, since nothing left the task

2. “Straggler” handling
   – Job response time is affected by slow tasks
   – Slow tasks get executed redundantly
     • Take the result from the first copy to finish
     • Assumes slowdown is due to physical components (e.g., network, host machine)

• Pipelining can support both!


Fault Tolerance in HOP

• Traditional fault tolerance algorithms for pipelined dataflow systems are complex

• HOP approach: write to disk and pipeline
  – Producers write data into an in-memory buffer
  – In-memory buffer periodically spilled to disk
  – Spills sent to consumers
  – Consumers treat pipelined data as “tentative” until producer is known to complete
  – Fault tolerance via task restart; tentative output discarded


Refinement: Checkpoints

• Problem: Treating output as tentative inhibits parallelism

• Solution: Producers periodically “checkpoint” with Hadoop master node
  – “Output split x corresponds to input offset y”
  – Pipelined data <= split x is now non-tentative
  – Also improves speculation for straggler tasks, and reduces redundant work on task failure (a sketch of the checkpoint record follows)
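
A hedged sketch of what such a checkpoint record might carry, assumed for illustration rather than taken from HOP: it binds an output spill to the input offset the map had consumed when the spill was produced, so a restarted map can resume from the last checkpointed offset and reducers keep, rather than discard, all pipelined data at or below the checkpointed spill.

// The "output split x corresponds to input offset y" report, as a record.
class MapCheckpoint {
  final int taskId;        // which map task is reporting
  final int outputSpillId; // "output split x" on the slide
  final long inputOffset;  // "input offset y": bytes of the input split consumed

  MapCheckpoint(int taskId, int outputSpillId, long inputOffset) {
    this.taskId = taskId;
    this.outputSpillId = outputSpillId;
    this.inputOffset = inputOffset;
  }
}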