Intro to Hadoop Presentation at Carnegie Mellon - Silicon Valley


Description: Introduction to Hadoop presentation at Carnegie Mellon University, Silicon Valley Campus.

Transcript of Intro to Hadoop Presentation at Carnegie Mellon - Silicon Valley

  • Introduction to Apache Hadoop and its Ecosystem. Mark Grover | Intro to Cloud Computing, Carnegie Mellon SV. github.com/markgrover/hadoop-intro-fast. Copyright 2010-2014 Cloudera, Inc. All rights reserved.
  • About Me: Committer on Apache Bigtop; committer and PPMC member on Apache Sentry (incubating). Contributor to Apache Hadoop, Hive, Spark, Sqoop, Flume. Software developer at Cloudera. @mark_grover, www.linkedin.com/in/grovermark
  • Co-author of an O'Reilly book: @hadooparchbook, hadooparchitecturebook.com. To be released early 2015.
  • About the Presentation. What's ahead: Fundamental Concepts; HDFS: The Hadoop Distributed File System; Data Processing with MapReduce; Demo; Conclusion + Q&A.
  • Fundamental Concepts: Why the World Needs Hadoop
  • What's the craze about Hadoop? Volume: more and more data being generated, with machine-generated data increasing. Velocity: data coming in at higher speed. Variety: audio, video, images, log files, web pages, social network connections, etc.
  • We Need a System that Scales: There is too much data for traditional tools. Two key problems: how do we reliably store this data at a reasonable cost, and how do we process all the data we've stored?
  • What is Apache Hadoop? Scalable data storage and processing. Distributed and fault-tolerant. Runs on standard hardware. Two main components: storage, the Hadoop Distributed File System (HDFS), and processing, MapReduce. Hadoop clusters are composed of computers called nodes; clusters range from a single node up to several thousand nodes.
  • How Did Apache Hadoop Originate? Heavily influenced by Google's architecture, notably the Google File System and MapReduce papers. Other Web companies quickly saw the benefits; early adoption by Yahoo, Facebook, and others. Timeline: 2002, Nutch spun off from Lucene; 2003, Google publishes GFS paper; 2004, Google publishes MapReduce paper; 2005, Nutch rewritten for MapReduce; 2006, Hadoop becomes a Lucene subproject.
  • Comparing Hadoop to Other Systems: Monolithic systems don't scale. Modern high-performance computing (HPC) systems are distributed: they spread computations across many machines in parallel and are widely used for scientific applications. Let's examine how a typical HPC system works.
  • Architecture of a Typical HPC System (diagram: compute nodes connected to a storage system over a fast network)
  • Architecture of a Typical HPC System, Step 1: Copy input data from the storage system to the compute nodes
  • Architecture of a Typical HPC System, Step 2: Process the data on the compute nodes
  • Architecture of a Typical HPC System, Step 3: Copy output data back to the storage system
  • You Don't Just Need Speed: The problem is that we have way more data than code:
    $ du -ks code/
    1,087
    $ du -ks data/
    854,632,947,314
  • You Need Speed At Scale (diagram: the network link between the compute nodes and the storage system becomes the bottleneck)
  • Hadoop Design Fundamental: Data Locality. This is a hallmark of Hadoop's design: don't bring the data to the computation; bring the computation to the data. Hadoop uses the same machines for storage and processing, which significantly reduces the need to transfer data across the network.
  • Other Hadoop Design Fundamentals: Machine failure is unavoidable, so embrace it and build reliability into the system. More is usually better than faster: throughput matters more than latency.
  • The Hadoop Distributed Filesystem (HDFS)
  • HDFS: Hadoop Distributed File System. Inspired by the Google File System. Reliable, low-cost storage for massive amounts of data. Similar to a UNIX filesystem in some ways: hierarchical UNIX-style paths (e.g., /sales/alice.txt) and UNIX-style file ownership and permissions.
  • HDFS: Hadoop Distributed File System. There are also some major deviations from UNIX filesystems: highly optimized for processing data with MapReduce; designed for sequential access to large files; file content cannot be modified once written; it's actually a user-space Java process; accessed using special commands or APIs; no concept of a current working directory.
  • Copying Local Data To and From HDFS: Remember that HDFS is distinct from your local filesystem. hadoop fs -put copies local files to HDFS; hadoop fs -get fetches a local copy of a file from HDFS. (Diagram: a client machine running the commands against a Hadoop cluster.)
    $ hadoop fs -put sales.txt /reports
    $ hadoop fs -get /reports/sales.txt
  • HDFS Demo: I will now demonstrate the following (the corresponding commands are sketched below): 1. How to list the contents of a directory. 2. How to create a directory in HDFS. 3. How to copy a local file to HDFS. 4. How to display the contents of a file in HDFS. 5. How to remove a file from HDFS.
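For reference, the five demo steps map onto the hadoop fs subcommands below. This is a minimal sketch: the names sales.txt and /reports are placeholders carried over from the previous slide, not part of the demo itself.

    $ hadoop fs -ls /                       # 1. list the contents of a directory
    $ hadoop fs -mkdir /reports             # 2. create a directory in HDFS
    $ hadoop fs -put sales.txt /reports     # 3. copy a local file to HDFS
    $ hadoop fs -cat /reports/sales.txt     # 4. display the contents of a file in HDFS
    $ hadoop fs -rm /reports/sales.txt      # 5. remove a file from HDFS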
  • A Scalable Data Processing Framework: Data Processing with MapReduce
  • What is MapReduce? MapReduce is a programming model: it's a way of processing data. You can implement MapReduce in any language.
  • Understanding Map and Reduce: You supply two functions to process data, Map and Reduce. Map is typically used to transform, parse, or filter data; Reduce is typically used to summarize results. The Map function always runs first; the Reduce function runs afterwards, but is optional. Each piece is simple, but they can be powerful when combined.
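A rough single-machine analogy (my own illustration, not from the slides) may help: in the Unix pipeline below, the awk stage plays the role of Map (parse each record and emit a key), sort stands in for the shuffle that groups identical keys, and uniq -c plays the role of Reduce (summarize each group). The file access.log, with the requesting IP in its first field, is an assumed example.

    $ awk '{print $1}' access.log | sort | uniq -c    # requests per IP: "map", "shuffle", "reduce"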
  • MapReduce Benefits: Scalability: Hadoop divides the processing job into individual tasks, and tasks execute in parallel (independently) across the cluster. Simplicity: your code processes one record at a time. Ease of use: Hadoop provides job scheduling and other infrastructure, which is far simpler for developers than typical distributed computing.
  • MapReduce in Hadoop: MapReduce processing in Hadoop is batch-oriented. A MapReduce job is broken down into smaller tasks; tasks run concurrently, and each processes a small amount of the overall input. MapReduce code for Hadoop is usually written in Java, which uses Hadoop's API directly. You can do basic MapReduce in other languages using the Hadoop Streaming wrapper program. Some advanced …
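As a sketch of the Streaming wrapper mentioned above (not from the slides): the two pipeline stages from the earlier analogy can be handed to Hadoop as the mapper and reducer, with the framework supplying the sort/shuffle between them. The streaming jar location and the HDFS paths below are assumptions that vary by installation.

    $ hadoop jar hadoop-streaming.jar \
        -input /logs/access.log \
        -output /logs/ip-counts \
        -mapper "awk '{print \$1}'" \
        -reducer "uniq -c"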