Transcript of "A Scalable Workflow Scheduling and Gridding of CALIPSO Lidar/Infrared Data"

Page 1:

A Scalable Workflow Scheduling and Gridding of CALIPSO Lidar/Infrared Data

PI: Prof. Yelena Yesha and Prof. Milton Halem. Sponsored by NASA.

Presented by: Phuong Nguyen and Frank Harris

IAB Meeting Research Report

Dec 18, 2012

Page 2:

Project objectives

• Apply the scalable workflow scheduling system developed by CHMPR/MC2 (implemented on top of Hadoop) to a real Big Data scientific use case:
  – analyze global climate change from decadal satellite infrared radiance records stored in two distinct archives, obtained from the AIRS and MODIS instruments
  – perform gridding and subsequent monthly, seasonal, and annual trend inter-comparisons with surface temperatures from ground-station records, and compare with model reanalysis output
• Grid other satellite data, such as CALIPSO Lidar aerosols, and deliver gridded data products

Page 3:

A Scalable Workflow Scheduling System


Page 4:

A Scalable Workflow Scheduling System

We have developed a scalable workflow scheduling system, implemented on top of Apache Hadoop, that expresses and dynamically schedules parallel data-intensive workflow computations:

• data flows expressed as a Directed Acyclic Graph, rather than control flow
• optimizes the level of concurrency
• shares cluster resources using fine-grained scheduling (HybS)
• supports scientific data formats (e.g., HDF) and computation using float arrays
• includes a performance prediction model
• Available Java APIs: DagJob, DagBuilder, Graphs, …
• Libraries: gridding, statistics routines, statistical models
• Available HybS Hadoop plug-in scheduler, configurable to work as the Hadoop scheduler in the current Hadoop distribution (1.0.1)
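The DAG-based data-flow scheduling above can be sketched in miniature. This is illustrative Python, not the project's Java DagJob/DagBuilder API, and the job names are invented: jobs whose inputs are all ready form a "wave" that can be dispatched concurrently, which is the level of concurrency the scheduler exploits.

```python
def schedule_dag(jobs, deps):
    """Topologically order jobs, grouping those that can run concurrently.

    jobs: iterable of job names; deps: dict mapping a job to the set of
    jobs whose output it consumes. Returns a list of 'waves'; every job
    in a wave has all of its inputs ready and can run in parallel.
    """
    indegree = {j: len(deps.get(j, ())) for j in jobs}
    children = {j: [] for j in jobs}
    for job, prereqs in deps.items():
        for p in prereqs:
            children[p].append(job)
    wave = [j for j in jobs if indegree[j] == 0]
    waves = []
    while wave:
        waves.append(sorted(wave))
        nxt = []
        for j in wave:
            for c in children[j]:
                indegree[c] -= 1
                if indegree[c] == 0:    # all inputs now available
                    nxt.append(c)
        wave = nxt
    return waves

# Hypothetical workflow: both gridding jobs feed monthly statistics,
# which feed the trend analysis.
waves = schedule_dag(
    ["grid_airs", "grid_modis", "monthly_stats", "trend_analysis"],
    {"monthly_stats": {"grid_airs", "grid_modis"},
     "trend_analysis": {"monthly_stats"}},
)
```

The first wave contains both gridding jobs, so they run concurrently; the statistics and analysis jobs follow in dependency order.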

Page 5:

Use case: global climate change from AIRS and MODIS

• Atmospheric Infrared Sounder (AIRS)
  – 14–40 km footprint
  – 2378 IR spectral channels
  – 5.5 TB/year (L1B)
  – 55 terabytes; 876,000 HDF files, each 135x90x2378 (28,892,700 elements), ~60 MB
• Moderate Resolution Imaging Spectroradiometer (MODIS)
  – 1–4 km footprint (infrared)
  – 16 IR spectral channels
  – 17 TB/year (L1B)
  – 170 terabytes; 1,051,200 HDF files
• Data product: 10-year AIRS FCDR anomalies at 0.5°x1.0° lon-lat (~100 km), 2002-2012
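As a quick consistency check on the granule numbers above (the ~2 bytes per stored radiance value is an assumption about the HDF layout, which also adds metadata overhead):

```python
# One AIRS L1B granule: 135 scanlines x 90 footprints x 2378 channels.
elements = 135 * 90 * 2378          # matches the 28,892,700 on the slide
approx_mb = elements * 2 / 1e6      # assuming ~2 bytes per radiance value
print(elements, round(approx_mb))   # ~58 MB, consistent with ~60 MB files
```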

Page 6:

Global climate changes from AIRS and MODIS


[Diagram: cloud computing environment data flow — NASA's FTP servers feed downloading schedulers; input granule data files land in the cloud's storage; gridding and scientific computational algorithms produce science gridded data archive files, consumed by scientific analysis, visualization, and plotting tools; other data storage systems also connect.]

[Plot: AIRS global average brightness temperature, 2003-2007, ranging roughly 282-283.4 K.]
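The data flow in the diagram can be sketched as a chain of stage stubs. All function names and URLs here are illustrative placeholders, not the project's actual components:

```python
def run_pipeline(granule_urls):
    """Sketch of the slide's data flow: download granules from FTP,
    stage them into cloud storage, grid them, then analyze. Each
    stage is a stub standing in for the real component."""
    def download(urls):            # 'Downloading schedulers'
        return [u.rsplit("/", 1)[-1] for u in urls]

    def stage(files):              # 'Cloud's storage'
        return {"staged": files}

    def grid(storage):             # 'Gridding'
        return {f: "gridded" for f in storage["staged"]}

    def analyze(gridded):          # 'Scientific analysis, visualization'
        return sorted(gridded)

    return analyze(grid(stage(download(granule_urls))))

# Hypothetical granule URLs, for illustration only.
files = run_pipeline(["ftp://nasa/airs/g1.hdf", "ftp://nasa/airs/g2.hdf"])
```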

Page 7:

AIRS gridding using MapReduce approaches

• Step 1: Upload AIRS/MODIS HDF files in parallel from NFS/PVFS into Hadoop HDFS
• Step 2: Run AIRS/MODIS gridding as MapReduce jobs; output is written to HBase tables
• Step 3: Either analyze the gridded data from the HBase tables, or load data out of HBase/HDFS to store HDF files in NFS/PVFS for other analysis
• Gridding using MapReduce:
  – The Map function takes an HDF file as input and outputs (key, value) pairs: the key is a grid cell (lat x lon); the value is an array of sums and counts of radiances for all spectral channels
  – The Reduce function averages all values with the same key and writes the output into HBase tables
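The Map/Reduce gridding logic described above can be sketched in plain Python as an illustrative stand-in for the Hadoop jobs; the dict below plays the role of the shuffle, and the data values are invented:

```python
from collections import defaultdict

def map_granule(granule):
    """Map: one granule's observations in, (grid-cell key, partial
    sums) out. Each observation is (lat, lon, per-channel radiances);
    the key is the integer lat x lon cell, and the value carries the
    radiance vector plus a count so the reducer can average."""
    for lat, lon, radiances in granule:
        cell = (int(lat), int(lon))      # 1-degree cell, truncated
        yield cell, (radiances, 1)

def reduce_cell(cell, values):
    """Reduce: average all radiance vectors that fell into one cell
    (the row written to the HBase table in the real system)."""
    sums, count = None, 0
    for radiances, n in values:
        if sums is None:
            sums = list(radiances)
        else:
            sums = [s + r for s, r in zip(sums, radiances)]
        count += n
    return cell, [s / count for s in sums]

# Tiny driver standing in for Hadoop's shuffle, with made-up values.
granule = [(10.2, 20.7, [100.0, 200.0]), (10.9, 20.1, [300.0, 400.0])]
shuffled = defaultdict(list)
for key, val in map_granule(granule):
    shuffled[key].append(val)
gridded = dict(reduce_cell(k, v) for k, v in shuffled.items())
```

Both sample observations fall in cell (10, 20), so the reducer emits the per-channel averages for that cell.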

Page 8:

Spatial data locality

• A bounding box is implemented
• Reduce locally before the shuffle
• Output is stored in HBase tables for queries, e.g., monthly, seasonal, and annual trend inter-comparisons


Image source: David Chapman
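The "reduce locally before shuffle" idea, combined with the bounding-box filter, can be sketched as a combiner over one map task's output. The names and the (sum, count) encoding are illustrative:

```python
def combine_local(pairs, bbox):
    """Combiner sketch: within one map task, drop observations outside
    the bounding box and merge partial (sum, count) pairs per grid cell,
    so less data crosses the network during the shuffle.
    bbox = (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bbox
    merged = {}
    for (lat, lon), (total, count) in pairs:
        if not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max):
            continue                      # outside the bounding box
        s, c = merged.get((lat, lon), (0.0, 0))
        merged[(lat, lon)] = (s + total, c + count)
    return merged

# Two observations in cell (10, 20) merge; the (80, 20) one is filtered.
local = combine_local(
    [((10, 20), (100.0, 1)), ((10, 20), (300.0, 1)), ((80, 20), (5.0, 1))],
    bbox=(0, 60, 0, 60),
)
```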

Page 9:

Improvement of AIRS gridding

• Performance estimated based on daily gridding
• Used a bounding box for spatial data locality
• Gridding: compared with the regular embarrassingly parallel method, a 35% improvement in total processing time
• Benefits: scaling, failure handling, gridding at high resolutions, and queries by random data access on HBase tables

Page 10:

Gridding CALIPSO Lidar Aerosols: Background

● Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) is an infrared/lidar satellite, a joint project between NASA and CNES (France)
● Fourth satellite in the A-train formation; follows CloudSat by 15 s and Aqua by 165 s
● Launched in 2006

Page 11:

Instruments

● Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP)
  – Detects reflectance of 20 ns laser pulses at 1064 nm (IR) and 532 nm (visible)
  – 333 m footprints at full spatial resolution
● Imaging Infrared Radiometer (IIR)
  – Provides a 3-channel infrared product at 8.65, 10.6, and 12.05 μm at 1 km spatial resolution
● Wide Field Camera (WFC)
  – 1-channel visible product at 1 km resolution

Page 12:

Progress

● Developed a serial gridder in C, tested on a subset of the IIR data
● Acquired 14 months of IIR data: 333 days, averaging 1.5 GB per day, ~500 GB total
● In addition, 2 months of CALIPSO data downloaded, for a total of 625 GB (~3.7 TB/year)

Page 13:

Gridded Product

Full 360x180 degree image:
● At full resolution, the image is 36000x18000 pixels and 2.4 GB in size
● Shows the expected swath path for a sun-synchronous satellite
● Shows the limited coverage of nadir imaging

Subset of the gridded image:
● Shows high detail within individual swaths
● Also shows significant moiré interference as a result of the gridding algorithm
● Plan to improve gridding via inverse-distance-weighting interpolation in the near future
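The planned inverse-distance-weighting interpolation can be sketched as follows. This is a minimal version using squared lat/lon distance; the real gridder would presumably use proper great-circle distances and a neighbor search rather than all samples:

```python
def idw(query, samples, power=2.0):
    """Inverse-distance-weighted interpolation, the fix proposed above
    for the moire artifacts: a query pixel's value is the weighted mean
    of nearby swath samples, with weight = 1 / distance**power.
    samples: list of ((lat, lon), value)."""
    qlat, qlon = query
    num = den = 0.0
    for (lat, lon), value in samples:
        d2 = (lat - qlat) ** 2 + (lon - qlon) ** 2
        if d2 == 0.0:
            return value               # query lands exactly on a sample
        w = d2 ** (-power / 2.0)       # 1 / distance**power
        num += w * value
        den += w
    return num / den

# A sample at distance 1 gets 4x the weight of one at distance 2,
# so the result sits much closer to the nearer sample's value.
v = idw((0.0, 0.0), [((0.0, 1.0), 10.0), ((0.0, 2.0), 40.0)])
```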

Page 14:

What's Next

● Acquire the rest of the dataset (3 TB IIR, 22 TB CALIOP)
● Once the naïve sequential approach is done, process using MapReduce
● Interference, sparse coverage, and file-size problems can be addressed by significantly lowering the product resolution to 1°x1°
● Use the NCAR Graphics library instead of reusing built-from-scratch internal code
● Produce gridded products: monthly and yearly averages
● Possible scientific applications: solar reflectances to generate cloud maps; using altimetry data from CALIOP as a correction for existing datasets

Page 15:

Project Status

• Developed AIRS and MODIS gridding and analysis using MapReduce approaches (making use of the workflow system)
• Showed gridding CALIPSO using a serial approach
• Future work:
  – grid CALIPSO using the MapReduce approach
  – test, evaluate, and produce data products
  – Phuong Nguyen is working on open-sourcing the workflow system and the HybS Hadoop plug-in scheduler

Page 16:

Publications

• Phuong Nguyen, Tyler Simon, Milton Halem, David Chapman, and Quang Le, "A Hybrid Scheduling Algorithm for Data Intensive Workloads in a MapReduce Environment", The 5th IEEE/ACM International Conference on Utility and Cloud Computing, 2012.
• Phuong Nguyen, David Chapman, Jeff Avery, and Milton Halem, "A near fundamental decadal data record of AIRS Infrared Brightness Temperatures", IEEE Geoscience and Remote Sensing Symposium, 2012.
• Phuong Nguyen, PhD dissertation, "Data intensive scientific compute model for multiple core clusters", submitted to UMBC Dec 3, 2012.
• Phuong Nguyen and Milton Halem, "A MapReduce Workflow System for Architecting Scientific Data Intensive Applications", ACM International Workshop on Software Engineering for Cloud Computing, proceedings of ICSE 2011.

Page 17:

Questions?

Page 18:

Backup slides

Page 19:

Differences between scientific workflows and business workflows

• BPEL is primarily targeted at business workflows
• Scientific workflows differ in a number of ways
• The main difference is one of scale, along several dimensions:

  Scientific workflows                                  | Business workflows
  ------------------------------------------------------|--------------------------------------------------
  Thousands of service instances (partners)             | Far fewer service instances
  Thousands of basic service invocations; tens of       | Far fewer invocations and SOAP messages
  thousands of SOAP messages                            |
  Large numbers of sub-workflows for parallel execution | Far fewer opportunities for parallel execution
  Very large amounts of data to be analysed routinely   | Far less data to be analysed

Source slide: UCL Department of Computer Science

Page 20:

Background: MapReduce/Hadoop

• Distributed computation on a large cluster
• Each job consists of Map and Reduce tasks
• Job stages:
  1. Map tasks run computations in parallel
  2. Shuffle combines intermediate Map outputs
  3. Reduce tasks run computations in parallel

[Diagram: Map (M) and Reduce (R) tasks of a single job]

Source slide: Brian Cho UIUC

Page 21:

Background: MapReduce/Hadoop (continued)

• Distributed computation on a large cluster
• Each job consists of Map and Reduce tasks
• Job stages:
  1. Map tasks run computations in parallel
  2. Shuffle combines intermediate Map outputs
  3. Reduce tasks run computations in parallel
• Map input and Reduce output are stored in a distributed file system (e.g., HDFS)
• Scheduling: which task to run on empty resources (slots)

[Diagram: Map (M) and Reduce (R) tasks of Jobs 1-3 competing for cluster slots]

Source slide: Brian Cho UIUC
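The slot-scheduling decision above can be sketched with a simple FIFO policy. This is illustrative (task and slot names are invented); HybS replaces exactly this policy with finer-grained sharing across jobs:

```python
from collections import deque

def assign_slots(pending, free_slots):
    """Sketch of Hadoop-style slot scheduling: each node advertises
    empty task slots, and the scheduler picks which pending task fills
    each one. Here the policy is FIFO: tasks are placed in arrival
    order until the free slots run out."""
    queue = deque(pending)
    assignment = {}
    for slot in free_slots:
        if not queue:
            break                       # more slots than pending tasks
        assignment[slot] = queue.popleft()
    return assignment, list(queue)      # placements, still-waiting tasks

# Two free slots, three pending tasks: FIFO starves the later job.
placed, waiting = assign_slots(
    ["job1-map0", "job1-map1", "job2-map0"],
    ["nodeA-slot0", "nodeA-slot1"],
)
```

Under FIFO, job2 waits until job1's tasks finish, which is the kind of sharing problem a finer-grained scheduler addresses.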

Page 22:

Why a new workflow scheduling system?

• Characteristics of data-intensive scientific applications:
  – Repeat experiments on different data
  – Computations on high-dimensional arrays: spatial, temporal, spectral
  – Variety of data formats; need math libraries
  – Complex components, e.g., model prediction
• Lack of a scientific workflow system that deals with scale: scalability, reliability, scheduling, data management, provenance, low overhead
• Current limitation: Hadoop is a scalable system built to run at large scale (e.g., on 8000-core commodity clusters), but:
  – key performance metrics still need improvement
  – limited support for scientific applications


Page 23:

HBase design for multiple satellites: gridded data

• (Table, RowKey, Family, Column, Timestamp) → Value
• HBase indexes on the row-key value
• Row-key design for multiple satellite instruments:
  <InstrumentID>_<DateTime>_<SpectralChannel>_<SpatialIndex>
• Column families, e.g., Resolution-Statistics with per-resolution columns such as 100km_AvgBT and 1km_AvgBT; the spatial index is a lat/lon bounding box
• Indexed by instrument, date/time, spatial index, and spectral channel
• Scan rows (selected columns) into the MapReduce computation


  RowKey                           | Column family: Resolution-Statistics              | Column family: GeolocationData
  rowKeyId                         | 100km_AvgBT | 100km_Stdev | 1km_AvgBT | 250m_AvgBT | Lat
  ---------------------------------|-------------|-------------|-----------|------------|-------
  AIRS_20050101_ch528_20N20S10E10W | …data…      | …data…      | …data…    | …data…     | …data…

Rows are scanned into the MapReduce computation.
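The composite row-key design can be sketched directly. This is a minimal illustrative helper, with the field values taken from the example row above:

```python
def make_row_key(instrument, date, channel, spatial_index):
    """Build the composite HBase row key described above:
    <InstrumentID>_<DateTime>_<SpectralChannel>_<SpatialIndex>.
    Because HBase stores rows sorted by key, a scan over a key prefix
    (e.g., instrument + date) retrieves all channels and spatial cells
    for that instrument and day in one contiguous range."""
    return f"{instrument}_{date}_{channel}_{spatial_index}"

key = make_row_key("AIRS", "20050101", "ch528", "20N20S10E10W")
```

Putting the instrument and date first is what makes per-day, per-instrument prefix scans efficient; reordering the fields would change which queries are cheap.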