
Achieving Application Performance on the Information Power Grid

Francine Berman

U. C. San Diego and NPACI

IPG = “Distributed Computer”

• comprising

– clusters of workstations

– MPPs

– remote instruments

– visualization sites

– data archives

• for users, performance is the key criterion in evaluating a platform

Program Performance

• Current grid programs achieve performance by

– dedicating resources

– careful staging of computation and data

– considerable coordination

• It must be possible to achieve program performance on the IPG by ordinary users on ordinary days ...

Achieving Performance

• On ordinary days, many users share system resources

– load and availability of resources vary

– application behavior hard to predict

– poor predictions make scheduling hard

• Challenge: Develop application schedules that can leverage the deliverable performance of the system at execution time.

Whose Job Is It?

• Application scheduling can be performed by many entities

– Resource scheduler

– Job Scheduler

– Programmer or User

– System Administrator

– Application Scheduler

Scheduling and Performance

• Goal of scheduling an application is to promote application performance

• Achieving application performance can conflict with achieving performance for other system components

– Resource Scheduler -- perf measure is utilization

– Job Scheduler -- perf measure is throughput

– System Administrator -- focuses on system perf

– Programmer or User -- may miss most current info

– Application Scheduler -- can access most current info

Self-Centered Scheduling

• Everything in the system is evaluated in terms of its impact on the application.

– performance of each system component can be considered as a measurable quantity

– forecasts of quantities relevant to the application can be manipulated to determine the schedule

• This simple paradigm forms the basis for AppLeS.

AppLeS

Joint project with Rich Wolski

• AppLeS = Application-Level Scheduler

• Each application has its own self-centered AppLeS

• Schedule achieved through

– selection of potentially efficient resource sets

– performance estimation of dynamic system parameters and application performance for execution time frame

– adaptation to perceived dynamic conditions
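
The loop below is a minimal illustrative sketch of these three steps, not the AppLeS code itself; the host names, forecast values, and the simple cost model are assumptions. Candidate resource sets are ranked by forecasted application execution time and the best one is chosen for the current execution time frame.

```python
# Hypothetical sketch of application-level scheduling: rank candidate resource
# sets by forecasted application runtime and pick the best one.

def predicted_runtime(resource_set, forecasts, work_units):
    """Estimate execution time on a candidate resource set from forecasted
    CPU availability (fraction of a dedicated CPU per machine)."""
    # Split work evenly and assume the slowest machine dominates.
    per_machine = work_units / len(resource_set)
    return max(per_machine / forecasts[m]["cpu_available"] for m in resource_set)

def select_schedule(candidate_sets, forecasts, work_units):
    """Pick the resource set with the best forecasted application performance."""
    return min(candidate_sets,
               key=lambda rs: predicted_runtime(rs, forecasts, work_units))

# Example with made-up forecast data (in practice these would come from NWS).
forecasts = {
    "hostA": {"cpu_available": 0.90},
    "hostB": {"cpu_available": 0.35},
    "hostC": {"cpu_available": 0.60},
}
candidates = [("hostA",), ("hostA", "hostC"), ("hostA", "hostB", "hostC")]
print(select_schedule(candidates, forecasts, work_units=100.0))
```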

AppLeS Architecture

• AppLeS incorporates

– application-specific information

– dynamic information

– prediction

• Schedule developed to optimize the user's performance measure

– minimal execution time

– turnaround time = staging/waiting time + execution time

– other measures: precision, resolution, speedup, etc.

[Figure: AppLeS architecture with NWS (Wolski), user preferences, an application performance model, a Planner / Resource Selector, an Actuator, the application, and the IPG resources/infrastructure]
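
As a tiny worked example of the turnaround-time measure above (all numbers invented), a schedule that minimizes execution time alone is not necessarily the one that minimizes turnaround time:

```python
# Turnaround time = staging/waiting time + execution time (as defined above).
def turnaround(staging_s, waiting_s, execution_s):
    return staging_s + waiting_s + execution_s

# A fast batch machine with a long queue wait can lose to a slower
# interactive cluster that starts immediately (illustrative numbers only).
batch       = turnaround(staging_s=30, waiting_s=1800, execution_s=600)   # 2430 s
interactive = turnaround(staging_s=30, waiting_s=0,    execution_s=1500)  # 1530 s
print(batch, interactive)
```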

Network Weather Service (Wolski)

• The NWS provides dynamic resource information for AppLeS

• NWS

– monitors current system state

– provides best forecast of resource load from multiple models

[Figure: NWS architecture with a Sensor Interface, a Forecaster drawing on multiple models, and a Reporting Interface]
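
The sketch below illustrates, in miniature, the "best forecast from multiple models" idea; the real NWS forecasters and error bookkeeping are more elaborate, and the predictors and measurements here are made up:

```python
# Toy forecaster: several simple predictors run over the measurement history,
# and the one with the lowest recent error supplies the reported forecast.

def mean_pred(history):
    return sum(history) / len(history)

def last_value_pred(history):
    return history[-1]

def sliding_mean_pred(history, window=5):
    tail = history[-window:]
    return sum(tail) / len(tail)

PREDICTORS = [mean_pred, last_value_pred, sliding_mean_pred]

def best_forecast(history):
    """Score each predictor by its mean absolute error when 'postcasting'
    past measurements, then use the best one for the next value."""
    def error(pred):
        errs = [abs(pred(history[:i]) - history[i]) for i in range(1, len(history))]
        return sum(errs) / len(errs)
    best = min(PREDICTORS, key=error)
    return best(history), best.__name__

# Example: bandwidth measurements (Mbit/s) for one network path.
measurements = [4.1, 3.9, 4.0, 2.5, 2.7, 2.6, 2.8]
print(best_forecast(measurements))
```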

SARA: An AppLeS-in-Progress

• SARA = Synthetic Aperture Radar Atlas

– application developed at JPL and SDSC

• Goal: Assemble/process files for user’s desired image

– thumbnail image shown to user

– user selects desired bounding box within image for more detailed viewing

– SARA provides detailed image in variety of formats

Focusing in with SARA

[Figure: thumbnail image with user-selected bounding box]

Simple SARA

• Focuses on obtaining remote data quickly

• Code developed by Alan Su

[Figure: one compute server connected over a shared network to several data servers]

• Computation servers and data servers are logical entities, not necessarily different nodes

• Network shared by a variable number of users

• Computation assumed to be done at compute servers

Simple SARA AppLeS

• Focus on resource selection problem: Which site can deliver data the fastest?

– Data for image accessed over shared networks

– Data sets 1.4 - 3 megabytes, representative of SARA file sizes

– Servers used for experiments

• lolland.cc.gatech.edu

• sitar.cs.uiuc

• perigee.chpc.utah.edu

• mead2.uwashington.edu

• spin.cacr.caltech.edu

– servers accessed either via the vBNS or via the general Internet
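
A minimal sketch of this resource-selection step, assuming per-server latency/bandwidth forecasts (e.g., from the NWS). The hostnames are those listed above; the numeric values are placeholders, not measurements from the experiments:

```python
# Rank candidate data servers by the forecasted time to deliver the data set.

def transfer_time(size_mb, latency_s, bandwidth_mbit_s):
    """Predicted transfer time = latency + size / bandwidth."""
    return latency_s + (size_mb * 8.0) / bandwidth_mbit_s

def fastest_server(size_mb, forecasts):
    return min(forecasts, key=lambda host: transfer_time(size_mb, *forecasts[host]))

# (latency seconds, bandwidth Mbit/s) -- placeholder forecast values
forecasts = {
    "lolland.cc.gatech.edu": (0.070, 1.2),
    "perigee.chpc.utah.edu": (0.040, 3.5),
    "mead2.uwashington.edu": (0.035, 2.0),
    "spin.cacr.caltech.edu": (0.025, 4.0),
}
print(fastest_server(1.4, forecasts))   # choose a server for a 1.4 MB data set
```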

Which is “Closer”?

• Sites on the east coast or sites on the west coast?

• Sites on the vBNS or sites on the general Internet?

• Consistently the same site or different sites at different times?

Which is “Closer”?

• Sites on the east coast or sites on the west coast?

• Sites on the vBNS or sites on the general Internet?

• Consistently the same site or different sites at different times?

Depends a lot on traffic ...

Preliminary Results

• Experiment with larger data set (3 Mbytes)

• During this time frame, general Internet provides data mostly faster than vBNS

More Preliminary Results

• 9/21/98 experiments; Clinton Grand Jury webcast commenced at iteration 62

• Experiment with smaller data set (1.4 Mbytes)

• During this time frame, east coast sites provide data mostly faster than west coast sites

Distributed Data Applications

• SARA representative of larger class of distributed data applications

• Simple SARA template being extended to accommodate

– replicated data sources

– multiple files per image

– parallel data acquisition

– intermediate compute sites

– web interface, etc.
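
One possible shape for the "replicated data sources" and "parallel data acquisition" extensions, sketched with a hypothetical fetch() stand-in for the real data-access call; replica lists and forecast times below are invented:

```python
# Fetch the files for an image in parallel, choosing a replica for each file
# by forecasted transfer time.
from concurrent.futures import ThreadPoolExecutor

def pick_replica(replicas, forecast_time):
    """Choose the replica with the smallest forecasted transfer time."""
    return min(replicas, key=forecast_time)

def fetch(host, filename):
    # Placeholder: a real implementation would open a socket or HTTP request.
    return f"{filename} from {host}"

def acquire(files, replicas_of, forecast_time, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch, pick_replica(replicas_of(f), forecast_time), f)
                   for f in files]
        return [fut.result() for fut in futures]

# Example with made-up replica lists and a trivial forecast function.
replicas = {"tile0.dat": ["serverA", "serverB"], "tile1.dat": ["serverB", "serverC"]}
times = {"serverA": 2.0, "serverB": 1.1, "serverC": 3.4}
print(acquire(replicas.keys(), lambda f: replicas[f], lambda h: times[h]))
```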

SARA AppLeS -- Phase 2

[Figure: a client connected to multiple compute servers and data servers]

• Client and servers are "logical" nodes -- which servers should the client use?

• Move the computation or move the data?

• Computation and data servers may "live" at the same nodes

• Data servers may access the same storage media -- how long will data access take when the data is needed?

A Bushel of AppLeS … almost

• During the first “phase” of the project, we’ve focused on getting experience building AppLeS

– Jacobi2D, DOT, SRB, Simple SARA, Genetic Algorithm, Tomography, ...

• Using this experience, we are beginning to build AppLeS “templates”/tools for

– master/slave applications

– parameter sweep applications

– distributed data applications

– proudly parallel applications, etc.

• What have we learned ...

Lessons Learned from AppLeS

• Dynamic information is critical

• Program execution and parameters may exhibit a range of performance

• Knowing something about performance predictions can improve scheduling

• Performance of scheduling policy sensitive to application, data, and system characteristics

A First IPG AppLeS

• Focus on class of parameter sweep applications

• Building AppLeS template for INS2D that can be used with other applications from this class

• AppLeS INS2D scheduler

– first phase focuses on interactive clusters

– second phase will target clusters and batch-scheduled platforms

– goal is to minimize turnaround time
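
A minimal self-scheduling (work-queue) sketch of how such a scheduler might farm parameter-sweep cases onto an interactive cluster to reduce turnaround time; run_case() is a placeholder, not the actual INS2D launch mechanism:

```python
# Idle hosts repeatedly pull the next case from a shared queue, which keeps
# faster/less-loaded machines busy and shortens overall turnaround time.
import queue
import threading

def run_case(host, case):
    # Placeholder for remote execution (e.g., via ssh or a socket protocol).
    return f"case {case} ran on {host}"

def sweep(cases, hosts):
    work = queue.Queue()
    for c in cases:
        work.put(c)
    results = []
    lock = threading.Lock()

    def worker(host):
        while True:
            try:
                case = work.get_nowait()
            except queue.Empty:
                return
            res = run_case(host, case)
            with lock:
                results.append(res)

    threads = [threading.Thread(target=worker, args=(h,)) for h in hosts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(len(sweep(cases=range(20), hosts=["node1", "node2", "node3"])))
```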

Parameter Sweep AppLeS Architecture

• Being developed by Dmitrii Zagorodnov

• AppLeS schedules work on interactive resources

• AppLeS tuned to leverage underlying resource management system

[Figure: parameter sweep AppLeS architecture with an app-specific case generator, a scheduler, experiment/actuator pairs, an API, and the underlying resources]

INS2D AppLeS Project Goals

• Complete design and deployment of INS2D AppLeS for interactive cluster

– focus on socket design for first phase

• Conduct experiments to assess AppLeS performance on interactive cluster and to compare with batch system performance

• Expand INS2D AppLeS to target both batch and interactive systems

– target the evolving IPG resource management system

AppLeS and the IPG

[Roadmap figure: short-term, medium-term, and long-term progression along two axes, usability/integration (development of basic IPG infrastructure) and performance ("grid-aware" programming), with a "you are here" marker. Labels include application scheduling, resource scheduling, and throughput scheduling; multi-scheduling and a resource economy; integration of schedulers, other tools, and performance interfaces; integration of multiple grid constituencies; architectural models which support multiple constituencies; and automation of program execution.]

Project Information

• Thanks to NSF, NPACI, Darpa, DoD, NASA

• AppLeS Corps:

– Francine Berman

– Rich Wolski

– Walfredo Cirne

– Marcio Faerman

– Jaime Frey

– Jim Hayes

– Graziano Obertelli

– Jenny Schopf

– Gary Shao

– Neil Spring

– Shava Smallen

– Alan Su

– Dmitrii Zagorodnov

• AppLeS Home Page: http://www-cse.ucsd.edu/groups/hpcl/apples.html