Grid Resource Brokering and Cost-based Scheduling
With Nimrod-G and Gridbus Case Studies
Rajkumar Buyya
Cloud Computing and Distributed Systems (CLOUDS) Lab
The University of Melbourne, Melbourne, Australia
www.cloudbus.org
2
Agenda
- Introduction to Grid Scheduling
- Application Models and Deployment Approaches
- Economy-based "Computational" Grid Scheduling
  - Nimrod-G Grid Resource Broker
  - Scheduling algorithms and experiments on the World Wide Grid testbed
- Economy-based "Data Intensive" Grid Scheduling
  - Gridbus Grid Service Broker
  - Scheduling algorithms and experiments on the Australian Belle Data Grid testbed
Grid Scheduling: Introduction
[Figure: overlapping circles for Scheduling, Economics, and Grid; their intersection defines the Grid Economy.]
4
Grid Resources and Scheduling
[Figure: a Grid Resource Broker mediates between a User Application and distributed resources, consulting a Grid Information Service. Resource types include single CPUs (time-shared allocation), SMPs (time-shared allocation), and clusters (space-shared allocation), each fronted by a Local Resource Manager.]
5
Grid Scheduling
Grid scheduling involves:
- Resources distributed over multiple administrative domains
- Selecting one or more suitable resources (which may involve co-scheduling)
- Assigning tasks to the selected resources and monitoring execution
Grid schedulers are global schedulers:
- They have no ownership or control over resources.
- Jobs are submitted to local resource managers (LRMs) on behalf of the user.
- LRMs take care of the actual execution of jobs.
6
Example Grid Schedulers
- Nimrod-G (Monash University): Computational Grid, economy-based
- Condor-G (University of Wisconsin): Computational Grid, system-centric
- AppLeS (University of California, San Diego): Computational Grid, system-centric
- Gridbus Broker (University of Melbourne): Data Grid, economy-based
7
Key Steps in Grid Scheduling
Phase I - Resource Discovery
1. Authorization Filtering
2. Application Definition
3. Minimum Requirement Filtering
Phase II - Resource Selection
4. Information Gathering
5. System Selection
Phase III - Job Execution
6. Advance Reservation
7. Job Submission
8. Preparation Tasks
9. Monitoring Progress
10. Job Completion
11. Clean-up Tasks
(A schematic sketch of these phases follows below.)
Source: J. Schopf, Ten Actions When SuperScheduling, OGF Document, 2003.
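These eleven steps can be read as a single control flow. The sketch below is purely illustrative: every helper function named in it is a hypothetical placeholder standing in for the corresponding step, not part of any real broker API.

    # Schematic skeleton of the three superscheduling phases (illustrative
    # only; every helper below is a hypothetical placeholder for one step).
    def superschedule(user, app, job):
        # Phase I - Resource Discovery
        candidates = [r for r in all_resources() if authorized(user, r)]   # step 1
        requirements = define_application(app)                             # step 2
        candidates = [r for r in candidates
                      if meets_min_requirements(r, requirements)]          # step 3
        # Phase II - Resource Selection
        info = gather_information(candidates)                              # step 4
        target = select_system(candidates, info)                           # step 5
        # Phase III - Job Execution
        reserve(target)                                                    # step 6
        submit(job, target)                                                # step 7
        prepare(job, target)                                               # step 8
        monitor(job)                                                       # step 9
        wait_for_completion(job)                                           # step 10
        cleanup(job, target)                                               # step 11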
8
Movement of Jobs Between the Scheduler and a Resource
- Push model: the manager pushes jobs from its queue to a resource. Used in clusters and Grids.
- Pull model: a P2P agent requests a job for processing from the job pool. Commonly used in P2P systems such as Alchemi and SETI@Home.
- Hybrid model (both push and pull): the broker deploys an agent on a resource, and the agent then pulls jobs from the broker's job pool. Used in Grids (e.g., the Nimrod-G system). The broker may also pull data from the user's host or from separate data hosts holding distributed datasets (e.g., Gridbus Broker).
A minimal sketch of the pull side appears below.
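To make the pull model concrete, here is a minimal sketch of an agent's work loop. It is illustrative only: the job_pool object and its fetch_job/report_result methods are hypothetical stand-ins, not the actual Alchemi or Nimrod-G agent interface.

    # Minimal pull-model agent loop (hypothetical job_pool interface).
    import subprocess
    import time

    def run_agent(job_pool):
        while True:
            job = job_pool.fetch_job()        # pull: the agent initiates the request
            if job is None:                   # pool empty -> back off, then retry
                time.sleep(5)
                continue
            result = subprocess.run(job["command"], shell=True,
                                    capture_output=True, text=True)
            job_pool.report_result(job["id"], result.returncode, result.stdout)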
9
Example Systems (by architecture and job-dispatch model)
Centralised architecture:
- Push: PBS, SGE, Condor, Alchemi (when in dedicated mode)
- Pull: Windmill from CERN (used in the ATLAS physics experiment)
- Hybrid: Condor (as it supports non-dedicated, owner-specified policies)
Decentralised architecture:
- Push: Nimrod-G, AppLeS, Condor-G, Gridbus Broker
- Pull: Alchemi, SETI@Home, United Devices, P2P systems, Aneka
- Hybrid: Nimrod-G (pushes a Grid Agent, which then pulls jobs)
Application Models and their Deployment on Global Grids
11
Grid Applications and Parametric Computing
- Bioinformatics: drug design / protein modelling
- Sensitivity experiments on smog formation
- Natural language engineering
- Ecological modelling: control strategies for cattle tick
- Electronic CAD: field programmable gate arrays
- Computer graphics: ray tracing
- High energy physics: searching for rare events
- Finance: investment risk analysis
- VLSI design: SPICE simulations
- Aerospace: wing design
- Network simulation
- Automobile: crash simulation
- Data mining
- Civil engineering: building design
- Astrophysics
12
How to Construct and Deploy Applications on Global Grids?
Three options/solutions:
- Manual scheduling: use pure Globus commands.
- Application-level scheduling: build your own distributed application and scheduler.
- Application-independent scheduling: Grid brokers decouple application construction from scheduling.
Goal: perform a parameter sweep (bag of tasks) utilising distributed resources within "T" hours (or earlier), at a cost not exceeding $M.
13
Using Pure Globus Commands
Do it all yourself, manually!
Total cost: $???
14
Build a Distributed Application & Application-Level Scheduler
Build the application and its scheduler on a case-by-case basis (e.g., the MPI approach).
Total cost: $???
15
Compose and Deploy Using Brokers - the Nimrod-G and Gridbus Approach
- Compose applications and submit them to the broker
- Define QoS requirements
- Get an aggregate view of results
Compose, Submit & Play!
[Figure: sample chart of aggregated results produced by the broker.]
The Nimrod-G Grid Resource Broker and Economy-based Grid Scheduling
[Buyya, Abramson, Giddy, 1999-2001]
Deadline and Budget Constrained Algorithms for Scheduling Applications on "Computational" Grids
17
Nimrod-G: A Grid Resource Broker
A resource broker (implemented in Python) for managing, steering, and executing task-farming (parameter sweep) applications on global Grids.
It allows dynamic leasing of resources at runtime based on their quality, cost, and availability, and on users' QoS requirements (deadline, budget, etc.).
Key features:
- A declarative parameter programming language
- A single window to manage and control experiments
- Persistent and programmable task-farming engine
- Resource discovery
- Resource trading
- (User-level) scheduling and predictions
- Generic dispatcher and Grid agents
- Transportation of data and results
- Steering and data management
- Accounting
18
A Glance at Nimrod-G Broker
[Figure: Nimrod/G clients interact with the Nimrod/G engine, which uses a Schedule Advisor, Trading Manager, and Grid Explorer (backed by Grid Information Servers and a grid store) to select resources; the Grid Dispatcher submits jobs through Grid middleware (Globus, Legion, Condor, etc.) to Globus-, Legion-, and Condor-enabled nodes, each exposing a local Resource Manager (RM) and Trade Server (TS) with its own prices ($). See the HPC Asia 2000 paper.]
19
Nimrod/G Grid Broker Architecture
[Figure: layered architecture. Nimrod-G clients (P-Tools GUI/scripting for parameter modelling, legacy applications, customised apps such as ActiveSheet, and monitoring/steering portals) sit above the Nimrod-G broker, whose farming engine manages programmable entities (jobs, tasks, channels, agents) and coordinates the schedule advisor (pluggable algorithms 1..N), trading manager, grid explorer, and dispatcher/actuators (Condor-A, Globus-A, Legion-A, P2P-A). The middleware layer (Globus, Legion, Condor, GMD, P2P, GTS, G-Bank) forms the "IP hourglass" narrow waist above the fabric: computers, storage, networks, instruments, databases, and local schedulers (Condor/LL/NQS on PCs, workstations, clusters, even a radio telescope).]
20
A Nimrod/G Monitor
[Figure: the Nimrod/G monitor displays the cost and deadline settings while jobs run on Legion and Globus hosts; the host bezek is in both the Globus and Legion domains.]
21
User Requirements: Deadline/Budget
22
Nimrod/G Interactions
[Figure: the Nimrod-G Grid broker (task-farming engine, grid scheduler, grid dispatcher) serves requests such as "Do this in 30 min. for $10?" from Grid tools and applications. It consults a Grid information server and a Grid trade server; on each compute node, a Nimrod agent started via the local resource manager and process server runs the user process, which accesses files on the user node through a file server.]
23
Adaptive Scheduling Steps
1. Compose & Schedule
2. Discover Resources
3. Establish Rates
4. Distribute Jobs
5. Evaluate & Reschedule: requirements met? jobs, deadline, and budget remaining?
6. Discover More Resources (and repeat the cycle)
(A sketch of this loop appears below.)
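A minimal sketch of this loop, assuming hypothetical helpers (discover_resources, establish_rates, distribute_jobs, evaluate, requirements_met) that stand in for the steps above; it shows control flow only, not the broker's real implementation.

    # Illustrative adaptive-scheduling loop (all helpers are placeholders).
    def adaptive_schedule(jobs, deadline, budget):
        resources = discover_resources()
        while jobs:
            rates = establish_rates(resources)            # query trade servers
            plan = distribute_jobs(jobs, resources, rates, deadline, budget)
            finished, jobs, spent = evaluate(plan)        # monitor execution
            budget -= spent
            if jobs and not requirements_met(jobs, deadline, budget):
                resources += discover_resources()         # look for more resources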
24
Deadline and Budget Constrained Scheduling Algorithms

Algorithm/Strategy    | Execution Time (Deadline, D) | Execution Cost (Budget, B)
Cost Opt              | Limited by D                 | Minimize
Cost-Time Opt         | Minimize when possible       | Minimize
Time Opt              | Minimize                     | Limited by B
Conservative-Time Opt | Minimize                     | Limited by B, but all unprocessed jobs have a guaranteed minimum budget
25
Deadline and Budget-based Cost Minimization Scheduling
1. Sort resources by increasing cost.
2. For each resource in order, assign as many jobs as possible to the resource without exceeding the deadline.
3. Repeat all steps until all jobs are processed. (A sketch of one scheduling event follows.)
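A minimal sketch of one scheduling event under this strategy. The resource attributes (cost_per_job, est_time_per_job, num_cpus) and the capacity estimate are simplifying assumptions for illustration, not Nimrod-G's actual data structures.

    # Illustrative DBC cost-minimization event (simplified assumptions).
    def cost_min_schedule(jobs, resources, time_left):
        assignments = {}
        for r in sorted(resources, key=lambda r: r.cost_per_job):  # cheapest first
            # estimate how many jobs r can finish before the deadline
            capacity = int(time_left / r.est_time_per_job) * r.num_cpus
            assignments[r], jobs = jobs[:capacity], jobs[capacity:]
            if not jobs:
                break
        return assignments, jobs    # leftover jobs wait for the next event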
Scheduling Algorithms and Experiments
27
World Wide Grid (WWG) Testbed
[Figure: sites across five continents connected over the Internet, each running Globus and/or Legion together with GRACE trade services (GRACE_TS).]
- Australia (Nimrod-G + Gridbus; Globus + Legion + GRACE_TS): Melbourne U. cluster; VPAC Alpha; Solaris workstations
- Europe (Globus + GRACE_TS): ZIB T3E/Onyx; AEI Onyx; Paderborn HPCLine; Lecce Compaq SC; CNR cluster; Calabria cluster; CERN cluster; CUNI/CZ Onyx; Poznan SGI/SP2; Vrije U. cluster; Cardiff Sun E6500; Portsmouth Linux PC; Manchester O3K
- Asia (Globus + GRACE_TS): Tokyo I-Tech Ultra WS; AIST, Japan Solaris cluster; Kasetsart, Thailand cluster; NUS, Singapore O2K
- North America (Globus/Legion + GRACE_TS): ANL SGI/Sun/SP2; USC-ISI SGI; UVa Linux cluster; UD Linux cluster; UTK Linux cluster; UCSD Linux PCs; BU SGI IRIX
- South America (Globus + GRACE_TS): Chile cluster
28
Application Composition Using the Nimrod Parameter Specification Language

    #Parameters Declaration
    parameter X integer range from 1 to 165 step 1;
    parameter Y integer default 5;

    #Task Definition
    task main
        #Copy necessary executables depending on node type
        copy calc.$OS node:calc
        #Execute program with parameter values on remote node
        node:execute ./calc $X $Y
        #Copy results file to user home node with jobname as extension
        copy node:output ./output.$jobname
    endtask

This plan expands into one job per parameter combination:

    calc 1 5    output.j1
    calc 2 5    output.j2
    calc 3 5    output.j3
    …
    calc 165 5  output.j165
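For illustration, the expansion of this plan into the job list above can be reproduced in a few lines of Python (a sketch of the cross-product semantics, not Nimrod's actual generator):

    # Sketch: expanding the parameter declarations into the job list.
    from itertools import product

    X = range(1, 166)    # parameter X integer range from 1 to 165 step 1
    Y = [5]              # parameter Y integer default 5
    jobs = [f"calc {x} {y}  output.j{i}"
            for i, (x, y) in enumerate(product(X, Y), start=1)]
    print(jobs[0])       # calc 1 5  output.j1
    print(jobs[-1])      # calc 165 5  output.j165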
29
Experiment Setup
- Workload: 165 jobs, each needing 5 minutes of CPU time
- Deadline: 2 hours; budget: 396,000 G$
- Strategies: (1) minimise cost, (2) minimise time
- Execution:
  - Cost-optimised: 115,200 G$ (finished in 2 hours)
  - Time-optimised: 237,000 G$ (finished in 1.25 hours)
In this experiment, the time-optimised run cost roughly double the cost-optimised run, so users can trade off time against cost. (A quick workload check follows.)
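As a quick sanity check on this setup (simple arithmetic assuming perfect parallelism; not from the slides): the workload totals 825 CPU-minutes, so meeting the 2-hour deadline needs about 7 CPUs busy on average.

    # Back-of-the-envelope workload check (assumes perfect parallelism).
    jobs, minutes_per_job = 165, 5
    deadline_minutes = 120
    cpu_minutes = jobs * minutes_per_job     # 825 CPU-minutes in total
    print(cpu_minutes / deadline_minutes)    # ~6.9 CPUs busy on average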
30
Resources Selected & Price/CPU-sec

Resource & Location                         | Grid Services & Fabric | Cost (G$/CPU-sec) | Jobs (Time_Opt) | Jobs (Cost_Opt)
Linux cluster, Monash, Melbourne, Australia | Globus, GTS, Condor    | 2                 | 64              | 153
Linux (Prosecco), CNR, Pisa, Italy          | Globus, GTS, Fork      | 3                 | 7               | 1
Linux (Barbera), CNR, Pisa, Italy           | Globus, GTS, Fork      | 4                 | 6               | 1
Solaris/Ultra2, TITech, Tokyo, Japan        | Globus, GTS, Fork      | 3                 | 9               | 1
SGI, ISI, LA, US                            | Globus, GTS, Fork      | 8                 | 37              | 5
Sun, ANL, Chicago, US                       | Globus, GTS, Fork      | 7                 | 42              | 4

Total experiment cost (G$): 237,000 (Time_Opt); 115,200 (Cost_Opt)
Time to complete experiment (min.): 70 (Time_Opt); 119 (Cost_Opt)
31
Deadline and Budget Constrained (DBC) Time Minimization Scheduling
1. For each resource, calculate the next completion time for an assigned job, taking into account previously assigned jobs.
2. Sort resources by next completion time.
3. Assign one job to the first resource for which the cost per job is less than the remaining budget per job.
4. Repeat all steps until all jobs are processed. (This is performed periodically or at each scheduling event; a sketch follows.)
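A minimal sketch of one scheduling event under this strategy, using a priority queue keyed on next completion time. The resource methods (next_completion_time, assign) and the cost_per_job attribute are assumptions for illustration, not the broker's real code.

    # Illustrative DBC time-minimization event (simplified assumptions).
    import heapq

    def time_min_schedule(jobs, resources, budget):
        heap = [(r.next_completion_time(), i, r) for i, r in enumerate(resources)]
        heapq.heapify(heap)                        # fastest-finishing resource first
        while jobs and heap:
            t, i, r = heapq.heappop(heap)
            if r.cost_per_job <= budget / len(jobs):   # affordable per remaining job
                r.assign(jobs.pop())
                budget -= r.cost_per_job
                heapq.heappush(heap, (r.next_completion_time(), i, r))
            # else: r is dropped for this event as too expensive
        return jobs                                # jobs that could not be afforded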
32
Resource Scheduling for DBC Time Optimization
[Figure: number of tasks in execution vs. time (minutes) on each resource: Condor-Monash, Linux-Prosecco-CNR, Linux-Barbera-CNR, Solaris/Ultra2-TITech, SGI-ISI, Sun-ANL.]
33
Resource Scheduling for DBC Cost Optimization
[Figure: number of tasks in execution vs. time (minutes) on each resource: Condor-Monash, Linux-Prosecco-CNR, Linux-Barbera-CNR, Solaris/Ultra2-TITech, SGI-ISI, Sun-ANL. Consistent with the table above, the cheapest resource (Condor-Monash) carries most of the load.]
34
Nimrod-G Summary
- One of the "first" and most successful Grid resource brokers worldwide!
- The project continues to be active and is used in many e-Science applications.
- For recent developments, please see: http://messagelab.monash.edu.au/Nimrod
Gridbus Broker
“Distributed” Data-Intensive Application Scheduling
36
Gridbus Grid Service Broker (GSB)
A Java-based resource broker for Data Grids (whereas Nimrod-G focused on Computational Grids).
It uses the computational economy paradigm for optimal selection of computational and data services depending on their quality, cost, and availability, and on users' QoS requirements (deadline, budget, and time/cost optimisation).
Key features:
- A single window to manage and control experiments
- Programmable task-farming engine
- Resource discovery and resource trading
- Optimal data source discovery
- Scheduling and predictions
- Generic dispatcher and Grid agents
- Transportation of data and sharing of results
- Accounting
A sketch of data-source selection appears after this list.
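To illustrate optimal data source discovery, here is a minimal sketch of choosing among replicas by transfer cost or transfer time. The tuple layout and the price/bandwidth figures are assumptions for illustration, not the Gridbus Broker's actual API or measured values.

    # Illustrative replica selection (assumed data layout and values).
    # Each replica: (data_host, price_per_mb_in_G$, bandwidth_mb_per_s).
    def pick_data_source(replicas, size_mb, optimise="cost"):
        if optimise == "cost":
            return min(replicas, key=lambda r: r[1] * size_mb)   # cheapest transfer
        return min(replicas, key=lambda r: size_mb / r[2])       # fastest transfer

    replicas = [("belle.anu.edu.au", 31.0, 1.2),
                ("belle.physics.usyd.edu.au", 34.0, 2.0)]        # hypothetical
    print(pick_data_source(replicas, 30))     # cheapest source for a 30 MB file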
37
Gridbus Broker Architecture
[Figure: the user submits an application with deadline (T), budget ($), and optimisation preference through the Gridbus user console/portal/application interface. The Gridbus farming engine coordinates a schedule advisor, trading manager, record keeper, and grid explorer (backed by a Grid information server and NWS); the grid dispatcher sends the workload through core middleware to Globus-enabled nodes, data nodes with data catalogs, and the Amazon EC2/S3 cloud, each compute node exposing a local resource manager (RM) and trade server (TS).]
38
Gridbus Broker: Separating "Applications" from "Different" Remote Service-Access Enablers and Schedulers
[Figure: on the home node/portal, an application development interface and pluggable scheduling interfaces (Algorithm 1 … Algorithm N) sit above the Gridbus Broker with single sign-on security. Plug-in actuators dispatch jobs via the Globus job manager (fork()/batch() to PBS, Condor, SGE), via SSH (fork()/batch() to PBS, Condor, SGE, XGrid), directly to Aneka, or to Amazon EC2 (AMI); data access uses GridFTP and SRB against data stores and data catalogs, with a Gridbus agent deployed on each resource.]
39
Gridbus Services for eScience Applications
Application development environment:
- XML-based language for composing task-farming (legacy) applications as parameter sweep applications
- Task-farming APIs for new applications
- Web APIs (e.g., portlets) for Grid portal development
- Threads-based programming interface (see the sketch after this list)
- Workflow interface and Gridbus-enabled workflow engine
- Grid Superscalar, in cooperation with BSC/UPC
Resource allocation and scheduling:
- Dynamic discovery of optimal computational and data nodes that meet user QoS requirements
- Hides low-level Grid middleware interfaces: Globus (v2, v4), SRB, Aneka, Unicore, and SSH-based access to local/remote resources managed by XGrid, PBS, Condor, SGE
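Conceptually, the threads-based interface lets a sweep be written like ordinary thread-pool code. The snippet below is a generic Python analogy using the calc program from the earlier plan file, not the Gridbus threads API itself.

    # Generic thread-pool analogy for a parameter sweep (illustration only).
    from concurrent.futures import ThreadPoolExecutor
    import subprocess

    def run_case(x, y=5):
        out = subprocess.run(["./calc", str(x), str(y)],
                             capture_output=True, text=True)
        return out.stdout

    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_case, range(1, 166)))   # X = 1..165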
40
Drug Design Made Easy!
Click Here for Demo
41
A Sample List of Gridbus Broker Users (http://www.gridbus.org)
- Molecular docking for drug design on the Australian National Grid
- High energy physics, particle discovery: Melbourne University
- Neuroscience, brain activity analysis
- EU Data Mining Grid: DaimlerChrysler, Technion, U. Ljubljana, U. Ulster
- Kidney/human physiome modelling: Melbourne Medical Faculty; Université d'Evry, France
- Finance/investment risk studies on the Spanish stock market: Universidad Complutense de Madrid, Spain
42
Case Study: High Energy Physics and the Data Grid
The Belle experiment, KEK B-Factory, Japan:
- Investigates fundamental violation of symmetry in nature (charge parity), which may help explain the imbalance of matter and antimatter in the universe.
- Collaboration: about 1000 people across 50 institutes
- Hundreds of TB of data currently
43
Case Study: Event Simulation and Analysis
Decay channel studied: B0 -> D*+ D*- Ks
- Simulation and analysis package: Belle Analysis Software Framework (BASF)
- Experiment in two parts: generation of simulated data, and analysis of the distributed data
- Analyzed 100 data files (30 MB each) distributed among the five nodes of the Australian Belle Data Grid platform
44
Australian Belle Data Grid Testbed
[Figure: a virtual organisation connected over AARNet. The Grid Service Broker receives analysis requests and returns analysis results, supported by a replica catalog, an NWS name server, and a certificate authority. Each site runs an NWS sensor, GridFTP, GRIS, and a Globus gatekeeper. Sites: GRIDS Lab, University of Melbourne; Dept. of Physics, University of Melbourne; Dept. of Physics, University of Sydney; ANU, Canberra; Dept. of Computer Science, University of Adelaide; VPAC, Melbourne. Most nodes are dual Intel Xeon 2.8 GHz with 2 GB RAM; the Physics (UniMelb) node is an Intel Pentium 2.0 GHz with 512 MB RAM.]
45
Belle Data Grid (GSP CPU Service Price: G$/sec)
[Figure: the same testbed annotated with CPU service prices: Dept. of Physics, University of Melbourne: G$2; ANU, Canberra: G$4; Dept. of Physics, University of Sydney: G$4; VPAC, Melbourne: G$6; CS, University of Melbourne: N.A.; University of Adelaide: data node (no compute service).]
46
Belle Data Grid (Bandwidth Price: G$/MB)
[Figure: the same testbed annotated with network bandwidth prices between sites; the values shown are 30, 31, 31, 32, 33, 34, 36, and 38.]
47
Deploying the Application Scenario
- A Data Grid scenario with 100 jobs, each accessing ~30 MB of remote data
- Deadline: 3 hours; budget: 60,000 G$
- Scheduling optimisation scenarios: minimise time, minimise cost
(A sketch of the per-job cost model appears below.)
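To show how the broker's economy weighs compute against data transfer, here is a simplified per-job cost model. The prices in the example are hypothetical stand-ins, not the testbed's actual rates.

    # Simplified per-job cost model: compute cost plus data transfer cost.
    def job_cost(cpu_secs, compute_price, data_mb, data_price):
        # compute_price in G$/CPU-sec, data_price in G$/MB
        return cpu_secs * compute_price + data_mb * data_price

    # Hypothetical example: a 90 CPU-sec job on a G$4/CPU-sec node that
    # pulls a 30 MB file at G$2.5/MB costs 360 + 75 = 435 G$.
    print(job_cost(90, 4, 30, 2.5))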
Results: Summary of Evaluation Results

Scheduling Strategy | Total Time Taken (mins.) | Compute Cost (G$) | Data Cost (G$) | Total Cost (G$)
Cost Minimization   | 71.07                    | 26865             | 7560           | 34425
Time Minimization   | 48.5                     | 50938             | 7452           | 58390
48
Time Minimization in Data Grids
[Figure: number of jobs completed vs. time (minutes 1-42) for fleagle.ph.unimelb.edu.au, belle.anu.edu.au, belle.physics.usyd.edu.au, and brecca-2.vpac.org.]
49
Results: Cost Minimization in Data Grids
[Figure: number of jobs completed vs. time (minutes 1-63) for fleagle.ph.unimelb.edu.au, belle.anu.edu.au, belle.physics.usyd.edu.au, and brecca-2.vpac.org.]
50
Summary of Evaluation Results

Scheduling Strategy | Total Time Taken (mins.) | Compute Cost (G$) | Data Cost (G$) | Total Cost (G$)
Cost Minimization   | 71.07                    | 26865             | 7560           | 34425
Time Minimization   | 48.5                     | 50938             | 7452           | 58390

Observation: jobs executed per node

Organization               | Node Details                                                                   | Cost (G$/CPU-sec)                     | Jobs (Time) | Jobs (Cost)
CS, UniMelb                | belle.cs.mu.oz.au: 4 CPU, 2 GB RAM, 40 GB HD, Linux                            | N.A. (not used as a compute resource) | --          | --
Physics, UniMelb           | fleagle.ph.unimelb.edu.au: 1 CPU, 512 MB RAM, 40 GB HD, Linux                  | 2                                     | 3           | 94
CS, University of Adelaide | belle.cs.adelaide.edu.au: 4 CPU (only 1 available), 2 GB RAM, 40 GB HD, Linux  | N.A. (not used as a compute resource) | --          | --
ANU, Canberra              | belle.anu.edu.au: 4 CPU, 2 GB RAM, 40 GB HD, Linux                             | 4                                     | 2           | 2
Dept. of Physics, USyd     | belle.physics.usyd.edu.au: 4 CPU (only 1 available), 2 GB RAM, 40 GB HD, Linux | 4                                     | 72          | 2
VPAC, Melbourne            | brecca-2.vpac.org: 180-node cluster (only head node used), Linux               | 6                                     | 23          | 2
51
Summary and Conclusion
- Application scheduling on global Grids is a complex undertaking, as systems need to be adaptive, scalable, competitive, and driven by QoS.
- Nimrod-G is one of the most popular Grid resource brokers for scheduling parameter sweep applications on global Grids.
- Scheduling experiments on the World Wide Grid demonstrate the Nimrod-G broker's ability to dynamically lease services at runtime based on their quality, cost, and availability, depending on consumers' QoS requirements.
- Easy-to-use tools for creating Grid applications are essential for the success of Grid computing.
52
References
- Rajkumar Buyya, David Abramson, and Jonathan Giddy, Nimrod/G: An Architecture for a Resource Management and Scheduling System in a Global Computational Grid, Proceedings of the 4th International Conference on High Performance Computing in the Asia-Pacific Region (HPC Asia 2000), Beijing, China, IEEE Computer Society Press, USA, 2000.
- David Abramson, Rajkumar Buyya, and Jonathan Giddy, A Computational Economy for Grid Computing and its Implementation in the Nimrod-G Resource Broker, Future Generation Computer Systems (FGCS), Volume 18, Issue 8, Pages 1061-1074, Elsevier Science, The Netherlands, October 2002.
- Jennifer Schopf, Ten Actions When SuperScheduling, Global Grid Forum Document GFD.04, 2003.
- Srikumar Venugopal, Rajkumar Buyya, and Lyle Winton, A Grid Service Broker for Scheduling e-Science Applications on Global Data Grids, Concurrency and Computation: Practice and Experience, Volume 18, Issue 6, Pages 685-699, Wiley Press, New York, USA, May 2006.