Creating A Virtual Computing Facility
Transcript of Creating A Virtual Computing Facility
MASPLAS ’02
Ravi Patchigolla, Chris Clarke, Lu Marino
8th Annual Mid-Atlantic Student Workshop
On Programming Languages And Systems
The Virtual Pool Concept
Resource locations are transparent
An Agent allocates requested services to users based on availability
Execution Transparency
User jobs are executed on whichever nodes are available within the pool
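The pool concept above can be sketched in a few lines. This is a minimal illustration, not the PUNCH implementation: an agent hands each job to whichever node is free, and the caller never learns which node ran it.

```python
# Minimal sketch of a pool agent: resource locations stay hidden
# behind the agent, which allocates based on availability only.

class PoolAgent:
    """Allocates jobs to whichever pool nodes are currently free."""

    def __init__(self, nodes):
        self.free = set(nodes)   # nodes available for work
        self.busy = {}           # job name -> node running it

    def submit(self, job):
        """Place a job on any free node; the user never picks a node."""
        if not self.free:
            return False         # no capacity: job must wait
        self.busy[job] = self.free.pop()
        return True

    def release(self, job):
        """Return the node to the pool when the job finishes."""
        self.free.add(self.busy.pop(job))

agent = PoolAgent(["node1", "node2", "node3", "node4"])
agent.submit("sim-a")    # runs on some available node
agent.submit("sim-b")
agent.release("sim-a")   # node goes back into the free pool
```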
FOR MORE INFO...R Figueiredo, N Kapadia, J Fortes, The PUNCH Virtual File System: Seamless Access to Decentralized Storage Services in A Computational Grid. 10th IEEE International Symposium on High Performance Computing Aug01.
Virtual Machines and Time Shares
Virtual Reality
Vehicular Simulators
Remote Viewing
TeleImmersion
TeleOperation
Virtual Terminals
Remote Session
Telnet
TTY
VT100
Commodity PC prices have dropped dramatically over the years due to innovations in chip fabrication technology
Clustered commodity PCs can achieve gigaflop performance
Network component prices, as well as networking technology, are following on the heels of the commodity PC
Open Source Software Is “Pervasive”
Linux is freeware
Security software
Academic projects
Open Source projects
Create a Beowulf Style Cluster
Document and Benchmark
Create a Virtual Pool of Win32 Machines
Run Some Parallel Simulation software
Test Condor on Win32 Machines
Test the Job Management Capabilities on Win32
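For the job-management test, a Condor job is described in a submit file and handed to the pool with `condor_submit`. A minimal vanilla-universe sketch (the file names here are hypothetical, not from the project):

```
# sim.sub -- hypothetical Condor submit description for a Win32 pool
universe   = vanilla
executable = sim.exe
output     = sim.out
error      = sim.err
log        = sim.log
queue
```

Submitting with `condor_submit sim.sub` lets Condor's matchmaker place the job on an available pool machine, which is exactly the availability-based allocation being tested.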
SWITCH
Linksys 8-port 10/100 Mbps
Master / Node 1 (192.168.0.1)
PII 266 MHz, 64 MB SDRAM, 4 GB hard disk, generic VGA
Princeton E015 monitor, floppy disk, CD-ROM drive
Linux OS
10 Mbps NIC + 10/100 Mbps NIC
Node 2 (192.168.0.2)
PII 266 MHz, 64 MB RAM, 4 GB hard disk, generic VGA
Princeton E015 monitor, floppy disk, CD-ROM drive
Linux OS
10 Mbps NIC
Node 4 (192.168.0.4)
PII 266 MHz, 64 MB SDRAM, 4 GB hard disk, generic VGA
Princeton E015 monitor, floppy disk, CD-ROM drive
Linux OS
10 Mbps NIC

Node 3 (192.168.0.3)
PII 166 MHz, 32 MB SDRAM, 2 GB hard disk, generic VGA
Princeton E015 monitor, floppy disk, CD-ROM drive
Linux OS
10 Mbps NIC
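With the private addresses listed above, each node's name resolution can be handled with a shared hosts file. A hypothetical `/etc/hosts` for this cluster (host names are illustrative; only the addresses come from the slides):

```
# /etc/hosts -- private cluster network, per the node list above
127.0.0.1     localhost
192.168.0.1   node1   master
192.168.0.2   node2
192.168.0.3   node3
192.168.0.4   node4
```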
Baseline                                Run 1      Run 2      Run 3      Run 4      Average
PII 586/166 MHz                         203.60 s   204.52 s   202.26 s   203.15 s   203.38 s
PII 686/266 MHz                          78.67 s    78.67 s    78.67 s    78.87 s    78.72 s

Test 1 (2-computer net)                 Run 1      Run 2      Run 3      Run 4      Average
PII 266/266 MHz (N1/N2)                  70.67 s    69.30 s    69.40 s    70.10 s    69.87 s
PII 166/266 MHz (N3/N2)                 140.67 s   140.16 s   139.68 s   139.20 s   139.91 s

Test 2 (3-computer net)                 Run 1      Run 2      Run 3      Run 4      Average
PII 266/266/266 MHz (N1/N2/N4)           62.39 s    63.42 s    62.69 s    62.27 s    62.69 s

Test 3 (4-computer net)                 Run 1      Run 2      Run 3      Run 4      Average
PII 166/266/266/266 MHz (N3/N1/N2/N4)    31.84 s    34.72 s    32.97 s    31.81 s    32.84 s
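A quick way to read these tables is as speedup over the single 266 MHz baseline (78.72 s average). The short computation below uses only the averaged times from the tables:

```python
# Speedup of each cluster configuration relative to the
# single PII 266 MHz baseline (averages from the tables above).
baseline = 78.72  # seconds, one PII 266 MHz node

averages = {
    "2 nodes (N1/N2)": 69.87,
    "3 nodes (N1/N2/N4)": 62.69,
    "4 nodes (N3/N1/N2/N4)": 32.84,
}

speedups = {name: round(baseline / t, 2) for name, t in averages.items()}
for name, s in speedups.items():
    print(f"{name}: {s}x")
```

The 2- and 3-node nets gain little over one fast node, while the 4-node net (which adds the slower 166 MHz machine) shows a much larger jump; the raw run times are as reported in the slides.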
Load the Charm daemon to create the pool
Launch the NAMD simulator with varying workloads and processor requests
View the simulation with VMD
Record the CPU usage on the affected machines
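The steps above map onto the standard Charm++/NAMD command-line tools. A hedged sketch (file names and node names are illustrative, not from the slides):

```
# nodelist -- Charm++ node list naming the pool machines
group main
host node1
host node2
host node3
host node4

# Launch NAMD on 4 processors drawn from the pool,
# then visualize the resulting trajectory with VMD.
charmrun ++nodelist nodelist +p4 namd2 sim.namd > sim.log
vmd sim.psf sim.dcd
```

Varying the `+p` processor count and the simulation workload, while watching CPU usage on each node, reproduces the test described above.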
R. Figueiredo, N. Kapadia, J. Fortes. The PUNCH Virtual File System: Seamless Access to Decentralized Storage Services in a Computational Grid. 10th IEEE International Symposium on High Performance Computing, Aug. 2001.
I. Foster and C. Kesselman, editors. The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, 1998. http://www.globus.org/research
Cluster in a Box: OSCAR 1.2.1, Feb. 2002. OSCAR version 1.2.1 is a snapshot of the best known methods for building, programming, and using clusters. It consists of a fully integrated and easy-to-install software bundle designed for high-performance cluster computing. http://oscar.sourceforge.net
Condor: the goal of the Condor Project is to develop, implement, deploy, and evaluate mechanisms and policies that support High Throughput Computing (HTC) on large collections of distributively owned computing resources. http://www.cs.wisc.edu/condor/
Globus Toolkit: The Globus Project is developing fundamental technologies needed to build computational grids. Grids are persistent environments that enable software applications to integrate instruments, displays, and computational and information resources that are managed by diverse organizations in widespread locations.
NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms and tens of processors on commodity clusters using switched fast ethernet. http://www.ks.uiuc.edu/Research/namd/
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting. VMD supports computers running MacOS-X, Unix, or Windows, is distributed free of charge, and includes source code. http://www.ks.uiuc.edu/Research/vmd/