The DAS-3 Project


Page 1: The DAS-3 Project

The DAS-3 Project
Henri Bal

Vrije Universiteit Amsterdam, Faculty of Sciences

Page 2: The DAS-3 Project

Distributed ASCI Supercomputer

• Joint infrastructure of ASCI research school
• Clusters integrated in a single distributed testbed
• Long history and continuity

DAS-1 (1997) DAS-2 (2002) DAS-3 (Oct 2006)

Page 3: The DAS-3 Project

DAS is a Computer Science grid

• Motivation: CS needs its own infrastructure for
  - Systems research and experimentation
  - Distributed experiments
  - Doing many small, interactive experiments

• DAS is simpler and more homogeneous than production grids
  - Single operating system
  - “A simple grid that works”

Page 4: The DAS-3 Project

Usage of DAS

• ~200 users, 32 Ph.D. theses
• Clear shift of interest:

Cluster computing → Distributed computing → Grids and P2P → Virtual laboratories

Page 5: The DAS-3 Project

Impact of DAS

• Major incentive for VL-e (20 M€ BSIK funding)
  - Virtual Laboratory for e-Science
• Collaboration with French Grid’5000
  - Towards a European-scale CS grid?
• Collaboration with SURFnet on DAS-3
  - SURFnet provides multiple 10 Gb/s light paths


Page 6: The DAS-3 Project

DAS-3

Sites: VU (85 nodes), TU Delft (68), Leiden (32), UvA/MultimediaN (46), UvA/VL-e (40)

• 272 AMD Opteron nodes
• 792 cores, 1 TB memory
• More heterogeneous: 2.2-2.6 GHz, single/dual-core nodes
• Myrinet-10G (exc. Delft)
• Gigabit Ethernet

SURFnet6: 10 Gb/s lambdas

Page 7: The DAS-3 Project

Status

• Timeline
  - Sep. 04: Proposal
  - Apr. 05: NWO/NCF funding
  - Dec. 05: European tender (with TUD/GIS, Stratix)
  - Apr. 06: Selected ClusterVision
  - Oct. 06: Operational
• SURFnet6 connection shortly
  - Multiple 10 Gb/s dedicated lambdas
• First local Myrinet measurements
  - 2.6 μsec 1-way null-latency
  - 950 MB/sec throughput
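The null-latency figure above comes from a Myrinet-level benchmark. As a rough illustration of how such numbers are typically obtained, here is a minimal ping-pong sketch over TCP loopback (hypothetical code, not the actual DAS-3 benchmark): the one-way latency is estimated as half the average round-trip time.

```python
# Minimal ping-pong latency sketch over TCP loopback (illustrative only;
# the real DAS-3 measurement was done at the Myrinet level).
import socket
import threading
import time

ITERATIONS = 1000
MSG = b"x"  # 1-byte payload, approximating a null message

def echo_server(server: socket.socket) -> None:
    """Accept one connection and echo every message back."""
    conn, _ = server.accept()
    with conn:
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(ITERATIONS):
            conn.sendall(conn.recv(len(MSG)))

def measure_one_way_latency() -> float:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=echo_server, args=(server,))
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    start = time.perf_counter()
    for _ in range(ITERATIONS):
        client.sendall(MSG)
        client.recv(len(MSG))
    elapsed = time.perf_counter() - start

    client.close()
    t.join()
    server.close()
    # One-way latency is estimated as half the average round-trip time.
    return elapsed / ITERATIONS / 2

if __name__ == "__main__":
    print(f"estimated one-way latency: {measure_one_way_latency() * 1e6:.1f} usec")
```

On loopback this reports tens of microseconds at best; the point of Myrinet-10G is that the same ping-pong pattern over the real interconnect stays in the low single-digit microseconds.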

Page 8: The DAS-3 Project

Projects using DAS-3

• VL-e
  - Grid computing, scheduling, workflow, PSE, visualization
• MultimediaN
  - Searching, classifying multimedia data
• NWO i-Science (GLANCE, VIEW, STARE)
  - StarPlane, JADE-MM, GUARD-G, VEARD, GRAPE Grid, SCARIe, AstroStream
• NWO Computational Life Sciences
  - 3D-RegNet, CellMath, MesoScale
• Open competition (many)
• NCF projects (off-peak hours)

Page 9: The DAS-3 Project

StarPlane

• Key idea:
  - Applications can dynamically allocate light paths
  - Applications can change the topology of the wide-area network, possibly even at sub-second timescale
• VU (Bal, Bos, Maassen) + UvA (de Laat, Grosso, Xu, Velders)

[Figure: CPU clusters, each behind a router (R), connected via light paths controlled by a NOC (network operations centre)]
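The key idea, that an application reconfigures the wide-area topology at run time, can be sketched abstractly. The code below is purely hypothetical (class and method names are invented for illustration; StarPlane's real control interface is not shown): the WAN is modeled as a set of sites, and allocating a light path adds a direct edge between two of them.

```python
# Hypothetical sketch of StarPlane-style dynamic topology control.
# All names here are illustrative; this is NOT the real StarPlane API.

class OpticalWAN:
    """Toy model: a set of sites plus dynamically allocated light paths."""

    def __init__(self, sites):
        self.sites = set(sites)
        # Each light path is an unordered site pair with a dedicated lambda.
        self.light_paths = set()

    def allocate_light_path(self, a, b):
        """Application requests a dedicated light path between two sites."""
        if a not in self.sites or b not in self.sites:
            raise ValueError("unknown site")
        self.light_paths.add(frozenset((a, b)))

    def release_light_path(self, a, b):
        """Application gives the lambda back when it is done."""
        self.light_paths.discard(frozenset((a, b)))

    def directly_connected(self, a, b):
        return frozenset((a, b)) in self.light_paths

# The five DAS-3 sites; an application changes the topology at run time.
wan = OpticalWAN(["VU", "TUDelft", "Leiden", "UvA-VLe", "UvA-MN"])
wan.allocate_light_path("VU", "Leiden")
print(wan.directly_connected("VU", "Leiden"))  # True: direct edge now exists
wan.release_light_path("VU", "Leiden")
```

The real system of course has to deal with signalling delay, scheduling, and contention for lambdas; the sketch only conveys the programming model of a topology that applications can edit.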

Page 10: The DAS-3 Project

StarPlane

• Challenge: how to integrate such a network infrastructure with (e-Science) applications?
  - Distributed supercomputing
  - Remote data access
  - Visualization

[Figure: CPUs and data coupled through the network]

Page 11: The DAS-3 Project

Jade-MM

• Large-scale multimedia content analysis on grids
• Problem: >30 CPU hours per hour of video
  - Beeld & Geluid: 20,000 hours of TV broadcasts per year
  - London Underground: >120,000 years of processing for many tens of thousands of CCTV cameras
• Data dependencies at all levels of granularity
• UvA (Smeulders, Seinstra) + VU (Bal, Kielmann, Koole, van der Mei)
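The scale of the problem follows directly from the numbers on this slide; a quick back-of-the-envelope calculation (assuming a flat 30 CPU hours per hour of video):

```python
# Back-of-the-envelope cost of multimedia analysis, using the figures from
# the slide and assuming exactly 30 CPU hours per hour of video.
CPU_HOURS_PER_VIDEO_HOUR = 30

# Beeld & Geluid archives 20,000 hours of TV broadcasts per year.
beeld_en_geluid_hours = 20_000
cpu_hours_per_year = beeld_en_geluid_hours * CPU_HOURS_PER_VIDEO_HOUR
print(f"Beeld & Geluid: {cpu_hours_per_year:,} CPU hours per year")  # 600,000

# One CPU running around the clock provides ~8,766 hours per year, so this
# single workload keeps dozens of CPUs permanently busy.
print(f"~{cpu_hours_per_year / (24 * 365.25):.0f} CPUs busy year-round")
```

This is why the project targets grids rather than a single cluster: the archive grows faster than any one machine room can keep up with.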

Page 12: The DAS-3 Project

GUARD-G

• How to turn grids into a predictable utility for computing (much like the telephone system)
• Problems:
  - Predictability of workloads
  - Predictability of system availability (grids are faulty!)
• Allocation of light paths is very useful here
• TU Delft (Epema) + Leiden (Wolters)

Page 13: The DAS-3 Project

Summary

• DAS has a major impact on experimental Computer Science research
• It has attracted a large user base
• DAS-3 provides:
  - State-of-the-art CPUs: 64-bit (dual-core)
  - High-speed local interconnect (Myrinet-10G)
  - A flexible optical wide-area network

More info: http://www.cs.vu.nl/das3/

Page 14: The DAS-3 Project

Configuration

|                     | LU           | TUD          | UvA-VLe      | UvA-MN       | VU           | TOTALS   |
|---------------------|--------------|--------------|--------------|--------------|--------------|----------|
| Head: storage       | 10 TB        | 5 TB         | 2 TB         | 2 TB         | 10 TB        | 29 TB    |
| Head: CPU           | 2x2.4 GHz DC | 2x2.4 GHz DC | 2x2.2 GHz DC | 2x2.2 GHz DC | 2x2.4 GHz DC |          |
| Head: memory        | 16 GB        | 16 GB        | 8 GB         | 16 GB        | 8 GB         | 64 GB    |
| Head: Myri 10G      | 1            | -            | 1            | 1            | 1            |          |
| Head: 10GE          | 1            | 1            | 1            | 1            | 1            |          |
| Compute nodes       | 32           | 68           | 40 (1)       | 46           | 85           | 271      |
| Compute: storage    | 400 GB       | 250 GB       | 250 GB       | 2x250 GB     | 250 GB       | 84 TB    |
| Compute: CPU        | 2x2.6 GHz    | 2x2.4 GHz    | 2x2.2 GHz DC | 2x2.4 GHz    | 2x2.4 GHz DC | 1.9 THz  |
| Compute: memory     | 4 GB         | 4 GB         | 4 GB         | 4 GB         | 4 GB         | 1048 GB  |
| Compute: Myri 10G   | 1            | -            | 1            | 1            | 1            |          |
| Myrinet: 10G ports  | 33 (7)       | -            | 41           | 47           | 86 (2)       |          |
| Myrinet: 10GE ports | 8            | -            | 8            | 8            | 8            | 320 Gb/s |
| Nortel: 1GE ports   | 32 (16)      | 136 (8)      | 40 (8)       | 46 (2)       | 85 (11)      | 339 Gb/s |
| Nortel: 10GE ports  | 1 (1)        | 9 (3)        | 2            | 2            | 1 (1)        |          |

(DC = dual-core; TU Delft has no Myrinet.)
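As a quick sanity check, the per-site figures can be tallied against the stated totals (a sketch; the dual-core sites are read off the compute CPU row above):

```python
# Cross-check of the DAS-3 configuration totals from the table above.
# Tuples: (site, compute nodes, CPUs per node, cores per CPU, clock in GHz).
sites = [
    ("LU",      32, 2, 1, 2.6),
    ("TUD",     68, 2, 1, 2.4),
    ("UvA-VLe", 40, 2, 2, 2.2),  # dual-core
    ("UvA-MN",  46, 2, 1, 2.4),
    ("VU",      85, 2, 2, 2.4),  # dual-core
]

nodes = sum(n for _, n, _, _, _ in sites)
cores = sum(n * cpus * c for _, n, cpus, c, _ in sites)
total_ghz = sum(n * cpus * c * ghz for _, n, cpus, c, ghz in sites)

print(nodes)                          # 271 compute nodes, as in the table
print(cores)                          # 792 cores, matching the earlier slide
print(f"{total_ghz / 1000:.1f} THz")  # 1.9 THz aggregate clock
```

The tally reproduces the 271-node, 792-core, and 1.9 THz totals exactly, confirming that VU and UvA-VLe are the dual-core sites.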

Page 15: The DAS-3 Project

DAS-3 networks (VU site)

• 85 compute nodes, each connected by:
  - 1 Gb/s Ethernet (85x) to the Nortel 5530 + 3x 5510 Ethernet switch
  - 10 Gb/s Myrinet (85x) to the Myri-10G switch
• Myri-10G switch with 10 Gb/s Ethernet blade: 8x 10 Gb/s Ethernet fiber to the Nortel OME 6500 with DWDM blade
• Nortel OME 6500: 80 Gb/s DWDM to SURFnet6
• Ethernet switch: 1 or 10 Gb/s campus uplink
• Head node (10 TB mass storage): attached via 10 Gb/s Myrinet and 10 Gb/s Ethernet