
Page 1: SLAC Site Report

Les Cottrell, Gary Buhrmaster, Richard Mount, SLAC
For the ICFA/SCIC meeting, 8/22/05
www.slac.stanford.edu/grp/scs/net/talk05/icfa-slac-aug05.ppt

Page 2: SLAC Funding

• Increasingly multi-program:
  – Increasing focus on photon sources
    • SPEAR3, the Linac Coherent Light Source (LCLS), and the UltraFast Science Center
  – Increased funding from BES
    • The linac is increasingly funded by BES (entirely by 2009)
  – HEP funding roughly stable; BaBar stops taking data in 2008
  – Also NASA (GLAST and the Large Synoptic Survey Telescope (LSST))
  – Jointly funded (Stanford / DoE / NSF) projects
    • KIPAC, UltraFast center, Guest House

Page 3: SLAC Organization

• Photon Science
• Particle & Particle Astrophysics
• LCLS Construction (~$379 million)
• Operations (COO)
  – Computing/networking included here
    • Computing as a utility to all of SLAC

Page 4: Requires

• New business practices
  – More project-oriented: with multiple projects comes the need for more accountability
  – No longer dominated by HEP
• Harder to “hide” projects like PingER that have no source of funding for operations

Page 5: SLAC external network traffic

• SLAC is one of the top users of ESnet and one of the top users of Internet2 (Fermilab doesn’t do so badly either)
  – The majority of our science traffic is international
  – Connectivity to both ESnet and CENIC (via Stanford)

Page 6: [Figure: monthly traffic flows, in Terabytes/Month (y-axis 0-12), between laboratory and R&E site pairs, including SLAC (US), Fermilab (US), BNL (US), LBNL (US), NERSC (US), LLNL (US), CERN (CH), IN2P3 (FR), INFN CNAF (IT), RAL (UK), Karlsruhe (DE), WestGrid (CA), and U. Toronto (CA); flows are categorized as DOE Lab-International R&E, Lab-U.S. R&E (domestic), Lab-Lab (domestic), and Lab-Comm. (domestic)]

Page 7: ESnet BAMAN connection

• SLAC participated in the BAMAN “christening” activity on June 24th, 2005
  – Moved physics data from SLAC to NERSC at ~8 Gb/s
• SLAC and ESnet personnel are working on the “commissioning” activities for the production traffic cutover
  – The interim connection will use 1 Gb/s links (see the transfer-time sketch below)
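To put the demonstrated ~8 Gb/s rate and the interim 1 Gb/s links in context, here is a back-of-the-envelope sketch; the 10 TB dataset size and 80% link efficiency are assumed for illustration, not figures from the talk.

def transfer_time_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Estimate wall-clock hours to move dataset_tb terabytes over a link_gbps link.

    efficiency is an assumed fraction of line rate actually achieved end to end.
    """
    bits = dataset_tb * 1e12 * 8                     # 1 TB = 10^12 bytes
    return bits / (link_gbps * 1e9 * efficiency) / 3600.0

# Hypothetical 10 TB dataset over the demonstrated and interim links
for gbps in (8.0, 1.0):
    print(f"{gbps:>4.1f} Gb/s: {transfer_time_hours(10, gbps):5.1f} hours")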

Page 8: SLAC 10Gb/s plans

• Upgrade of the border and core site equipment is being ordered RSN
  – Cisco 6500s with SUP720s
• Router functionality: NetFlow, MPLS, etc. (a NetFlow sketch follows)
  – Will connect to ESnet and CENIC (via Stanford) at 10 Gb/s (when Stanford gets its 10 Gb/s upgrade)
• Power installation has been requested, but currently does not have a completion date
  – We had planned for new power and had it partially installed, but the October 2004 electrical arc-flash accident suspended most electrical “hot work”, and the ESnet BAMAN equipment has used the previously installed outlets
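Since the new routers will be able to export NetFlow, the following is a minimal sketch of what consuming that data can look like; it is an illustration, not SLAC's actual setup, and it assumes NetFlow v5 export over UDP to port 2055 on the collector.

import socket
import struct

HEADER = struct.Struct("!HHIIIIBBH")              # NetFlow v5 header (24 bytes)
RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")   # NetFlow v5 flow record (48 bytes)

def ip(addr: int) -> str:
    """Render a 32-bit address from a flow record as a dotted quad."""
    return socket.inet_ntoa(struct.pack("!I", addr))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 2055))                             # assumed export port

while True:
    data, _peer = sock.recvfrom(65535)
    version, count, *_ = HEADER.unpack_from(data, 0)
    if version != 5:
        continue                                  # only v5 is handled in this sketch
    for i in range(count):
        rec = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
        src, dst, octets = rec[0], rec[1], rec[6]
        sport, dport = rec[9], rec[10]
        print(f"{ip(src)}:{sport} -> {ip(dst)}:{dport}  {octets} bytes")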

Page 9: Network Research Activities

• IEPM for >= 10 Gbits/s hybrid networks
  – Forecasting for middleware/scheduling, problem detection, and troubleshooting; develop and evaluate new measurement tools (see the forecasting sketch after this list)
  – Passive monitoring for high-speed links
  – Provide network monitoring infrastructure to support critical HEP experiments
• Next-generation transport evaluation:
  – User-space transport (UDT), new TCP stacks, RDMA/DDP
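As an illustration of the forecasting and problem-detection idea (not SLAC's actual IEPM code), this sketch keeps an exponentially weighted mean and variance of RTT samples and flags measurements that deviate from the forecast by more than a few standard deviations; the smoothing factor, threshold, and sample values are assumed.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EwmaDetector:
    """Flag anomalous RTT samples against an exponentially weighted forecast."""
    alpha: float = 0.2        # smoothing factor (assumed)
    threshold: float = 3.0    # alert when |sample - forecast| > threshold * std
    mean: Optional[float] = None
    var: float = 0.0

    def update(self, rtt_ms: float) -> bool:
        if self.mean is None:                      # first sample seeds the forecast
            self.mean = rtt_ms
            return False
        deviation = rtt_ms - self.mean
        std = max(self.var ** 0.5, 1.0)            # 1 ms floor avoids early false alarms (assumed)
        alert = abs(deviation) > self.threshold * std
        # Judge the sample against the history first, then fold it in.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return alert

detector = EwmaDetector()
for sample in [30.1, 29.8, 30.5, 31.0, 95.0, 30.2]:   # hypothetical RTTs in ms
    forecast = detector.mean
    if detector.update(sample):
        print(f"possible problem: RTT {sample} ms vs forecast {forecast:.1f} ms")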

Page 10: Network Research Activities

• Datagrid Wide area network Monitoring Infrastructure (DWMI)
• PingER and the Digital Divide (a minimal measurement sketch follows)
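To show the flavour of PingER-style active monitoring (this is an illustration of the general approach, not PingER's actual code), the sketch below pings a set of remote hosts with the system ping utility and records packet loss and average round-trip time, the raw ingredients of the Digital Divide comparisons; the host list and ping count are assumed.

import re
import subprocess

def ping_host(host: str, count: int = 10) -> dict:
    """Ping a host and return packet loss (%) and average RTT (ms).

    Assumes a Linux-style `ping` whose summary output contains lines like
    '... 0% packet loss ...' and 'rtt min/avg/max/mdev = 1.2/3.4/5.6/0.7 ms'.
    """
    out = subprocess.run(["ping", "-c", str(count), "-q", host],
                         capture_output=True, text=True).stdout
    loss = re.search(r"([\d.]+)% packet loss", out)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", out)
    return {
        "host": host,
        "loss_pct": float(loss.group(1)) if loss else 100.0,
        "avg_rtt_ms": float(rtt.group(1)) if rtt else None,
    }

# Hypothetical monitored sites; a real deployment would read these from a configuration file.
for host in ("www.slac.stanford.edu", "www.cern.ch"):
    print(ping_host(host, count=4))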

Page 11: Network Research Activities

• High-speed testbed involvement
  – UltraLight
    • SLAC systems are currently at Sunnyvale Level(3)
      – Originally the UltraLight equipment was to be located at SLAC, but the connection to USN changed those plans
  – USN
    • Via the UltraLight project; not directly connected at this time
  – ESnet Science Data Network (SDN)
    • Provisioned, guaranteed-bandwidth circuits to support large, high-speed science data flows
  – SC05

Page 12: Future production network requirements

• BaBar - the detector runs until December 2008; luminosity will continue to increase until the end of the run, and analysis will continue after 2008
• GLAST - launch in 2006 (low data rate)
• LCLS - first science in 2009
• LSST - first science in 2012 (~0.5 GB/s); see the bandwidth sketch after this list
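As a rough guide to what a sustained rate like LSST's ~0.5 GB/s implies for the production network, the calculation below converts a data rate into link bandwidth and daily volume; the headroom factor is an assumed planning margin, not a number from the talk.

def link_requirements(rate_gbytes_per_s: float, headroom: float = 2.0) -> dict:
    """Convert a sustained science data rate into rough network planning numbers.

    headroom is an assumed multiplier covering protocol overhead, retransmits,
    and burstiness.
    """
    gbits_per_s = rate_gbytes_per_s * 8
    return {
        "sustained_gbits_per_s": gbits_per_s,
        "provisioned_gbits_per_s": gbits_per_s * headroom,
        "terabytes_per_day": rate_gbytes_per_s * 86400 / 1000.0,
    }

# Example: LSST at ~0.5 GB/s sustained
print(link_requirements(0.5))
# {'sustained_gbits_per_s': 4.0, 'provisioned_gbits_per_s': 8.0, 'terabytes_per_day': 43.2}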

Page 13: Futures

• UltraFast center – modeling and analysis
• Huge memory systems for data analysis
  – The PetaCache project
• The broader US HEP program (aka LHC)
  – Contributes to the orientation of SLAC Scientific Computing R&D
• Continued network research activities
  – Network research vs. research network activities

Page 14: Futures

• Possibility of moving some or all of the site computing infrastructure offsite
  – Power and cooling challenges onsite
    • We have a 1 MW substation outside the building for expansion, but no cables into the building. We have an 8” water-cooling pipe, but we are near cooling capacity. (A rough capacity sketch follows.)
  – A building retrofit would be disruptive
    • The computing center was built for water-cooled mainframes, not air-cooled rack-mounted equipment
  – If SLAC moves forward, we will require multiple lambdas from the site to the collocation facility
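To illustrate why a 1 MW budget constrains onsite growth, here is a simple capacity estimate; the per-rack power and cooling overhead are hypothetical assumptions for illustration, not SLAC measurements.

def racks_supported(substation_kw: float, kw_per_rack: float = 8.0,
                    cooling_overhead: float = 0.4) -> int:
    """Estimate how many racks an electrical budget supports.

    kw_per_rack and cooling_overhead (extra power spent on cooling, as a
    fraction of IT load) are assumed, illustrative values.
    """
    usable_it_kw = substation_kw / (1 + cooling_overhead)
    return int(usable_it_kw // kw_per_rack)

# Example: the 1 MW (1000 kW) expansion substation mentioned above
print(racks_supported(1000))   # about 89 racks under these assumptions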