Page 1: HEP Data Grid in Japan Takashi Sasaki Computing Research Center KEK.

HEP Data Grid in Japan

Takashi Sasaki

Computing Research Center

KEK

Page 2:

28/07/[email protected]

Contents

• Japanese HENP projects

• Belle GRID

• ATLAS-Japan Regional Center

• Collaboration in Asia Pacific

Page 3:

Major HENP projects in Japan

• KEK Proton Synchrotron
  – K2K etc.

• KEKB – Belle experiment
  – 300 members from 54 institutes in 10 countries
  – Super-B is being planned

• J-PARC – new facility under construction at Tokai (~2008)
  – particle physics, material science and life science
  – T2K

• International collaborations
  – LHC ATLAS, CERN
  – HERA ZEUS, DESY
  – RHIC, BNL
  – etc.

Page 4:

KEK-B Belle Experiment

Page 5:

[Aerial photo of the KEKB site; scale bar: 1 km]

Page 6:

Belle Data

• 15 MB/s maximum data rate

• 200 TB/year of raw data recorded

• An equivalent number of Monte Carlo events
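A quick back-of-the-envelope check (my own arithmetic, not from the slide) of how these two numbers relate:

```python
# Belle data volume vs. peak DAQ rate (figures from the slide)
peak_rate = 15e6           # bytes/s, maximum data rate
raw_per_year = 200e12      # bytes/year of raw data recorded

# days of continuous peak-rate running needed to record 200 TB
days_at_peak = raw_per_year / peak_rate / 86400
print(round(days_at_peak))   # roughly 150 days: the average DAQ
                             # rate is well below the 15 MB/s peak
```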

Page 7:

Belle PC farms

Page 8:

Super KEK-B
• L = 10^35 cm^-2 s^-1 in 2007?
• Data rate ~ 250 MB/s

Page 9:

K2K now upgrading to T2K

Page 10:

T2K (Tokai to Kamioka)
• J-PARC
  – for particle physics, material and life science
  – joint project of JAERI and KEK
  – 100 times more intense neutrino beam
• High trigger rate at the near detector
  – operational in 2007

Page 11:

LHC-ATLAS

• Japanese contributions
  – Semiconductor Tracker
  – Endcap Muon Trigger System
  – Muon Readout Electronics
  – Superconducting Solenoid
  – DAQ system
  – Regional Analysis Facility

• ICEPP will be the site of the regional center

Page 12:

HEP GRID related activities

• Networking
  – domestic/international

• BELLE
  – data distribution and remote MC production

• ATLAS Japan
  – collaboration between ICEPP and KEK

• Hadron therapy simulation
  – application of HEP tools to the medical field

Page 13:

SuperSINET: major HEP sites have a 1 Gbps DWDM connection to KEK
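As a rough illustration (my arithmetic, not the slide's) of what a 1 Gbps link means for Belle-scale data:

```python
# Time to move one year of Belle raw data (200 TB) over a 1 Gbps link,
# assuming the link could be driven at full wire speed
volume_bits = 200e12 * 8
link_bps = 1e9
days = volume_bits / link_bps / 86400
print(round(days, 1))   # ~18.5 days at 100% utilization
```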

Page 14:

Current Network Connections around Japan (as of July 2004)

[Map: SuperSINET (10 G) linking major domestic sites, including the Tsukuba WAN; international links via APAN/TransPAC to the USA (Chicago, NY; 5 G -> 10 G), APII to Korea (1 G), ASNET/TANET2 to Taiwan (622 M), CERNET to China (155 M), SINET to Thailand (2 M), Hawaii (155 M), and a 0.5 M link toward Novosibirsk, Russia]

Page 15:

BELLE GRID

• Distributed analysis and data sharing among institutes
  – experimental data distribution to remote sites
  – Monte Carlo production at remote sites, with events sent back to KEK
  – remote job submission
  – etc.

• Testing SRB with GSI now
  – among Australia, Taiwan and KEK
  – hopefully Korea, soon

Page 16:

Why SRB?
• Easy to install, use and manage
  – most of Belle's collaborating institutes are small or mid-sized universities
    • short of manpower

• Useful features
  – file replication
  – parallel data transfer
  – Grid aware: GSI authentication
  – command line interface, API, GUI
  – fancy Web interface
  – etc.

• Excellent user support
  – quick response
  – seminars

• Available today!
  – a solution is needed for Belle now

Page 17:

SRB (Storage Resource Broker)

[Diagram: one SRB zone — SRB clients reach, over the Internet, a set of SRB servers fronting heterogeneous storage (disk, tape, RAID, NFS, databases), with a single MCAT metadata catalog serving the zone]

Page 18:

SRB zone federation

[Diagram: SRB zone federation — Zone A and Zone B each have their own SRB servers, storage (disk, tape, RAID, NFS, databases) and MCAT, and are federated with each other over the Internet]

Page 19:

SRB Command line interface

Page 20:

SRB test bed system

[Diagram: SRB test bed federating KEK and Australia over the Internet. KEK side (behind a firewall, with DMZ host gtdmz01): zones glsrb01 (gl01/gl03, 120 TB HPSS store), gtsrb13 (gt13) and kekgt15 (gt15, 800 GB RAID), with MCATs on PostgreSQL and DB2; the BELLE computer system (bcs20, Belle data over NFS) and a client sit on the BELLE secure net. Australia side: zone anusf with its own MCAT (PostgreSQL)]

Page 21:

Status

• Zone federation between ANU and KEK has been established
  – ANU, the University of Melbourne and the University of Sydney are collaborating on BELLE/ATLAS Grid issues
  – one MCAT running at ANU
  – data can be stored on and retrieved from the Belle data system, and also from HPSS on the KEK side

• Zone federation between Academia Sinica, Taiwan and KEK has been established
  – still need to solve some problems

Page 22:

BELLE GRID Future plan

• Participation of Korean sites
• Mutual job submission using Globus
• LCG2 + SRB (if they wish)
  – LCG2 is under testing at KEK, also with help from ICEPP
  – because many foreign institutes work both on BELLE and on one of the LHC experiments, they want to use LCG rather than vanilla Globus

• Baby "tier-0" at KEK and "tier-1" at ICEPP
  – SRB and LCG-RLS synchronization will be tried, based on Simon Matson's (Bristol, CMS) work

Page 23:

Grid in ATLAS Japan

• Regional analysis center for ATLAS
  – ICEPP, the University of Tokyo

• Joint collaboration between ICEPP and KEK for Grid deployment

Page 24:

LCG MW Deployment in Japan

• RC Pilot Model System @ ICEPP
  – since 2002
  – LCG testbed, now LCG2_1_1

• LCG2 testbed @ KEK
  – baby tier-0
  – LCG2_1_1

• Regional Center Facility
  – will be introduced in JFY2006
  – aiming at a "Tier-1"-size resource

Page 25:

Page 26:

SuperSINET Performance Measurement (DWDM link)

"A" setting (128 KB window):
  TCP 479 Mbps  (-P 1 -t 1200 -w 128KB)
  TCP 925 Mbps  (-P 2 -t 1200 -w 128KB)
  TCP 931 Mbps  (-P 4 -t 1200 -w 128KB)
  UDP 953 Mbps  (-b 1000MB -t 1200 -w 128KB)

"B" setting (longer window size):
  TCP 922 Mbps  (-P 1 -t 1200 -w 4096KB)
  UDP 954 Mbps  (-b 1000MB -t 1200 -w 4096KB)

"A" setting: 104.9 MB/s; "B" setting: 110.2 MB/s
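These numbers are consistent with a bandwidth-delay-product limit: a single TCP stream carries at most one window per round trip, so the 128 KB window caps the single-stream rate while the 4096 KB window does not. A minimal sketch (the RTT is inferred from the quoted rates, not given on this slide):

```python
# Single-stream TCP throughput is capped at window_size / RTT.
window_bits = 128 * 1024 * 8        # "A" setting: 128 KB window
rate_bps = 479e6                     # measured single-stream TCP rate
rtt = window_bits / rate_bps         # implied round-trip time, ~2.2 ms

# Window needed to fill a 1 Gbps path at that RTT:
needed_window_bytes = 1e9 * rtt / 8  # ~270 KB, so 128 KB is too small
print(rtt, needed_window_bytes)      # and the 4096 KB "B" window suffices
```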

Page 27:

GRID testbed environment with HPSS through GbE-WAN

[Diagram: ICEPP (100 CPUs) and KEK (6 CPUs, HPSS 120 TB, 0.2 TB disk), ~60 km apart, linked over a 1 Gbps GbE-WAN, with user PCs attached at 100 Mbps. Each site runs NorduGrid (grid-manager, gridftp-server), Globus MDS and a PBS server with PBS clients as CE; one site also runs a Globus replica catalog, and the HPSS servers act as SE]

Page 28:

Client disk speed @ KEK = 48 MB/s; @ ICEPP = 33 MB/s

[Plot: aggregate transfer speed (MB/s, 0-80) vs. number of parallel file transfers (0-10), for a KEK client (LAN) and an ICEPP client (WAN); pftp -> pftp from HPSS mover disk to client disk, writing to /dev/null and to the client disk, FTP buffer = 64 MB]

Even a 3 ms latency affects the results.
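One way to read the plot: each extra parallel stream adds roughly one stream's worth of throughput until the client disk becomes the bottleneck. A toy model of that saturation (only the disk speeds come from the slide; the 12 MB/s per-stream rate is an illustrative assumption):

```python
def aggregate_speed(n_streams, per_stream, disk_limit):
    """Toy model: aggregate throughput scales with the stream count
    until the client disk write speed caps it."""
    return min(n_streams * per_stream, disk_limit)

# Disk limits from the slide; per-stream rate is assumed.
kek = [aggregate_speed(n, 12, 48) for n in range(1, 9)]
icepp = [aggregate_speed(n, 12, 33) for n in range(1, 9)]
print(kek)    # saturates at the 48 MB/s KEK disk limit
print(icepp)  # saturates earlier, at the 33 MB/s ICEPP disk limit
```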

Page 29:

Microscopic monitoring of network performance

• speed = increment of the sum of TCP data size in every 10 ms
• window size grows slowly after a packet loss
• a longer window is not a perfect remedy when you have packet loss

(panels: without packet loss / with packet loss)
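The slow window regrowth described above is standard TCP Reno behavior: after a loss the congestion window halves, then grows by about one segment per round trip, so a larger window takes proportionally longer to recover. A rough sketch (the MSS and RTT values are illustrative assumptions):

```python
MSS = 1460        # bytes per segment (assumed)
RTT = 0.003       # ~3 ms round trip, as on the KEK-ICEPP path

def recovery_time(window_bytes):
    """Seconds for TCP Reno congestion avoidance to climb from half
    the window (post-loss) back to the full window, at +1 segment/RTT."""
    cwnd = window_bytes // MSS        # window in segments
    rtts = cwnd - cwnd // 2           # segments to regain
    return rtts * RTT

print(recovery_time(128 * 1024))      # ~0.14 s for a 128 KB window
print(recovery_time(4096 * 1024))     # ~4.3 s for a 4 MB window
```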

Page 30:

FAST

• http://netlab.caltech.edu/FAST/

• Looks nice and we want to try it, but we haven't because of the patent and IP issues.
  – some people at KEK are afraid that they might have difficulty working on a similar topic once they have seen its source code
  – is this true?

Page 31:

CA

• A key issue for the Grid
• ICEPP and KEK will jointly run one CA for BELLE and ATLAS
  – only for active KEK users, to simplify the procedure
  – current situation:
    • ICEPP depends on a foreign CA to join LCG
    • KEK is running a test CA locally

• CA management is not cheap. Any good ideas?

Page 32:

Storage evaluation

• SAN solutions
  – IBM SAN File System (a.k.a. StorageTank)
    • Linux + AIX on the server side
    • tested AIX, Solaris and Linux clients
      – Linux clients were fastest in our tests
  – HP Lustre
    • waiting for beta product delivery

Page 33:

Distributed simulation in advanced radio therapy

• A model for hospitals and computing centers
  – hospitals send CT, MRI or PET images (DICOM) with treatment planning data to a computing center as input
    • higher security is required to protect personal data
  – full simulation using Geant4 at computing centers
    • parallel simulation
  – analysis results and feedback are returned in DICOM to the hospitals

• Validation of treatment planning

Page 34:

Toward Asia-Pacific collaboration

• Take advantage of working with people in neighboring time zones

• HEP population in Asia-Pacific
  – ATLAS
    • Australia (7), China (15), Japan (45) and Taiwan (5)
    • 72/1446 = 5.0%
  – CMS
    • China (31), Korea (17) and Taiwan (14)
    • 62/1676 = 3.7%
  – ALICE
    • China (20), Japan (3) and Korea (12)
    • 35/747 = 4.7%
  – LHCb
    • China (19)
    • 19/737 = 2.6%
  – Belle
    • Australia (8), China (11), Japan (122), Korea (15) and Taiwan (13)
    • 169/246 = 67% (excluding Japan: 20%)
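These percentages follow directly from the member counts; a quick check of the LHC figures (my arithmetic, using the counts quoted above):

```python
# Asia-Pacific members / total collaboration size, from the slide
shares = {
    "ATLAS": (7 + 15 + 45 + 5, 1446),
    "CMS":   (31 + 17 + 14, 1676),
    "ALICE": (20 + 3 + 12, 747),
    "LHCb":  (19, 737),
}
for name, (ap, total) in shares.items():
    print(name, round(100 * ap / total, 1))  # 5.0, 3.7, 4.7, 2.6
```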

Page 35:

Page 36:

Summary

• Japan is a unique country in the Asia-Pacific region in having large accelerators for HENP
  – it should have a "tier-0" center

• SRB is under testing for BELLE

• ICEPP, U-Tokyo and KEK are collaborating to build the ATLAS regional center in Japan

• We seek collaboration among Asia-Pacific countries
  – BELLE, LHC
  – more bandwidth is necessary among sites