
Computing at MAGIC: present and future
Javier Rico
Institució Catalana de Recerca i Estudis Avançats & Institut de Física d'Altes Energies, Barcelona, Spain
ASPERA Computing and Astroparticle Physics Meeting, Paris, 8 February 2008


Page 1: Computing at MAGIC: present and future Javier Rico Institució Catalana de Recerca I Estudis Avançats & Institut de Física d’Altes Energies Barcelona, Spain.


ASPERA Computing and Astroparticle Physics Meeting

Paris 8 February 2008

Page 2

J. Rico (ICREA & IFAE), Computing at MAGIC

Summary

• Introduction: VHE γ-ray astronomy and MAGIC

• Data handling at MAGIC
• GRID at MAGIC
• Virtual observatory
• Conclusions

Page 3

VHE astronomy

[Topics: SNRs, QSRs, dark matter, pulsars, GRBs, quantum gravity, cosmology, AGNs, origin of CRs]

• MAGIC is a Cherenkov telescope (system) devoted to the study of the most energetic electromagnetic radiation, i.e. very high energy (VHE, E > 100 GeV) γ-rays

• VHE γ-rays are produced in non-thermal violent processes in the most extreme environments in our Universe

• Astrophysics of the latest stellar stages, AGNs, GRBs

• Fundamental physics

Page 4

• MAGIC is currently the largest-dish Cherenkov telescope in operation (17 m diameter)

• Located at the Observatorio del Roque de los Muchachos on the Canary Island of La Palma (Spain)

• Run by an international collaboration of ~150 physicists from Germany, Spain, Italy, Switzerland, Poland, Armenia, Finland and Bulgaria

• In operation since fall 2004 (about to finish 3rd observation cycle)

• 2nd telescope (MAGIC-II) to be inaugurated on September 21st 2008

MAGIC

Page 5

IMAGING

A segmented PMT camera (577/1039 channels for the first/second telescope) allows the imaging of Cherenkov showers

Page 6

Raw data volume

Event rate R, number of camera pixels n, digitization samples per pixel s, precision p

Data volume rate = R × n × s × p

Phase | R (Hz) | n    | s  | p (bits) | 1 hour | 1 day   | 1 year
1     | 300    | 577  | 30 | 8        | 18 GB  | 150 GB  | 20 TB
2     | 300    | 577  | 80 | 10       | 62 GB  | 500 GB  | 75 TB
3     | 300    | 1616 | 80 | 10/12    | 175 GB | 1400 GB | 210 TB

1: One telescope, 300 MHz digitization system (Oct 2004 – Dec 2006)
2: One telescope, 2 GHz digitization system (Jan 2007 – Sep 2008)
3: Two telescopes, 2 GHz digitization system (from Sep 2008)
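The formula above can be checked against the table with a few lines of Python (GB here meaning 10⁹ bytes):

```python
def data_volume_bytes(rate_hz, n_pixels, n_samples, bits, seconds):
    """Data volume = R x n x s x p, accumulated over a time window."""
    return rate_hz * n_pixels * n_samples * (bits / 8) * seconds

HOUR = 3600
# Phase 2: one telescope, 2 GHz digitization (300 Hz, 577 pixels, 80 samples, 10 bits)
per_hour = data_volume_bytes(300, 577, 80, 10, HOUR)
print(f"{per_hour / 1e9:.0f} GB/hour")  # ~62 GB/hour, as in the table
```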

Page 7

Data flow

[Data-flow diagram. La Palma: DAQ → fast analysis; raw data (200 TB/yr) are calibrated and reduced on site. Transfer by FTP/mail to PIC, Barcelona: reduced data (2 TB/yr) and calibrated data (20 TB/yr), served to users.]

Starting in September 2008, the MAGIC data center will be hosted at PIC, Barcelona (a Tier-1 center). It has been in a test phase for a year already. It provides:

• automatic data transfer from La Palma
• tape storage of raw data
• automatic data analysis
• access to the latest year of calibrated and reduced data
• CPU and disk buffer for data analysis
• database

Page 8

MAGIC/PIC Data center

Data center disk needs (plus unlimited tape storage capacity):

Use                    | Size
1 yr reduced data      | 3 TB
1 yr calib. data       | 21 TB
Buffer data processing | 21 TB
Buffer tape/disk I/O   | 21 TB
Users' buffer          | 6 TB
Total                  | 72 TB

The already-running system consists of:
• 25 TB of disk space (ramp-up to the final 72 TB foreseen for next September, within schedule)
• LTO3/LTO4 tape storage and I/O with robots
• ~20 CPUs (2 GHz) for data processing and analysis
• automation of data transfer/processing/analysis
• database
• web access
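As a quick cross-check, the itemized disk needs in the table do sum to the quoted 72 TB total:

```python
# Disk budget from the MAGIC/PIC data center table (values in TB)
disk_needs = {
    "1 yr reduced data": 3,
    "1 yr calib. data": 21,
    "buffer data processing": 21,
    "buffer tape/disk I/O": 21,
    "users' buffer": 6,
}
total_tb = sum(disk_needs.values())
print(total_tb)  # 72
```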

Page 9

Trends foreseen for 2008

Philosophy:

Adopt the Grid where it allows MAGIC users to do better science: "If it ain't broken, don't fix it"

Leverage the worldwide mutual-trust agreement for Grid certificates to simplify user ID management for:
• interactive login (ssh → gsi-ssh or equivalent)
• casual file transfer (https via Apache+mod_gridsite or gridftp)

Move to batch submission via Grid tools in order to unify CPU accounting with LHC

Set up the Grid utility "reliable File Transfer Service" to automate file distribution between the MAGIC data center at PIC and sites which regularly subscribe to many datasets

PIC/IFAE will have specific resources to help with this, partially thanks to funding from the EGEE-III project

Integrate into the procedure for opening an account at the data center the additional steps for a user to obtain a Grid certificate and to be included as a member of the MAGIC VO.

Page 10

Monte Carlo simulation

The recorded data are mainly background events due to charged cosmic rays (CRs)

Background rejection needs large samples of Monte Carlo simulated γ-ray and CR showers

Very CPU-consuming (1 night of background requires > 10⁶ computer-days)

Open issues: access to simulated samples, coordination of MC production, scalability (MAGIC-II, CTA...)

GRID can help with these issues

Page 11

• H. Kornmayer (Karlsruhe) proposed the following scheme

• MAGIC Virtual Organization created within EGEE-II

• Involves three national Grid centers

• CNAF (Bologna)
• PIC (Barcelona)
• GridKA (Karlsruhe)

• Connect MAGIC resources to enable collaboration

• 2 subsystems:
  • MC (Monte Carlo)
  • Analysis

• Start with MC first

The idea

Page 12

"I need 1.5 million hadronic showers with energy E, direction (theta, phi), ... as a background sample for the observation of the Crab Nebula"


1. Run the MAGIC Monte Carlo Simulation (MMCS) and register the output data
2. Simulate the telescope geometry with the reflector program for all interesting MMCS files and register the output data
3. Simulate the starlight background for a given position in the sky and register the output data
4. Simulate the response of the MAGIC camera for all interesting reflector files and register the output data
5. Merge the shower simulation and the starlight simulation and produce a Monte Carlo data sample
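The workflow above can be sketched as a simple pipeline. The stub functions below are placeholders for the real MMCS, reflector, starlight and camera programs (which are external executables), so only the chaining and the bookkeeping of registered outputs are illustrated:

```python
# Stubs standing in for the real MC programs (MMCS, reflector, ...).
def mmcs(request):       return {"stage": "showers", **request}
def reflector(data):     return {**data, "stage": "reflected"}
def starlight(sky_pos):  return {"stage": "starlight", "pos": sky_pos}
def camera(data):        return {**data, "stage": "camera"}
def merge(events, bkg):  return {"stage": "mc_sample", "events": events, "background": bkg}

def run_pipeline(request, registry):
    """Run the five workflow steps, registering each output."""
    out = mmcs(request)                 # 1. air-shower simulation
    registry.append(out)
    out = reflector(out)                # 2. telescope geometry / optics
    registry.append(out)
    stars = starlight(request["sky_pos"])  # 3. night-sky background
    registry.append(stars)
    out = camera(out)                   # 4. PMT camera response
    registry.append(out)
    sample = merge(out, stars)          # 5. final MC data sample
    registry.append(sample)
    return sample

registry = []
sample = run_pipeline({"particle": "proton", "n_showers": 1_500_000,
                       "sky_pos": "Crab"}, registry)
print(sample["stage"], len(registry))  # mc_sample 5
```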

MC Workflow

Page 13

Implementation

3 main components:
• Metadata database: bookkeeping of the requests, their jobs and the data
• Requestor: the user defines the parameters by inserting a request into the metadata database
• Executor: creates Grid jobs by checking the metadata database frequently (cron) and generating the input files

Page 14

Status of MC production

The last data challenge (from September 2005) produced ~15000 simulated γ-ray showers, with ~4% failures

Since H. Kornmayer left, the project has been stalled.

A new crew is taking over the VO (UCM Madrid + INSA + Dortmund).

Plan to start producing MC for MAGIC-II soon

Page 15

Virtual observatory

• MAGIC will share data with other experiments (GLAST, VERITAS, H.E.S.S.... more?)

• There might be some time reserved for external observers (from experiment to observatory)

• In general, MAGIC results should be more accessible to the astrophysics community

• MAGIC will release data at PIC datacenter using GRID technology in FITS format

• Step-by-step approach:
  • Published data (skymaps, light curves, spectra, ...) → imminent
  • Data shared with other experiments (GLAST) → soon
  • Data for external observers → mid-term
• A standard format has to be defined (other experiments, future CTA)
• Eventually integrated within a Virtual Observatory (under investigation)

[Skymap figure: Crab Nebula, MAGIC, September 2006]

Page 16

MAGIC-GRID Architectural Design proposal

The Server application creates Grid template files that are sent to each of the available Grid resources.

MAGIC EXECUTOR (SERVICE)

GRID

The workflow is executed in the available Grid nodes within the MAGIC Virtual Organization. The products are stored in a Data Product Storage unit.

Meta Data Database

Bookkeeping of the requests, their jobs and the data

GRID JOB TEMPLATE: The Server creates the template using the middleware (gLite, LCG) and submits the jobs to the GRID for execution.

MAGIC Request (VOTable), sent as a SOAP message (*)

Status information and results of submitted MAGIC jobs

(*) SOAP Message: Simple Object Access Protocol VOTable: XML standard for interchange of data represented as a set of tables (http://www.ivoa.net/Documents/REC/VOTable/VOTable-20040811.html)
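For illustration, a minimal VOTable of the kind such a request could carry can be built with Python's standard library. The field names (particle, n_showers) are invented for the sketch, not the actual MAGIC request schema:

```python
import xml.etree.ElementTree as ET

# Build a minimal VOTable carrying one MC request row.
votable = ET.Element("VOTABLE", version="1.1")
table = ET.SubElement(ET.SubElement(votable, "RESOURCE"), "TABLE", name="mc_request")

# Declare the columns (FIELD elements); char fields need an arraysize.
for name, dtype in [("particle", "char"), ("n_showers", "int")]:
    attrs = {"name": name, "datatype": dtype}
    if dtype == "char":
        attrs["arraysize"] = "*"
    ET.SubElement(table, "FIELD", attrs)

# One data row (TR with one TD per field).
tr = ET.SubElement(ET.SubElement(ET.SubElement(table, "DATA"), "TABLEDATA"), "TR")
for value in ("proton", "1500000"):
    ET.SubElement(tr, "TD").text = value

xml_text = ET.tostring(votable, encoding="unicode")
print(xml_text)
```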

SOFTWARE

- MMCS
- Reflector
- Camera

MAGIC REQUESTOR (CLIENT)

The user specifies the parameters of a particular job request through an interface.

DATA PRODUCT STORAGE UNIT

VIRTUAL OBSERVATORY ACCESSIBILITY

The MAGIC Executor is notified when the jobs have finished. The application will be designed to send the output data to a persistent layer compliant with the emerging VOSpace protocol (to be implemented).

VO TOOLS

The MAGIC Requestor should allow interaction with VO applications, staying as close as possible to the new emerging astronomical applications.

Page 17

Summary

MAGIC scientific program requires large computing power and storage capacity

The data center at PIC/IFAE (Barcelona) is up, and starts official operation in September 2008 with MAGIC-II

Massive MC production for MAGIC-II will involve GRID

(Some) data will be released through a virtual observatory

A good benchmark for other present and future astroparticle projects