
Distributed Trigger System for the LHC experiments

Krzysztof Korcyl

ATLAS experiment laboratory

H. Niewodniczanski Institute of Nuclear Physics, Cracow


Contents

• LHC
  • physics program
  • detectors (ATLAS, LHCb)
• LHC T/DAQ system challenges
• T/DAQ system overview
  • ATLAS
  • LHCb
• T/DAQ trigger and data collection scheme
  • ATLAS
  • LHCb


CERN and the Large Hadron Collider, LHC

The LHC is being constructed underground in a 27 km tunnel. It will produce head-on collisions of very high energy protons. ALICE, ATLAS, CMS and LHCb are the approved experiments.


The ATLAS LHC Experiment


The LHCb LHC Experiment


The LHCb LHC Experiment - an event signature


Challenges for Trigger/DAQ system

The challenges:

• unprecedented LHC rate of 10⁹ interactions per second
• large and complex detectors with O(10⁸) channels to be read out
• bunch crossing rate of 40 MHz requires a decision every 25 ns
• event storage rate limited to O(100) MB/s

The big challenge: to select rare physics signatures with high efficiency while rejecting common (background) events.

• E.g. the H → γγ (m_H ≈ 100 GeV) rate is ~10⁻¹³ of the LHC interaction rate
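As a back-of-the-envelope check, the required selectivity follows from just the two figures quoted above (a minimal Python sketch, using no numbers beyond those on this slide):

# Selectivity estimate from the figures quoted above.
interaction_rate_hz = 1e9     # ~10^9 LHC interactions per second
h_gg_fraction = 1e-13         # H -> gamma gamma is ~10^-13 of all interactions

h_gg_rate_hz = interaction_rate_hz * h_gg_fraction
hours_per_event = 1.0 / h_gg_rate_hz / 3600.0
print(f"H -> gamma gamma rate: {h_gg_rate_hz:.0e} Hz "
      f"(about one event every {hours_per_event:.0f} hours)")

So the trigger must dig a signal occurring a few times per day out of a billion interactions per second.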

Approach: a three-level trigger system


ATLAS Trigger/DAQ - system overview

[Diagram: data from the CALO, MUON and TRACKING detectors pass through pipeline memories at LVL1, then through Readout Drivers and Readout Buffers; LVL2 is seeded by Regions of Interest from LVL1, followed by the Event Builder, the Event Filter (EF) and Data Recording.]

Interaction rate ~1 GHz; bunch crossing rate 40 MHz.

LVL1: rate 100 kHz, latency < 2.5 µs, throughput 200 GB/s
LVL2: rate 1 kHz, latency ~10 ms, throughput 4 GB/s
EF: rate 100 Hz, latency ~1 s, throughput 200 MB/s

• LVL1 decision based on coarse-granularity calorimeter data and muon trigger stations
• LVL2 can access data at full granularity and can combine information from all detectors, with emphasis on fast rejection. Regions of Interest (RoIs) from LVL1 are used to reduce the data requested (a few % of the whole event) in most cases; the sketch after these bullets shows how this scales the throughput
• The EF refines the selection according to the LVL2 classification, performing a fuller reconstruction. More detailed alignment and calibration data can be used

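The throughput figures on this slide become consistent once an average event size is assumed. The sketch below uses a hypothetical 2 MB event and a 2% RoI fraction; both inputs are assumptions chosen to reproduce the quoted numbers, not official parameters:

# Reproduce the slide's throughputs from its rates.
# ASSUMPTIONS: 2 MB average event, LVL2 requests ~2% of each event (RoIs).
event_size_mb = 2.0
roi_fraction = 0.02

lvl1_accept_hz = 100e3        # LVL1 output rate
ef_accept_hz = 100.0          # EF output rate to storage

readout_gb_s = lvl1_accept_hz * event_size_mb / 1e3                  # -> 200 GB/s
lvl2_roi_gb_s = lvl1_accept_hz * event_size_mb * roi_fraction / 1e3  # -> 4 GB/s
storage_mb_s = ef_accept_hz * event_size_mb                          # -> 200 MB/s

print(f"readout {readout_gb_s:.0f} GB/s, "
      f"LVL2 RoI traffic {lvl2_roi_gb_s:.0f} GB/s, "
      f"storage {storage_mb_s:.0f} MB/s")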


ATLAS overall data collection scheme

[Diagram: LVL1 seeds the RoI Builder (RoIB) and the LVL2 supervisor (L2SV); Readout Drivers (RODs) feed the Readout Subsystems (ROSs). LVL2 processing units (L2PUs) in the LVL2 farm and event-building nodes (SFIs) reach the ROSs through switches and a large data collection network; the DataFlow Manager (DFM) steers event building, and built events are passed to the EF subfarms.]


Why trigger on GRID?

• First code benchmarking shows that local CPU power may not be sufficient (budget and manageable cluster size), so the work could be distributed over remote clusters
• Why not? GRID technology will provide platform-independent tools which perfectly match the needs of running, monitoring and controlling the remote trigger algorithms
• Development of dedicated tools (based on GRID technology) ensuring quasi-real-time response of the order of a few seconds might be a necessary task for CROSSGRID


Data flow diagram

[Diagram: the experiment feeds its local high-level trigger, which writes accepted events to tape. An Event dispatcher, attached through a CROSSGRID interface, fills an event buffer for remote processing; remote high-level triggers process the buffered events and return their decisions.]
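A minimal sketch of the dispatcher loop in Python; every name here (the stub trigger, the round-robin site choice) is hypothetical, standing in for the real transport and the CROSSGRID interface:

import queue
import concurrent.futures as cf

event_buffer = queue.Queue()      # events set aside for remote processing

def remote_high_level_trigger(event, site):
    # Stub: a real implementation would ship the event to `site`,
    # run the trigger algorithm there and return its decision.
    return {"event_id": event["id"], "site": site, "accept": True}

def dispatch(sites, timeout_s=5.0):
    # Send each buffered event to a remote site and collect the decision.
    with cf.ThreadPoolExecutor(max_workers=len(sites)) as pool:
        while not event_buffer.empty():
            event = event_buffer.get()
            site = sites[event["id"] % len(sites)]   # naive round-robin
            decision = pool.submit(remote_high_level_trigger,
                                   event, site).result(timeout=timeout_s)
            yield decision

# Feed a few dummy events through two hypothetical remote clusters.
for i in range(4):
    event_buffer.put({"id": i, "data": b"..."})
for decision in dispatch(["cluster-a", "cluster-b"]):
    print(decision)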


Operational features

• The Event dispatcher is a separate module, easy to activate and deactivate
• The implementation is independent of the specific trigger solutions of a given experiment
• Dynamic resource assignment keeps the system running within assumed performance limits (event buffer occupancy, link bandwidth, number of remote centres, timeout rate, ...)
• Fault tolerance and timeout management (no decision within the allowed time limit); a minimal sketch follows this list
• User interface for monitoring and control by a shifter
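For the timeout bullet, one plausible policy (a sketch, not the project's actual design) is to fall back to local processing when no decision arrives in time, and to count such timeouts for the monitoring limits listed above:

import concurrent.futures as cf

timeout_count = 0   # would feed the shifter's monitoring display

def decide_with_timeout(pool, trigger_fn, event, timeout_s):
    # Return the remote decision, or a local fallback if it times out.
    global timeout_count
    future = pool.submit(trigger_fn, event)
    try:
        return future.result(timeout=timeout_s)
    except cf.TimeoutError:
        future.cancel()            # best effort; the remote job may still finish
        timeout_count += 1
        return {"event_id": event["id"], "source": "local-fallback"}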


Testbed for distributed trigger

Easy to test by substituting the real experiment data with a PC sending Monte Carlo data.

[Diagram: a PC at CERN sends Monte Carlo data into an event buffer; the Event Dispatcher, with its monitoring and control tool, forwards events to remote centres in Poland, Spain and Germany, each returning a decision.]
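Hooking up the testbed is then just a change of event source; a sketch with a hypothetical file format (one JSON event per line):

import json

def monte_carlo_source(path):
    # Yield pre-generated Monte Carlo events instead of live detector data.
    with open(path) as f:
        for line in f:
            yield json.loads(line)

# The dispatcher consumes this exactly as it would the live event buffer:
# for event in monte_carlo_source("mc_events.jsonl"):
#     event_buffer.put(event)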


Summary

• Trigger systems for the LHC experiments are challenging
• GRID technology may help to compensate for the lack of local CPU power
• The proposed distributed trigger structure, a separate Event dispatcher module, offers a cross-experiment platform independent of specific local trigger solutions
• Implementation on a testbed is feasible even without running experiments
• Dedicated tools are to be developed within the CROSSGRID project to ensure interactivity, monitoring and control