High Level Triggering (HLT)
Fred Wickens
• Introduction to triggering and HLT systems
– What is triggering
– What is high level triggering
– Why do we need it
• Case study of ATLAS HLT (+ some comparisons with other experiments)
• Summary
Simple trigger for spark chamber set-up
[Diagram: two scintillation counters (C1, C2), each with a light guide and photo-multiplier, sandwich a spark chamber whose alternate plates sit at 0 and -12 kV. Each photo-multiplier feeds a discriminator; the discriminator outputs (logic signals, e.g. NIM) go to an AND gate, whose coincidence output is amplified to fire the spark gap and hence the spark chamber.]
Dead time
• Experiments are frozen from trigger to end of readout
– Trigger rate with no dead time = R per sec
– Dead time per trigger = τ sec
– For 1 second of live time need 1 + Rτ seconds
– Live time fraction = 1/(1 + Rτ)
– Real trigger rate = R/(1 + Rτ) per sec
Rate (Hz)   Dead time (ms)   Live time (%)   Trigger rate (Hz)
10          10               91              9.1
1000        10               9.1             91
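Not on the original foil: a minimal Python check of the dead-time formulas and the table above.

```python
def live_time_fraction(rate_hz, dead_time_s):
    """Live-time fraction 1/(1 + R*tau) for raw rate R and dead time tau."""
    return 1.0 / (1.0 + rate_hz * dead_time_s)

def real_trigger_rate(rate_hz, dead_time_s):
    """Accepted trigger rate R/(1 + R*tau)."""
    return rate_hz * live_time_fraction(rate_hz, dead_time_s)

# Reproduce the table: 10 ms dead time per trigger
for r in (10.0, 1000.0):
    print(r, live_time_fraction(r, 0.010), real_trigger_rate(r, 0.010))
# 10 Hz   -> live time 0.91, real rate 9.1 Hz
# 1000 Hz -> live time 0.091, real rate 91 Hz
```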
Trigger systems 1980's and 90's
• Bigger experiments → more data per event
• Higher luminosities → more triggers per second
– Both led to increased fractional dead time
• Use multi-level triggers to reduce dead time
– First level: fast detectors, fast algorithms
– Higher levels can use data from slower detectors and more complex algorithms to obtain better event selection/background rejection
Trigger systems 1990's and 2000's
• Dead-time was not the only problem
• Experiments focussed on rarer processes
– Need large statistics of these rare events
– But increasingly difficult to select the interesting events
– DAQ system (and off-line analysis capability) under increasing strain, limiting useful event statistics
• This is a major issue at hadron colliders, but is also significant at the ILC
• Use the High Level Trigger to reduce the requirements for
– The DAQ system
– Off-line data storage and off-line analysis
Summary of ATLAS Data Flow Rates
• From detectors: > 10^14 Bytes/s
• After Level-1 accept: ~10^11 Bytes/s
• Into event builder: ~10^9 Bytes/s
• Onto permanent storage: ~10^8 Bytes/s (~10^15 Bytes/year)
TDAQ Comparisons
The evolution of DAQ systems
Typical architecture 2000+
Level 1 (sometimes called Level-0, e.g. in LHCb)
• Time: one to a few microseconds
• Standard electronics modules for small systems
• Dedicated logic for larger systems
– ASIC - Application Specific Integrated Circuits
– FPGA - Field Programmable Gate Arrays
• Reduced granularity and precision
– calorimeter energy sums
– tracking by masks
• Event data stored in front-end electronics (at the LHC in pipelines, as the interval between collisions is shorter than the Level-1 decision time)
Level 2
• 1) few microseconds (10-100)
– hardwired, fixed algorithm, adjustable parameters
• 2) few milliseconds (1-100)
– dedicated microprocessors, adjustable algorithm
• 3-D, fine grain calorimetry
• tracking, matching
• topology
– Different sub-detectors handled in parallel
• Primitives from each detector may be combined in a global trigger processor or passed to the next level
Level 2 - cont’d
• 3) few milliseconds (10-100) - 2006
– Processor farm with Linux PCs
– Partial events received with a high-speed network
– Specialised algorithms
– Each event allocated to a single processor; a large farm of processors handles the rate
– If Level-2 is a separate level, data from each event is stored in many parallel buffers (each dedicated to a small part of the detector)
Level 3
• milliseconds to seconds
• processor farm
– microprocessors/emulators/workstations
– now standard server PCs
• full or partial event reconstruction
– after event building (collection of all data from all detectors)
• Each event allocated to a single processor; a large farm of processors handles the rate
Summary of Introduction
• For many physics analyses, the aim is to obtain as high statistics as possible for a given process
– We cannot afford to handle or store all of the data a detector can produce!
• What does the trigger do?
– Select the most interesting events from the myriad of events seen
• i.e. obtain better use of the limited output bandwidth
• Throw away less interesting events
• Keep all of the good events (or as many as possible)
– But note: must get it right - any good events thrown away are lost for ever!
• The high level trigger allows much more complex selection algorithms
Case study of the ATLAS HLT system
Concentrate on issues relevant for ATLAS (CMS has very similar issues), but try to address some more general points
Starting points for any HLT system
• Physics programme for the experiment
– what are you trying to measure?
• Accelerator parameters
– what rates and bunch structures?
• Detector and trigger performance
– what data is available?
– what trigger resources do we have to use it?
Physics at the LHC
Interesting events are buried in a sea of soft interactions
[Figure: cross-sections at the LHC - Higgs production, B physics and top physics signals sit far below high energy QCD jet production.]
The LHC and ATLAS/CMS
• LHC has
– design luminosity 10^34 cm^-2 s^-1 (in 2008 from 10^31 - 10^33?)
– bunch separation 25 ns (bunch length ~1 ns)
• This results in
– ~23 interactions / bunch crossing
• ~80 charged particles (mainly soft pions) / interaction
• ~2000 charged particles / bunch crossing
• Total interaction rate 10^9 s^-1
– b-physics fraction ~10^-3 → 10^6 s^-1
– t-physics fraction ~10^-8 → 10 s^-1
– Higgs fraction ~10^-11 → 10^-2 s^-1
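A quick back-of-envelope check of these rates (illustrative sketch; the fractions are those quoted above):

```python
total_rate = 1e9  # total interaction rate per second at design luminosity
fractions = {"b-physics": 1e-3, "top": 1e-8, "Higgs": 1e-11}
for process, f in fractions.items():
    # e.g. b-physics: 1e9 * 1e-3 = 1e6 events per second
    print(f"{process}: {total_rate * f:.0e} per second")
```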
Physics programme
• Higgs signal extraction is important but very difficult
• Also there is lots of other interesting physics
– B physics and CP violation
– quarks, gluons and QCD
– top quarks
– SUSY
– 'new' physics
• The programme will evolve with luminosity, HLT capacity and understanding of the detector
– low luminosity (first ~2 years)
• high pT programme (Higgs etc.)
• b-physics programme (CP measurements)
– high luminosity
• high pT programme (Higgs etc.)
• searches for new physics
Trigger strategy at LHC
• To avoid being overwhelmed, use signatures with small backgrounds
– Leptons
– High mass resonances
– Heavy quarks
• The trigger selection looks for events with:
– Isolated leptons and photons, τ's, central and forward jets
– Events with high ET
– Events with missing ET
Example physics signatures
Objects → physics signatures:
– Electron (1e > 25, 2e > 15 GeV): Higgs (SM, MSSM), new gauge bosons, extra dimensions, SUSY, W, top
– Photon (1γ > 60, 2γ > 20 GeV): Higgs (SM, MSSM), extra dimensions, SUSY
– Muon (1μ > 20, 2μ > 10 GeV): Higgs (SM, MSSM), new gauge bosons, extra dimensions, SUSY, W, top
– Jet (1j > 360, 3j > 150, 4j > 100 GeV): SUSY, compositeness, resonances
– Jet > 60 + ETmiss > 60 GeV: SUSY, leptoquarks
– Tau > 30 + ETmiss > 40 GeV: Extended Higgs models, SUSY
ARCHITECTURE
[Diagram: Trigger/DAQ overview - 40 MHz input (~1 PB/s equivalent) is reduced to ~200 Hz (~300 MB/s) to physics storage. Three logical levels with a hierarchical data-flow:
– LVL1 (fastest, ~2 μs): only calo and mu, hardwired; on-detector electronics: pipelines
– LVL2 (local, ~10 ms): LVL1 refinement + track association; event fragments buffered in parallel
– LVL3 (full event, ~1 sec): "offline" analysis; full event in processor farm]
Selected (inclusive) signatures
H0 → γγ
– Level-1: ≥2 em, ET > 20 GeV
– Level-2: 2 γ, ET > 20 GeV
H0 → ZZ* → l+l- l+l-
– Level-1: ≥2 em, ET > 20 GeV; ≥2 μ, pT > 6 GeV; ≥1 em, ET > 30 GeV; ≥1 μ, pT > 20 GeV
– Level-2: 2 e, ET > 20 GeV; 2 μ, ET > 6 GeV, I; 1 e, ET > 30 GeV; 1 μ, ET > 20 GeV, I
Z → l+l- + X
– Level-1: ≥2 em, ET > 20 GeV; ≥2 μ, pT > 6 GeV; ≥1 em, ET > 30 GeV; ≥1 μ, pT > 20 GeV
– Level-2: 2 e, ET > 20 GeV; 2 μ, ET > 6 GeV, I; 1 e, ET > 30 GeV; 1 μ, ET > 20 GeV, I
tt → leptons + jets
– Level-1: ≥1 em, ET > 30 GeV; ≥1 μ, pT > 20 GeV
– Level-2: 1 e, ET > 30 GeV; 1 μ, ET > 20 GeV, I
W', Z' → jets
– Level-1: ≥1 jet, ET > 150 GeV
– Level-2: 1 jet, ET > 300 GeV
SUSY → jets + ETmiss
– Level-1: ≥1 jet, ET > 150 GeV + ETmiss
– Level-2: 3 jets, ET > 150 GeV + ETmiss
(I = isolation requirement)
Trigger design - Level-1
• Level-1
– sets the context for the HLT
– reduces triggers to ~75 kHz
– has a very short time budget
• few microseconds (ATLAS/CMS ~2.5 μs - much of it used up in cable delays!)
• Detectors used must provide data very promptly and must be simple to analyse
– Coarse grain data from calorimeters
– Fast parts of the muon spectrometer (i.e. not the precision chambers)
– NOT precision trackers - too slow, too complex
– (LHCb does use some simple tracking data from their VELO detector to veto events with more than 1 primary vertex)
– Proposed FP420 detectors provide data too late
ATLAS Level-1 trigger system
[Diagram: the Calorimeter Trigger Processor (jet, ET, e/γ from the calorimeters) and the Muon Trigger Processor (μ from the muon detectors) send sub-trigger information to the Central Trigger Processor; timing, trigger and control distribution goes to the front-end systems, and the Region-of-Interest Unit links Level-1 to the Level-2 trigger.]
• Calorimeter and muon
– trigger on inclusive signatures
• muons
• em/tau/jet calo clusters
• missing and sum ET
• Hardware trigger
– Programmable thresholds
– Selection based on multiplicities and thresholds
ATLAS em cluster trigger algorithm
[Diagram: a window of EM and hadronic calorimeter towers, each tower Δη × Δφ ≈ 0.1 × 0.1. The trigger fires if the OR of the 2-tower EM cluster sums is above the EM cluster threshold, AND the surrounding ring of EM towers is below the EM isolation threshold, AND the hadronic towers behind are below the hadronic isolation threshold.]
The "sliding window" algorithm is repeated for each of ~4000 cells.
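Not on the foil: a rough Python sketch of the sliding-window logic. The foil gives the thresholds and the 0.1 × 0.1 tower size; the 4x4-tower window with a 2x2 core and 12-tower isolation ring is an assumed geometry, and the helper name em_cluster_trigger is illustrative.

```python
import numpy as np

def em_cluster_trigger(em, had, cluster_thr, em_iso_thr, had_iso_thr):
    """Sliding-window e/gamma trigger sketch over a tower grid.

    em, had: 2-D arrays of EM / hadronic tower E_T, one tower covering
    roughly 0.1 x 0.1 in eta-phi.  A window fires if any 2-tower EM sum
    in its core is above the cluster threshold (the OR), AND the EM
    isolation ring is below the EM isolation threshold, AND the hadronic
    sum behind the window is below the hadronic isolation threshold.
    """
    hits = []
    for i in range(em.shape[0] - 3):
        for j in range(em.shape[1] - 3):
            core = em[i + 1:i + 3, j + 1:j + 3]          # central 2x2 towers
            pairs = (core[0, 0] + core[0, 1], core[1, 0] + core[1, 1],
                     core[0, 0] + core[1, 0], core[0, 1] + core[1, 1])
            window = em[i:i + 4, j:j + 4].sum()
            em_ring = window - core.sum()                # 12-tower isolation ring
            had_sum = had[i:i + 4, j:j + 4].sum()        # hadronic activity behind
            if (max(pairs) > cluster_thr and em_ring < em_iso_thr
                    and had_sum < had_iso_thr):
                hits.append((i, j))                      # this window position fires
    return hits
```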
ATLAS Level-1 muon trigger
RPC: Resistive Plate Chambers; TGC: Thin Gap Chambers; MDT: Monitored Drift Tubes
[Diagram: RPC and TGC trigger chambers in the muon spectrometer.]
Measure the muon momentum with very simple tracking in a few planes of trigger chambers.
Level-1 Selection
• The Level-1 trigger is an "or" of a large number of inclusive signals, set to match the current physics priorities and beam conditions
• The precision of cuts at Level-1 is generally limited
• Adjust the overall Level-1 accept rate (and the relative frequency of different triggers) by
– Adjusting thresholds
– Pre-scaling higher rate triggers (e.g. only accept every 10th trigger of a particular type)
• Can also be used to include a low rate of calibration events
• The menu can be changed at the start of a run
– Pre-scale factors may change during the course of a run
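Not on the foil: pre-scaling as described above (accept every N-th trigger of a type) amounts to a simple counter; a minimal sketch:

```python
class Prescaler:
    """Accept every N-th trigger of a given type (sketch)."""
    def __init__(self, factor):
        self.factor = factor          # e.g. 10 -> keep every 10th trigger
        self.count = 0

    def accept(self):
        self.count += 1
        if self.count >= self.factor:
            self.count = 0
            return True
        return False

# A high-rate trigger prescaled by 10 keeps ~10% of its accepts
j60 = Prescaler(10)
kept = sum(j60.accept() for _ in range(1000))   # -> 100
```

Changing the pre-scale factors during a run, as noted above, simply means updating the factor of each such counter.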
Example Level-1 Menu for 2x10^33
Level-1 signature Output Rate (Hz)
EM25i 12000
2EM15i 4000
MU20 800
2MU6 200
J200 200
3J90 200
4J65 200
J60 + XE60 400
TAU25i + XE30 2000
MU10 + EM15i 100
Others (pre-scaled, exclusive, monitor, calibration) 5000
Total ~25000
Trigger design - Level-2
• Level-2 reduces triggers to ~2 kHz
– Note CMS does not have a physically separate Level-2 trigger, but the HLT processors include a first stage of Level-2 algorithms
• The Level-2 trigger has a short time budget
– ATLAS ~10 ms average
• Note for Level-1 the time budget is a hard limit for every event; for the High Level Trigger it is the average that matters, so some events can take several times the average, provided they are a minority
• Full detector data is available, but to minimise the resources needed:
– Limit the data accessed
– Only unpack detector data when it is needed
– Use information from Level-1 to guide the process
– Analysis proceeds in steps, with the possibility to reject the event after each step
– Use custom algorithms
Regions of Interest
• The Level-1 selection is dominated by local signatures (i.e. within a Region of Interest - RoI)
– Based on coarse granularity data from calo and mu only
• Typically there are 1-2 RoI/event
• ATLAS uses RoIs to reduce the network bandwidth and processing power required (a back-of-envelope estimate follows)
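A sketch of the estimate (the ~1.5 MB full-event size is an assumption; the other numbers appear on these foils):

```python
l1_rate      = 75e3    # Level-1 accept rate, Hz
event_size   = 1.5e6   # bytes per full event (assumed ~1.5 MB)
roi_fraction = 0.02    # Level-2 requests only ~1-2% of the event data

full_readout = l1_rate * event_size                 # ~110 GB/s if fully built
roi_readout  = l1_rate * event_size * roi_fraction  # ~2 GB/s with RoIs
print(f"full: {full_readout / 1e9:.0f} GB/s, RoI-based: {roi_readout / 1e9:.1f} GB/s")
```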
Trigger design - Level-2 - cont'd
• Processing scheme
– extract features from sub-detector data in each RoI
– combine features from one RoI into an object
– combine objects to test the event topology
• Precision of Level-2 cuts
– Emphasis is on very fast algorithms with reasonable accuracy
• Do not include many corrections which may be applied off-line
– Calibrations and alignment available for the trigger are not as precise as those available off-line
ARCHITECTURE
[Diagram: ATLAS Trigger/DAQ architecture with rates:
– Calo, MuTrCh and other detectors: ~1 PB/s at 40 MHz into front-end pipelines (2.5 μs)
– LVL1 (2.5 μs; calorimeter and muon triggers): accepts at 75 kHz; Read-Out Drivers (RODs) send 120 GB/s over Read-Out Links to the Read-Out Buffers (ROBs) of the Read-Out Sub-systems (ROSs)
– LVL2 (~10 ms; RoI Builder (ROIB), LVL2 supervisor (L2SV), LVL2 processors (L2P) on the LVL2 network (L2N)): RoI requests fetch RoI data = 1-2% of the event, ~2 GB/s; LVL2 accepts at ~2 kHz
– Event Builder (EB): ~3 GB/s
– Event Filter (~1 sec; EF processors (EFP) on the EF network (EFN)): accepts ~200 Hz, ~300 MB/s onto storage
The HLT comprises LVL2 and the Event Filter.]
CMS Event Building
• CMS performs Event Building after Level-1
• This simplifies the architecture, but places much higher demands on the technology:
– Network traffic ~100 GB/s
• Use Myrinet instead of GbE for the EB network
• Plan a number of independent slices, with a barrel shifter to switch to a new slice at each event
– Time will tell which philosophy is better
Example for two-electron trigger
LVL1 triggers on two isolated e/m clusters with pT > 20 GeV (possible signature: Z → ee)
HLT strategy: validate step-by-step; check intermediate signatures; reject as early as possible. The sequential/modular approach facilitates early rejection (a sketch follows).
[Diagram: the two candidates are refined in parallel, time running through the steps:
– Level-1 seed: EM20i + EM20i
– Step 1 (cluster shape): ecand + ecand signature
– Step 2 (track finding): e + e signature
– Step 3 (pT > 30 GeV): e30 + e30 signature
– Step 4 (isolation): e30i + e30i signature]
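Not on the foil: a minimal sketch of this step-wise validation; the dictionary keys and step tests are illustrative placeholders.

```python
def two_electron_chain(rois, steps):
    """rois: the two Level-1 EM20i candidates; steps: ordered (name, test)
    pairs.  Each step refines both candidates; the event is rejected as
    soon as an intermediate signature fails, saving the later, slower steps."""
    for name, test in steps:
        rois = [roi for roi in rois if test(roi)]
        if len(rois) < 2:              # intermediate signature not satisfied
            return False, name         # reject as early as possible
    return True, "e30i + e30i"

steps = [
    ("cluster shape", lambda roi: roi["shower_shape_ok"]),
    ("track finding", lambda roi: roi["has_track"]),
    ("pT > 30 GeV",   lambda roi: roi["pt"] > 30.0),
    ("isolation",     lambda roi: roi["isolated"]),
]
```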
Trigger design - Event Filter / Level-3
• The Event Filter reduces triggers to ~200 Hz
• Event Filter budget: ~1 sec average
• Full event detector data is available, but to minimise the resources needed:
– Only unpack detector data when it is needed
– Use information from Level-2 to guide the process
– Analysis proceeds in steps, with the possibility to reject the event after each step
– Use optimised off-line algorithms
Electron slice at the EF
[Diagram: the Event Filter electron slice chains
– TrigCaloRec (wrapper of CaloRec) → EFCaloHypo
– EF tracking (wrapper of newTracking) → EFTrackHypo
– TrigEgammaRec (wrapper of EgammaRec, which matches electromagnetic clusters with tracks and builds egamma objects) → EFEgammaHypo]
HLT Processing at LHCb
Trigger design - HLT strategy
• Level 2
– confirm Level 1; some inclusive, some semi-inclusive and some simple topology triggers; vertex reconstruction (e.g. two-particle mass cuts to select Zs)
• Level 3
– confirm Level 2; more refined topology selection; near off-line code
Example HLT Menu for 2x10^33
HLT signature Output Rate (Hz)
e25i 40
2e15i <1
gamma60i 25
2gamma20i 2
mu20i 40
2mu10 10
j400 10
3j165 10
4j110 10
j70 + xE70 20
tau35i + xE45 5
2mu6 with vertex, decay-length and mass cuts (J/psi, psi’, B) 10
Others (pre-scaled, exclusive, monitor, calibration) 20
Total ~200
Example B-physics Menu for 10^33
LVL1:
• MU6 rate 24 kHz (note there are large uncertainties in the cross-section)
• In case of larger rates use MU8 => ~1/2 x rate
• 2MU6
LVL2:
• Run muFast in the LVL1 RoI ~9 kHz
• Run ID reconstruction in the muFast RoI → mu6 (combined muon & ID) ~5 kHz
• Run TrigDiMuon seeded by the mu6 RoI (or MU6)
• Make exclusive and semi-inclusive selections using loose cuts
– B(mumu), B(mumu)X, J/psi(mumu)
• Run IDSCAN in the Jet RoI, make selection for Ds(PhiPi)
EF:
• Redo muon reconstruction in the LVL2 (LVL1) RoI
• Redo track reconstruction in the Jet RoI
• Selections for B(mumu), B(mumuK*), B(mumuPhi), Bs→DsPhiPi etc.
LHCb Trigger Menu
Matching problem
[Diagram: nested regions of phase space - the physics channel sits inside the background; the off-line selection surrounds the physics channel, and the on-line (trigger) selection surrounds the off-line one.]
Matching problem (cont.)
• Ideally
– off-line algorithms select a phase space which shrink-wraps the physics channel
– trigger algorithms shrink-wrap the off-line selection
• In practice this doesn't happen
– need to match the off-line algorithm selection
• For this reason many trigger studies quote trigger efficiency w.r.t. events which pass the off-line selection
– BUT off-line can change the algorithm, re-process and re-calibrate at a later stage
• SO, make sure the on-line algorithm selection is well known, controlled and monitored
Selection and rejection
• As selection criteria are tightened
– background rejection improves
– BUT event selection efficiency decreases
[Plot: selected and rejected fractions (0 - 1) versus cut value - as the cut is tightened the rejected background fraction rises while the selected signal fraction falls.]
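Not on the foil: a toy illustration of this trade-off, assuming Gaussian discriminant distributions for signal and background:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(0.7, 0.15, 100_000)      # discriminant for good events
background = rng.normal(0.3, 0.15, 100_000)  # discriminant for background

for cut in (0.3, 0.5, 0.7):
    efficiency = np.mean(signal > cut)       # fraction of signal kept
    rejection = np.mean(background <= cut)   # fraction of background removed
    print(f"cut = {cut}: efficiency = {efficiency:.2f}, rejection = {rejection:.2f}")
```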
Selection and rejection (cont.)
• Example of a recent ATLAS Event Filter (i.e. Level-3) study of the effectiveness of various discriminants used to select 25 GeV electrons from a background of dijets
Other issues for the Trigger
• Efficiency and monitoring
– In general need high trigger efficiency
– Also, for many analyses, need a well known efficiency
• Monitor efficiency by various means
– Overlapping triggers
– Pre-scaled samples of triggers in tagging mode (pass-through); a sketch follows this list
• Final detector calibration and alignment constants are not available immediately - keep them as up-to-date as possible and allow for the lower precision in the trigger cuts when defining trigger menus and in subsequent analyses
• Code used in the trigger needs to be very robust - low memory leaks, low crash rate, fast
• Beam conditions and HLT resources will evolve over several years (for both ATLAS and CMS)
– In 2008 luminosity is low, but also HLT capacity will be < 50% of the full system (funding constraints)
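Not on the foil: a sketch of measuring trigger efficiency from a pass-through (tagging-mode) sample, where events are recorded regardless of the trigger decision so the decision can be compared with the off-line selection; the field names are illustrative.

```python
def trigger_efficiency(pass_through_events):
    """Efficiency w.r.t. the off-line selection (cf. the matching problem):
    the fraction of off-line-selected events on which the trigger fired."""
    offline = [e for e in pass_through_events if e["passed_offline"]]
    fired = sum(e["trigger_decision"] for e in offline)
    return fired / len(offline) if offline else float("nan")
```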
Summary
• High-level triggers allow complex selection procedures to be applied as the data is taken
– Thus allow large numbers of events to be accumulated, even in the presence of very large backgrounds
– Especially important at the LHC - but significant at most accelerators
• The trigger stages - in the ATLAS example
– Level 1 uses inclusive signatures
• muons; em/tau/jet calo clusters; missing and sum ET
– Level 2 refines the Level 1 selection, adds simple topology triggers, vertex reconstruction, etc.
– Level 3 refines Level 2, adds more refined topology selection
• Trigger menus need to be defined, taking into account:
– Physics priorities, beam conditions, HLT resources
• Include items for monitoring trigger efficiency and calibration
• Must get it right - any events thrown away are lost for ever!
Additional Foils
The evolution of DAQ systems
ATLAS Detector
The ATLAS Sub-Detectors
• Inner tracker
– pixels (silicon)
• (3 layers) precision 3-D points
• 1.4 x 10^8 channels
• occupancy 10^-4
– silicon strips
• (4 layers) precision 2-D points
• 5.2 x 10^6 channels
• occupancy 10^-2
– transition radiation tracker (straw tubes)
• (40 layers) continuous tracker + electron identification
• 4.2 x 10^5 channels
• 12-33% occupancy
ATLAS Sub-Detectors (cont.)
• solenoid - inside calorimeters
– 4 m x 7 m, 1.8 T
• calorimetry
– electromagnetic
• liquid argon (accordion) + lead
– hadronic
• scintillator tiles & liquid argon + iron
– 2.3 x 10^5 channels
– occupancy 5-15%
• muon system
– air-core toroid magnet system
– trigger: resistive plate and thin gap chambers
– precision: monitored drift tubes
– 1.3 x 10^6 channels
– occupancy 2-7.5%
ATLAS event in the tracker
ATLAS event - tracker end-view
Trigger functional design
• Level 1: input 40 MHz, accept 75 kHz, latency 2.5 μs
– Inclusive triggers based on fast detectors
– Muon, electron/photon, jet, sum and missing ET triggers
– Coarse(r) granularity, low(er) resolution data
– Special purpose hardware (FPGAs, ASICs)
• Level 2: input 75 (100) kHz, accept O(1) kHz, latency ~10 ms
– Confirm Level 1 and add track information
– Mainly inclusive but some simple event topology triggers
– Full granularity and resolution available
– Farm of commercial processors with special algorithms
• Event Filter: input O(1) kHz, accept O(100) Hz, latency ~secs
– Full event reconstruction
– Confirm Level 2; topology triggers
– Farm of commercial processors using near off-line code
ATLAS Trigger / DAQ Data Flow
[Diagram: the ATLAS detector (UX15) feeds the Read-Out Drivers (RODs) under control of the first-level trigger and the Timing Trigger Control (TTC) over dedicated links (USA15). Data of events accepted by the first-level trigger travel over 1600 Read-Out Links to the Read-Out Subsystems (ROSs, ~150 PCs, VME). Gigabit Ethernet connects the ROSs to the RoI Builder (Regions Of Interest), LVL2 supervisor, second-level trigger farm (~500 dual-socket server PCs), pROS (stores the LVL2 output), DataFlow Manager, Event Builder Sub-Farm Inputs (SFIs, ~100), Event Filter (EF, ~1600 PCs) and Sub-Farm Outputs (SFOs, ~30) with local storage (SDX1); event data requests, requested event data and delete commands flow over this network. Event rate to data storage (CERN computer centre) ~200 Hz.]
Event's Eye View - step-1
• At each beam crossing, latch data into the detector front-end
• After processing, data are put into many parallel pipelines - each item moves along the pipeline at every bunch crossing and falls out the far end after 2.5 μs
• Also send calo + mu trigger data to Level-1
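Not on the foil: a minimal sketch of such a front-end pipeline. At 25 ns per crossing, the 2.5 μs latency corresponds to a 100-deep pipeline; class and method names are illustrative.

```python
from collections import deque

PIPELINE_DEPTH = 100   # 2.5 us latency / 25 ns per bunch crossing

class FrontEndPipeline:
    def __init__(self):
        # fixed-depth FIFO: the oldest entry falls off the far end
        self.pipe = deque(maxlen=PIPELINE_DEPTH)

    def clock(self, bc_data, l1_accept):
        """Called once per bunch crossing.  On a Level-1 accept, the data
        about to fall out of the pipeline (latched PIPELINE_DEPTH crossings
        ago, i.e. the accepted crossing) is read out just in time."""
        readout = None
        if l1_accept and len(self.pipe) == PIPELINE_DEPTH:
            readout = self.pipe[0]
        self.pipe.append(bc_data)   # shifts everything one slot along
        return readout
```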
Event's Eye View - step-2
• The Level-1 Central Trigger Processor combines the information from the Muon and Calo triggers and, when appropriate, generates the Level-1 Accept (L1A)
• The L1A is distributed in real time via the TTC system to the detector front-ends, to send data from the accepted event to the detector RODs (Read-Out Drivers)
– Note it must arrive before the data has dropped out of the pipeline - hence the hard deadline of 2.5 μs
– The TTC system (Trigger, Timing and Control) is a CERN system used by all of the LHC experiments; it allows very precise real-time distribution of small data packets
• Detector RODs receive the data, process and reformat it as necessary, and send it via fibre links to the TDAQ ROSs
Event's Eye View - step-3
• At L1A the different parts of LVL1 also send RoI data to the RoI Builder (RoIB), which combines the information and sends it as a single packet to a Level-2 Supervisor PC
– The RoIB is implemented as a number of VME boards with FPGAs, to identify and combine the fragments coming from the same event from the different parts of Level-1
ATLAS Level-2 Trigger (step-4)
[Diagram: the Level-2 part of the data-flow figure above - event data for Level-2 is pulled from the ROSs as partial events @ ≤ 100 kHz.]
The Region of Interest Builder (RoIB) passes formatted information to one of the LVL2 supervisors.
The LVL2 supervisor selects one of the processors in the LVL2 farm and sends it the RoI information.
The LVL2 processor requests data from the ROSs as needed (possibly in several steps), produces an accept or reject and informs the LVL2 supervisor. For an accept, the result of the processing is stored in the pseudo-ROS (pROS).
This reduces the network traffic to ~2 GB/s, c.f. ~150 GB/s for a full event build.
The LVL2 supervisor passes the decision to the DataFlow Manager (which controls Event Building).
ATLAS Event Building (step-5)
[Diagram: as above - event data after Level-2 is pulled as full events @ ~3 kHz.]
For each accepted event the DataFlow Manager selects a Sub-Farm Input (SFI) and sends it a request to take care of the building of a complete event.
The SFI sends requests to all ROSs for the data of the event to be built. Completion of building is reported to the DataFlow Manager.
For rejected events, and for events for which event building has completed, the DataFlow Manager sends "clears" to the ROSs (for 100 - 300 events together).
Network traffic for Event Building is ~5 GB/s.
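Not on the foils: a sketch of this event-building flow. The round-robin SFI assignment and the ros.fragment / ros.clear / sfi.assemble calls are hypothetical illustrations of the messages described above.

```python
import itertools

class DataFlowManager:
    """Assigns accepted events to SFIs and batches 'clears' to the ROSs."""
    def __init__(self, sfis, roses, clear_batch=200):
        self.sfis = itertools.cycle(sfis)   # assignment policy assumed round-robin
        self.roses = roses
        self.clear_batch = clear_batch      # foils: clears for 100-300 events together
        self.pending_clears = []

    def build(self, event_id):
        sfi = next(self.sfis)
        # the SFI requests the event's fragments from every ROS
        fragments = [ros.fragment(event_id) for ros in self.roses]
        sfi.assemble(event_id, fragments)
        self.clear(event_id)                # building complete -> free the buffers

    def clear(self, event_id):              # also called directly for LVL2 rejects
        self.pending_clears.append(event_id)
        if len(self.pending_clears) >= self.clear_batch:
            for ros in self.roses:
                ros.clear(self.pending_clears)   # one batched message per ROS
            self.pending_clears = []
```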
ATLAS Event Filter (step-6)
[Diagram: as above, highlighting the Event Filter farm.]
A process (EFD) running in each Event Filter farm node collects each complete event from the SFI and assigns it to one of a number of Processing Tasks in that node.
The Event Filter uses more sophisticated algorithms (near or adapted off-line) and more detailed calibration data to select events, based on the complete event data.
Accepted events are sent to an SFO (Sub-Farm Output) node to be written to disk.
ATLAS Data Output (step-7)
[Diagram: as above, highlighting the Sub-Farm Outputs and local storage; event rate ~200 Hz.]
The SFO nodes receive the final accepted events and write them to disk.
The events include 'Stream Tags' to support multiple simultaneous files (e.g. Express Stream, Calibration, b-physics stream, etc.).
Files are closed when they reach 2 GB or at the end of a run.
Closed files are finally transmitted via GbE to the CERN Tier-0 for off-line analysis.
ATLAS Trigger / DAQ Data Flow (summary)
[Diagram: the same data-flow figure as above, annotated with rates - event data pushed @ ≤ 100 kHz as 1600 fragments of ~1 kByte each; event data pulled as partial events @ ≤ 100 kHz and as full events @ ~3 kHz; event rate to data storage ~200 Hz.]
HLT Hardware
Part of the DAQ/HLT Pre-Series system, with a full LVL2 Farm Rack at right
ATLAS TDAQ Barrack Rack Layout
[Diagrams: SDX1 rack layouts. Level 1 floor: three rows (racks 3-20) of Event Filter (EF) racks. Level 2 floor: rows of EF/L2 racks plus SFO, SFI, DC and online switch racks, network backbone (NWbb), DCS, DSS and patch-panel (PP) racks. The floor plans show doors, ventilation ducts, water pipes and air flow, with rack front/back clearances of ~63-100 cm constrained by structural beams, ventilation flaps and power-distribution boxes; landings are rated at 350 daN/m2; racks enter via the door from shaft PX15 / cable tray USA15.]
UA1 Trigger
• Level 1: < 4 μs using hardwired processors
– muon track segment; em showers; jets; ET
– rate ~30 Hz (reduction factor 10^3 - 10^4)
– zero deadtime as decision time < bunch separation
• Level 2: ~7 ms using 68020 CPUs
– muon tracking using drift time
– 3-D calorimetry; position detectors
– rate ~3 Hz (reduction factor ~10)
– deadtime 30 Hz x 0.007 s ≈ 20%
– front end frozen during the Level 2 decision time
UA1 Level 1
UA1 Level 2 and 3
UA1 Trigger (cont.)
• Level 3: ~100 ms using a 3081E farm
– partial event reconstruction
– calorimeter and tracking
– event topology
– reduction factor ~3
– deadtime 3 Hz x 0.03 s ≈ 10%
• time to read data into the processor system (30 ms)
LEP (ALEPH)
• luminosity 10^31 cm^-2 s^-1
• bunch separation 22 μs → 45 kHz (4 bunches); 11 μs → 90 kHz (8 bunches)
• event rate 0.1 Hz
• channels ~10^6
• read-out rate 1 - 3 Hz
• transfer rate ~10 Mbytes/s
ALEPH trigger
• Level 1
– ~4 μs decision time + 6 μs clear time (< 11 μs)
– hardwired processors
– calorimeter energy sums and ITC tracks
– accept rate 3 - 30 Hz (5 Hz typ.)
– zero deadtime as process time < bunch separation
ALEPH trigger (cont.)
• Level 2
– 60 μs decision plus clear time
– hardwired LUT processor for TPC data
• operates on L1 track triggers only
– accept rate 2 - 6 Hz (2 Hz typ.)
– deadtime 2 bx x 5 Hz (L1) / 45 kHz = 0.02%; 5 bx x 5 Hz (L1) / 90 kHz = 0.03%
ALEPH trigger (cont.)
• Level 3
– readout time ~10 ms
– processing time ~1 s/processor
– microVAX farm (partly reconstructed data)
– accept rate 1 - 3 Hz (design rate 1 Hz)
– deadtime for readout 10 ms x 2 Hz (L2) = 2%
Hera and LHC
                     Hera               LHC
Type                 ep                 pp
Energy               30+800 GeV         7+7 TeV
Luminosity           10^31 cm^-2 s^-1   10^34 cm^-2 s^-1
Bunch separation     96 ns (~10 MHz)    25 ns (40 MHz)
Data channels:
  Tracking           1 x 10^4           10^6 - 10^8
  Calorimetry        5 x 10^4           2 x 10^5
  Muons              2 x 10^5           10^6
Hera and LHC triggering
• The basic rate is set by the bunch separation, but
                     Hera               LHC
Interactions         << 1/BX            >> 1/BX (~20)
Level 1              Yes, 1 kHz         100 kHz
Level 2              Yes, 200 Hz        1 kHz
Level 3              Yes, 50 Hz         10 Hz
Level 4              Yes/Tape, 10 Hz    10 Hz (no L4)
• NB bunch separation < Level 1 decision time
– Solved for Hera by the introduction of pipelines
– Pipelines used to store data during the Level 1 decision time
– Fixed latency (~2 μs) to synchronise with the trigger