Overview of DAQ at CERN experiments


E. Radicioni, INFN - 2005-09-01 - MICE DAQ and Controls Workshop

Overview

• DAQ architectures
• Implementations
• Software and tools

Some basic parameters

Overall architecture

• This is LHCb, but at this level they all look the same.

• Buffering on the front end (FE), waiting for the LV-0/1 trigger decision

• Building network

• HLT filtering before storage.
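As a rough illustration of that front-end pattern (all types and names here are hypothetical, not any experiment's code), a minimal C++ sketch of fragments buffered on the FE until the LV-0/1 decision arrives:

    #include <cstdint>
    #include <deque>
    #include <vector>
    #include <iostream>

    // Hypothetical front-end fragment: data captured for one bunch crossing.
    struct Fragment {
        uint64_t bx_id;                // bunch-crossing / event identifier
        std::vector<uint8_t> payload;  // digitized detector data
    };

    // Front-end buffer: holds fragments until the LV-0/1 decision arrives.
    class FrontEndBuffer {
        std::deque<Fragment> pending_;
    public:
        void capture(Fragment f) { pending_.push_back(std::move(f)); }

        // Trigger decision for the oldest pending fragment: accepted fragments
        // go to the building network, rejected ones are dropped.
        void decide(bool accept) {
            if (pending_.empty()) return;
            Fragment f = std::move(pending_.front());
            pending_.pop_front();
            if (accept)
                std::cout << "ship fragment " << f.bx_id << " to event building\n";
            // else: fragment is discarded, freeing buffer space
        }
    };

    int main() {
        FrontEndBuffer fe;
        fe.capture({1, {0xCA, 0xFE}});
        fe.capture({2, {0xBE, 0xEF}});
        fe.decide(true);   // LV-0/1 accept -> forwarded
        fe.decide(false);  // LV-0/1 reject -> dropped
    }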

This is already a standard PC with Ethernet

… but they DO differ right after the FEs

ALICE / CMS

For CMS, this is the entry point into a customized Myrinet network.

1 EVB level (ALICE) vs. 2 EVB levels (CMS)

DAQ: taking care of Data Flow

• Building
• Processing
• Storage
• Get the data to some network as fast as possible
• Custom vs. standard network technology
• CMS & ALICE as extremes
  – The others are in between

CMS approach

• Only 1 hardware trigger; do the rest in HLT → high DAQ bandwidth.
  – Flexible trigger, but rigid DAQ. Partitioning is also less flexible.
• Barrel-shifter approach: deterministic but rigid, customized network
• 2-stage building
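The barrel-shifter idea can be shown with a toy sketch: event n always goes to builder (n mod N), so the assignment is fully deterministic. The real CMS system also schedules the transfer slots on the Myrinet fabric; this only illustrates the assignment rule, with a made-up builder count:

    #include <cstdint>
    #include <iostream>

    // Deterministic barrel-shifter assignment: event n is always built by
    // builder (n mod n_builders), so every source shifts through the builders
    // in lock-step and the network load stays balanced by construction.
    int builder_for_event(uint64_t event_id, int n_builders) {
        return static_cast<int>(event_id % n_builders);
    }

    int main() {
        const int n_builders = 4;  // hypothetical builder-node count
        for (uint64_t ev = 0; ev < 8; ++ev)
            std::cout << "event " << ev << " -> builder "
                      << builder_for_event(ev, n_builders) << "\n";
    }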

ALICE approach

• HLT embedded in the DAQ
• More than one hardware trigger → not straightforward to change the trigger
  – But the DAQ can be very flexible with standard technology. Easy to partition down to the level of a single front-end.
• HLT / Monitoring

ALICE HLT

List of DAQ functions (that one may expect)

• Run control (state machine) with GUI
  – Configure DAQ topology
  – Select trigger
  – Start/stop (by user or by DCS)
  – Communicate status to DCS
• Partitioning. Its importance is never stressed enough.
• Minimal set of hardware-access libraries (VME, USB, S-LINK), and ready-to-use methods to initialize interfaces (see the sketch after this list).
• Data flow
  – Push (or pull …) data from FE to storage via (one or more) layers of event building
• DAQ performance check with GUI
• Data-quality monitoring (or a framework to do it)
  – GUI most likely external
• Logging (DAQ-generated messages) with GUI to select/analyze logs
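For the hardware-access item above, a hedged sketch of what such a ready-to-use interface wrapper could look like. VmeInterface and its methods are invented for illustration, not a real driver API:

    #include <cstdint>
    #include <iostream>

    // Hypothetical thin wrapper over a VME bus driver: the DAQ framework would
    // provide something like this so users never touch the raw driver directly.
    class VmeInterface {
        uint32_t base_;
    public:
        explicit VmeInterface(uint32_t base_address) : base_(base_address) {
            // A real implementation would open the driver and map the window here.
            std::cout << "mapped VME window at 0x" << std::hex << base_
                      << std::dec << "\n";
        }
        uint32_t read32(uint32_t offset) const {
            // Dummy value standing in for a register read through the window.
            return base_ + offset;
        }
    };

    int main() {
        VmeInterface vme(0x00800000);   // hypothetical module base address
        uint32_t id = vme.read32(0x0);  // e.g. read a module ID register
        std::cout << "module ID: 0x" << std::hex << id << "\n";
    }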

• What can you expect to be able to use out of these systems?

• MICE test beam system
• Who’s providing the best “test beam” system?
  – Reduced scale, but keeping rich functionality
  – And software already available
  – Not only a framework, but also ready-to-use applications and GUIs.

All experiments more or less implement this:

Main data flow: ~10-100 kHz

Spying a data subset for monitoring

Also good for test beams: ~1 kHz

Support for test beams varies from one experiment to the other, from a bare-bones system (just the framework) to full-fledged support.

• CMS: public-domain framework called xdaq (http://xdaq.web.cern.ch/xdaq/). Just the framework (data and message passing, event builder). For the rest, you are on your own.

• ALICE tends to support its detector teams with a full set of DAQ tools
  – Partitioning
  – Data transport
  – Monitoring
  – Logging

• ATLAS similar to ALICE in this respect
  – However, at the time of HARP construction it was not yet ready for release to (external) groups.

Readout

• Clear separation of readout and recording functions

• Readout is high-priority (or real-time), the recorder low-priority (quasi-asynchronous)

• Large memory buffer to accommodate fluctuations
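A minimal sketch of that readout/recorder decoupling, assuming a simple in-memory queue standing in for the large buffer (all names are illustrative):

    #include <condition_variable>
    #include <cstdint>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Hypothetical event buffer decoupling the high-priority readout thread
    // from the low-priority (quasi-asynchronous) recording thread. A large
    // buffer absorbs rate fluctuations on the readout side.
    class EventBuffer {
        std::queue<std::vector<uint8_t>> q_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
    public:
        void push(std::vector<uint8_t> ev) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(ev)); }
            cv_.notify_one();
        }
        bool pop(std::vector<uint8_t>& ev) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&]{ return !q_.empty() || done_; });
            if (q_.empty()) return false;   // finished and drained
            ev = std::move(q_.front()); q_.pop();
            return true;
        }
        void finish() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
        }
    };

    int main() {
        EventBuffer buf;
        std::thread recorder([&]{               // low-priority consumer
            std::vector<uint8_t> ev;
            while (buf.pop(ev))
                std::cout << "recorded event of " << ev.size() << " bytes\n";
        });
        for (int i = 0; i < 3; ++i)             // high-priority producer (readout)
            buf.push(std::vector<uint8_t>(64)); // dummy 64-byte events
        buf.finish();
        recorder.join();
    }

In a real system the buffer would be sized for the expected rate fluctuations and the recorder thread given a lower scheduling priority.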

User-provided functions

A clear, simple way for the user to initialize and read out their own hardware
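For instance, the framework might expose a small interface like the following (hypothetical names); the user implements it once for their hardware and the framework drives it:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Hypothetical hook a framework could expose: users implement one small
    // interface for their own hardware and the framework does the rest.
    class UserReadout {
    public:
        virtual ~UserReadout() = default;
        virtual void initialize() = 0;                 // set up the hardware
        virtual std::vector<uint8_t> readEvent() = 0;  // read one event fragment
    };

    // Example user implementation for some imaginary module.
    class MyModuleReadout : public UserReadout {
    public:
        void initialize() override { std::cout << "module configured\n"; }
        std::vector<uint8_t> readEvent() override {
            return {0x01, 0x02, 0x03};  // dummy fragment
        }
    };

    int main() {
        MyModuleReadout r;
        r.initialize();
        auto frag = r.readEvent();
        std::cout << "read fragment of " << frag.size() << " bytes\n";
    }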

Event builder

• Running on a standard PC
• Able to perform, at the same time:
  – Global or partial on-demand building
  – EVB strategies matched to trigger configurations
  – Event consistency checks
  – Recording to disk
  – Serving events to subscribers (i.e. monitoring)
    • With basic data selections
• Possibly with multi-staging after the event building
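A toy sketch of the building step with its consistency check: fragments are collected per event ID, and an event is released only when one fragment per source has arrived. All names are illustrative:

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <vector>

    // Hypothetical fragment as delivered by one front-end.
    struct Fragment {
        uint64_t event_id;
        uint32_t source_id;
        std::vector<uint8_t> payload;
    };

    // Toy event builder: collects one fragment per source, checks completeness
    // (the consistency check), then hands the full event to consumers.
    class EventBuilder {
        size_t n_sources_;
        std::map<uint64_t, std::vector<Fragment>> partial_;
    public:
        explicit EventBuilder(size_t n_sources) : n_sources_(n_sources) {}

        void add(Fragment f) {
            uint64_t id = f.event_id;
            auto& frags = partial_[id];
            frags.push_back(std::move(f));
            if (frags.size() == n_sources_) {   // event complete and aligned
                std::cout << "event " << id << " built from "
                          << frags.size() << " fragments\n";
                // here: record to disk and serve to monitoring subscribers
                partial_.erase(id);
            }
        }
    };

    int main() {
        EventBuilder evb(2);          // two hypothetical front-ends
        evb.add({7, 0, {0xAA}});
        evb.add({7, 1, {0xBB}});      // completes event 7
    }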

Event format: headers and payloads, one payload per front-end

Precise time-stamping, numbering, and IDs in the header of each payload
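A possible in-memory representation of that layout (field names are illustrative, not any experiment's actual format):

    #include <cstdint>
    #include <vector>

    // Sketch of the header/payload layout described above.
    struct PayloadHeader {
        uint64_t timestamp_ns;   // precise time stamp
        uint64_t event_number;   // sequential event numbering
        uint32_t source_id;      // which front-end produced this payload
        uint32_t payload_bytes;  // length of the data that follows
    };

    struct Payload {
        PayloadHeader header;
        std::vector<uint8_t> data;   // one payload per front-end
    };

    struct Event {
        uint64_t event_number;       // global header ties the payloads together
        std::vector<Payload> payloads;
    };

    int main() { return 0; }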

Ready-to-use GUIs

Run control should be implemented as a state machine for proper handling of state changes

Configure and partition

Set run parameters and connect

Select active processes and start them

Start/stop
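A minimal sketch of such a run-control state machine, with a transition table covering the configure/start/stop commands above (states and command names are illustrative):

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    // Minimal run-control state machine. Only legal transitions are accepted,
    // which is the point of using an FSM: a Start issued in the wrong state is
    // rejected cleanly instead of half-executing.
    enum class State { Idle, Configured, Running };

    class RunControl {
        State state_ = State::Idle;
        // (current state, command) -> next state
        std::map<std::pair<State, std::string>, State> table_ {
            {{State::Idle,       "configure"}, State::Configured},
            {{State::Configured, "start"},     State::Running},
            {{State::Running,    "stop"},      State::Configured},
        };
    public:
        bool handle(const std::string& cmd) {
            auto it = table_.find({state_, cmd});
            if (it == table_.end()) {
                std::cout << "command '" << cmd << "' rejected in current state\n";
                return false;
            }
            state_ = it->second;
            std::cout << "command '" << cmd << "' accepted\n";
            return true;
        }
    };

    int main() {
        RunControl rc;
        rc.handle("start");      // rejected: must configure first
        rc.handle("configure");
        rc.handle("start");
        rc.handle("stop");
    }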

Run-control

One run-control agent per DAQ “actor”

Run-control partitioning

Warning: to take advantage of DAQ partitioning, the TRIGGER also has to support partitioning …

Requirement on the TRIGGER system

Logging

• Informative logging with an effective user interface

• Log filtering and archiving

• Run-statistics collection and reporting are also useful

Monitoring

• A ready-to-use monitoring architecture

• With a monitoring library as a software interface

• Depending on the DAQ system, a global monitoring GUI (to be extended for specific needs) might already be available
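A sketch of what such a monitoring library interface could look like: clients subscribe to a prescaled ("spied") copy of the stream, so monitoring never back-pressures the main data flow. MonitoringHub and its methods are invented for illustration:

    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <vector>

    // Hypothetical monitoring hub: subscribers see only a spied subset of the
    // data and sit outside the main data path, keeping the clear-cut separation
    // between DAQ services and monitoring clients.
    class MonitoringHub {
        std::vector<std::function<void(const std::vector<uint8_t>&)>> subscribers_;
        unsigned prescale_;
        uint64_t seen_ = 0;
    public:
        explicit MonitoringHub(unsigned prescale) : prescale_(prescale) {}

        void subscribe(std::function<void(const std::vector<uint8_t>&)> cb) {
            subscribers_.push_back(std::move(cb));
        }

        // Called from the data flow: only every Nth event is copied out,
        // so monitoring load does not slow down the main stream.
        void spy(const std::vector<uint8_t>& event) {
            if (seen_++ % prescale_ != 0) return;
            for (auto& cb : subscribers_) cb(event);
        }
    };

    int main() {
        MonitoringHub hub(2);                     // spy every 2nd event
        hub.subscribe([](const std::vector<uint8_t>& ev) {
            std::cout << "monitor saw event of " << ev.size() << " bytes\n";
        });
        for (int i = 0; i < 4; ++i)
            hub.spy(std::vector<uint8_t>(128));   // dummy events
    }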

Recent trends: the ECS (Experiment Control System)

• An overall controller of the complete status of the experiment, including DCS

• Partitionable state machine

• Configuration databases

• Usually interfaced to PVSS, but EPICS should also be possible

Conclusions: never underestimate …

• Users are not experts: provide them with the tools to work and to report problems effectively.
• Flexible partitioning.
• Event building with accurate fragment alignment and validity checks, state reporting, and reliability.
• Redundancy / fault tolerance
• A proper run control with a state machine
  – And simplifying for the users the tasks of partitioning, configuring, trigger selection, start, stop
• A good monitoring framework, with a clear-cut separation between DAQ services and monitoring clients
• Extensive and informative logging
• GUIs