DAQ for the VELO testbeam


Transcript of DAQ for the VELO testbeam

Page 1: DAQ for the VELO testbeam

DAQ for the VELO testbeam
Vertex 2007

16th International Workshop on Vertex detectors

September 23-28, 2007, Lake Placid, NY, USA

Niko Neufeld, CERN/PH

Page 2: DAQ for the VELO testbeam


Thanks

• I am (really) only the messenger:
– Many thanks (in no particular order) to: Paula Collins, Marina Artuso, Kazu Akiba, Jan Buytaert, Aras Papadelis, Lars Eklund, Jianchun Wang, Ray Mountain from the LHCb VELO team
– Artur Barczyk, Sai Suman from the LHCb Online team
– And apologies to all those not listed here

• But of course you’re welcome to shoot the messenger! Misunderstandings and errors are entirely my fault

Page 3: DAQ for the VELO testbeam


VeLo testbeam campaign 2006: ACDC

• The Alignment Challenge and Detector Commissioning (ACDC) campaign was held in three major editions:
– ACDC1 in the lab (one week in April 2006)
– ACDC2 (2½ weeks, August 2006) in the H8 test-beam area: 3 modules, most of controls & DAQ, 1 TB of data, all non-zero-suppressed
– ACDC3 (2½ weeks, November 2006) in the H8 test-beam facility in the CERN North Area

• Each edition built on the success of the previous one and all were very important; the focus of this presentation is ACDC3

Page 4: DAQ for the VELO testbeam


ACDC3

• ½ of the right VELO half assembled (10 modules)

• Designed to be a full test of the Experiment Control System (ECS), electronics and DAQ

• Used boards from the final production; HV and LV supplies were the same as in the final setup

• Prototype firmware (FPGA) and software

• 6 “slices” of the VELO operated and read out simultaneously

Page 5: DAQ for the VELO testbeam


VELO Slice

[Diagram: one VELO slice (drawing by K. Akiba, P. Jalocha & L. Eklund). Short and long kaptons feed the production repeater/driver boards; a 15 m cable carries the analogue data to the TELL1 (with ARx cards) and on to the DAQ over GbE. A Trigger Supervisor provides the TFC; the production control board (ctrl cable) and temperature board (temp cable) connect through a SPECS master (CAN, USB) to the ECS. CAEN LV and ISEG HV supplies with their cables, HV interlocks and a patch box complete the slice.]

Page 6: DAQ for the VELO testbeam


The LHCb DAQ

[Diagram: the LHCb DAQ architecture. Front-end electronics of the sub-detectors (VELO, ST, OT, RICH, ECal, HCal, Muon, L0 trigger) feed ~320 readout boards (TELL1/UKL1) over 660 links carrying 40 GB/s. The boards push event data through the readout network (a layer of switches) into the HLT farm of CPUs, organised in up to 50 subfarms with up to 44 servers each, with up to 500 MB/s output. The TFC system distributes the LHC clock, the L0 trigger and the MEP requests used for event building; the Experiment Control System (ECS) carries control and monitoring data.]

Page 7: DAQ for the VELO testbeam


LHCb DAQ fact-sheet

• Readout over copper Gigabit Ethernet (1000BaseT, > 3000 links)

• Total event size, zero-suppressed: 35 kB

• Event building over a switching network

• Push protocol: readout boards send data as soon as they are ready
– no hand-shake!
– no request!
– rely on buffering in the destination

• Synchronous readout of the entire detector @ 1 MHz

• ~320 TELL1 boards
– individual fragments < 100 bytes
– several triggers packed into a Multi Event Packet (MEP)

• Synchronisation of readout boards (and detector electronics) via the standard LHC TTC (timing & trigger control) system

• Trigger master in LHCb: the Readout Supervisor (“Odin”)

• Global back-pressure mechanism (“throttle”)
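To make the MEP idea concrete, here is a minimal sketch of packing several sub-100-byte fragments into one packet before it is pushed out. This is not the real LHCb MEP layout; the struct, field sizes and per-fragment header below are invented for illustration.

```cpp
// Illustrative sketch of Multi Event Packet (MEP) packing, NOT the real
// LHCb MEP format. The point: one fragment is < 100 bytes, far below the
// Ethernet MTU, so several triggers are concatenated into one packet
// before the board pushes it out (no hand-shake, no request).
#include <cstdint>
#include <iostream>
#include <vector>

struct Fragment {                  // one trigger's data from one board
    uint32_t eventId;
    std::vector<uint8_t> payload;  // typically < 100 bytes
};

std::vector<uint8_t> packMEP(const std::vector<Fragment>& frags) {
    std::vector<uint8_t> mep;
    for (const auto& f : frags) {
        const uint16_t len = static_cast<uint16_t>(f.payload.size());
        const auto* id = reinterpret_cast<const uint8_t*>(&f.eventId);
        const auto* ln = reinterpret_cast<const uint8_t*>(&len);
        mep.insert(mep.end(), id, id + sizeof f.eventId);  // fragment header
        mep.insert(mep.end(), ln, ln + sizeof len);
        mep.insert(mep.end(), f.payload.begin(), f.payload.end());
    }
    return mep;  // shipped as one datagram; the destination must buffer it
}

int main() {
    std::vector<Fragment> frags;
    for (uint32_t i = 0; i < 16; ++i)  // pack 16 triggers of 80 bytes each
        frags.push_back({i, std::vector<uint8_t>(80, 0)});
    std::cout << "MEP size: " << packMEP(frags).size() << " bytes\n";
}
```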

Page 8: DAQ for the VELO testbeam


ACDC3 DAQ

[Diagram: the ACDC3 DAQ chain (drawing by K. Akiba). Long kaptons feed buffered repeater boards; 16 analogue data cables (15 m) enter the TELL1, which carries a CCPC and a TTCrx. ODIN (also with CCPC and TTCrx) distributes the TTC signal over optical fibre; ECS LV connections run over 20 m. The TELL1's Gigabit Ethernet data output passes through an HP5412 switch to the event-builder PCs lbh8lx01 and lbh8lx02 ("Hughin"), which run the Gaudi event builder.]

Page 9: DAQ for the VELO testbeam


ACDC3 Scintillation Trigger

[Diagram: scintillator layout (drawing by J.C. Wang & R. Mountain). Counters A0, A1 and A2 (type A) sit upstream, B1 and B2 (type B) downstream; labels include "3 PMT", "light guide" and "1/4 thick scintillator"; dimensions in the drawing are in cm.]

Trigger = A0 · (A1·A2) · (B1·B2), efficiency ~90%

1. For straight-through tracks use only the 2 upstream counters (in AND).
2. For interactions with the detector use the 2 upstream counters (in AND) .AND. (B1 .OR. B2).
3. For interactions with the target use the 2 upstream counters (in AND) .AND. B2 and an extra scintillator (in OR).
4. One spill is about 4 s of beam followed by about 12 s without beam.

Rates: straight tracks 6k-9k per spill; interactions with the detector 5k per spill; interactions with the target 1k per spill; random trigger 10 kHz.
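The three selections can be written out as boolean logic. A small illustrative sketch follows; counter names match the drawing, and the grouping of the extra scintillator in case 3 is my reading of the slide, not taken from the testbeam code.

```cpp
// The three ACDC3 trigger selections as boolean logic (illustrative).
#include <iostream>

struct Counters { bool A0, A1, A2, B1, B2, extra; };
// A0 completes the full coincidence in the formula above; the three
// selections below only use the counters the slide lists for each case.

// 1. straight-through: the two upstream counters in coincidence
bool straightThrough(const Counters& c)     { return c.A1 && c.A2; }
// 2. interaction with the detector
bool detectorInteraction(const Counters& c) { return c.A1 && c.A2 && (c.B1 || c.B2); }
// 3. interaction with the target (extra scintillator in OR)
bool targetInteraction(const Counters& c)   { return (c.A1 && c.A2 && c.B2) || c.extra; }

int main() {
    const Counters c{true, true, true, false, true, false};
    std::cout << std::boolalpha
              << "straight: " << straightThrough(c)     << '\n'
              << "detector: " << detectorInteraction(c) << '\n'
              << "target:   " << targetInteraction(c)   << '\n';
}
```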

Page 10: DAQ for the VELO testbeam


Readout Board TELL1

• Originally developed for the VELO, now the common board for (almost) all of LHCb

• 64 analogue copper inputs

• Parallelised FPGA processing

• Power-only backplane

• 4 × 1 Gigabit Ethernet output

• Control via an embedded i486 microcontroller on a separate LAN for control and monitoring

• 84 for the complete VELO

• ~320 for all of LHCb

Page 11: DAQ for the VELO testbeam


Running the ACDC3 DAQ

• 12 readout boards (TELL1s)

• Non-zero-suppressed data: 4-5 kB × 12 per event
– 2.0 kEvent/s maximum DAQ rate with 1 PC (a 1 Gigabit link can handle 120 MB/s); see the check below

• Too many packets to route, buffering problem in the network:
– the “LabSetup” switch is insufficient
– the problem is typical for a push protocol --> next slide

• Used production equipment from the main DAQ:
– the HP5412 handles the ACDC3 packet rate without problems
– it is now installed at Point 8
– the full LHCb will need even more powerful equipment (it exists)
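A back-of-the-envelope check of the 2 kEvent/s figure, assuming a mid-range fragment size of 4.5 kB per board (the slide quotes 4-5 kB):

```cpp
// Quick sanity check of the quoted maximum DAQ rate into one PC.
#include <iostream>

int main() {
    const double fragment_kB = 4.5;               // per TELL1, non-ZS
    const double event_kB    = 12 * fragment_kB;  // 12 boards -> ~54 kB/event
    const double link_kB_s   = 120.0 * 1000.0;    // ~120 MB/s usable on 1 GbE
    std::cout << "max rate ~ " << link_kB_s / event_kB
              << " events/s\n";                   // ~2200/s, i.e. ~2 kHz
}
```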

Page 12: DAQ for the VELO testbeam


Congestion

[Diagram: several sources sending through a switch to the same output port at the same time; the port buffer overflows: "Bang".]

• "Bang" translates into random, uncontrolled packet loss

• In Ethernet this is perfectly valid behaviour and is implemented by very cheap devices

• Higher-level protocols are supposed to handle the packet loss due to lack of buffering

• This problem comes from synchronized sources sending to the same destination at the same time
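A toy model of the "Bang", with invented numbers, just to show where the loss comes from: a synchronized burst arrives faster than the port can drain, and whatever exceeds the buffer is dropped.

```cpp
// Toy model of output-port congestion; all numbers are illustrative,
// not LHCb or HP5412 parameters.
#include <iostream>

int main() {
    const int    sources   = 12;     // boards sending simultaneously
    const double burst_kB  = 54.0;   // burst per source (one MEP, say)
    const double buffer_kB = 350.0;  // per-port buffer of a cheap switch
    const double offered   = sources * burst_kB;   // arrives "at once"
    const double dropped   = offered - buffer_kB;  // excess cannot be queued
    if (dropped > 0)
        std::cout << "dropped ~ " << dropped << " of " << offered << " kB\n";
    else
        std::cout << "burst fits in the buffer\n";
}
```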

Page 13: DAQ for the VELO testbeam


DAQ Software

[Diagram: the event-builder shared memory holds EVENT, MEP and RESULT buffers; the data input feeds the EventFormatter, Moore/Vetra processes the events, and the Sender/DiskWriter produces the data output (file).]

• Data are copied only once

• Communication between event builder, trigger process and disk writer via shared memory

• Uses the standard LHCb (offline) software framework: GAUDI
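A minimal sketch of a one-copy, shared-memory hand-off of this kind, using plain POSIX shared memory. The real system uses LHCb's own event-builder buffer management; the segment name and size below are illustrative.

```cpp
// One-copy hand-off via POSIX shared memory (Linux: link with -lrt on
// older systems). Producer writes the event once; other processes map
// the same segment, so only offsets/descriptors change hands.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const size_t kBufSize = 1 << 20;  // a 1 MB "EVENT" buffer
    int fd = shm_open("/event_buffer", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, kBufSize) != 0) { perror("ftruncate"); return 1; }
    void* buf = mmap(nullptr, kBufSize, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    std::strcpy(static_cast<char*>(buf), "event payload");  // single copy
    munmap(buf, kBufSize);
    close(fd);
    shm_unlink("/event_buffer");
    return 0;
}
```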

Page 14: DAQ for the VELO testbeam


Getting “real”

• Running non-stop for 3 weeks

Page 15: DAQ for the VELO testbeam


To trust is good, to control is better

[Diagram: the ACDC3 control chain. The control PCs lbh8wico01 and Lbtbxp04 (running PVSS) reach the equipment over the CERN network and a control Ethernet: SPECS to the control board and to the TELL1 and ODIN CCPCs, an ELMB on the temperature board (with a temperature interlock), and USB-CAN links to the ISEG HV and, via lbh8lxco01, to the CAEN LV. ODIN distributes TTC optically to the TTCrx chips; long kaptons, buffered repeater boards, 15 m analogue data cables and 20 m ECS LV lines are as in the DAQ diagram.]

Page 16: DAQ for the VELO testbeam


(Run-) Controls Software

• VELO front-end, TELL1s, trigger and VELO temperatures controlled by PVSS, the backbone of LHCb’s Experiment Control System (ECS)

• Some slow controls were not yet integrated:
– ISEG HV and CAEN LV
– cooling monitoring (LabVIEW)

• Event builder and data transfer:
– scripts (worked reasonably well because the “farm” consisted of at most two PCs)

• TELL1:
– scripts (painful, but the PVSS integration was late :-()
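For flavour, here is a much-simplified sketch of the kind of finite state machine a run-control system such as PVSS drives for a device like a TELL1. The states and transitions are illustrative only, not the actual LHCb ECS state diagram.

```cpp
// Simplified run-control FSM sketch (illustrative states/commands).
#include <iostream>
#include <string>

enum class State { NotReady, Configuring, Ready, Running, Error };

State onCommand(State s, const std::string& cmd) {
    if (cmd == "configure"  && s == State::NotReady)    return State::Configuring;
    if (cmd == "configured" && s == State::Configuring) return State::Ready;
    if (cmd == "start"      && s == State::Ready)       return State::Running;
    if (cmd == "stop"       && s == State::Running)     return State::Ready;
    if (cmd == "reset")                                 return State::NotReady;
    return State::Error;  // any other command/state pair is refused
}

int main() {
    State s = State::NotReady;
    for (const char* cmd : {"configure", "configured", "start", "stop"})
        s = onCommand(s, cmd);
    std::cout << "back to Ready: " << (s == State::Ready) << '\n';
}
```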

Page 17: DAQ for the VELO testbeam


PVSS in action

[Screenshots: the Online PVSS TELL1 control FSM; the VELO PVSS temperatures display and FE configuration panels.]

Page 18: DAQ for the VELO testbeam


Summary

• The ACDC DAQ worked well: the first “field” use of LHCb’s data acquisition

• Discovered and fixed a lot of small problems in soft- and hardware
– no show-stoppers for LHCb!
– learned a lot of useful lessons

• Acquired around 3.5 TB of data = ~60 million events

• Nominal DAQ with non-ZS events and 12 boards (up to 4 kHz, typically 2 kHz)
– DAQ tested at rates up to 100 kHz with ZS (one tenth of the nominal rate, with ~1% of the network and farm) --> scalability is OK!

• Main problem (after everything else had been fixed): output rates
– rate to storage: a local disk can only handle ~30 MB/s, and the rate to CERN tape storage (CASTOR) was limited to 30-50 MB/s
– rate into a single PC (120 MB/s)

Page 19: DAQ for the VELO testbeam

And now… on to the LHCb cavern

Page 20: DAQ for the VELO testbeam


372 long-distance cables

Page 21: DAQ for the VELO testbeam


84 TELL1s

Page 22: DAQ for the VELO testbeam


2 x 7 Control boards

Page 23: DAQ for the VELO testbeam

VeLo and its DAQ ready to go!