The Back-End Electronics of the Time Projection Chambers in the T2K Experiment

D. Calvet¹, I. Mandjavidze¹, B. Andrieu², O. Le Dortz², D. Terront², A. Vallereau², C. Gutjahr³, K. Mizouchi³, C. Ohlmann³, F. Sanchez⁴

¹CEA/DSM/IRFU/SEDI, Saclay, France; ²CNRS/IN2P3/LPNHE, Paris, France; ³TRIUMF, Vancouver, Canada; ⁴IFAE, Barcelona, Spain
RT2010 – Lisboa, 23-28 May 2010
Tokai to Kamioka (T2K) experiment

Main physics goal: neutrino oscillation
• νμ disappearance for improved accuracy on θ23
• νe appearance to improve sensitivity to θ13

[Figure: T2K baseline; the far detector is a 50 kt water Cherenkov detector]

Experiment now taking data
T2K Time-Projection Chambers

[Figure: 1 of 3 TPCs shown; dimensions ~1 m x 2 m]

Summary of features
• Amplification: MicroMegas modules segmented in 7 x 10 mm pads
• Over 124,000 pads to instrument ~9 m² of sensitive area
T2K TPC-FGD readout: AFTER chip
[Figure: AFTER chip channel schematic – detector input (Cdet) into a charge-sensitive amplifier (CSA, Rf·Cf ≈ 100 µs) with selectable input polarity, pole-zero cancellation stage, gain-2 stage, Sallen & Key filter (16 Cs values), 511-cell switched-capacitor memory (write clock ckw, read clock ckr) and output buffer – with simulated waveforms V(vcsa), V(Vpzc), V(Vsk) and V(Vinsca)]
Q max.   MIP          Noise <
120 fC   75·10³ e     750 e
240 fC   150·10³ e    1500 e
360 fC   225·10³ e    2250 e
600 fC   375·10³ e    3750 e

Tp (5%-100%), ns: 116, 200, 412, 505, 610, 695, 912, 993, 1054, 1134, 1343, 1421, 1546, 1626, 1834, 1912

ckw: 1 MHz-50 MHz; ckr: 20 MHz
Noise < 2 mV
ADC input: 2 V / 10 MIPs
• 72 analog channels
• Switched Capacitor Array: 511 cells
• 4 gains; 16 peaking time values (100 ns to 2 µs)
• Input current polarity: positive or negative
• Readout: 76*511 cells at 20 MHz [external 12-bit ADC]
P. Baron et al.
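The charge-range table is internally consistent with the 2 V / 10 MIPs ADC scale: one MIP corresponds to a tenth of the full-scale charge. A minimal sketch of that arithmetic (a sanity check only; it assumes full scale = 10 MIPs, as the "ADC input: 2 V / 10 MIPs" figure suggests, and uses just the slide's charge-range values):

```python
# Convert the AFTER chip's four full-scale charge settings to electrons and
# derive the per-MIP charge, assuming full scale = 10 MIPs (an assumption
# drawn from the "ADC input: 2 V / 10 MIPs" figure on the slide).
E_CHARGE = 1.602e-19   # electron charge, coulombs

for q_max_fC in (120, 240, 360, 600):
    full_scale_e = q_max_fC * 1e-15 / E_CHARGE   # electrons at full scale
    mip_e = full_scale_e / 10                    # 1 MIP = full scale / 10
    print(f"{q_max_fC} fC -> {mip_e / 1e3:.0f}e3 electrons per MIP")
```

The printed values (75, 150, 225, 375 ·10³ e) reproduce the MIP column of the table above.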
TPC Read-Out Diagram

[Diagram: 124,414 detector pads → 72 front-end electronics modules (on-detector, in magnet) → 72 x 2 Gbps fibres (~20 m) → 18 quad-optical-link Data Concentrator Cards → 24-port Gbit Ethernet switch → TPC DAQ PC → ND280 network (on-line database, global event builder, run control, mass storage). Clock/trigger path: Master Clock Module → Slave Clock Module → the 18 DCCs of the off-detector back-end electronics]
Back-end elements
• Data Concentrator Cards (DCCs) to aggregate data from front-end
• Gbit Ethernet private network and PC for TPC local event building
Data Concentrator Cards: main requirements

• Fan out the global 100 MHz clock and synchronous trigger to the front-end with low skew (< 40 ns)
• Aggregate data from multiple front-ends (typ. 4 modules per DCC, i.e. ~7000 channels). Target DAQ rate: 20 Hz
• Configure the front-end and read back run-time settings
• Interfaces:
  • to the TPC DAQ PC through a standard Gbit Ethernet switch
  • to the front-end via 72 duplex optical links – 2 x 144 Gbps aggregate
  • to the Slave Clock Module via dedicated LVDS links
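The "~7000 channels per DCC" figure follows directly from the partitioning quoted elsewhere in the talk (124,414 pads, 72 front-end modules, 4 modules per DCC). A one-line sanity check:

```python
# Sanity-check the DCC partitioning quoted on the slides: 124,414 pads read
# out by 72 front-end modules, 4 modules per DCC.
PADS = 124_414
FEMS = 72
FEMS_PER_DCC = 4

dccs = FEMS // FEMS_PER_DCC        # 18 DCCs
channels_per_dcc = PADS / dccs     # ~6912, the "~7000 channels" of the slide
print(dccs, round(channels_per_dcc))  # 18 6912
```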
Design strategy
• Development of a dedicated board too slow to meet project deadlines
→ Extend the platform used for single-module readout; scale it up to a full-size system by trivial duplication
→ Common hardware design with the T2K Fine-Grained Detector (FGD) to share development effort
Doing more with an evaluation board

[Photo: ML405 board with 3+1 optical ports, Gbit Ethernet RJ45, 100 MHz reference clock / trigger input, PLL]
DCC Close-up

[Photo: 1 of 18 DCCs – Xilinx ML405 Evaluation Platform with optical link extension card, clock board, and external clock / trigger input]

Design risks and benefits
• The optical link board easily adds 3 SFP transceivers to the ML405
• External clock distribution requires hardware modification of the ML405
→ Lowest-cost, minimal-design-effort and shortest-deployment-time solution
DCC Transmitters Eye Diagram

[Eye diagrams: on-board SFP, link via SMA, link via SATA#1, link via SATA#2]

Prototype: local 25 MHz oscillator + LMK03000 PLL multiplication for the 200 MHz RocketIO reference
Final design: Silicon Laboratories Si5326 PLL
DCC Optical Transmitter Jitter

[Jitter measurement: optical link on the SATA#2 connector]

Remarks
• Clock quality not significantly degraded by the add-on cards and modifications to the ML405 board
• In the real experiment the 100 MHz reference clock has to use cat. 6 cable (~1 m)
→ All 72 links in the final system running stably – estimated BER < 10⁻¹⁵
Clock/Trigger fanout capability

Remarks
• The phase of the recovered clock on Virtex-2 Pro / Virtex-4 RocketIO w.r.t. the phase of the sender is not predictable from one system configuration to the next – but remains stable (< ~2 ns) once the link is synchronized
• Limited control of the skew of the synchronization signals is sufficient for this TPC
→ R&D on Virtex-5: predictable latency achievable with GTP transceivers

[Scope capture: recovered clock on 2 FEMs (FEM#1 clock, FEM#2 clock) compared to the DCC reference clock (16 ns primary reference clock)]
[Histogram: occurrence vs. DCC→FEM path delay (~320-344 ns), showing the variation over 100 FPGA reconfigurations (16 ns primary reference clock)]
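The path-delay histogram can be put next to the reference clock period and the skew requirement. A hedged sketch (the 320 ns and 344 ns endpoints are read approximately off the histogram's axis, not exact measured values):

```python
# Compare the observed DCC->FEM path-delay spread with the 16 ns primary
# reference clock period and the < 40 ns skew requirement.
# NOTE: the 320/344 ns endpoints are approximate readings of the plot axis.
DELAY_MIN_NS, DELAY_MAX_NS = 320, 344
CLOCK_PERIOD_NS = 16     # primary reference clock period
SKEW_BUDGET_NS = 40      # requirement from the DCC requirements slide

spread = DELAY_MAX_NS - DELAY_MIN_NS        # 24 ns peak-to-peak
periods = spread / CLOCK_PERIOD_NS          # 1.5 reference clock periods
print(spread, periods, spread < SKEW_BUDGET_NS)  # 24 1.5 True
```

So even though the RocketIO lock point wanders by roughly a clock period and a half across reconfigurations, the spread stays inside the 40 ns budget.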
TPC Backend Electronics Crate

Principles
• Each DCC mounted on an aluminum plate
• 6 DCCs per 6U crate (i.e. 1 crate reads out 1 TPC or 1 FGD)
• All cables and fibers at the back; CompactFlash (firmware) at the front
→ Most practical way found to rack-mount non-standard form-factor cards

[Photo: 1 of 3 DCC crates]
Complete TPC Back-end System

[Photo: power & slow-control rack with the front-end slow-control PC and front-end power supplies (2 Wiener PL508); back-end electronics rack with the crates of DCCs, slave clock module, Ethernet switch and power supplies]
DCC Embedded Firmware/Software

[Block diagram of the Virtex-4 FPGA: 4 RocketIO transceivers (1K x 32-bit Rx/Tx buffers) to/from the front-end via the optical links; user logic with DMA; PowerPC 405 (300 MHz) with I-/D-caches, a Data-Side On-Chip Memory (DS-OCM) bus (32-bit, 100 MHz) and the Processor Local Bus (32/64-bit, 100 MHz); multi-port memory controller to 128 MB of DRAM; Ethernet MAC + PHY to/from the TPC DAQ PC]

Principles
• Use the hard IP blocks of the Virtex-4: transceivers, processor, Ethernet MAC…
• A finite state machine in user logic pulls data from the 4 front-ends in parallel
• A standalone server program on the PowerPC 405 unloads the front-end data, performs elementary checks, and encapsulates it in UDP/IP frames sent by the MAC
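The encapsulation step can be sketched in host-side Python (the real program runs standalone on the PPC405; the fragment header used here is purely a hypothetical illustration, since the actual DCC frame format is not given on the slide):

```python
import struct

# Hypothetical fragment header: event number, DCC id, payload length.
# NOTE: the actual DCC/DAQ frame layout is not specified on the slide;
# this header exists only to make the sketch concrete.
HDR = struct.Struct(">IHH")

def fragment_event(event_no, dcc_id, data, mtu_payload=1400):
    """Split one front-end data block into UDP-sized datagrams, each with a
    small header so the DAQ PC can reassemble and sanity-check the event."""
    frags = []
    for off in range(0, len(data), mtu_payload):
        chunk = data[off:off + mtu_payload]
        frags.append(HDR.pack(event_no, dcc_id, len(chunk)) + chunk)
    return frags

# Each datagram would then be handed to the Ethernet MAC as a UDP/IP frame.
frags = fragment_event(7, 3, b"\xab" * 3000)
print(len(frags))  # 3 datagrams: payloads of 1400, 1400 and 200 bytes
```

Keeping each datagram under the Ethernet MTU avoids IP fragmentation on the private GbE network.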
Hard-wired data collection FSM

[Block diagram: a finite state machine steps through a 1896-entry LUT of (ZS, FEC, ASIC, Channel) requests using counters F(0 to 5), A(0 to 3) and C(0 to 78); 4 senders (Send Enable<3..0>) drive the RocketIO TX; each of the 4 RocketIO Rx FIFOs gates the next request on "Free > 2 KB"; start-event, abort and suspend control come from the PowerPC 405 (300 MHz) over the 32-bit 100 MHz PLB]

Operation
• Requests data channel by channel from the 4 front-end modules in parallel
• Suspends if any FIFO cannot store the data of the next request
• The next request is posted automatically when the data of the pending request has been entirely received or a pre-defined timeout has elapsed
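The request/suspend/timeout logic above can be mimicked with a small behavioral model (a Python sketch; the FIFO depth, drain rate and timeout are illustrative assumptions, not the firmware's actual parameters):

```python
# Behavioral sketch of the data-collection FSM: one request per front-end
# module in parallel, suspend while any Rx FIFO has less than 2 KB free,
# advance when the data arrives (or, here, abandon the request on timeout).
FIFO_SIZE = 8192   # bytes per Rx FIFO (assumption)
MIN_FREE = 2048    # the "Free > 2 KB" gate shown on the diagram
TIMEOUT = 100      # cycles before a pending request is abandoned (assumption)

def collect(channel_sizes, drain_per_cycle=512):
    """channel_sizes[fem][ch] = bytes that channel will return."""
    fifos = [0, 0, 0, 0]       # bytes currently queued in each FEM's Rx FIFO
    collected = [0, 0, 0, 0]
    for ch in range(len(channel_sizes[0])):   # channel by channel...
        pending = list(range(4))              # ...all 4 FEMs in parallel
        timer = 0
        while pending and timer < TIMEOUT:
            for fem in list(pending):
                if FIFO_SIZE - fifos[fem] > MIN_FREE:   # else: suspend
                    fifos[fem] += channel_sizes[fem][ch]
                    collected[fem] += channel_sizes[fem][ch]
                    pending.remove(fem)
            # the PowerPC continuously unloads the FIFOs over the PLB
            fifos = [max(0, f - drain_per_cycle) for f in fifos]
            timer += 1
    return collected
```

For example, 10 channels of 100 bytes on each of the 4 modules yields 1000 bytes collected per module, with no suspend ever triggered.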
T2K ND280 DAQ system

[Diagram: the TPC DAQ PC runs front-end processes fetpcdcc0 … fetpcdcc17 and a local event builder, connected to/from the DCCs via the private Gbit Ethernet switch; Cascade bridges to the global event builder; the FGD DAQ PC, P0D DAQ PC and other detector DAQ PCs join the global DAQ PC, on-line database and mass storage over the nd280 network]

Architecture
• Local event building in the TPC DAQ PC
• Bridge to the MIDAS global event builder via Cascade
• The private nd280 network provides access to services shared by the other detectors in the experiment

See: M. Thorpe et al., "The T2K Near Detector Data Acquisition Systems", this conference
See: R. Poutissou et al., "Cascading MIDAS DAQ Systems and Using a MySQL Database for Storing History Information", this conference
TPC DAQ System Performance

Global system performance in the running conditions of T2K
• 33 Hz event-taking rate in zero-suppressed mode (required: 20 Hz)
• 50 ms latency for laser calibration events (1 DCC in full readout mode)
• Max. DCC throughput: 16 MB/s (limitation: PPC405 I/O over the DS-OCM)
• Global throughput linear with the number of DCCs until GbE link saturation
→ All requirements met – system in routine operation

[Plots: event acquisition time for 1 DCC in full readout mode (T2K laser events); event acquisition rate for 1 front-end module in zero-suppressed mode (T2K beam and cosmic events)]
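The "linear until GbE saturation" behavior can be put in numbers with a crude model (the 16 MB/s per-DCC ceiling is the slide's figure; the 125 MB/s link capacity is an idealization that ignores protocol overhead):

```python
# Where does the single GbE uplink to the TPC DAQ PC saturate as DCCs are
# added? Crude model: linear scaling capped by the link capacity.
GBE_CAPACITY = 1e9 / 8    # bytes/s, idealized 1 Gbit/s link (no overhead)
DCC_MAX = 16e6            # bytes/s, max per-DCC throughput from the slide

def aggregate(n_dcc):
    """Aggregate throughput: linear in the number of DCCs, capped by GbE."""
    return min(n_dcc * DCC_MAX, GBE_CAPACITY)

# Throughput grows linearly up to about 8 DCCs (8 x 16 MB/s > 125 MB/s),
# then flattens at the link capacity.
```

In practice the knee sits lower once Ethernet/IP/UDP overhead is counted, but the shape (linear, then flat) is what the measurement shows.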
Lessons Learned

System robustness and stability
• All optical links very stable despite sub-optimal clocking; BER < 10⁻¹⁵
• Clock skew stable in operation, but varies from one power-up to the next
• No hardware failure after ~6 months of operation
• Need to improve the handling of oversized events to minimize deadtime
→ Successful assembly of a production system from evaluation hardware

[Histogram: beam-spill + cosmic event sizes, mean 3.5 kB; typical TPC event size ~65 kB (spill or cosmic); 750 kB on 1 DCC for laser calibration events]
[Event display: a cosmic event seen at the T2K ND280]
Summary

• Purpose of the development
A 1-to-72 clock/trigger fanout tree plus a 72-to-1 data aggregation system. It bridges the 144 Gbps TPC front-end optical path to 1 Gbit Ethernet on the DAQ side
• Key elements
18 Xilinx Virtex-4 evaluation boards with minimal add-ons, plus off-the-shelf PC and Ethernet networking products
• Performance and limitations
Clock distribution is stable during operation, but each link settles to a different skew (~40 ns peak-to-peak) at system power-up due to Virtex-2 Pro / Virtex-4 RocketIO limitations – other R&D shows this can be much better controlled with a Virtex-5
Limited I/O capability of the PPC405; the DS-OCM is faster than the PLB. Our application uses 130 Mbit/s of the GbE link – bypass the processor if one needs to fully exploit GbE
• System pros and cons
Low-cost, shortest-deployment-time solution. Evaluation kits are not meant for production systems… but it works!

See poster: Y. Moudden et al., "The Level 2 Trigger of the H.E.S.S. 28 Meter Cherenkov Telescope"
Outlook

• FPGA board design is getting more and more complex
The manufacturer has to provide many guidelines, notes and reference design schematics… but you still need skilled designers / layout engineers to build your own board. Time-consuming, with technical risks at design and production
• Many applications require the same common blocks beyond the FPGA itself
De-coupling capacitors, external memory (DDR2, DDR3), configuration flash, several high-speed serial links, many customizable user I/Os, flexible clocking
• Evaluation boards
Originally meant to help designers make their own board design. They bring the very latest FPGA technology instantly to anyone, at low cost and no risk. An eval board has a lot in common with the core of a typical end product – why re-invent it?
• Think of evaluation boards (have them made) as macro-components?
Worked for us. Desirable improvements: lower profile (2 cm stacking), smaller form factor, access to more user I/O and high-speed serial links, flexible clocking
→ Towards a better eval-board paradigm? FPGA + SDRAM + flash and I/O connectors on a general-purpose module directly re-usable by the customer; the demo-specific element is the carrier board (largest diversity of interface standards)