DUNE DAQ Hardware Meeting (24 Oct 2018)

Present:

David Cussans (Bristol)

Patrick Dunne (Imperial)

Kunal Kothekar (Bristol)

Kostas Manoulopolos (RAL)

Erdem Motuk (UCL)

Dave Newbold (RAL)

Alessandro Thea (RAL)

Roy Wastie (Oxford)

Alan Watson (Birmingham)

Tom Williams (RAL)

 

Dave Newbold (DN) opened with a description of the framework for testing processing algorithms, and of IPBus.

There was then discussion of the interface between blocks. See the document by DN at https://docs.google.com/document/d/1Z7fnTA3x5RCPmcrfhEYIXSWOpSaPwef_DzbXECrxsY8/edit?usp=sharing

How to load constants into the processing blocks was discussed. DN cited CMS experience to say that having a large number of small IPBus end-points would use significant routing resources. He proposed instead sending "magic packets" carrying the configuration data within the data stream.
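Purely as an illustration of the in-band configuration idea (the marker value, packet layout and function names below are invented for this sketch, not anything agreed at the meeting):

# Illustrative sketch only: a reserved "magic" header value marks a packet as
# configuration rather than data, so constants reach a processing block through
# the data path instead of via many small IPBus end-points.
MAGIC_CONFIG = 0xC0FE  # invented marker value

def dispatch_packet(packet, apply_config, process_data):
    """packet: list of 16-bit words; the first word is treated as the header."""
    if packet[0] == MAGIC_CONFIG:
        apply_config(packet[1:])   # payload carries the constants for the block
    else:
        process_data(packet)       # normal data-path processing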

Alessandro Thea (AT) gave a live demonstration of how to build a simulation project for Modelsim/Questasim. DN and AT then ran Python scripts that used IPBus to load a text file from Phil Rodrigues into the (simulated) memory buffer, play the data through the processing blocks and read the output of the processing blocks. (The simulation used a simulated Ethernet interface to communicate with the scripts.)
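For reference, a minimal sketch of this kind of script using uHAL's Python bindings; the connection URI, address-table path, node names ("buffer.data", "ctrl.start", "buffer.out") and file name are placeholders, not the ones used in the demonstration:

import uhal

# Connect to the simulated device (the simulation's Ethernet interface is driven over IPBus/UDP)
hw = uhal.getDevice("sim", "ipbusudp-2.0://localhost:50001",
                    "file://addr_table/top.xml")

# Read samples from a text file, assumed here to contain one 16-bit value per line
with open("samples.txt") as f:
    samples = [int(line) & 0xFFFF for line in f if line.strip()]

# Load the samples into the (simulated) memory buffer
hw.getNode("buffer.data").writeBlock(samples)
hw.dispatch()

# Start playback through the processing blocks (hypothetical control register)
hw.getNode("ctrl.start").write(1)
hw.dispatch()

# Read back the output of the processing blocks
result = hw.getNode("buffer.out").readBlock(len(samples))
hw.dispatch()
print([int(x) for x in result])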

A transcript of the commands needed to build the simulation using ipbb is attached to the agenda page.

Kunal Kothekar said that he would start writing a "User Guide" for IPBus Build (ipbb).

Some time was spent discussing how to save and load context in the processing blocks. DN said he would investigate whether it is feasible to write firmware, applicable to both filtering and hit-finding, that would:

  1. strip the header from each incoming data packet and pass it unmodified to the output
  2. At the start of each data packet:

Ports are likely to be (a behavioural sketch in Python follows the port list):
clk
reset
d(15:0)
q(15:0)
valid
last
d_config(15:0)
q_config(15:0)
load
store
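A minimal Python behavioural sketch of the intended wrapper (a model of the behaviour, not the firmware itself): it assumes a one-word header per packet and uses a generic process() hook in place of the actual filtering or hit-finding logic, with 'state' playing the role of the d_config/q_config context:

def run_block(packets, process, context=None):
    """packets: list of packets, each a list of 16-bit words starting with a
    one-word header; process(word, state) -> (word_out, state) stands in for
    the filtering or hit-finding algorithm."""
    out_packets = []
    for packet in packets:
        header, payload = packet[0], packet[1:]
        state = dict(context) if context else {}   # 'load': restore context at packet start
        out = [header]                              # header passed through unmodified
        for word in payload:                        # 'd' -> 'q', one word per 'valid' cycle
            word_out, state = process(word, state)
            out.append(word_out & 0xFFFF)
        context = state                             # 'store': save context when 'last' is seen
        out_packets.append(out)
    return out_packets, context

A pure pass-through corresponds to process = lambda word, state: (word, state).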

 

There was discussion of how to interface the output of the compressed data to the 10-second buffer in DRAM. The memory interface firmware block currently has a 512-bit data path clocked at 300 MHz (which maps onto the 64-bit data bus of the DDR4 memory). The output of each of the ~40 compression blocks per APA is 16 bits wide, clocked at 200 MHz, and it will be challenging to multiplex the data. It was agreed to move away from the idea of storing each block of data independently and instead concatenate the data from all channels corresponding to one time-slice. This has the advantage of reducing the number of packets stored in the 10-second buffer to ~39k, which may make it feasible to keep the pointers in internal FPGA memory (rather than in the external memory).
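As a back-of-envelope check using only the figures quoted above (and treating the compressor outputs as fully occupied, i.e. an upper bound before any compression gain):

# Aggregate bandwidth comparison from the numbers above
mem_if_gbps    = 512 * 300e6 / 1e9      # 512-bit words at 300 MHz -> 153.6 Gb/s
comp_out_gbps  = 40 * 16 * 200e6 / 1e9  # ~40 blocks x 16 bits x 200 MHz -> 128 Gb/s
lanes_per_word = 512 // 16              # 32 sixteen-bit lanes per 512-bit memory word
print(mem_if_gbps, comp_out_gbps, lanes_per_word)

So the raw bandwidth fits, but ~40 asynchronous 16-bit streams still have to be packed into 32-lane 512-bit words across the 200 MHz / 300 MHz clock boundary, which is where the multiplexing difficulty lies.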

Erdem Motuk said that he would try to implement at least one compressor → external-memory-interface multiplexer to get an idea of FPGA resource usage.

Roy Wastie is working on the interface between the Zynq PS and the SSD. He will also start investigating the interface between the PL and the PS for writing to the PL; reading the data back after writing will be done from the PS side.

There was discussion of which Vivado and Modelsim versions to use. CMS Trigger is using Vivado 2016.4 because of bugs in 2017.x. It was reported that some versions of Modelsim do not work well with the IP simulation output produced by some versions of Vivado (Modelsim versions in use: Bristol 10.6b, Imperial 10.5c). It was decided to try Vivado 2018.2.2 and to investigate which Modelsim version to use.

Kunal Kothekar volunteered to set up a TWiki page with information about DAQ firmware. This is now at https://twiki.cern.ch/twiki/bin/view/Sandbox/UKDuneDaqSP