[subexp-daq] dropping of data to analysis that cannot keep up

Håkan T Johansson f96hajo at chalmers.se
Tue Apr 8 11:23:07 CEST 2025


Dear Günter,

nice to hear from you!

The f_user function would get the trigger, so you should see it at least 
there.

How to get that into nurdlib I think is a question for Hans...

Out of curiosity: in what way do you want the readout to differ depending on 
which trigger happened?


I also think the SIS3316 code modifications you had have been merged.

Could you please try the current version and see if that works?

Best regards,
Håkan





On Tue, 8 Apr 2025, Weber, Guenter Dr. wrote:

> 
> Dear friends,
> 
> 
> sorry for coming up with a very basic question: How can I define different
> trigger types and make the readout procedure of a specific module dependent
> on the type of trigger?
> 
> 
> Example:
> 
> I have two different trigger types, 1 and 2. To let the DAQ know which type
> of trigger occurred, they are plugged into input channels 1 and 2 of our
> VOLUM4B. Now I would like to know, in the readout of the SIS3316 module,
> which type triggered the current event. How do I do this?
> 
> 
> 
> Thank you very much!
> 
> 
> 
> 
> Best greetings from Jena
> 
> Günter
> 
> 
> 
> P.S. As I got distracted by some hardware issues in the lab, we did not have
> a proper hand-over of our modifications of the SIS3316 code in Nurdlib. Have
> you in the meantime merged these modifications into the main branch, or what
> is the situation?
> 
> 
> 
> ____________________________________________________________________________
> From: subexp-daq <subexp-daq-bounces at lists.chalmers.se> on behalf of Håkan
> T Johansson <f96hajo at chalmers.se>
> Sent: Sunday, 16 February 2025 11:33:35
> To: Bajzek, Martin
> Cc: Hubbard, Nicolas James Dr.; subexp-daq at lists.chalmers.se
> Subject: [subexp-daq] dropping of data to analysis that cannot keep up
> 
> Hi,
> 
> when the point of DAQ being faster than online is reached, we face the
> issue of somehow dropping old data.
> 
> With default settings, ucesb is not really helpful here.  Old data will
> pile up in the '--server' until it is full, and only then be dropped.  If
> the server is configured with a large amount of memory for buffers
> ('--server' option 'size='), this backlog can be significant.  The online
> analysis then lags reality, losing the 'online' feeling and the useful
> rapid feedback on changing conditions...
> 
> There is a 'dropold=N' (seconds) option which makes the server not serve
> buffers older than the specified time to the client.  If the beam has a
> pulse structure, it may be reasonable to set this to at most two times the
> spill length.  Otherwise a few seconds ought to be enough.
> 
> 
> A dry-run demo (3 instances), using the 'mbuffer' program to limit rates:
> 
> ---
> 
> # Generate data (at 10 MB/s):
> 
> file_input/empty_file --lmd \
>    | mbuffer -r 10M -q \
>    | empty/empty --file=- --server=trans,size=1G
> 
> # Serve the data, with dropold (can be removed):
> 
> empty/empty --trans=localhost --server=trans:8000,size=1G,dropold=10s
> 
> # 'Analysis' (at 1 MB/s).
> # Look at the data (buffer times)
> # Note: dd is used here to strip the 16-byte transport protocol startup
> # message.
> 
> nc localhost 8000 \
>    | mbuffer -r 1M -q \
>    | dd bs=16 skip=1 \
>    | empty/empty --file=- --print-buffer
> 
> ---
> 
> Note: the buffer times only relate to the middle server, not the original
> data generation.  But in this case that is good enough to show the
> difference.  If 'dropold' is not used in the middle instance, the last
> instance (doing --print-buffer) will be seen to lag further and further.
> 
> 
> Cheers,
> Håkan
> 
>

