[subexp-daq] dropping of data to analysis that cannot keep up
Håkan T Johansson
f96hajo at chalmers.se
Sun Feb 16 11:33:35 CET 2025
Hi,
when the DAQ delivers data faster than the online analysis can keep up,
we face the issue of somehow dropping old data.
Left with the default settings, ucesb is not really helpful here. Old data
will pile up in the '--server' until it is full, and only then be dropped.
If the server is configured with a large amount of memory for buffers
('--server' option 'size='), this backlog can be significant. The online
analysis then lags reality, losing the 'online' feeling and the useful
rapid feedback on changing conditions...
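As a back-of-the-envelope sketch of how bad the lag gets, using the
illustrative rates from the demo further down (10 MB/s in, 1 MB/s out,
1 GiB of buffer; the numbers are examples, not measurements):

```shell
# How far behind does a slow client fall when nothing is dropped?
buf_mb=1024; in_rate=10; out_rate=1
# Buffer fills at the difference of the two rates...
fill_secs=$(( buf_mb / (in_rate - out_rate) ))
# ...and once full, the oldest buffered data is buf_mb/out_rate old.
lag_secs=$(( buf_mb / out_rate ))
echo "buffer full after ~${fill_secs} s; analysis lags ~${lag_secs} s"
```

So with these rates the 'online' display ends up showing data from a
quarter of an hour ago.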
There is a 'dropold=N' (seconds) option which makes the server not serve
buffers older than the specified time to clients. If the beam has a pulse
structure, it may be reasonable to set this to at most twice the spill
length. Otherwise, a few seconds ought to be enough.
A dry-run demo (3 instances), using the 'mbuffer' program to limit rates:
---
# Generate data (at 10 MB/s):
file_input/empty_file --lmd \
| mbuffer -r 10M -q \
| empty/empty --file=- --server=trans,size=1G
# Serve the data, with dropold (can be removed):
empty/empty --trans=localhost --server=trans:8000,size=1G,dropold=10s
# 'Analysis' (at 1 MB/s).
# Look at the data (buffer times)
# Note: dd is used here to strip the 16-byte transport protocol startup
# message.
nc localhost 8000 \
| mbuffer -r 1M -q \
| dd bs=16 skip=1 \
| empty/empty --file=- --print-buffer
---
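As an aside, the 16-byte skip done by 'dd' in the last pipeline can be
tried in isolation with plain shell tools (the 16 'X...' bytes here are
just a placeholder, not the real transport protocol startup message):

```shell
# dd with bs=16 skip=1 discards the first 16-byte block and
# passes everything after it through unchanged.
printf 'XXXXXXXXXXXXXXXXpayload' | dd bs=16 skip=1 2>/dev/null
# prints: payload
```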
Note: the buffer times only relate to the middle server, not to the
original data generation. But in this case that is good enough to show the
difference.
If 'dropold' is not used in the middle instance, then the last instance
(doing --print-buffer) will be seen to lag further and further.
Cheers,
Håkan