From g.weber at hi-jena.gsi.de  Tue Apr  8 11:11:44 2025
From: g.weber at hi-jena.gsi.de (Weber, Guenter Dr.)
Date: Tue, 8 Apr 2025 09:11:44 +0000
Subject: [subexp-daq] dropping of data to analysis that cannot keep up
In-Reply-To:
References:
Message-ID: <8baa75f2a8c14b63a1413cfc017a76cc@hi-jena.gsi.de>

Dear friends,

sorry for coming up with a very basic question: how can I define different
trigger types and make the readout procedure of a specific module dependent
on the type of trigger?

Example:

I have two different trigger types, 1 and 2. To let the DAQ know which type
of trigger fired, they are connected to input channels 1 and 2 of our
VULOM4B. Now I would like to know, in the readout of the SIS3316 module,
which type triggered the current event. How do I do this?

Thank you very much!

Best greetings from Jena
Günter

P.S. As I got distracted by some hardware issues in the lab, we did not have
a proper hand-over of our modifications to the SIS3316 code in Nurdlib. Have
you in the meantime taken these modifications over into the main branch, or
what is the situation?

________________________________
From: subexp-daq on behalf of Håkan T Johansson
Sent: Sunday, 16 February 2025 11:33:35
To: Bajzek, Martin
Cc: Hubbard, Nicolas James Dr.; subexp-daq at lists.chalmers.se
Subject: [subexp-daq] dropping of data to analysis that cannot keep up

Hi,

when the point is reached where the DAQ is faster than the online analysis,
we face the issue of somehow dropping old data.

Left with default settings, ucesb is not really helpful.  Old data will
pile up in the '--server' until that is full, and only then be dropped.  If
the server is configured with a large amount of memory for buffers
('--server' option 'size='), this can be significant.  This leads to the
online analysis lagging reality, losing the 'online' feeling and the useful
rapid feedback on changing conditions...

There is a 'dropold=N' (seconds) option which will not serve buffers to
the client that are older than the specified time.  It may be reasonable to
set this to at most two times the spill length, if the beam has a pulse
structure.  Otherwise a few seconds ought to be enough.

A dry-run demo (3 instances), using the 'mbuffer' program to limit rates:

---

# Generate data (at 10 MB/s):

file_input/empty_file --lmd \
   | mbuffer -r 10M -q \
   | empty/empty --file=- --server=trans,size=1G

# Serve the data, with dropold (can be removed):

empty/empty --trans=localhost --server=trans:8000,size=1G,dropold=10s

# 'Analysis' (at 1 MB/s).
# Look at the data (buffer times)
# Note: dd is used here to strip the 16-byte transport protocol startup
# message.

nc localhost 8000 \
   | mbuffer -r 1M -q \
   | dd bs=16 skip=1 \
   | empty/empty --file=- --print-buffer

---

Note: the buffer times only relate to the middle server, not to the original
data generation.  But in this case that is good enough to show the
difference.  If 'dropold' is not used in the middle instance, then the last
instance (doing --print-buffer) will be seen to lag further and further.

Cheers,
Håkan

From f96hajo at chalmers.se  Tue Apr  8 11:23:07 2025
From: f96hajo at chalmers.se (Håkan T Johansson)
Date: Tue, 8 Apr 2025 11:23:07 +0200
Subject: [subexp-daq] dropping of data to analysis that cannot keep up
In-Reply-To: <8baa75f2a8c14b63a1413cfc017a76cc@hi-jena.gsi.de>
References: <8baa75f2a8c14b63a1413cfc017a76cc@hi-jena.gsi.de>
Message-ID:

Dear Günter,

nice to hear from you!
The f_user function would get the trigger, so you should see it at least
there.  How to get that into nurdlib is, I think, a question for Hans...

Curiosity: in what way do you want to do a different readout depending on
the trigger that happened?

I also think the SIS3316 code modifications you had have been merged.
Could you please try the current version and see if that works?

Best regards,
Håkan
From g.weber at hi-jena.gsi.de  Tue Apr  8 11:44:30 2025
From: g.weber at hi-jena.gsi.de (Weber, Guenter Dr.)
Date: Tue, 8 Apr 2025 09:44:30 +0000
Subject: [subexp-daq] dropping of data to analysis that cannot keep up
In-Reply-To:
References: <8baa75f2a8c14b63a1413cfc017a76cc@hi-jena.gsi.de>,
Message-ID: <5603c03aeabd4f02ab1eacbca2cff6b9@hi-jena.gsi.de>

Dear Håkan,

thank you for the quick reply.

The SIS3316 modules are digitizers which we want to use not only for
recording pulses from the detector (trigger type 1), but also for
monitoring the baseline behaviour of the signal (trigger type 2).

For type 1 we look into the metadata of each SIS3316 channel and only read
out the 2^14 samples of the detector signal trace if there is a pulse
(indicated by the flag in the metadata).  For type 2 we would like to avoid
this selection and read out all the traces.

Best greetings
Günter

----------------
Günter Weber
Helmholtz-Institut Jena
Fröbelstieg 3
07743 Jena
Germany

Phone: +49-3641-947605

www.hi-jena.de

GSI Helmholtzzentrum für Schwerionenforschung
Planckstrasse 1
64291 Darmstadt
Germany

www.gsi.de
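A minimal illustrative sketch (in C, not taken from nurdlib) of the kind of
trigger-dependent readout described above, assuming the readout code is
handed the trigger number (as f_user would see it); the helper names and the
PULSE_FLAG bit are placeholders for whatever the real SIS3316 access layer
provides:

---

/* Illustrative sketch only -- not nurdlib code. */
#include <stddef.h>
#include <stdint.h>

#define N_CHANNELS 16
#define PULSE_FLAG 0x1u  /* assumed "pulse present" bit in the metadata */

/* Placeholder helpers standing in for the real module access layer. */
static uint32_t get_channel_metadata(unsigned ch) { (void)ch; return 0; }
static size_t copy_trace(unsigned ch, uint32_t *dst) { (void)ch; (void)dst; return 0; }

/* 'trigger' is the trigger number of the current event. */
size_t readout_sis3316(unsigned trigger, uint32_t *dst)
{
	size_t words = 0;
	unsigned ch;

	for (ch = 0; ch < N_CHANNELS; ++ch) {
		uint32_t meta = get_channel_metadata(ch);

		/* Trigger 1: skip channels without a flagged pulse.
		 * Trigger 2 (baseline): read every trace. */
		if (1 == trigger && !(meta & PULSE_FLAG))
			continue;
		words += copy_trace(ch, dst + words);
	}
	return words;
}

---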
From hans.tornqvist at chalmers.se  Wed Apr  9 14:47:26 2025
From: hans.tornqvist at chalmers.se (Hans Törnqvist)
Date: Wed, 9 Apr 2025 12:47:26 +0000
Subject: [subexp-daq] dropping of data to analysis that cannot keep up
In-Reply-To: <5603c03aeabd4f02ab1eacbca2cff6b9@hi-jena.gsi.de>
References: <8baa75f2a8c14b63a1413cfc017a76cc@hi-jena.gsi.de>, <5603c03aeabd4f02ab1eacbca2cff6b9@hi-jena.gsi.de>
Message-ID:

Dear Günter,

Let's see if I can remember how this is done with tags...

Modules can have several tags, and each tag can have its soft counter
increased.  Nurdlib keeps track of the relation between tags and module
event counters to make sure they're all in sync (to be precise, there's a
counter per existing combo of tags, not per tag).

For example, in main.cfg:

TAGS("1")
STRUCK_SIS3316(...) {...}

TAGS("1", "2")
STRUCK_SIS3316(...) {...}

Both modules have tag "1", only the 2nd module has tag "2".

Both r3bfuser and the minimal nurdlib fuser bind the tag names "1".."15" to
the trigger number (around lines 250 and 308 in my latest nurdlib version).
So, if trigger 2 has fired, the fuser will tell nurdlib to increment the
nurdlib-internal soft counter for tag "2", and the event counters of all
modules with tag "2" are expected to have incremented.

Note that this does not affect the readout logic.  Counters and payloads of
all modules are still read on every readout, and what's supposed to be
unchanged or empty must remain so.

It's not a commonly requested feature and I have not tested it for some
time, I'm crossing my fingers that it works for you (:

Cheers,
Hans
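As a purely hypothetical illustration (not from the thread) of the syntax
above applied to the single-digitizer case discussed here, with a
placeholder VME address, a main.cfg entry might look like:

---

TAGS("1", "2")
STRUCK_SIS3316(0x31000000)
{
	...
}

---

A module declared like this carries both tags, so nurdlib would expect its
event counter to have incremented for trigger 1 as well as for trigger 2,
while a module tagged only with "1" would be checked against trigger 1
alone.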
From g.weber at hi-jena.gsi.de  Fri Apr 11 17:06:42 2025
From: g.weber at hi-jena.gsi.de (Weber, Guenter Dr.)
Date: Fri, 11 Apr 2025 15:06:42 +0000
Subject: [subexp-daq] dropping of data to analysis that cannot keep up
In-Reply-To:
References: <8baa75f2a8c14b63a1413cfc017a76cc@hi-jena.gsi.de>, <5603c03aeabd4f02ab1eacbca2cff6b9@hi-jena.gsi.de>,
Message-ID:

Dear Hans,

thank you for the explanation.

Just to be sure: as everything starts with the VULOM4B, the following lines
in vulom.trlo should set the VULOM to a state where it accepts 5 types of
triggers on the input channels 1 to 5, right?

SECTION(module_trigger)
{
  all_or_mask(1) <= ECL_IN(1) | ECL_IN(2) | ECL_IN(3) | ECL_IN(4) | ECL_IN(5);

  TRIG_LMU_AUX(1) <= ALL_OR(1);

  TRIG_LMU_OUT(1) <= TRIG_LMU_AUX(1);
}

If this is fine, then after Easter I will start to see if/how I can use
this information during readout of the modules.

Best greetings
Günter
From hans.tornqvist at chalmers.se  Mon Apr 14 18:44:08 2025
From: hans.tornqvist at chalmers.se (Hans Törnqvist)
Date: Mon, 14 Apr 2025 16:44:08 +0000
Subject: [subexp-daq] dropping of data to analysis that cannot keep up
In-Reply-To:
References: <8baa75f2a8c14b63a1413cfc017a76cc@hi-jena.gsi.de>, <5603c03aeabd4f02ab1eacbca2cff6b9@hi-jena.gsi.de>,
Message-ID:

Dear Günter,

That looks fine if I understand what you want to do.  This will only fire
trigger 1 if any module sends a trigger, which is fine, but you asked about
different readout for different triggers, so I'm supposing that at some
point there will be another trigger configured :)

Cheers,
Hans
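Purely as an illustration of what such a second trigger might look like
(this is an assumption, not a tested configuration: it follows the pattern
of the block above and assumes that TRIG_LMU_OUT(2) maps to trigger 2 in
the same way TRIG_LMU_OUT(1) does to trigger 1):

---

SECTION(module_trigger)
{
  all_or_mask(1) <= ECL_IN(1);
  all_or_mask(2) <= ECL_IN(2);

  TRIG_LMU_AUX(1) <= ALL_OR(1);
  TRIG_LMU_AUX(2) <= ALL_OR(2);

  TRIG_LMU_OUT(1) <= TRIG_LMU_AUX(1);
  TRIG_LMU_OUT(2) <= TRIG_LMU_AUX(2);
}

---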
From f96hajo at chalmers.se  Sun Apr 20 12:34:52 2025
From: f96hajo at chalmers.se (Håkan T Johansson)
Date: Sun, 20 Apr 2025 12:34:52 +0200
Subject: [subexp-daq] Accessing subevents' procID inside UNPACK_EVENT_USER_FUNCTION
In-Reply-To: <5f75dad1bc9c4b4ba53650d0cd3b317b@gsi.de>
References: , <0329fa51-ac54-f237-77b7-32b4a2d3d820@chalmers.se> <89aa4118e69c46e285e136d648c49bf2@gsi.de>, <004a60d3-2380-3bff-bb32-8559862c47ac@chalmers.se>, <7975db365b2640fc9fd2f5ed3e72cfbe@gsi.de> <5f75dad1bc9c4b4ba53650d0cd3b317b@gsi.de>
Message-ID:

Hi!

Thanks to some insistent requesting by Martin, ucesb can now finally give
away the subevent header values inside a subevent.

When a parameter in a SUBEVENT declaration is one of the subevent header
values (type, subtype, control, subcrate/crate, procid) and it is not an
argument being matched in the EVENT declaration, then the header value is
passed along.  (If matched, the fixed value is sent.)  It can then be
remembered using ENCODE for a MEMBER variable.

One still would need to be careful if several subevents match such a
'catch-all' specification.

Example:

---

SUBEVENT(SUBEV_WITH_HEADER_PARAM, subcrate, procid)
{
  MEMBER(DATA32 value32);

  ENCODE(value32, (value = procid));
}

EVENT
{
  ...

  whp = SUBEV_WITH_HEADER_PARAM(type=76);
}

---

Side-note: the xtst/xtst.spec unpacker is not an actual experiment.  It is
used to test the specification parser and code generator, so it may give
ideas of what the parser might eat.

Cheers,
Håkan
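As a hypothetical illustration of the matched vs. unmatched cases described
above (the second subevent type value and the member names are made up),
the EVENT side could distinguish the two behaviours like this:

---

EVENT
{
  // procid is matched here, so the fixed value 10 is what gets passed:
  fixed  = SUBEV_WITH_HEADER_PARAM(type=76, procid=10);

  // procid is not matched here, so the actual subevent header value is
  // passed along and ends up in value32 via the ENCODE above:
  passed = SUBEV_WITH_HEADER_PARAM(type=77);
}

---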