<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text --><style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
</head>
<body>
<meta content="text/html; charset=UTF-8">
<style type="text/css" style="">
<!--
p
{margin-top:0;
margin-bottom:0}
-->
</style>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" dir="ltr" style="font-size:14pt; color:#000000; font-family:Calibri,Helvetica,sans-serif">
<p>Dear <span>Håkan</span>,</p>
<p><br>
</p>
<p>Thank you very much for the hint about the JSON dump.</p>
<p><br>
</p>
<p>We have now written a Python script that streams the JSON output of UCESB/HBOOK/STRUCT_WRITER. It looks like this:<br>
</p>
<p><br>
</p>
<p></p>
<div><span style="font-size:10pt">import subprocess</span><br>
<span style="font-size:10pt"> </span><br>
<span style="font-size:10pt">hbookproc = subprocess.Popen(r'../../ucesb/jena_test/jena_test stream://localhost:8001 --ntuple=UNPACK,STRUCT,- | ../../ucesb/hbook/struct_writer - --dump=compact_json',</span><br>
<span style="font-size:10pt">                             executable=r'/bin/bash',</span><br>
<span style="font-size:10pt">                             shell=True,</span><br>
<span style="font-size:10pt">                             stdout=subprocess.PIPE)</span><br>
<span style="font-size:10pt">while not self._stop_event.is_set() and (currline := hbookproc.stdout.readline()):</span><br>
<span style="font-size:10pt">    # generate content to be streamed: one server-sent event per JSON line</span><br>
<span style="font-size:10pt">    msg = 'data: ' + currline.decode('utf-8') + '\n\n'</span><br>
<span style="font-size:10pt">    self._announcer.announce(msg)</span></div>
<br>
<p></p>
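<p>For completeness, below is a minimal sketch of how the '/listen' endpoint that serves these messages can look with Flask. The names here (MessageAnnouncer, announcer) are illustrative placeholders, not our exact code; the announcer simply fans each 'data: ...' message out to all connected clients via per-client queues.</p>
<p><br>
</p>
<div><span style="font-size:10pt">import queue</span><br>
<span style="font-size:10pt">import flask</span><br>
<span style="font-size:10pt"> </span><br>
<span style="font-size:10pt">app = flask.Flask(__name__)</span><br>
<span style="font-size:10pt"> </span><br>
<span style="font-size:10pt">class MessageAnnouncer:</span><br>
<span style="font-size:10pt">    # hands every connected client its own queue of pending SSE messages</span><br>
<span style="font-size:10pt">    def __init__(self):</span><br>
<span style="font-size:10pt">        self.listeners = []</span><br>
<span style="font-size:10pt">    def listen(self):</span><br>
<span style="font-size:10pt">        q = queue.Queue()</span><br>
<span style="font-size:10pt">        self.listeners.append(q)</span><br>
<span style="font-size:10pt">        return q</span><br>
<span style="font-size:10pt">    def announce(self, msg):</span><br>
<span style="font-size:10pt">        for q in self.listeners:</span><br>
<span style="font-size:10pt">            q.put(msg)</span><br>
<span style="font-size:10pt"> </span><br>
<span style="font-size:10pt">announcer = MessageAnnouncer()</span><br>
<span style="font-size:10pt"> </span><br>
<span style="font-size:10pt">@app.route('/listen')</span><br>
<span style="font-size:10pt">def listen():</span><br>
<span style="font-size:10pt">    def stream():</span><br>
<span style="font-size:10pt">        q = announcer.listen()</span><br>
<span style="font-size:10pt">        while True:</span><br>
<span style="font-size:10pt">            yield q.get()  # blocks until the reader thread announces the next event</span><br>
<span style="font-size:10pt">    return flask.Response(stream(), mimetype='text/event-stream')</span></div>
<br>
<p></p>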
<p>The client code looks like this:</p>
<p><br>
</p>
<div><span style="font-size:10pt">import sseclient</span><br>
<span style="font-size:10pt">import json</span><br>
<span style="font-size:10pt"> </span><br>
<span style="font-size:10pt">messages = sseclient.SSEClient('http://10.141.184.131:5000/listen')</span><br>
<span style="font-size:10pt"> </span><br>
<span style="font-size:10pt">for msg in messages:</span><br>
<span style="font-size:10pt"> #TODO customize json.loads to account for unchanging packet structure</span><br>
<span style="font-size:10pt"> try:</span><br>
<span style="font-size:10pt"> data = json.loads(msg.data.encode())</span><br>
<span style="font-size:10pt"> except Exception as e:</span><br>
<span style="font-size:10pt"> print(f"Got error: {msg.data}")</span><br>
<span style="font-size:10pt"> else:</span><br>
<span style="font-size:10pt"> print(data)</span></div>
<br>
<p></p>
<p>On the receiving end we now get the events in the following form:</p>
<p><br>
</p>
<p></p>
<div><span style="font-size:10pt">{'TRIGGER': 1, 'EVENTNO': 4271780, 'ts_wr_subsystem_id': 0, 'ts_wr_t1': 0, 'ts_wr_t2': 0, 'ts_wr_t3': 0, 'ts_wr_t4': 0, 'vme_header_failure': 2147483648, 'vme_header_continous_event_counter': 0, 'vme_header_time_stamp': 1712669650,
'vme_header_clock_counter_stamp': 0, 'vme_header_iped': 0, 'vme_header_multi_events': 0, 'vme_header_multi_trlo_ii_counter0': 0, 'vme_header_multi_scaler_counter0': 0, 'vme_header_multi_adctdc_counter0': 0, 'vme_timestamps_time_hi': 101985, 'vme_timestamps_time_lo':
1356225930, 'vme_scaler_n': 16, 'vme_scaler_nI': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], 'vme_scaler_data': [4380234, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]}
</span><br>
</div>
<br>
<p></p>
<p>This is really great. The only remaining question for now is whether there is a way to get the content of an array (in the above case the counts in the 16 scaler channels) without the additional array of scaler channel numbers. For the SIS3316 digitizers a single pulse trace can be tens of thousands of samples long, and we do not like the idea of always transferring, without necessity, an additional array that just contains the numbers from 1 to n.</p>
<p>Bottom line: is there an easy way to get rid of the '..._nI' part of the array object in the JSON output?</p>
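<p>If there is no built-in way, we can of course strip these companion arrays on the client after parsing, e.g. with a small helper like the sketch below (assuming every index array follows the '..._nI' naming seen above). But that only discards them after the transfer, so it does not address the actual concern of shipping long index arrays with every event.</p>
<p><br>
</p>
<div><span style="font-size:10pt">def drop_index_arrays(event):</span><br>
<span style="font-size:10pt">    # remove the '..._nI' channel-index companions, keep only the data arrays</span><br>
<span style="font-size:10pt">    return {key: value for key, value in event.items() if not key.endswith('_nI')}</span></div>
<br>
<p></p>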
<p><br>
</p>
<p>Thank you very much and best regards</p>
<p>Günter<br>
</p>
<p><br>
</p>
<br>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>Von:</b> subexp-daq <subexp-daq-bounces@lists.chalmers.se> im Auftrag von Håkan T Johansson <f96hajo@chalmers.se><br>
<b>Gesendet:</b> Donnerstag, 4. April 2024 23:08:07<br>
<b>An:</b> Discuss use of Nurdlib, TRLO II, drasi and UCESB.<br>
<b>Betreff:</b> Re: [subexp-daq] Question on UCESB</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt;">
<div class="PlainText"><br>
Dear Günter,<br>
<br>
the easiest way is probably to use the json dump from the ntuple writer.<br>
<br>
How to get there is not very well documented...:<br>
<br>
Example with dummy data:<br>
<br>
file_input/empty_file --lmd | \<br>
empty/empty --file=- --ntuple=STRUCT,- | \<br>
hbook/struct_writer - --dump=json<br>
<br>
It can also be operated in a 'server' mode for the 'external' data:<br>
<br>
file_input/empty_file --lmd | \<br>
empty/empty --file=- --ntuple=STRUCT,SERVER<br>
<br>
And then the dumper could be started like this:<br>
<br>
hbook/struct_writer localhost --dump=json<br>
<br>
Instead of json there is also compact_json, which produces less<br>
whitespace.<br>
<br>
Cheers,<br>
Håkan<br>
<br>
<br>
<br>
On Thu, 4 Apr 2024, Weber, Guenter Dr. wrote:<br>
<br>
> <br>
> Dear friends,<br>
> <br>
> <br>
> we now (think that we have) understood how *.spec files work. For a minimum<br>
> setup with just the VULOM (timestamp and 16 scaler channels) we compiled our<br>
> own UCESB example. The output of an event looks like this:<br>
> <br>
> <br>
> Event 203 Type/Subtype 10 1 Size 140 Trigger 1<br>
> SubEv ProcID 1 Type/Subtype 10 1 Size 24 Ctrl 9 Subcrate <br>
> 1<br>
> 00000200 03e1a48c 04e1e9dd 05e109e1 06e10000 f1a2000a<br>
> SubEv ProcID 1 Type/Subtype 20 2 Size 84 Ctrl 9 Subcrate <br>
> 1<br>
> 80000000 660ebf44 000009e1 e9dda48c 00000010 0001a871 00000000 00000000<br>
> 00000000 00000000 00000000 00000000 00000000 00000001 00000001 00000001<br>
> 00000001 00000001 00000001 00000001 00000001<br>
> <br>
> Event 203 Trigger 1<br>
> <br>
> .RAW.TIMESTAMP.VULOM.HI: 0x000009e1=2529<br>
> .RAW.TIMESTAMP.VULOM.LO: 0xe9dda48c=-371350388<br>
> .RAW.VULOM.SCALER[0]: 0x0001a871=108657<br>
> .RAW.VULOM.SCALER[1]: 0x00000000=0<br>
> .RAW.VULOM.SCALER[2]: 0x00000000=0<br>
> .RAW.VULOM.SCALER[3]: 0x00000000=0<br>
> .RAW.VULOM.SCALER[4]: 0x00000000=0<br>
> .RAW.VULOM.SCALER[5]: 0x00000000=0<br>
> .RAW.VULOM.SCALER[6]: 0x00000000=0<br>
> .RAW.VULOM.SCALER[7]: 0x00000000=0<br>
> .RAW.VULOM.SCALER[8]: 0x00000001=1<br>
> .RAW.VULOM.SCALER[9]: 0x00000001=1<br>
> .RAW.VULOM.SCALER[10]: 0x00000001=1<br>
> .RAW.VULOM.SCALER[11]: 0x00000001=1<br>
> .RAW.VULOM.SCALER[12]: 0x00000001=1<br>
> .RAW.VULOM.SCALER[13]: 0x00000001=1<br>
> .RAW.VULOM.SCALER[14]: 0x00000001=1<br>
> .RAW.VULOM.SCALER[15]: 0x00000001=1<br>
> <br>
> (produced with "--data --dump=RAW --print")<br>
> <br>
> We would now like to take the easiest possible route to transport the RAW<br>
> data to Python, where our main analysis lives. Unfortunately,<br>
> ext_data_client.h and the code behind it do not really feel inviting to be<br>
> converted into Python. Is there any other way to generate a data stream from<br>
> UCESB? So far, we have only had success with writing the data into a ROOT file<br>
> and then using uproot in Python to read the file. But this is no solution<br>
> for online analysis, where we would need a data stream.<br>
> <br>
> We also had a look at how Bastian did this with UCESB_IN (part of NUPELINE),<br>
> but we felt a bit overwhelmed. Ideally, we could access the data stream from<br>
> UCESB with Python code as simple as this:<br>
> <br>
> import socket<br>
> import sys<br>
> import numpy as np<br>
> <br>
> sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)<br>
> server_address = ("10.141.184.131", 8001)<br>
> print('connecting to %s port %s' % server_address, file=sys.stderr)<br>
> sock.connect(server_address)<br>
> print("Connected")<br>
> data = sock.recv(80)<br>
> print( data )<br>
> t = np.dtype('u4, u4, u8, (16)u4')  # for our test data: trigger type,<br>
> # event number, timestamp, 16 scaler channels<br>
> a = np.frombuffer(data, dtype=t)<br>
> sock.close()<br>
> <br>
> However, of course we just get 'a magic word' from UCESB, as we have not<br>
> implemented the correct protocol to access the data. In an ideal case, we<br>
> would be able to avoid implementing this protocol (or find an easy way to<br>
> do it).<br>
> <br>
> <br>
> <br>
> Thank you very much and best regards from Jena.<br>
> <br>
> <br>
> Günter<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> ----------------<br>
> <br>
> Günter Weber<br>
> <br>
> Helmholtz-Institut Jena<br>
> Fröbelstieg 3<br>
> 07743 Jena<br>
> Germany<br>
> Phone: +49-3641-947605<br>
> <a href="http://www.hi-jena.de">www.hi-jena.de</a><br>
> <br>
> GSI Helmholtzzentrum für Schwerionenforschung<br>
> Planckstrasse 1<br>
> 64291 Darmstadt<br>
> Germany<br>
> www.gsi.de<br>
> <br>
> ____________________________________________________________________________<br>
> From: subexp-daq &lt;subexp-daq-bounces@lists.chalmers.se&gt; on behalf of Håkan<br>
> T Johansson &lt;f96hajo@chalmers.se&gt;<br>
> Sent: Thursday, 4 April 2024 06:39:32<br>
> To: Discuss use of Nurdlib, TRLO II, drasi and UCESB.<br>
> Subject: Re: [subexp-daq] Question on UCESB<br>
> <br>
> On Wed, 3 Apr 2024, Weber, Guenter Dr. wrote:<br>
> <br>
> ><br>
> > Dear friends,<br>
> ><br>
> ><br>
> > we now had a brief look into UCESB and UPEXPS.<br>
> ><br>
> ><br>
> > Is our interpretation correct, that *.spec files are used for mapping<br>
> > between the raw data within an LMD event and "interpreted" data that is<br>
> > then used for further analysis?<br>
> <br>
> Yes. The .spec files contain the data format descriptions, and also the<br>
> mappings of channel names (in the SIGNAL statements).<br>
> <br>
> > If true, why does the SPEC folder contain only spec<br>
> > files for a few of the modules available in NURDLIB?<br>
> <br>
> The ucesb/spec/ directory contains files where I or users have sent me<br>
> patches/commits with those data format specifications.<br>
> <br>
> > Is it just the case<br>
> > that nobody found time yet or is there a design decision behind this?<br>
> <br>
> If users place / keep them elsewhere (like e.g. upexps) long-term, there is<br>
> not much I can do... :-)<br>
> <br>
> Not a design decision. Except that the stuff in (the generic spec/<br>
> directory) should not be experiment specific.<br>
> <br>
> > We are now wondering what is the best way to add a spec file for the new<br>
> > module that we added to NURDLIB.<br>
> <br>
> Sure! Yes, please!<br>
> <br>
> > Also, if this made sense, we could add a spec file for the<br>
> > STRUCK digitizers, which currently only exists within UPEXPS.<br>
> <br>
> Yes. But we also need to know where it came from, since ucesb is<br>
> publicly available, and just for good form we want to keep the license in<br>
> order. I do not want to make a mess of this, but to avoid issues down the<br>
> road.<br>
> <br>
> > To us it is not really clear where UCESB ends and UPEXPS begins. Could<br>
> > you explain what exactly the purpose of each package is? What is UPEXPS<br>
> > doing that could/should not be a part of UCESB?<br>
> <br>
> Generally, ucesb/ is (except for the fact that it has some (old) example<br>
> and test directories) not experiment-specific.<br>
> <br>
> upexps (or any other user repo) would contain the signal mappings for<br>
> sure.<br>
> <br>
> Some .spec files would likely be better to have somewhere under<br>
> ucesb/spec/<br>
> <br>
> Cheers,<br>
> Håkan<br>
> <br>
></div>
</span></font>
</body>
</html>