In attempts to take data at NP02 and NP04 with long readout windows (e.g. ~1 second) during the week of 13-17 Feb, we saw that the last N TriggerRecord sequence elements had very small sizes, and the ReadoutApp log files contained warning messages complaining that DataRequests could not be fulfilled because the data was no longer in the latency buffer. (These particular tests were run with many DataRequests per trigger; in this mode, each trigger results in a sequence of TriggerRecords instead of just one TR.)
Here are excerpts from Slack posts from Giovanna that summarize our understanding of what is happening:
I think that indeed the issue you see, when requests are split, is that for each link you (by default) have only 4 threads that can handle the requests in parallel. Since the sending is part of the request handling, those take non-negligible time, and by the time you get to the last requests you indeed risk not finding the data anymore. The best way to proceed, in my opinion, is to decouple the sending of fragments from the request handling, which is latency-critical. Basically this means going back to the "fragment sender" module, decoupled from the request handlers via an MPMC queue.
I just confirmed this with a run chopping the 1 second window: request handling gets progressively delayed, until we fall off the buffer.
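As a back-of-envelope illustration (the numbers here are hypothetical, not measured): if a 1 second window is chopped into 100 sub-requests on a link, and each synchronous send occupies a handler thread for ~50 ms, then 4 threads drain requests at roughly 80 per second, so the last requests are serviced more than a second after the trigger, by which point the oldest data has already been overwritten in the latency buffer.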
So, the point of this Issue is to have us consider adding back the fragment sender module inside the Readout Apps. A suitably sized queue between the DataLinkHandlers and the FragmentSender module will provide buffering so that the DLHs can quickly service the DataRequests (while the data is still in the latency buffer) and the FragmentSender can take whatever time it needs to send the data downstream to the TRBuilders. A sketch of this decoupling is shown below.
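Here is a minimal, self-contained sketch of the proposed decoupling. It is not the actual dunedaq code: the `FragmentQueue` class is a simple mutex-based stand-in for whatever MPMC queue the framework provides, and the `Fragment` struct and timings are purely illustrative.

```cpp
// Illustrative sketch only: request handlers do the latency-critical work
// (read the latency buffer, enqueue the fragment) and return immediately;
// a separate FragmentSender thread drains the queue and pays the cost of
// sending fragments downstream to the TRBuilders.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

struct Fragment { int seq; };  // hypothetical stand-in for a real fragment

// Stand-in for the MPMC queue between the DataLinkHandlers and the
// FragmentSender module.
class FragmentQueue {
public:
  void push(Fragment f) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(f)); }
    cv_.notify_one();
  }
  // Blocks until a fragment is available or shutdown() was called and the
  // queue has been fully drained; returns false in the latter case.
  bool pop(Fragment& f) {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return !q_.empty() || done_; });
    if (q_.empty()) return false;
    f = std::move(q_.front());
    q_.pop();
    return true;
  }
  void shutdown() {
    { std::lock_guard<std::mutex> lk(m_); done_ = true; }
    cv_.notify_all();
  }
private:
  std::queue<Fragment> q_;
  std::mutex m_;
  std::condition_variable cv_;
  bool done_ = false;
};

int main() {
  FragmentQueue queue;

  // FragmentSender: drains the queue at its own pace; the sleep models the
  // non-negligible send time that previously blocked the request handlers.
  std::thread sender([&] {
    Fragment f;
    while (queue.pop(f)) {
      std::this_thread::sleep_for(std::chrono::milliseconds(5));
      std::printf("FragmentSender: sent fragment %d\n", f.seq);
    }
  });

  // Stand-in for the DLH request handlers: servicing a request is now just
  // a fast latency-buffer read plus an enqueue, so the data is captured
  // while it is still in the buffer.
  for (int seq = 0; seq < 20; ++seq) {
    queue.push(Fragment{seq});
  }

  queue.shutdown();
  sender.join();
}
```

The key property of this arrangement is that a request handler's work ends at the enqueue, so its latency no longer depends on downstream network speed; the queue absorbs the back-pressure, provided it is sized for the worst-case number of in-flight fragments.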
One question is whether we want to instantiate the FragmentSender all of the time or just when we're running with long readout windows.