-
You are free to use whatever buffers you want in the receive ring, or deal with starvation. Starvation will terminate your multishot; you then just re-arm it when you are ready to do so. That can be your throttle point ("I'm too slow at dealing with receives, my pool must be too small"). Might be nicer to simply replenish the receive pool with a new buffer when you hang on to one, and then either free the one you hung on to when the `~borrowed_buf()` destructor is invoked, or recycle it back into the receive side if the pool is still low. Doing something like that would mean you need to know which buffer each BID refers to, whereas strict recycling is always basic pointer math to find a buffer from a BID, or vice versa.
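For the strict-recycling case, a minimal sketch of that pointer math, using liburing's buf-ring helpers (the pool layout, sizes, and names here are illustrative, not anything liburing provides):

```cpp
#include <liburing.h>

constexpr unsigned NUM_BUFS = 256;   // ring entries, must be a power of two
constexpr unsigned BUF_SIZE = 4096;

struct buf_pool {
    io_uring_buf_ring *br;     // set up via io_uring_setup_buf_ring()
    char              *slab;   // NUM_BUFS * BUF_SIZE contiguous bytes

    // BID <-> address is plain pointer math over the contiguous slab.
    char *addr_of(unsigned short bid) { return slab + bid * BUF_SIZE; }
    unsigned short bid_of(const char *p) {
        return static_cast<unsigned short>((p - slab) / BUF_SIZE);
    }

    // Hand a buffer back to the ring once the application is done with it.
    void recycle(unsigned short bid) {
        io_uring_buf_ring_add(br, addr_of(bid), BUF_SIZE, bid,
                              io_uring_buf_ring_mask(NUM_BUFS), 0);
        io_uring_buf_ring_advance(br, 1);
    }
};
```

The replenish-with-fresh-buffers variant gives up that arithmetic: once BIDs can point at arbitrary allocations, you need a BID-to-pointer table alongside the ring.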
-
In general, getting a multishot receive terminated because you can't keep up is perfectly fine, and expected. If the sender is flooding you with data quicker than you can deal with it, that will ALWAYS happen. Dealing with that situation (e.g. deferring until some processing and sends have been done) and then re-arming the multishot receive when you are ready for it should be a perfectly normal flow.
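A minimal sketch of that re-arm flow, assuming a buffer group `bgid` has already been registered; the handler name is made up:

```cpp
#include <liburing.h>

// Called per CQE from the completion loop; sockfd/bgid belong to this connection.
void on_recv_cqe(io_uring *ring, io_uring_cqe *cqe, int sockfd, int bgid) {
    // ... consume cqe->res bytes from the CQE's selected buffer here ...

    if (!(cqe->flags & IORING_CQE_F_MORE)) {
        // The multishot terminated (e.g. -ENOBUFS on buffer starvation).
        // Replenish the ring and/or finish pending work, then re-arm.
        io_uring_sqe *sqe = io_uring_get_sqe(ring);  // sketch: assume SQ space
        io_uring_prep_recv_multishot(sqe, sockfd, nullptr, 0, 0);
        sqe->flags |= IOSQE_BUFFER_SELECT;
        sqe->buf_group = bgid;
        io_uring_submit(ring);
    }
}
```

While `IORING_CQE_F_MORE` is set, the multishot is still armed; once a CQE arrives without it, no further completions will be posted until a new multishot recv is submitted.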
-
Interesting. This has actually given me a lot to stew on. For some reason, I had created these really artificial restrictions around the buf ring structure. In reality, you're right: I can do all sorts of cool stuff, like replacing buffers I borrow. For now, though, this is more than I want to deal with, so I'll stick with a naive-but-working implementation and stew on better, more elegant solutions later. There's no reason waiting for a great solution should hold up development.
-
Also, TIL there's |
-
Added some man page additions (and SEE ALSO links) to hopefully make this less of an issue in the future.
-
I have an application that uses multishot recv with TCP. Everything is working great, but now I'm trying to layer TLS on top of it and I'm having some trouble coming up with an implementation.

This is a C++ application, so what I do for plain TCP is hand the user a class that temporarily owns one of the buffers from the buf ring. The class' destructor is where I return the buffer back to the `io_uring_buf_ring`. This enables users to inspect the buffer contents without needing to allocate and `memcpy()` out the contents, but it does mean the buffer isn't available to the ongoing multishot recv.
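A simplified sketch of the idea; the member layout here is illustrative, but the buf-ring calls are liburing's real ones:

```cpp
#include <liburing.h>
#include <span>

// RAII wrapper: owns one ring buffer for the lifetime of a received chunk.
class borrowed_buf {
public:
    borrowed_buf(io_uring_buf_ring *br, char *addr, unsigned cap,
                 unsigned len, unsigned short bid, int mask)
        : br_(br), addr_(addr), cap_(cap), len_(len), bid_(bid), mask_(mask) {}

    borrowed_buf(const borrowed_buf &) = delete;
    borrowed_buf &operator=(const borrowed_buf &) = delete;

    // Users inspect the received bytes in place; no allocation or memcpy().
    std::span<const char> data() const { return {addr_, len_}; }

    // Only on destruction does the buffer go back to the ring, so it stays
    // unavailable to the ongoing multishot recv until then.
    ~borrowed_buf() {
        io_uring_buf_ring_add(br_, addr_, cap_, bid_, mask_, 0);
        io_uring_buf_ring_advance(br_, 1);
    }

private:
    io_uring_buf_ring *br_;
    char              *addr_;
    unsigned           cap_;   // full buffer size, re-registered with the ring
    unsigned           len_;   // bytes actually received (cqe->res)
    unsigned short     bid_;
    int                mask_;  // io_uring_buf_ring_mask(ring_entries)
};
```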
For TLS, I'm not so sure what the right approach is. TLS transforms the data received into the buffers and needs a place to write the plaintext to. Right now it's easy to just allocate a buffer and keep copying into that, but I've also noted that for TLS the plaintext is smaller than the received records, so theoretically one can write the plaintext output into the very buffers that are currently borrowed from the ring.

I would like that approach the most, but I'm concerned about buffer starvation. If we used the buffers from the buf ring to store the output plaintext, it'd mean we must delay all calls to `io_uring_buf_ring_add()` and `io_uring_buf_ring_advance()` until the user was done with the plaintext, where "done" is defined as: the `~borrowed_buf()` destructor has run. More plainly put, with the way TLS is modeled, we have an input buffer sequence and an output buffer sequence we can store in place. This would mean the full output is available, but only after receiving the appropriate number of CQEs, which isn't deterministic.

In practice, how should application developers handle `io_uring_buf_ring`-style code when it comes to actually processing input data in a meaningful way? Now that I've written it all out, I think just allocating and copying is the best.