RabbitMQ output plugin does not engage backpressure when queue is full #58
Comments
Any chance any of the maintainers (@edmocosta, @mashhurs perhaps?) could confirm the bug report or at least acknowledge it? Thanks in advance!
Just realised that there is an open PR that possibly addresses this very same issue: #57. Any reason why this is not getting any traction from the Logstash/Logstash Plugins teams?
@IvanRibakov can you share debug or trace level logs? The attached info-level logs only show activity up until the pipeline started; the rest is dark.
Hi @mashhurs. I'm not sure if you wanted more details than what's in the P.S., so to clarify: I recently learned about RabbitMQ's Blocked Connections feature, and I want to highlight specifically that it is NOT the kind of back-pressure I'm talking about. Blocked Connection events are published based on RabbitMQ's resource alarms (disk space, RAM, CPU), while I'm trying to engage back-pressure based on the RabbitMQ queue depth.
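For contrast, the resource alarms that trigger blocked-connection notifications are broker-wide thresholds along these lines (illustrative values only, not taken from the setup in this issue):

```
# rabbitmq.conf: thresholds for the resource alarms that raise
# "blocked connection" notifications (memory and free disk space).
vm_memory_high_watermark.relative = 0.4
disk_free_limit.absolute = 2GB
```

A queue's x-max-length / x-overflow limit, by contrast, is evaluated per queue and never raises these alarms, which is why blocked connections don't help with queue-depth back-pressure.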
Hi @mashhurs, thanks for reaching out the other day. Did you get a chance to look at the logs? Do they provide the information you were looking for?
Hi @mashhurs, do you happen to have any news? Am I even barking up the right tree? Can you confirm if the behaviour I'm expecting is even meant to be supported by Logstash?
Hi @mashhurs, is there anything I can do to help produce at least some answer on this issue? As I stated before, from the current documentation it's not very clear whether the backpressure I'm talking about is even meant to be supported, so getting some clarity on that would already have value for me (and for anyone else wondering the same). I thought the "Steps to reproduce" provide a fairly comprehensive way to reproduce the issue, but let me know if I can somehow improve it.
Hi @IvanRibakov, I'm not sure the behavior you are expecting is what the configuration would indicate. I think the expected behavior for reject-publish is this: when the queue reaches its maximum length, the most recently published messages are discarded, and if publisher confirms are in use the publisher is informed of the reject via a basic.nack.
I believe the observed behavior stated is consistent with this, given the example provided. If I understand this correctly, the reject-publish argument doesn't actually apply backpressure; it merely prioritizes the older messages in the queue by not publishing new messages. My recommendation is to test this with reject-publish-dlx, to dead-letter the rejected messages so you can determine how the queue is handled.
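A minimal sketch of how that test could be wired into the queue definition (the dead-letter exchange name and the argument values here are assumptions, not taken from the issue):

```json
{
  "name": "i3logs",
  "vhost": "/",
  "durable": true,
  "auto_delete": false,
  "arguments": {
    "x-max-length": 1000,
    "x-overflow": "reject-publish-dlx",
    "x-dead-letter-exchange": "i3logs-dlx"
  }
}
```

With reject-publish-dlx, messages rejected because of the length limit are dead-lettered to that exchange instead of being dropped, so the rejects become observable and countable.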
Hi @flexitrev, thanks for joining the conversation.
I'm assuming the above is from the RabbitMQ perspective? If so, RabbitMQ's behaviour was clear to me. What I haven't seen (missed?) is any Logstash documentation that explains what happens to ingested events when they fail to be delivered.
I'm inclined to agree with the above based on everything I've seen and read since I created this issue. The only thing that still puzzles me a bit is why someone needed to apply backpressure based on RabbitMQ resource availability, but not in other scenarios that can lead to data loss (like queue overflow).
Logstash information:
- Logstash version: logstash:8.12.2 (Docker image)
- OS version (Docker host):
Description of the problem including expected versus actual behavior:
I have configured the queue with the following arguments:
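A sketch of the presumed arguments (the originals weren't captured here; reject-publish is taken from the discussion above and the 1000 limit is inferred from the repro steps below):

```json
{
  "x-max-length": 1000,
  "x-overflow": "reject-publish"
}
```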
Expected behaviour: when the queue is full, the rabbitmq output applies backpressure to the Logstash pipeline (publishing blocks or is retried) so that no events are lost.
Observed behaviour: once the queue reaches its maximum length, newly published events are rejected and lost; the pipeline keeps processing as if nothing happened and no backpressure is applied.
Related materials:
Steps to reproduce:
1. Run docker compose up.
2. Open the RabbitMQ management UI (http://<container_ip>:15672/#/queues/%2F/i3logs), browse the i3logs queue stats and confirm that the queue has 1000 messages in the "Ready" state.
3. Get messages from the queue via the management UI (Ack mode: Reject requeue false, Messages: 100).
Docker compose service definition:
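The compose file contents weren't captured; a minimal sketch of what such a setup could look like (image tags, service names and mount paths are assumptions):

```yaml
# docker-compose.yml (sketch): RabbitMQ with the management UI plus Logstash 8.12.2
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"   # management UI used in the repro steps
    volumes:
      - ./rabbitmq_conf/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
      - ./rabbitmq_conf/definitions.json:/etc/rabbitmq/definitions.json:ro
  logstash:
    image: docker.elastic.co/logstash/logstash:8.12.2
    depends_on:
      - rabbitmq
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
```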
rabbitmq_conf/definitions.json
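This file wasn't captured either; a sketch of a definitions file matching the setup described above (exchange and binding names are assumptions, users/permissions omitted for brevity):

```json
{
  "vhosts": [{ "name": "/" }],
  "exchanges": [
    { "name": "logs", "vhost": "/", "type": "direct",
      "durable": true, "auto_delete": false, "internal": false, "arguments": {} }
  ],
  "queues": [
    { "name": "i3logs", "vhost": "/", "durable": true, "auto_delete": false,
      "arguments": { "x-max-length": 1000, "x-overflow": "reject-publish" } }
  ],
  "bindings": [
    { "source": "logs", "vhost": "/", "destination": "i3logs",
      "destination_type": "queue", "routing_key": "i3logs", "arguments": {} }
  ]
}
```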
rabbitmq_conf/rabbitmq.conf
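Presumably this file does little more than load the definitions at boot; a sketch (the path is an assumption matching the mount in the compose sketch above):

```
# rabbitmq.conf (sketch): import the queue/exchange definitions at node boot
load_definitions = /etc/rabbitmq/definitions.json
```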
logstash.conf
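Finally, a sketch of a pipeline that keeps the queue full (the generator input, exchange name and routing key are assumptions; the rabbitmq output block is the part relevant to this report):

```
# logstash.conf (sketch): generate events continuously and publish them
# to the "logs" exchange, routed to the length-limited i3logs queue.
input {
  generator {
    count => 0            # 0 = generate events indefinitely
  }
}

output {
  rabbitmq {
    host          => "rabbitmq"
    exchange      => "logs"
    exchange_type => "direct"
    key           => "i3logs"
    durable       => true   # declare the exchange as durable
    persistent    => true   # publish messages as persistent
  }
}
```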
Provide logs (if relevant):
INFO logs