[Problem] How can I improve tempo query performance #4239
1000 spans/second is quite low and I'm surprised you're seeing performance issues. Query performance can be quite difficult to debug remotely, but we will do our best. Can you provide an example of a TraceQL query that is having issues and the corresponding query-frontend logs? We can start there and try to figure out what is causing issues.
Hi @joe-elliott
Here is my query-frontend log
In this example, I only query by microservice name. And here is the streaming result my Grafana frontend shows. I can give more information if you need.
Any update?
Still waiting for a reply.
Howdy, apologies for letting this sit. Been overwhelmed a bit by repo activity lately.
This is a brutal query. The obvious issue is the number of regexes. The less obvious issue is that Tempo is really fast at evaluating a bunch of &&'ed conditions and way slower at mixed sets of conditions. The good news:
Expect this query to improve in 2.7
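To illustrate the point above about &&'ed conditions versus mixed conditions, here is a hedged TraceQL sketch (the attribute names and values are illustrative, not taken from the original query):

```
# Fast path: one spanset filter where every condition is an
# exact match joined by && — Tempo can evaluate these together.
{ resource.service.name = "checkout" && span.http.request.method = "GET" && span.http.response.status_code = 500 }

# Slower: regex matchers and ||'ed conditions force broader evaluation.
{ resource.service.name =~ "check.*" || span.http.request.method = "GET" }
```

Where possible, prefer exact-match conditions combined with && over regexes and ||.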
Thanks Joe.
I am using the latest version of tempo-distributed (v2.6.1). My data volume is approximately 1,000 records per second, with a retention period of 21 days totaling around 900 GB. When performing TraceQL queries, I'm encountering significant performance bottlenecks, especially when querying span or resource attributes.
I followed the guidance in this article:
https://grafana.com/docs/tempo/latest/operations/backend_search/
Here are the improvements I've implemented so far:
.http.request.method = "GET" → span.http.request.method = "GET"
However, despite these adjustments, the performance is still below acceptable levels. Are there any additional optimizations I could make?
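For reference, the scoping change above replaces an unscoped attribute lookup, which Tempo must check against both span and resource attributes, with an explicitly scoped one. A few more examples in the same spirit (attribute names are illustrative):

```
# Unscoped: both the span and resource attribute columns are searched
{ .http.request.method = "GET" }

# Scoped: only the named scope is searched
{ span.http.request.method = "GET" }
{ resource.service.name = "my-service" }
```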
My Helm chart values are as follows: