Description
The --interval parameter is supposed to define the sampling interval. However, it appears this functionality is missing: the only thing the parameter does is multiply the number of samples by that value and display the result as CPU time. The actual sampling interval seems to stay the same regardless of the value, so using a smaller sampling interval makes the reported CPU time wrong. The % usage also appears to be wrong with a high sampling interval. For testing, I profiled my plugin and compared the results for a thread of my choice:

/sparkv profiler start --timeout 30 --thread * --interval 10: https://spark.lucko.me/Lpkpd8Z2YY
/sparkv profiler start --timeout 30 --thread *: https://spark.lucko.me/qvt8mFMmRk
/sparkv profiler start --timeout 30 --thread * --interval 1: https://spark.lucko.me/ZZ9o0IKwly
/sparkv profiler start --timeout 30 --thread * --interval 0.1: https://spark.lucko.me/HAB6DJnnc6
All of these were run one after another without any changes, so they should all give (almost) identical results. But as we can see, the differences are far too large.
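(To make the suspected arithmetic concrete, here is a minimal sketch. This is not spark's code; the class name and the fixed 10 ms real period are assumptions chosen purely for illustration. If samples are actually collected at some fixed period but CPU time is reported as samples multiplied by the configured interval, the reported total scales with the flag instead of staying constant.)

    // Hypothetical sketch (not spark's actual code) of the suspected behaviour:
    // samples are collected at a fixed real period (assumed 10 ms here), but
    // CPU time is reported as samples * configured interval.
    public class IntervalMismatchSketch {
        public static void main(String[] args) {
            double fixedRealPeriodMs = 10.0;                    // assumed real tick period
            long samples = (long) (30_000 / fixedRealPeriodMs); // 30 s window -> 3000 samples

            for (double configuredIntervalMs : new double[] {10.0, 1.0, 0.1}) {
                double reportedMs = samples * configuredIntervalMs;
                System.out.printf("--interval %.1f -> reported %.0f ms over the same 30 s window%n",
                        configuredIntervalMs, reportedMs);
            }
        }
    }

Under that assumption the same 30-second run would report 30000 ms, 3000 ms and 300 ms depending only on the flag, which matches the kind of spread seen in the reports above.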
The absolute value in ms is not the only issue here, however. My plugin has integrated CPU usage tracking, according to which the last two reports show the correct % usage, while the first two are completely wrong.
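(For reference, a cross-check of this kind can be done with the JVM's standard ThreadMXBean API. The sketch below is not my plugin's actual tracker, just the usual way to measure a single thread's CPU share over a wall-clock window; the class and method names are made up.)

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Independent per-thread CPU usage check via the standard ThreadMXBean API.
    public final class ThreadCpuCheck {
        private static final ThreadMXBean THREADS = ManagementFactory.getThreadMXBean();

        /** Measures the CPU share of the given thread over a wall-clock window. */
        public static double cpuPercentOver(long threadId, long windowMillis) throws InterruptedException {
            long cpuStart = THREADS.getThreadCpuTime(threadId); // nanoseconds, -1 if unsupported
            long wallStart = System.nanoTime();

            Thread.sleep(windowMillis); // e.g. the same 30 s window as the profiler runs

            long cpuNs = THREADS.getThreadCpuTime(threadId) - cpuStart;
            long wallNs = System.nanoTime() - wallStart;
            return 100.0 * cpuNs / wallNs; // % of one core over the window
        }
    }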
Reproduction Steps
Specify the --interval parameter followed by a number.
Expected Behaviour
All four of the test reports should provide nearly identical results for both usage % and ms.
Platform Information
Minecraft Version: -
Platform Type: Proxy
Platform Brand: Velocity
Platform Version: 3.3.0-SNAPSHOT
Spark Version
v1.10.74
Logs and Configs
No response
Extra Details
This raises the question: does the CPU sampling actually work correctly even with the default interval? Based on my tests, it doesn't.
After trying to create my own sampler, I came to the conclusion that this is caused by the profiling thread being overloaded and therefore not sampling at the configured rate. Nothing can be done about that, I guess.
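(For illustration of that effect, here is a generic, self-contained demo. This is not spark's sampler; the class name, the 0.1 ms period and the ~1 ms of simulated work per tick are all made-up numbers. A fixed-rate task whose body costs more than its period simply cannot hit the configured rate, so any estimate derived from "ticks * configured interval" drifts.)

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // A task scheduled every 0.1 ms cannot keep up if each tick costs ~1 ms of
    // work, so the achieved rate falls far below the configured one.
    public class OverloadedSamplerDemo {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            AtomicLong ticks = new AtomicLong();

            scheduler.scheduleAtFixedRate(() -> {
                ticks.incrementAndGet();
                busyWork(1_000_000); // roughly simulates the cost of walking stacks each tick
            }, 0, 100, TimeUnit.MICROSECONDS); // configured period: 0.1 ms

            Thread.sleep(5_000);
            scheduler.shutdownNow();

            System.out.printf("expected ticks at the configured rate: %d, achieved: %d%n",
                    5_000 * 10, ticks.get());
        }

        private static void busyWork(long iterations) {
            long x = 0;
            for (long i = 0; i < iterations; i++) x += i;
            if (x == 42) System.out.println(); // keep the loop from being optimized away
        }
    }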