
lock_context does not release lock when it no longer owns it #6668

Open
GitHK opened this issue Nov 5, 2024 · 3 comments

GitHK (Contributor) commented Nov 5, 2024

What happened?

It was detected that the resource-usage-tracker service no longer had its lock inside Redis and was reporting the following error: redis.exceptions.LockNotOwnedError: Cannot reacquire a lock that's no longer owned.

What triggered this situation?

The cause is currently unknown.

What could trigger this situation?

  • The lock is removed manually or expires unexpectedly.
  • The Redis database becomes unavailable for a short amount of time.

Why is this bad?

Imagine having multiple instances that rely on this lock: if the lock is somehow freed (and lock_context does nothing about it), a different instance will acquire it and use the protected resource as if it were free.

How does it present itself?

Consider the following code:

async with lock_context(...):
    # some user defined
    # code here

Two factors combine to produce the issue:

  • lock_context creates a background task in which redis.exceptions.LockNotOwnedError is raised, but the error is only logged and never handled.
  • lock_context (a context manager) has no way of stopping the execution of the user-defined code (see the sketch below).
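
For illustration, here is a minimal sketch of how such a context manager typically works, assuming redis.asyncio; this is not the actual lock_context implementation from this repository, only a stand-in with the same shape. The auto-extension task can only log the LockNotOwnedError, while the body of the async with block keeps running:

import asyncio
import logging
from contextlib import asynccontextmanager, suppress

import redis.asyncio as aioredis
from redis.exceptions import LockError, LockNotOwnedError

_logger = logging.getLogger(__name__)


@asynccontextmanager
async def lock_context_sketch(client: aioredis.Redis, key: str, ttl: float = 10.0):
    # illustrative sketch only, NOT the real lock_context implementation
    lock = client.lock(key, timeout=ttl)
    await lock.acquire()

    async def _auto_extend() -> None:
        while True:
            await asyncio.sleep(ttl / 2)
            try:
                await lock.reacquire()  # resets the TTL of the lock key
            except LockNotOwnedError:
                # the key expired or was removed: ownership is lost, but the
                # error is only logged and the `async with` body keeps running
                _logger.exception("lost ownership of lock %r", key)

    extend_task = asyncio.create_task(_auto_extend())
    try:
        yield lock  # user-defined code runs here; it cannot be stopped from here
    finally:
        extend_task.cancel()
        with suppress(asyncio.CancelledError):
            await extend_task
        with suppress(LockError):
            await lock.release()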

Is there a way to reproduce it in a test?

Yes. Apply the changes from the attached changes.diff.zip to your repository.

Then run one of the following tests to make the issue appear:

  • tests/test_redis_utils.py::test_possible_regression_lock_extension_fails_if_key_is_missing
  • tests/test_redis_utils.py::test_possible_regression_first_extension_delayed_and_expires_key

What was already tried?

To mitigate the issue, it was proposed to try to stop the user-defined code.
The only possible way to do this from within a context manager is using signals (which are only available on Linux).

When running tests/test_redis_utils.py::test_context_manager_timing_out, the code hangs unexpectedly inside the signal handler, which means the entire asyncio runtime is halted.

This approach cannot be used.

Running tests/test_redis_utils.py::test_context_manager_timing_out produces this very unexpected result:

    def handler(signum, frame):
>       raise RuntimeError(f"Operation timed out after {after} seconds")
E       RuntimeError: Operation timed out after 10 seconds

tests/test_redis_utils.py:190: RuntimeError
> /home/silenthk/work/pr-osparc-redis-lock-issues/packages/service-library/tests/test_redis_utils.py(190)handler()
-> raise RuntimeError(f"Operation timed out after {after} seconds")
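
For reference, this is roughly what the attempted signal-based timeout looks like (an illustrative sketch, not the actual test code; raise_after is a hypothetical helper name). signal.SIGALRM is only available on Unix, and the exception raised in the handler is delivered to whatever frame happens to be executing, which interacts badly with the asyncio event loop:

import signal
from contextlib import contextmanager


@contextmanager
def raise_after(after: float):
    # illustrative sketch of the rejected signal-based approach
    def handler(signum, frame):
        raise RuntimeError(f"Operation timed out after {after} seconds")

    previous = signal.signal(signal.SIGALRM, handler)
    signal.setitimer(signal.ITIMER_REAL, after)
    try:
        yield
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)  # disarm the timer
        signal.signal(signal.SIGALRM, previous)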

Different proposals

Feel free to suggest more below.

  • @GitHK: I would do the following: the locking mechanism should receive a task (which runs the user-defined code) that it can cancel if something goes wrong. This solution no longer uses a context manager, but relies on a battle-tested method of stopping the running user-defined code (see the sketch below).
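
A rough sketch of that proposal, assuming redis.asyncio (run_exclusively and all other names are hypothetical, not existing code): the lock helper owns the task that runs the user code and cancels it as soon as the lock can no longer be extended:

import asyncio
from collections.abc import Awaitable, Callable
from contextlib import suppress

import redis.asyncio as aioredis
from redis.exceptions import LockError, LockNotOwnedError


async def run_exclusively(
    client: aioredis.Redis,
    key: str,
    user_code: Callable[[], Awaitable[None]],
    ttl: float = 10.0,
) -> None:
    # hypothetical sketch of the proposed task-based API
    lock = client.lock(key, timeout=ttl)
    if not await lock.acquire(blocking=False):
        raise RuntimeError(f"could not acquire lock {key!r}")

    user_task = asyncio.create_task(user_code())
    try:
        while not user_task.done():
            await asyncio.sleep(ttl / 2)
            try:
                await lock.reacquire()  # resets the TTL
            except LockNotOwnedError:
                # ownership lost: cancel the user code instead of only logging
                user_task.cancel()
                with suppress(asyncio.CancelledError):
                    await user_task
                raise
        await user_task  # propagate exceptions raised by the user code
    finally:
        with suppress(LockError):
            await lock.release()
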
@GitHK added the bug ("buggy, it does not work as expected") label Nov 5, 2024
@GitHK changed the title from "lock_context does not realise lock is no longer owned" to "lock_context does not release lock is no longer owned" Nov 5, 2024
@GitHK changed the title from "lock_context does not release lock is no longer owned" to "lock_context does not release lock when it no longer owns the lock" Nov 5, 2024
@GitHK changed the title from "lock_context does not release lock when it no longer owns the lock" to "lock_context does not release lock when it no longer owns it" Nov 5, 2024
@GitHK added this to the Event Horizon milestone Nov 5, 2024
matusdrobuliak66 (Contributor) commented Nov 5, 2024

Additional notes/observations:

  • During the same period, a similar issue was observed in storage — the same Cannot reacquire a lock error was logged. The storage background task was running properly, and the lock key was present in the Redis lock DB table. (This is strange: the Resource Tracker issue differed because the key was missing there. However, why this error was logged in storage is unknown.)
  • OpenTelemetry tracing was introduced recently; could this be causing some blocking behavior?
  • I would suggest not rushing but instead taking two steps:
    • Analyze if there might be any blocking issues that could have caused this.
    • To make the platform more robust, implement a mechanism to handle this situation.
      • Easiest solution: Consider marking the application as unhealthy (when it restarts, it should gracefully resolve the issue by itself); see the sketch below.
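
As an illustration of the "mark the application as unhealthy" idea, a minimal sketch assuming a FastAPI-based service (all names here are illustrative; the real services may wire up their health checks differently): once lock ownership is lost, the health endpoint starts failing, the orchestrator's probe restarts the service, and the restart resolves the inconsistent state:

from fastapi import FastAPI, Response, status

app = FastAPI()
app.state.healthy = True


def mark_unhealthy() -> None:
    # to be called e.g. when LockNotOwnedError is raised by the lock-extension task
    app.state.healthy = False


@app.get("/healthcheck")
async def healthcheck(response: Response) -> dict[str, str]:
    if not app.state.healthy:
        # a failing probe makes the orchestrator restart the service
        response.status_code = status.HTTP_503_SERVICE_UNAVAILABLE
        return {"status": "unhealthy"}
    return {"status": "ok"}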

GitHK (Contributor, Author) commented Nov 5, 2024

Additional notes/observations:

  • During the same period, a similar issue was observed in storage — the same Cannot reacquire a lock error was logged. The storage background task was running properly, and the lock key was present in the Redis lock DB table. (This is strange: the Resource Tracker issue differed because the key was missing there. However, why this error was logged in storage is unknown.)

@matusdrobuliak66 can we please make a different issue out of this one? It is something different and I don't want to mix them.

matusdrobuliak66 (Contributor) commented

Additional notes/observations:

  • During the same period, a similar issue was observed in storage — the same Cannot reacquire a lock error was logged. The storage background task was running properly, and the lock key was present in the Redis lock DB table. (This is strange: the Resource Tracker issue differed because the key was missing there. However, why this error was logged in storage is unknown.)

@matusdrobuliak66 can we please make a different issue out of this one? It is something different and I don't want to mix them.

Yes, no problem. I just wanted to make a note of it: since we don't fully understand the issue yet, we can't be certain that the two are not somehow interconnected.
