A redis based lock for distributed tusd instances #1129
Conversation
Force-pushed from 8aa8e2e to 5285a4c
Thank you very much for this PR! I will have a more thorough look next week. There are a few things I would like to add to this PR: documentation for the package itself, documentation for our homepage (https://tus.github.io/tusd), integration with the CLI (so that it can also be used with the tusd binary), and maybe more in-depth tests to ensure that the release notifications also work properly. But I can take care of those :)
Awesome, it would be great if this gets merged. @whytheplatypus, I'm wondering if you've had a chance to run this in production and how it looks so far?
Just thought I would point this out, as you implemented your own locking design: https://github.com/go-redsync/redsync is available. I will likely be switching to https://github.com/redis/rueidis soon, and https://github.com/redis/rueidis/blob/main/rueidislock/lock.go is one route.
@pcfreak30 I am not sure what you mean here. This PR already uses the redsync package instead of implementing its own locking approach. Do you mean that the lock from rueidis is preferable to redsync?
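For context, lock acquisition through redsync looks roughly like the following. This is a minimal sketch assuming go-redis v9 as the underlying client; the key name and setup are illustrative and not taken from the PR itself.

```go
package main

import (
	"log"

	"github.com/go-redsync/redsync/v4"
	goredislib "github.com/go-redsync/redsync/v4/redis/goredis/v9"
	goredis "github.com/redis/go-redis/v9"
)

func main() {
	// redsync wraps an existing Redis client in a connection pool.
	client := goredis.NewClient(&goredis.Options{Addr: "localhost:6379"})
	rs := redsync.New(goredislib.NewPool(client))

	// One mutex per upload keeps locks for different uploads independent.
	// "tusd:lock:some-upload-id" is an illustrative key name.
	mutex := rs.NewMutex("tusd:lock:some-upload-id")
	if err := mutex.Lock(); err != nil {
		log.Fatalf("could not acquire lock: %v", err)
	}
	defer func() {
		if _, err := mutex.Unlock(); err != nil {
			log.Printf("could not release lock: %v", err)
		}
	}()

	// ... work on the upload while the lock is held ...
}
```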
Whoops, I did not see that. I saw some of the pub/sub code and thought the locking system had been re-implemented instead of using an established library. Other than that, I would look at
We have! So far it's looking solid. We've been able to scale horizontally cleanly and have seen an increase in stability, though I don't have before-and-after numbers for you. We did have to make one change to the lock exchange behavior: using a short-lived pub/sub channel per lock rather than a single long-lived one. This was because that one channel was filling up, while the short-lived channels don't and also seem to perform better. It probably could have been handled by adjusting some Redis config, but the change also simplified the code and has been working well at scale so far. I'll try to get that change into this PR in the next couple of days.
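On the subscribing side, the per-lock channel idea could look roughly like this. A sketch only, assuming go-redis v9; the package name and the "tusd:release-request:<id>" channel naming are hypothetical.

```go
package redislocker

import (
	"context"

	redis "github.com/redis/go-redis/v9"
)

// waitForReleaseRequest blocks until another instance publishes a release
// request on this lock's channel or the context is cancelled.
func waitForReleaseRequest(ctx context.Context, client *redis.Client, uploadID string) {
	// Each lock gets its own short-lived channel, so release requests are
	// not funnelled through one shared channel that can back up.
	sub := client.Subscribe(ctx, "tusd:release-request:"+uploadID)
	defer sub.Close() // the subscription lives only as long as the lock

	select {
	case <-ctx.Done():
		// The lock holder finished on its own; no release request arrived.
	case <-sub.Channel():
		// Another instance wants this lock: the holder would now wrap up
		// its work and unlock.
	}
}
```

Because each subscription is created when the lock is acquired and closed when it is released, no single channel accumulates a backlog, which matches the problem described above.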
Maybe we can look at putting an interface in front of the needed interactions with Redis, so that swapping libraries is an option.
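One hypothetical shape such an interface could take, with illustrative method names; the real surface would depend on which operations the locker actually needs:

```go
package redislocker

import "context"

// RedisConn abstracts the few operations the locker needs from a Redis
// client, so that go-redis, rueidis, or another library could back it.
type RedisConn interface {
	// Publish sends a message on a channel (used to request lock releases).
	Publish(ctx context.Context, channel, message string) error
	// Subscribe returns a receive channel for messages on the given channel
	// and a function that tears the subscription down.
	Subscribe(ctx context.Context, channel string) (<-chan string, func() error, error)
}
```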
@whytheplatypus I'm about to switch to this in a fork of
This introduces a Redis-backed locker. It is probably most useful for tusd instances that are distributed, ephemeral, or have to scale up and down.
It uses the Redis pub/sub feature to request that a lock be released.
The tests use what looks like a pretty nice little Redis implementation meant for testing.
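The pub/sub release request mentioned above might, on the requesting side, look roughly like this sketch, again assuming go-redis v9 and the same hypothetical channel naming as the subscription sketch earlier:

```go
package redislocker

import (
	"context"

	redis "github.com/redis/go-redis/v9"
)

// requestRelease asks whichever instance currently holds the lock for
// uploadID to give it up, by publishing on that lock's channel.
func requestRelease(ctx context.Context, client *redis.Client, uploadID string) error {
	// Publish's result carries the number of subscribers that received the
	// message; here we only propagate errors.
	return client.Publish(ctx, "tusd:release-request:"+uploadID, "release").Err()
}
```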