Lock TTL is 2 billion seconds #5

Open
OscarVanL opened this issue Jul 23, 2023 · 2 comments

In the examples, when NewRedisMemoLock is initialised, a 5-15 second lockTimeout is provided.

This is what's passed to SetNX when acquiring the lock.
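
For context, my understanding of the acquisition is roughly the following (a minimal go-redis sketch, not the library's actual code; the key, token, and timeout are illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/go-redis/redis/v8"
    )

    func main() {
        ctx := context.Background()
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

        // Illustrative names, not the library's exact ones.
        lockKey := "report/lock:123"
        token := "unique-request-id"
        lockTimeout := 10 * time.Second

        // SET report/lock:123 <token> NX EX 10; with a correct TTL the lock
        // would expire on its own even if the owner never releases it.
        ok, err := rdb.SetNX(ctx, lockKey, token, lockTimeout).Result()
        if err != nil {
            panic(err)
        }
        fmt.Println("lock acquired:", ok, "TTL:", rdb.TTL(ctx, lockKey).Val())
    }

With a 10 second timeout like this, I would expect the TTL shown below to count down from 10.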

However, when I open a terminal into my Redis instance, run KEYS * to list the locks, and then TTL ${resourceTag}/lock:${resID}, it returns this:

127.0.0.1:6379> TTL report/lock:123
(integer) 19999999845

When I run it a few seconds later:

127.0.0.1:6379> TTL report/lock:123
(integer) 19999999835

This seems to suggest the actual TTL applied to the lock is roughly 20 billion seconds.

This presents an issue when running the examples, because nothing in RedisMemoLock explicitly frees the lock; once the examples have been run once, we can never re-acquire the lock.

Do you have any idea why Redis is not respecting the lockTimeout we provide?

Redis Version

Running Redis via Docker with default settings.

Redis version=7.0.10, bits=64, commit=00000000, modified=0, pid=1, just started
2023-07-23 01:21:40 1:C 23 Jul 2023 00:21:40.091 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf

OscarVanL commented Jul 23, 2023

I can see that in one of the forks @tbrown1979 made a fix to tidy up the lock by calling Del after publishing the result:

55184fd
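
I have not copied the commit contents here, but the shape of that cleanup is roughly the following (a go-redis sketch with illustrative key names, not the fork's exact code):

    package memolock

    import (
        "context"

        "github.com/go-redis/redis/v8"
    )

    // publishAndRelease sketches the idea: once the result has been computed
    // and published to the waiting subscribers, delete the lock key so the
    // resource can be locked again immediately instead of waiting for the TTL.
    func publishAndRelease(ctx context.Context, rdb *redis.Client, notifKey, lockKey, result string) error {
        if err := rdb.Publish(ctx, notifKey, result).Err(); err != nil {
            return err
        }
        return rdb.Del(ctx, lockKey).Err()
    }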

But that does not address this weird TTL behaviour (if the caller that acquired the lock crashed, the resource would stay locked indefinitely).

@OscarVanL

I can see that this 2 billion seconds issue is confined to the GetResourceRenewable function, when renew is called. I suspect it must be some glitch in the renewLockLuaScript:

    if redis.call('GET', KEYS[1]) == ARGV[1]
    then 
        redis.call('EXPIRE', KEYS[1], ARGV[2]) 
        return 1
    else 
        return 0
    end
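
I have not pinned this down in the library source, so this is only a guess on my part: the script itself looks fine, and the suspicious part is what ends up in ARGV[2]. If the Go renew call passes the extension as a raw time.Duration, I suspect the value Redis receives is its nanosecond count (a time.Duration is just an int64 of nanoseconds), and EXPIRE interprets that number as seconds, which would turn a 20 second extension into 20,000,000,000 seconds and match the TTLs above. A sketch of the renew call with an explicit conversion to whole seconds (illustrative names, not the library's actual code):

    package memolock

    import (
        "context"
        "time"

        "github.com/go-redis/redis/v8"
    )

    const renewLockLuaScript = `
    if redis.call('GET', KEYS[1]) == ARGV[1]
    then
        redis.call('EXPIRE', KEYS[1], ARGV[2])
        return 1
    else
        return 0
    end`

    // renewLock extends the lock only if we still own it (the token matches).
    // The important detail is converting the extension to whole seconds:
    // passing the time.Duration directly would send nanoseconds, which
    // EXPIRE would then treat as seconds.
    func renewLock(ctx context.Context, rdb *redis.Client, lockKey, token string, extension time.Duration) (bool, error) {
        seconds := int64(extension / time.Second) // e.g. 20s -> 20, not 20000000000
        res, err := rdb.Eval(ctx, renewLockLuaScript, []string{lockKey}, token, seconds).Int64()
        if err != nil {
            return false, err
        }
        return res == 1, nil
    }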

I personally do not need the renewable lock functionality, so I am just going to delete it, but this serves as a warning to anyone else considering using it: it is probably in a broken state at the moment.
