DefaultRedisCacheWriter.clean() uses blocking KEYS command [DATAREDIS-1151] #1721
Comments
chengchen commented Hello, any chance to move this small fix forward? We would very much like to have this improvement as well. We have configured our Redis timeout at 1s, and cleaning the cache is causing timeouts quite often. It would be much appreciated if you could prioritize it, many thanks!
Mark Paluch commented This is indeed a small change looking from a command perspective, but it introduces another level of complexity.
chengchen commented Mark Paluch Thanks for the quick feedback!
It seems to me that each operation is atomic, but between two operations you could still leak some entries in the situation you described.
This is the main issue from my perspective. As different users configure different timeouts based on their criteria, it's probably better to leave the batch size configurable. For example, on our side we batch deletions in groups of 10k keys (see the sketch after this comment).
There is already a PR proposed by someone, but there is an issue with it (it issues DEL one key at a time).
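A minimal sketch of the batched approach described above, using SCAN to iterate matching keys and deleting them in chunks so no single command blocks the server for long. This is illustrative only (class and method names are made up, and the batch size is an example), not the project's implementation:

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.core.Cursor;
import org.springframework.data.redis.core.ScanOptions;

// Illustrative sketch only: clears keys matching a pattern in batches using SCAN,
// so the server is never blocked by a single KEYS call or one huge DEL.
class BatchedCacheClean {

    void clean(RedisConnection connection, String pattern, int batchSize) {
        ScanOptions options = ScanOptions.scanOptions()
                .match(pattern)
                .count(batchSize)
                .build();

        List<byte[]> batch = new ArrayList<>(batchSize);

        try (Cursor<byte[]> cursor = connection.scan(options)) {
            while (cursor.hasNext()) {
                batch.add(cursor.next());
                if (batch.size() >= batchSize) {
                    connection.del(batch.toArray(new byte[0][]));
                    batch.clear();
                }
            }
        }

        if (!batch.isEmpty()) {
            connection.del(batch.toArray(new byte[0][]));
        }
    }
}
```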
chengchen commented Forgot to mention: in some cases it's simply not possible to load all the keys in one shot (it could consume too much memory), at least that's the case for us. That's why we are bypassing the Redis cache and removing entries ourselves. Obviously, it would be better if this could be addressed by Spring Data Redis.
chengchen commented Mark Paluch I think this issue is not as minor as you think :) If the dataset is big enough, you get timeouts and/or OOM errors.
shenjianeng commented Spring Data Redis 1.5.x
chengchen commented Hello Mark Paluch, hope you are doing well. I provided an example (not a definitive PR) to show what I have in mind for improving this part: https://github.com/spring-projects/spring-data-redis/pull/547/files
Even though we hacked our own way of removing entries for caches with a large number of keys, we still rely on Spring Data Redis for caches with smaller ones, and now those caches are starting to have this issue as well... So it's quite important for us to solve this issue at the root. Hope you understand!
Mark Paluch commented After iterating a few times, it makes sense to reconsider this requirement. The Pull Request at #532 issues a SCAN but then deletes the matched keys one by one. It would make sense to have a way to configure the batch size for the cleanup. Then, we would extract the current KEYS-based behavior into a default strategy.
A potential way out could be introducing a clean strategy that either uses KEYS or SCAN, as sketched below.
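A minimal sketch of what such a strategy abstraction could look like; the interface and class names here are hypothetical, not the API that was eventually merged:

```java
import org.springframework.data.redis.connection.RedisConnection;

// Hypothetical strategy interface illustrating the idea discussed above.
interface CleanStrategy {

    // Removes all keys matching the pattern and returns the number of keys deleted.
    long cleanCache(RedisConnection connection, String cacheName, byte[] pattern);
}

// KEYS-based variant: a single round trip, but blocks Redis on large keyspaces.
class KeysCleanStrategy implements CleanStrategy {

    @Override
    public long cleanCache(RedisConnection connection, String cacheName, byte[] pattern) {
        byte[][] keys = connection.keys(pattern).toArray(new byte[0][]);
        return keys.length == 0 ? 0 : connection.del(keys);
    }
}
```

A SCAN-based variant would implement the same interface but iterate the keyspace in configurable batches, as in the earlier sketch.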
We now support a configurable BatchStrategy for RedisCache. The implementations consist of KEYS (default) and SCAN. Since SCAN is not supported with Jedis Cluster and SCAN requires a batch size, we default to KEYS. Closes #1721.
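With that change in place, opting into the SCAN strategy should look roughly like the following (assuming a Spring Data Redis version that ships BatchStrategies; the batch size of 1000 is just an example):

```java
import org.springframework.data.redis.cache.BatchStrategies;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.cache.RedisCacheWriter;
import org.springframework.data.redis.connection.RedisConnectionFactory;

class CacheConfigExample {

    RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Use SCAN with a batch size of 1000 instead of the default KEYS-based cleanup.
        RedisCacheWriter cacheWriter = RedisCacheWriter.nonLockingRedisCacheWriter(
                connectionFactory, BatchStrategies.scan(1000));

        return RedisCacheManager.builder(cacheWriter).build();
    }
}
```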
eyison opened DATAREDIS-1151 and commented
When I use the clean method in class DefaultRedisCacheWriter, it executes the keys command, which is blocking; it may take a lot of time and cause other commands to time out.
Affects: 2.1.17 (Lovelace SR17), 2.2.7 (Moore SR7), 2.3 GA (Neumann)
Reference URL:
spring-data-redis/src/main/java/org/springframework/data/redis/cache/DefaultRedisCacheWriter.java
Line 182 in 2bcb06a
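For context, the referenced clean logic boils down to roughly the following shape (a paraphrase, not the exact source): every matching key is collected with KEYS and removed with a single DEL, both of which grow with the size of the keyspace.

```java
import java.util.Set;

import org.springframework.data.redis.connection.RedisConnection;

class BlockingCleanSketch {

    // Rough paraphrase of the referenced clean(...) logic, not the exact source:
    // KEYS blocks the server while it walks the whole keyspace, and the resulting
    // single DEL can be arbitrarily large, which is what triggers the timeouts.
    void clean(RedisConnection connection, byte[] pattern) {
        Set<byte[]> matches = connection.keys(pattern);
        byte[][] keys = matches.toArray(new byte[0][]);

        if (keys.length > 0) {
            connection.del(keys);
        }
    }
}
```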
Issue Links:
- ("is duplicated by")
- ("is duplicated by")
- keys command to scan in DefaultRedisCacheWriter ("is duplicated by")
Referenced from: pull request #532
1 vote, 4 watchers