Redis Running out of hand (Production) #1922
@CarlosQ96 I ran some troubleshooting commands on our Redis host to get more visibility into this. Some redis-cli troubleshooting outputs:
General Monitoring Logs:
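The original outputs didn't survive in this thread. For reference, a first monitoring pass like this typically uses redis-cli's built-in commands (an assumption about which commands were run, not a record of them):

```sh
# Overall memory picture: used memory, peak usage, fragmentation ratio
redis-cli INFO memory

# Total key count in the selected database
redis-cli DBSIZE

# Rolling live stats (keys, memory, clients), refreshed every second
redis-cli --stat
```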
Adding these findings to the issue after scanning Redis for big keys:
Command:
Output:
CC: @CarlosQ96
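The command/output blocks above weren't preserved here. A big-key scan like this is usually done with redis-cli's built-in sampling scanner (assuming that's what was used):

```sh
# Sample the keyspace and report the largest key of each data type;
# uses SCAN internally, so it is safe to run against a live instance
redis-cli --bigkeys
```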
What is the update on the final solution for this issue?
@divine-comedian, the final solution would be to remove Bull jobs from Redis memory; @CarlosQ96 will say more about that.
Still not sure how to get the memory down; notification jobs scheduled for execution in the future are eating up the memory. Many unknowns.
What is filling the memory are the Bull queues (mostly the notification center) whose keys have no expiration. I removed the non-expirable keys in staging and memory went down 370 MB. Production, which has over 4-5x more stuck keys, should decrease by 1 GB+. @kkatusic and I are working on adding expiration to all enqueues using the Bull system.
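For context, a minimal sketch of what adding expiration to Bull enqueues can look like, using Bull's standard removeOnComplete/removeOnFail job options. The queue name, connection string, and payload below are hypothetical, for illustration only:

```ts
import Bull from 'bull';

// Hypothetical queue name and connection string
const notificationQueue = new Bull('send-notifications', 'redis://127.0.0.1:6379');

async function enqueueExample() {
  // removeOnComplete / removeOnFail tell Bull to delete a job's keys from
  // Redis once the job finishes, instead of keeping them around forever.
  // Passing a number keeps a bounded tail of recent jobs for debugging.
  await notificationQueue.add(
    { userId: 42, message: 'hello' }, // hypothetical payload
    {
      removeOnComplete: 1000, // keep only the last 1000 completed jobs
      removeOnFail: 5000, // keep only the last 5000 failed jobs
    },
  );
}

enqueueExample().catch(console.error);
```

With boolean `true` instead of a number, Bull drops the job's keys immediately on completion; a bounded number trades a little memory for the ability to inspect recent jobs.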
Excellent! So the next steps are to prepare for release at the end of next sprint and monitor to ensure memory usage drops. Anything else we need to do for this issue? @CarlosQ96
We should not see any unexpired keys inside of the Bull queues - will check this before we merge to prod.
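One way to spot-check that (a sketch, assuming Bull's default `bull:` key prefix):

```sh
# List bull:* keys that have no TTL set
# (TTL returns -1 for keys that exist but will never expire)
redis-cli --scan --pattern 'bull:*' | while read -r key; do
  [ "$(redis-cli TTL "$key")" = "-1" ] && echo "$key"
done
```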
The latest pushes have caused our Redis instance on production to go wild and consume a lot of memory (>66% of system memory).
Check the MEM % tab:
[Screenshot: Redis memory usage graph]
This has been going on for more than two weeks, as you can see below.
Note that this is on Production only, not on staging.