
Azure DevOps pipelines running out of memory #4388

Closed
lukelloydagi opened this issue Dec 12, 2024 · 12 comments
Labels
O: stale 🤖 · question

Comments

@lukelloydagi

Is anyone else having issues with Azure DevOps pipelines running out of memory recently?

I'm trying to figure out whether it's MegaLinter that has grown or the underlying MS infrastructure (which we've seen a lot of issues with lately).

Would reducing CPU usage with PARALLEL_PROCESS_NUMBER reduce memory usage too?

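(For reference, a minimal sketch of the kind of pipeline step being tuned here, assuming the docker-based Azure Pipelines setup from the MegaLinter docs; the step name, mount path, and image tag are illustrative, not taken from the actual pipeline.)

```yaml
# Illustrative Azure Pipelines step: run MegaLinter in Docker with parallelism capped.
# PARALLEL_PROCESS_NUMBER limits how many linters run at the same time, which mostly
# trades CPU for wall-clock time; each linter process still allocates its own memory.
steps:
  - script: |
      docker run --rm \
        -v $(System.DefaultWorkingDirectory):/tmp/lint \
        -e PARALLEL_PROCESS_NUMBER=2 \
        oxsecurity/megalinter:v8
    displayName: Run MegaLinter (capped parallelism)
```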

@lukelloydagi added the question label Dec 12, 2024
@lukelloydagi
Author

lukelloydagi commented Dec 12, 2024

Well... that answers that: the PARALLEL_PROCESS_NUMBER setting made no difference 😞

@lukelloydagi
Author

Tried using the megalinter-dotnet image to see if a smaller image size would make a difference... it did not 😞

@nvuillam
Member

nvuillam commented Dec 14, 2024

@lukelloydagi Are you using custom self-hosted Azure runners?

I have many projects using Azure and have not detected similar issues :/

@lukelloydagi
Author

@nvuillam, I've reverted to v8.1 and I don't seem to have the issue; it only occurs with v8.2+ using either the default or dotnet flavor. Is there anything in those releases that's likely to be the cause? If not, I will have to raise an issue with MS 😭

@nvuillam
Member

@lukelloydagi Just to see if it is dotnet-related, please can you try to run it with the DOCUMENTATION or CI_LIGHT flavor?
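(A hedged sketch of what that test would look like: flavors are selected by pulling a different MegaLinter image, so only the image name in the pipeline step changes. The tag and mount path below are assumptions, matching the sketch earlier in the thread.)

```yaml
# Illustrative: same docker-based step, but using a lighter MegaLinter flavor image.
# Flavors bundle only a subset of linters, so both the image and the set of linters
# that can run are much smaller than with the default image.
steps:
  - script: |
      docker run --rm \
        -v $(System.DefaultWorkingDirectory):/tmp/lint \
        oxsecurity/megalinter-ci_light:v8   # or oxsecurity/megalinter-documentation:v8
    displayName: Run MegaLinter (ci_light flavor)
```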

@lukelloydagi
Author

> @lukelloydagi Just to see if it is dotnet-related, please can you try to run it with the DOCUMENTATION or CI_LIGHT flavor?

@nvuillam, I can do, but neither of those flavors contains the arm-ttk linter, so they would not lint my codebase.

It's odd, as the issue only appears with v8.2+ on one of my bigger ARM template repos.
It's only configured to lint changed files, so the size of the repo shouldn't matter?
Also, I would have thought the size of the repo (and the image, for that matter) would only affect disk space, not memory?

@lukelloydagi
Author

@nvuillam, if I reduce the linters (by only specifying a list using the ENABLE configuration variable), then I also don't get the issue. It therefore clearly relates to which linters are running against the repo, and maybe it's the linter(s) that scan the entire repo.

Has anything changed in v8.2+ in how the linters run/consume memory?
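(For illustration, a sketch of the ENABLE override described above, as it would appear in .mega-linter.yml; the descriptor names are assumptions based on this being an ARM template repo.)

```yaml
# Illustrative .mega-linter.yml: only the listed descriptors run, everything else is skipped.
ENABLE:
  - ARM    # arm-ttk runs under the ARM descriptor
  - YAML
```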

@nvuillam
Member

nvuillam commented Dec 18, 2024

@lukelloydagi If a linter's cli_lint_mode is project, it does not lint only the changed files, but all files.

By playing with enabling/disabling linters, did you identify the linter(s) that make your CI job fail?

I don't think we changed anything memory-related between 8.2 & 8.3 :/ What did change are the linter versions, which have been upgraded, so maybe one of them has a new performance issue or a memory leak :/
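(To make the project-mode point concrete, a hedged sketch of the related .mega-linter.yml settings: VALIDATE_ALL_CODEBASE: false only restricts linters that run per file or per list of files, while a project-mode linter still scans the whole repo. MegaLinter also exposes a per-linter cli_lint_mode override; the arm-ttk key shown below is an assumption, and whether a given linter supports a non-project mode has to be checked in its descriptor docs.)

```yaml
# Illustrative .mega-linter.yml: keep linting scoped to changed files where possible.
VALIDATE_ALL_CODEBASE: false           # only affects file / list_of_files mode linters

# Per-linter override (key and value are assumptions; verify the linter actually
# supports list_of_files mode before relying on this):
ARM_ARM_TTK_CLI_LINT_MODE: list_of_files
```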

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had recent activity.
It will be closed in 14 days if no further activity occurs.
Thank you for your contributions.

If you think this issue should stay open, please remove the O: stale 🤖 label or comment on the issue.

@github-actions added the O: stale 🤖 label Jan 18, 2025
@github-actions closed this as not planned Feb 1, 2025
@lukelloydagi
Author

> @lukelloydagi Are you using custom self-hosted Azure runners?
>
> I have many projects using Azure and have not detected similar issues :/

Sorry for the delay in responding; I've just been kind of living with it for now, as it only affects this one repo, but it's starting to become annoying, so I'm assigning some time to fix it.

I'm using the Azure-hosted pipeline agents, but I am going to spin up a self-hosted agent, as I'm sure giving it a higher-spec VM will fix the issue.

I'm still trying to find an alternative way around it too, as I don't really want the additional cost of running self-hosted agents :(

@lukelloydagi
Author

@nvuillam, interestingly I've just updated my pipeline to use v8.4.2 and the problem seems to have disappeared 😎 Maybe there was a memory leak somewhere in v8.2 & v8.3 🤷‍♂️

@nvuillam
Member

nvuillam commented Mar 6, 2025

> @nvuillam, interestingly I've just updated my pipeline to use v8.4.2 and the problem seems to have disappeared 😎 Maybe there was a memory leak somewhere in v8.2 & v8.3 🤷‍♂️

Wonderful, thanks for the feedback 😊
