
Allow batch ratio to be tuned #1648

Closed

Conversation

@ndb-rkang ndb-rkang commented Jan 31, 2025

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and make it easier to get feedback. If you do not understand some items, don't worry: just make the pull request and seek help from the maintainers.

Motivation

The current logic automatically determines the batch ratio based on available VRAM. For certain complex, scanned PDFs, a higher batch ratio leads to NPU OOM. It is unclear whether this issue also affects CUDA-based acceleration.

Modification

Added an env var "MINERU_OVERRIDE_BATCH_RATIO" to tune the batch ratio manually. Alternatively, this could be changed to a multiplier applied to the VRAM-based batch ratio.
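A minimal sketch of such an override. The function name `get_batch_ratio` and the handling of malformed values are hypothetical; only the MINERU_OVERRIDE_BATCH_RATIO variable name comes from this PR:

```python
import os

def get_batch_ratio(vram_based_ratio: int) -> int:
    """Return the batch ratio, honoring a manual override.

    Reads MINERU_OVERRIDE_BATCH_RATIO from the environment; if it is
    unset or not a valid integer, falls back to the VRAM-derived value.
    """
    override = os.environ.get("MINERU_OVERRIDE_BATCH_RATIO")
    if override is not None:
        try:
            # Clamp to at least 1 so a nonsensical "0" cannot stall processing.
            return max(1, int(override))
        except ValueError:
            pass  # malformed value: ignore it and use the automatic ratio
    return vram_based_ratio
```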

BC-breaking (Optional)

No backwards-compatibility issues (unless MINERU_OVERRIDE_BATCH_RATIO is already used by some other application).

Use cases (Optional)

Processing scanned PDFs with high resource consumption.

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that causes the bug should be added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, e.g. docstrings or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects.
  • CLA has been signed and all committers have signed the CLA in this PR.



Thank you for your submission; we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a Pull Request comment in the format below.


I have read the CLA Document and I hereby sign the CLA


You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.

@jon-snowman

I have read the CLA Document and I hereby sign the CLA

@myhloli
Collaborator

myhloli commented Feb 8, 2025

After testing, we were unable to reproduce the batch-related OOM issue on the NPU device. Please try the following solutions:

  1. Use the npu-smi info command to check whether more than 50% of card 0's memory is occupied by other applications. If so, you can set the device-mode in the magic-pdf.json file to another card, such as npu:1.

  2. If no spare NPU card is available and the other tasks on card 0 cannot be shut down, leaving insufficient memory, you can set the environment variable VIRTUAL_VRAM_SIZE to a value lower than the remaining memory. The application will then automatically configure an appropriate batch ratio so that the OOM issue does not occur.
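The VIRTUAL_VRAM_SIZE mechanism described above can be sketched roughly as follows. The function names, the gigabyte unit, and the 8-GB-per-batch-unit heuristic are all hypothetical illustrations; only the VIRTUAL_VRAM_SIZE variable name comes from the comment:

```python
import os

def effective_vram_gb(detected_vram_gb: float) -> float:
    """Cap the detected VRAM by the VIRTUAL_VRAM_SIZE env var, if set.

    Hypothetical sketch: assumes the variable holds a value in gigabytes.
    """
    cap = os.environ.get("VIRTUAL_VRAM_SIZE")
    if cap is not None:
        return min(detected_vram_gb, float(cap))
    return detected_vram_gb

def batch_ratio_from_vram(vram_gb: float) -> int:
    """Derive a batch ratio from (capped) VRAM.

    Illustrative heuristic only: one batch unit per 8 GB, minimum 1.
    """
    return max(1, int(vram_gb // 8))
```

With this shape, lowering VIRTUAL_VRAM_SIZE below the actually free memory shrinks the computed batch ratio, which is why the maintainer suggests it as a workaround for OOM on a partially occupied card.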

@myhloli myhloli closed this Feb 8, 2025
@github-actions github-actions bot locked and limited conversation to collaborators Feb 8, 2025