Anyone know what comfy-aimdo is? Had to disable it due to Torch Dynamo recompilation issues #12805
Replies: 1 comment
comfy-aimdo is the Comfy AI Model Dynamic Offloader: https://github.com/Comfy-Org/comfy-aimdo. It's great in maybe 90% of cases. The developers are doing a lot of active bugfixing now that it's centre stage of memory management, but people are still finding edge cases where it doesn't fit. You can disable the new dynamic VRAM behaviour with the --disable-dynamic-vram switch.
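For reference, a minimal launch sketch, assuming a typical source install started via `main.py` (your entry point may differ, e.g. a portable build's launcher script):

```shell
# Launch ComfyUI with comfy-aimdo's dynamic VRAM offloading disabled.
# "main.py" is an assumption for a standard source checkout.
python main.py --disable-dynamic-vram
```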
I recently noticed something new showing up in my ComfyUI v16 logs called comfy-aimdo, and I'm trying to understand what exactly it does and whether others have run into issues with it.
When starting ComfyUI v16, I was seeing messages like:

```
aimdo: src-win/cuda-detour.c:77:INFO:aimdo_setup_hooks: found driver ..., installing 4 hooks
aimdo: src-win/cuda-detour.c:61:DEBUG:install_hook_entrys: hooks successfully installed
aimdo: src/control.c:66:INFO:comfy-aimdo inited for GPU ...
```
During generation I also kept getting a lot of warnings from PyTorch such as:

```
torch._dynamo hit config.recompile_limit (128)
Lib\site-packages\torch\_dynamo\convert_frame.py
```
This seemed to happen more often when Triton kernels were being built for faster inference.
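In case it helps others experimenting with this: that warning means Dynamo stopped optimizing a frame after recompiling it 128 times. As a hedged sketch, the limit is adjustable via `torch._dynamo.config`, though the knob's name depends on the PyTorch version (newer releases use `recompile_limit`, older ones `cache_size_limit`), so this probes for whichever exists:

```python
# Sketch: raise Dynamo's recompile limit before calling torch.compile.
# The attribute name varies across PyTorch versions, so probe for it;
# verify against your installed version before relying on this.
import torch

if hasattr(torch._dynamo.config, "recompile_limit"):
    torch._dynamo.config.recompile_limit = 256   # newer PyTorch
else:
    torch._dynamo.config.cache_size_limit = 256  # older PyTorch
```

This only postpones the warning if something is genuinely forcing recompiles every run (e.g. constantly changing tensor shapes), which is why disabling the offloader was the more direct fix here.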
I temporarily disabled the initialization, and the aimdo hooks and warnings stopped, but I'm still curious.
Questions:
- What exactly is comfy-aimdo supposed to do?
- Is it part of newer ComfyUI builds, or something experimental?
- Has anyone else seen Dynamo recompilation warnings or Triton issues when it's enabled?
- Are there situations where it's beneficial to keep it on?
- Is it meant for GPUs with 24 GB of VRAM or less?
Curious if anyone else has run into this or knows more about how aimdo is supposed to behave.