Description
I have a dual-boot laptop with an RTX 4060, running Windows 11 and Pop!_OS Linux.
Under Windows 11, I followed the install instructions and everything worked out of the box.
Under Pop!_OS Linux, I followed the install instructions but I get the following error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.00 MiB. GPU 0 has a total capacty of 7.63 GiB of which 8.69 MiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 7.34 GiB is allocated by PyTorch, and 167.80 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Error: Failed to run MagicQuill.
It seems like nearly all of the VRAM is already in use before MagicQuill even loads, and I have no idea why.
Any ideas on what I can do?
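
In case it helps with diagnosis, here is a small check I can run (a sketch, assuming the same Python environment MagicQuill uses, with torch installed and nvidia-smi on the PATH) to see what is holding VRAM before the model loads:

```python
# Sanity check: how much VRAM is actually free before MagicQuill starts?
import subprocess
import torch

# Ask the driver which processes already hold GPU memory.
# On a desktop Linux install, the display server / compositor often occupies some VRAM.
print(subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory",
     "--format=csv"],
    capture_output=True, text=True).stdout)

# Free vs. total memory on GPU 0 as seen by PyTorch, in GiB.
free, total = torch.cuda.mem_get_info(0)
print(f"free: {free / 1024**3:.2f} GiB / total: {total / 1024**3:.2f} GiB")
```

The error message itself suggests setting PYTORCH_CUDA_ALLOC_CONF with max_split_size_mb, but as far as I understand that only helps with fragmentation inside PyTorch's allocator, not with memory that is already held by another process.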