Pytorch Cuda Out of Memory Error #126

Description

@Katipo1

I have a dual-boot laptop with an RTX 4060, running Windows 11 and Pop!_OS Linux.
Under Windows 11, I followed the install instructions and everything worked out of the box.
Under Pop!_OS Linux, I followed the same install instructions and I get the following error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.00 MiB. GPU 0 has a total capacty of 7.63 GiB of which 8.69 MiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 7.34 GiB is allocated by PyTorch, and 167.80 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Error: Failed to run MagicQuill.

It looks like nearly all of the VRAM is already in use before MagicQuill even starts (only 8.69 MiB free out of 7.63 GiB), but I have no idea why.
Any ideas on what I can do?
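For what it's worth, `nvidia-smi` will show which processes are holding the 7.61 GiB. If the memory really is held by this process and fragmentation is the culprit, the error message itself suggests tuning `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF`. A minimal sketch, assuming MagicQuill respects the standard PyTorch allocator environment variable (the value 128 is illustrative, not tuned):

```python
import os

# Must be set BEFORE torch is imported, or the allocator ignores it.
# "max_split_size_mb:128" limits block splitting to reduce fragmentation;
# the value is an illustrative starting point, not a recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# With torch installed and a GPU present, you could then inspect usage:
# import torch
# print(torch.cuda.memory_summary())  # per-process PyTorch allocations
```

Setting the variable in the shell before launching (`PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python run.py`) achieves the same thing without editing code.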
