OOM #9

Open
EveningLin opened this issue Nov 29, 2024 · 1 comment
@EveningLin

A somewhat odd OOM: I'm only using two images, and running infer_video fails with this error:
```
Testing on Dataset: /data/workplace/mymymy/data
Running VFI method : gma+pervfi
TMP (temporary) Dir: /tmp/tmpx0ci_4ln
VIS (visualize) Dir: output
Building VFI model...
Done
1
0%| | 0/1 [00:06<?, ?it/s]
Traceback (most recent call last):
File "/data/workplace/mymymy/frame_inp/PerVFI/infer_video.py", line 110, in
outs = inferRGB(*inps) # e.g., [I2]
^^^^^^^^^^^^^^^
File "/data/workplace/mymymy/frame_inp/PerVFI/infer_video.py", line 84, in inferRGB
tenOut = infer(*inputs, time=t)
^^^^^^^^^^^^^^^^^^^^^^
File "/data/workplace/mymymy/frame_inp/PerVFI/build_models.py", line 41, in infer
pred = model.inference_rand_noise(I1, I2, heat=0.3, time=time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/envs/pervfi/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/workplace/mymymy/frame_inp/PerVFI/models/pipeline.py", line 61, in inference_rand_noise
fflow, bflow = flows if flows is not None else self.compute_flow(img0, img1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/envs/pervfi/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/workplace/mymymy/frame_inp/PerVFI/models/flow_estimators/init.py", line 113, in infer
_, fflow = model(I1, I2, test_mode=True, iters=20)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/workplace/mymymy/frame_inp/PerVFI/models/flow_estimators/gma/network.py", line 102, in forward
corr_fn = CorrBlock(fmap1, fmap2, radius=self.args.corr_radius)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/workplace/mymymy/frame_inp/PerVFI/models/flow_estimators/gma/corr.py", line 25, in init
corr = CorrBlock.corr(fmap1, fmap2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/workplace/mymymy/frame_inp/PerVFI/models/flow_estimators/gma/corr.py", line 66, in corr
return corr / torch.sqrt(torch.tensor(dim).float())
~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 63.50 GiB. GPU 0 has a total capacty of 79.15 GiB of which 2.48 GiB is free. Process 2395277 has 11.18 GiB memory in use. Including non-PyTorch memory, this process has 65.47 GiB memory in use. Of the allocated memory 64.69 GiB is allocated by PyTorch, and 281.08 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

```
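For context on why just two images can trigger this: the GMA flow estimator builds an all-pairs correlation volume in `CorrBlock.corr` (the line shown in the traceback), so memory grows with the square of the number of 1/8-resolution feature positions. A back-of-envelope sketch, assuming roughly 4K (3840×2160) inputs in float32 (the resolution is an assumption, not stated in the issue), lands close to the 63.50 GiB allocation in the error:

```python
# Rough memory estimate for GMA/RAFT's all-pairs correlation volume.
# Assumed input resolution (not stated in the issue): 4K UHD frames.
H, W = 2160, 3840

h, w = H // 8, W // 8          # feature maps are at 1/8 resolution
elements = (h * w) ** 2        # all-pairs correlation: (h*w) x (h*w)
bytes_fp32 = elements * 4      # float32

print(f"{bytes_fp32 / 2**30:.1f} GiB")  # ~62.6 GiB, close to the 63.50 GiB reported
```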

@EveningLin
Author

It turned out the images were simply too large; resizing them before inference fixes it.
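A minimal sketch of that workaround, assuming the two frames can be downscaled on disk before running infer_video.py (the file names and target size below are placeholders):

```python
# Downscale the two input frames before running infer_video.py.
# 1280x720 and the file names are placeholders; adjust to your data.
from PIL import Image

for name in ["frame0.png", "frame1.png"]:
    img = Image.open(name)
    img = img.resize((1280, 720), Image.BICUBIC)
    img.save(name)
```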
