
CUDA functions should not use CudaMalloc for temporary memory #521

Open
ianmccul opened this issue Nov 24, 2024 · 1 comment
@ianmccul (Contributor)
cudaMalloc should not be used to allocate temporary memory for a CUDA kernel. cudaMalloc is very slow, and it synchronizes the device, which is catastrophic if you are running multiple kernels concurrently.

There needs to be some sub-allocator that allocates a block of memory once at program startup and hands out temporary storage from it.

@ianmccul (Contributor, Author)

See also cudaMallocAsync, added in CUDA 11.2. https://developer.nvidia.com/blog/using-cuda-stream-ordered-memory-allocator-part-1/
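The stream-ordered allocator avoids the device-wide synchronization: cudaMallocAsync and cudaFreeAsync order the allocation and free on a stream, and repeated allocations are served from a memory pool rather than hitting the driver each time. A minimal sketch of how it would be used (error checking omitted; `use_scratch` is a placeholder for whatever kernel needs the temporary buffer, and it assumes a device that supports memory pools):

```cuda
#include <cuda_runtime.h>

// Placeholder kernel standing in for whatever needs the temporary buffer.
__global__ void use_scratch(float* scratch, int n);

void launch_with_temp(cudaStream_t stream, int n) {
    float* scratch = nullptr;
    // Allocation is ordered on `stream`: no device-wide synchronization,
    // and repeated calls are served from the stream-ordered memory pool.
    cudaMallocAsync(&scratch, n * sizeof(float), stream);

    use_scratch<<<(n + 255) / 256, 256, 0, stream>>>(scratch, n);

    // The free is also stream-ordered: it is safe to enqueue immediately
    // after the launch, since the memory is only reclaimed once prior work
    // on the stream has completed.
    cudaFreeAsync(scratch, stream);
}
```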

@yingjerkao yingjerkao added the gpu label Dec 14, 2024