fix(gpu): enforce tighter bounds on compression output #2075

Open. pdroalves wants to merge 3 commits into main from pa/fix/compression_packing.
Conversation

pdroalves (Contributor)

The compression output size should be num_glwe * glwe_mask_size + num_lwes.
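
As a minimal sketch of that bound (not the actual tfhe-rs API; the parameter names simply mirror the formula above, and presumably each output GLWE keeps its full mask plus one body coefficient per packed LWE):

```rust
// Hypothetical helper, for illustration only; this is not the tfhe-rs API.
// Flat length of the packed compression output: every output GLWE keeps its
// full mask, and only the body coefficients that hold packed LWEs are kept.
fn compressed_output_len(num_glwe: usize, glwe_mask_size: usize, num_lwes: usize) -> usize {
    num_glwe * glwe_mask_size + num_lwes
}

fn main() {
    // Example figures, chosen only for illustration.
    assert_eq!(compressed_output_len(2, 2048, 3000), 2 * 2048 + 3000);
}
```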

closes: https://github.com/zama-ai/tfhe-rs-internal/issues/918

PR content/description

Check-list:

  • Tests for the changes have been added (for bug fixes / features)
  • Docs have been added / updated (for bug fixes / features)
  • Relevant issues are marked as resolved/closed, related issues are linked in the description
  • Check for breaking changes (including serialization changes) and add them to commit message following the conventional commit specification

cla-bot added the cla-signed label on Feb 17, 2025
pdroalves force-pushed the pa/fix/compression_packing branch from 0a52484 to 8593ee6 on February 17, 2025 20:47
pdroalves force-pushed the pa/fix/compression_packing branch from 8593ee6 to 7807162 on February 17, 2025 20:48
pdroalves requested a review from tmontaigu on February 17, 2025 20:48
pdroalves (Contributor, Author)

@tmontaigu I hope this PR fixes the issues you pointed out in https://github.com/zama-ai/tfhe-rs-internal/issues/918.

pdroalves marked this pull request as ready for review on February 17, 2025 20:49
pdroalves force-pushed the pa/fix/compression_packing branch 9 times, most recently from d8fb5ae to c9cd74f on February 18, 2025 12:22
pdroalves force-pushed the pa/fix/compression_packing branch from c9cd74f to 6e79255 on February 18, 2025 12:24
IceTDrinker (Member)

@pdroalves there was a question on our end about it: do you really need to do the packing on the GPU? We could imagine changing the storage format from CPU to GPU, as this seems to just add complexity for you with no tangible benefit (apart from lower VRAM usage).

pdroalves (Contributor, Author)

Performance-wise, no. Packing is probably too simple to see any speedup on the GPU (although I haven't measured that).

Locality-wise, yes. We need it to keep the compression result on the GPU. I could move the data back to the CPU, apply the CPU packing, and move it back to the GPU, but that would be much more expensive.
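
A minimal sketch of that trade-off, with purely hypothetical stubs (none of these functions exist in tfhe-rs), just to show where the extra device/host transfers would appear:

```rust
// Purely hypothetical stubs, for illustration only; not the tfhe-rs GPU API.
struct GpuBuffer(Vec<u64>);

fn copy_to_host(buf: &GpuBuffer) -> Vec<u64> { buf.0.clone() }            // device -> host
fn copy_to_device(data: &[u64]) -> GpuBuffer { GpuBuffer(data.to_vec()) } // host -> device
fn cpu_pack(data: &[u64]) -> Vec<u64> { data.to_vec() }                   // stand-in for CPU packing
fn gpu_pack(buf: &GpuBuffer) -> GpuBuffer { GpuBuffer(buf.0.clone()) }    // stand-in for GPU packing

fn main() {
    let lwes_gpu = GpuBuffer(vec![0u64; 8]);

    // Packing on the GPU: the result never leaves the device.
    let _packed_on_gpu = gpu_pack(&lwes_gpu);

    // Packing on the CPU instead: two extra transfers per compression.
    let lwes_cpu = copy_to_host(&lwes_gpu);         // device -> host
    let packed_cpu = cpu_pack(&lwes_cpu);
    let _packed_back = copy_to_device(&packed_cpu); // host -> device
}
```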

IceTDrinker (Member)

Performance-wise, no. Packing is probably too simple to see any speedup on the GPU (although I haven't measured that).

Locality-wise, yes. We need it to keep the compression result on the GPU. I could move the data back to the CPU, apply the CPU packing, and move it back to the GPU, but that would be much more expensive.

No, but I mean the GPU data would never be serialized, and today, when you decompress, you have to unpack; isn't that wasteful?
