
Fix semi-sync training with 1GPU per FT replica #1505

Open
wants to merge 1 commit into base: main

Conversation

bentherien

When using semi-sync training with a single GPU per replica, no mesh is created in ParallelDims, causing an error when all-reducing the loss.

The slicing error occurs on this line:

dist_utils.dist_mean(loss, parallel_dims.world_mesh["dp_cp"], ft_pg),

This PR prevents the error by creating a default mesh when no parallelism is used.
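
A minimal sketch of the idea, assuming PyTorch's public init_device_mesh API; the helper name build_world_mesh and the dp_degree/cp_degree arguments are illustrative, not the actual ParallelDims code from this PR:

```python
from torch.distributed.device_mesh import init_device_mesh

def build_world_mesh(device_type: str, dp_degree: int, cp_degree: int):
    """Sketch of a fallback mesh for the no-parallelism case (hypothetical helper)."""
    if dp_degree == 1 and cp_degree == 1:
        # Single GPU per FT replica: no parallelism dims are active, so
        # build a trivial 1-D mesh whose only dim is named "dp_cp".
        # Slicing world_mesh["dp_cp"] then resolves instead of erroring.
        return init_device_mesh(device_type, (1,), mesh_dim_names=("dp_cp",))
    # Otherwise build the usual 2-D mesh (the real code also flattens the
    # dp and cp dims into a combined "dp_cp" dim; omitted here).
    return init_device_mesh(
        device_type, (dp_degree, cp_degree), mesh_dim_names=("dp", "cp")
    )
```

With a fallback like this, the dist_mean call above works unchanged in the single-GPU case, since world_mesh["dp_cp"] resolves to a one-rank mesh.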

@meta-cla bot added the CLA Signed label (managed by the Meta Open Source bot) on Jul 31, 2025.