
Conversation

Contributor

@lilinsiman lilinsiman commented Jan 7, 2026

What this PR does / why we need it?

The condition that decides whether to pad in full-graph mode combined with MTP and PCP has been modified to handle the corner case where the graph capture sizes are specified manually.

Does this PR introduce any user-facing change?

no

How was this patch tested?

Unit tests and existing CI tests.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a bug in the padding logic for full graph mode, specifically in scenarios involving MTP (Multi-Token Prediction) and PCP (Prefill Context Parallelism). The modification correctly constrains the padding condition by ensuring that the number of input tokens does not exceed the maximum size of a captured graph. By using min(max_decode_tokens, self.cudagraph_batch_sizes[-1]), the change prevents the padding logic from being erroneously applied to batches that are too large for graph replay and would fall back to eager execution. This is a solid bug fix that enhances the robustness of the full graph execution path, particularly for cases with manually configured graph capture sizes.
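
To make the effect of the clamp concrete, here is a minimal runnable sketch mirroring the condition in the diff below. The concrete numbers (max_num_seqs, uniform_decode_query_len, cudagraph_batch_sizes) are assumptions chosen to expose the corner case, not values taken from this PR.

```python
# Minimal sketch of the clamped padding check. All values below are
# hypothetical and only illustrate the corner case where the graph capture
# sizes are specified manually and are smaller than max_decode_tokens.
max_num_seqs = 256
uniform_decode_query_len = 2                 # e.g. decode plus one MTP speculative token
cudagraph_batch_sizes = [8, 16, 32, 120]     # manually specified capture sizes

max_decode_tokens = max_num_seqs * uniform_decode_query_len  # 512

def should_pad(num_input_tokens: int, uniform_decode: bool) -> bool:
    # Old check used max_decode_tokens (512) as the upper bound, so batches of
    # 121..512 tokens were padded even though no captured graph can replay them.
    # New check clamps the bound to the largest captured size (120).
    upper = min(max_decode_tokens, cudagraph_batch_sizes[-1])
    return uniform_decode and uniform_decode_query_len <= num_input_tokens <= upper

print(should_pad(300, uniform_decode=True))  # False: falls back to eager, no padding
print(should_pad(100, uniform_decode=True))  # True: padded up to a captured size
```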

Comment on lines 966 to 968
max_decode_tokens = self.scheduler_config.max_num_seqs * self.uniform_decode_query_len
if self.compilation_config.cudagraph_mode.decode_mode() == CUDAGraphMode.FULL and \
-        uniform_decode and self.uniform_decode_query_len <= num_input_tokens <= max_decode_tokens:
+        uniform_decode and self.uniform_decode_query_len <= num_input_tokens <= min(max_decode_tokens, self.cudagraph_batch_sizes[-1]):
Collaborator


I suggest

max_decode_tokens = min(self.scheduler_config.max_num_seqs * self.uniform_decode_query_len, self.cudagraph_batch_sizes[-1])

It's more reasonable.
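
For what it's worth, here is a small self-contained check (reusing the assumed values from the earlier sketch, not values from this PR) showing that folding min() into max_decode_tokens, as suggested, is a pure readability refactor that behaves identically to clamping inside the comparison:

```python
# Hypothetical values reused from the earlier sketch; not taken from the PR.
max_num_seqs = 256
uniform_decode_query_len = 2
cudagraph_batch_sizes = [8, 16, 32, 120]

for num_input_tokens in range(0, 600):
    # Original diff: clamp inside the range check.
    inline = uniform_decode_query_len <= num_input_tokens <= min(
        max_num_seqs * uniform_decode_query_len, cudagraph_batch_sizes[-1])
    # Suggested form: fold the clamp into max_decode_tokens up front.
    max_decode_tokens = min(max_num_seqs * uniform_decode_query_len,
                            cudagraph_batch_sizes[-1])
    folded = uniform_decode_query_len <= num_input_tokens <= max_decode_tokens
    assert inline == folded

print("Both formulations agree for all tested token counts.")
```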

Contributor Author


done


github-actions bot commented Jan 7, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@yiz-liu added the ready (read for review) and ready-for-test (start test by label for PR) labels and removed the ready-for-test label on Jan 8, 2026
@lilinsiman force-pushed the bugfix branch 8 times, most recently from 5396049 to 103d8c4 on January 9, 2026 at 08:41
Signed-off-by: lilinsiman <[email protected]>