
Conversation

@lgeiger (Contributor) commented Nov 18, 2025

Purpose

torch.repeat_interleave is surprisingly slow, so this PR replaces it with NumPy ops. This slightly speeds up time-to-first-token (TTFT), by ~1.5%.

Part of #23884

Before: (screenshot, 2025-11-18 21:40)
After: (screenshot, 2025-11-18 21:38)
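For illustration, a minimal self-contained sketch of the kind of replacement this PR makes (not the vLLM code itself; `grid_thw` is assumed to be an (N, 3) tensor of per-image (t, h, w) patch grids):

```python
import numpy as np
import torch

# Hypothetical (t, h, w) patch grids for two images
grid_thw = torch.tensor([[1, 2, 2], [2, 4, 4]])

# Old path: torch.repeat_interleave on the torch tensor
old = torch.repeat_interleave(grid_thw[:, 1] * grid_thw[:, 2], grid_thw[:, 0]).cumsum(0)
old = torch.cat([old.new_zeros(1), old])

# New path: the same computation with NumPy ops, mirroring the diff below
g = grid_thw.numpy()
new = np.repeat(g[:, 1] * g[:, 2], g[:, 0]).cumsum(axis=0, dtype=np.int32)
new = torch.from_numpy(np.concatenate([np.zeros(1, dtype=np.int32), new]))

assert torch.equal(old.to(torch.int32), new)  # both give [0, 4, 20, 36]
```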

@lgeiger lgeiger requested a review from sighingnow as a code owner November 18, 2025 22:15
@mergify mergify bot added the qwen Related to Qwen models label Nov 18, 2025

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +577 to +581
cu_seqlens = np.repeat(grid_thw[:, 1] * grid_thw[:, 2], grid_thw[:, 0]).cumsum(
axis=0, dtype=np.int32
)
cu_seqlens = np.concatenate([np.zeros(1, dtype=np.int32), cu_seqlens])
cu_seqlens = torch.from_numpy(cu_seqlens)


P1: Preserve cu_seqlens computation inside Torch tracing

The new cu_seqlens path now runs entirely in numpy (np.repeat, np.concatenate, torch.from_numpy), so when this forward is traced/exported the tensor is computed on the concrete sample input and baked into the graph as a constant. Subsequent executions of the traced model with different grid_thw values will therefore reuse the wrong sequence layout, breaking attention masking for any shape that differs from the trace input. The previous Torch-only code (with a torch.jit.is_tracing() dtype guard) kept this computation traceable and input-dependent.
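A minimal sketch of the failure mode the reviewer describes (hypothetical code, not the vLLM model, assuming the forward is exported with torch.jit.trace):

```python
import numpy as np
import torch

def forward(grid_thw: torch.Tensor) -> torch.Tensor:
    # NumPy runs eagerly, so under torch.jit.trace this result is
    # recorded as a constant rather than as ops on the input.
    g = grid_thw.numpy()
    seqlens = np.repeat(g[:, 1] * g[:, 2], g[:, 0]).cumsum(axis=0, dtype=np.int32)
    return torch.from_numpy(np.concatenate([np.zeros(1, dtype=np.int32), seqlens]))

# Tracing emits TracerWarnings about the tensor-to-numpy conversion
traced = torch.jit.trace(forward, (torch.tensor([[1, 2, 2]]),))

# A different grid layout still yields the constant baked in at trace time:
print(traced(torch.tensor([[2, 3, 3]])))  # -> [0, 4], not the correct [0, 9, 18]
```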


@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request replaces torch.repeat_interleave with np.repeat to improve performance in qwen2_vl.py and qwen3_vl.py. The changes are logical and correctly implemented. By performing the cu_seqlens computation on the CPU with NumPy and then moving the tensor to the GPU, the code avoids unnecessary device transfers and leverages faster CPU operations, which should result in the claimed performance gain. The code modifications are clean and I have no concerns. Good work.
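The speed claim can be spot-checked with a rough micro-benchmark (a sketch with made-up shapes; absolute numbers will vary by machine and PyTorch build):

```python
import time
import numpy as np
import torch

grid_thw = torch.randint(1, 32, (256, 3))  # hypothetical batch of (t, h, w) rows

def torch_path(g: torch.Tensor) -> torch.Tensor:
    cu = torch.repeat_interleave(g[:, 1] * g[:, 2], g[:, 0]).cumsum(0)
    return torch.cat([cu.new_zeros(1), cu])

def numpy_path(g: torch.Tensor) -> torch.Tensor:
    a = g.numpy()
    cu = np.repeat(a[:, 1] * a[:, 2], a[:, 0]).cumsum(axis=0)
    return torch.from_numpy(np.concatenate([np.zeros(1, dtype=cu.dtype), cu]))

# Both paths must agree before timing them
assert torch.equal(torch_path(grid_thw), numpy_path(grid_thw))

for name, fn in [("torch", torch_path), ("numpy", numpy_path)]:
    start = time.perf_counter()
    for _ in range(1000):
        fn(grid_thw)
    print(f"{name}: {time.perf_counter() - start:.4f}s")
```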

@DarkLight1337 (Member) left a comment


Thanks!

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) November 19, 2025 06:42
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Nov 19, 2025