feat: Enable bfloat16 input/output tensor dtype #335

Open
yinggeh wants to merge 1 commit into main from
yinggeh/tgh-26-onnx-backend-does-not-support-bfloat16-inputs

Conversation

@yinggeh (Contributor) commented Feb 14, 2026

No description provided.

yinggeh self-assigned this Feb 14, 2026
yinggeh added the enhancement (New feature or request) label Feb 14, 2026
yinggeh changed the title from "feat: Enable bfloat input/output tensor dtype" to "feat: Enable bfloat16 input/output tensor dtype" Feb 14, 2026

Copilot AI left a comment


Pull request overview

This PR updates the ONNX Runtime backend's datatype conversion utilities to support bfloat16 (BF16) tensors end-to-end, so that BF16 is recognized and translated between the Triton, ONNX, and model-config datatype representations.

Changes:

  • Add ONNX BFLOAT16 ↔ Triton BF16 mappings in the datatype conversion helpers (see the sketch after this list).
  • Extend the model-config datatype string conversions to accept and emit TYPE_BF16.
  • Update the copyright year range.
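
For context, here is a minimal sketch of what such mappings typically look like in a Triton backend utility file. The function names (`ConvertFromOnnxDataType`, `ConvertToOnnxDataType`, `ModelConfigDataTypeToOnnxDataType`, `OnnxDataTypeToModelConfigDataType`) and the surrounding structure are assumptions modeled on common Triton backend conventions, not taken from this PR's diff; only the new BF16 cases are shown.

```cpp
#include <string>
#include <utility>

#include "onnxruntime_c_api.h"         // ONNXTensorElementDataType
#include "triton/core/tritonserver.h"  // TRITONSERVER_DataType

// ONNX -> Triton: map BFLOAT16 elements to Triton's BF16 dtype instead of
// falling through to the invalid/unsupported default.
TRITONSERVER_DataType
ConvertFromOnnxDataType(ONNXTensorElementDataType onnx_type)
{
  switch (onnx_type) {
    // ... existing FP32/FP16/INT8/etc. cases elided ...
    case ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16:
      return TRITONSERVER_TYPE_BF16;
    default:
      return TRITONSERVER_TYPE_INVALID;
  }
}

// Triton -> ONNX: the reverse mapping.
ONNXTensorElementDataType
ConvertToOnnxDataType(TRITONSERVER_DataType data_type)
{
  switch (data_type) {
    // ... existing cases elided ...
    case TRITONSERVER_TYPE_BF16:
      return ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16;
    default:
      return ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED;
  }
}

// Model-config string -> ONNX: accept "TYPE_BF16" from config.pbtxt.
std::pair<bool, ONNXTensorElementDataType>
ModelConfigDataTypeToOnnxDataType(const std::string& data_type_str)
{
  if (data_type_str == "TYPE_BF16") {
    return {true, ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16};
  }
  // ... existing string comparisons elided ...
  return {false, ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED};
}

// ONNX -> model-config string: emit "TYPE_BF16" for BFLOAT16 tensors.
std::pair<bool, std::string>
OnnxDataTypeToModelConfigDataType(ONNXTensorElementDataType data_type)
{
  if (data_type == ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16) {
    return {true, "TYPE_BF16"};
  }
  // ... existing cases elided ...
  return {false, "TYPE_INVALID"};
}
```

Assuming mappings along these lines, a model with a BF16 tensor could declare `data_type: TYPE_BF16` in its config.pbtxt and have the backend bind it to an ONNX `BFLOAT16` element type instead of rejecting it as unsupported.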


Labels

enhancement New feature or request

Projects

None yet

Development


2 participants