✨[Feature] Debugging Partitioner #4100

@narendasan

Description

Is your feature request related to a problem? Please describe.

We want to be able to automatically narrow down where, inside a graph, operators start to diverge numerically from PyTorch.

Describe the solution you'd like

We should have a debugger mode which re-runs compilation a number of times, modifying how the graph gets partitioned. Using something like HLOs in PyTorch, we can keep the PyTorch source and the TensorRT subgraphs paired so that we can check intermediate subgraph outputs.
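As a rough illustration of the idea (this is a hypothetical sketch, not the Torch-TensorRT API): run the reference path and the accelerated path side by side, one partition at a time, and report the first partition whose intermediate output diverges. Plain Python callables stand in for subgraphs here; a real implementation would re-partition and re-compile the FX graph on each probe.

```python
def find_first_divergence(ref_ops, acc_ops, x, tol=1e-6):
    """Run both pipelines op by op and return the index of the first op
    whose intermediate output differs by more than `tol`, or None."""
    ref_val, acc_val = x, x
    for i, (ref_op, acc_op) in enumerate(zip(ref_ops, acc_ops)):
        ref_val = ref_op(ref_val)
        acc_val = acc_op(acc_val)
        if abs(ref_val - acc_val) > tol:
            return i  # first op where the accelerated path diverges
    return None

# Reference pipeline and an "accelerated" pipeline where op 2 is slightly off.
ref_ops = [lambda v: v + 1.0, lambda v: v * 2.0, lambda v: v - 0.5]
acc_ops = [lambda v: v + 1.0, lambda v: v * 2.0, lambda v: v - 0.5 + 1e-3]

print(find_first_divergence(ref_ops, acc_ops, 1.0))  # -> 2
```

In the real tool, each "op" would be a partitioned subgraph executed once with the PyTorch interpreter and once through the compiled TensorRT engine, with a tensor-aware closeness check instead of a scalar comparison.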

Describe alternatives you've considered

Using something like debug tensors, we could also output intermediate ops, match them to edges in the FX graph, and instrument the compiled graph to check intermediate tensor accuracy.
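The instrumentation alternative could look roughly like the following hypothetical sketch: wrap each node so that its intermediate output is recorded under the node's name, mimicking debug tensors attached to edges of the graph (all names here are illustrative, not an existing API).

```python
def instrument(named_ops):
    """Wrap each (name, op) pair so every intermediate value is logged
    into `trace` keyed by node name, like a debug tensor on that edge."""
    trace = {}

    def wrap(name, op):
        def wrapped(v):
            out = op(v)
            trace[name] = out  # record the value flowing along this edge
            return out
        return wrapped

    return [wrap(name, op) for name, op in named_ops], trace

# Toy two-node "graph"; real nodes would be FX graph nodes.
ops = [("add", lambda v: v + 1.0), ("mul", lambda v: v * 3.0)]
wrapped, trace = instrument(ops)

v = 2.0
for op in wrapped:
    v = op(v)
print(trace)  # -> {'add': 3.0, 'mul': 9.0}
```

The recorded trace from the compiled graph could then be diffed against the same trace from eager PyTorch to localize accuracy drift per edge.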

Additional context
