
[PT2] FBC #3258

Merged: 14 commits into openvinotoolkit:develop on Feb 13, 2025

Conversation

AlexanderDokuchaev (Collaborator) commented Feb 6, 2025

Changes

  • Implemented FBC for experimental tracing
  • Save the graph in GraphModelWrapper, so the graph can be used the same way as for NNCFNetwork
  • Add ConstantLayerAttributes to constant nodes
  • Check is_experimental_torch_tracing_enabled inside patch_torch_operators, to support [optimum-intel](https://github.com/huggingface/optimum-intel/blob/f601b8b1fda4477ed9f9e4293cf3c9d5cec4ad1b/optimum/intel/openvino/__init__.py#L49)

Related tickets

152996
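
The GraphModelWrapper idea from the changes above can be sketched like this. The class body below is an assumption (NNCF's real wrapper holds a traced torch model and its graph); the point is only that algorithms query the stored graph the same way they would query an NNCFNetwork.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical sketch, not NNCF's actual class: pair the eager model with its
# captured graph so downstream algorithms have one object to query.
@dataclass
class GraphModelWrapper:
    model: Any  # the original torch.nn.Module (typed Any to stay torch-free here)
    graph: Any  # the captured graph representation

    def get_graph(self) -> Any:
        return self.graph
```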

@github-actions bot added the NNCF PT, NNCF Common, experimental, and NNCF PTQ labels on Feb 6, 2025
@AlexanderDokuchaev AlexanderDokuchaev marked this pull request as ready for review February 6, 2025 20:03
@AlexanderDokuchaev AlexanderDokuchaev requested a review from a team as a code owner February 6, 2025 20:03
weight_port_ids = [3]
bias_port_id = 4

if is_experimental_torch_tracing_enabled():
Contributor commented:
I'm confused that the same PyTorch function has a different signature depending on the tracing method. Maybe this is a different function, and you should reuse the approach from #3237.

AlexanderDokuchaev (Collaborator, Author) replied:

The patched path traces only torch.nn.functional.batch_norm (which internally calls torch.batch_norm, which has a different signature). This works only because most models use batch norm as the nn.BatchNorm2d module, which calls torch.nn.functional.batch_norm.

In experimental tracing, only torch.batch_norm is traced.
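
The signature difference being discussed can be made concrete. `torch.nn.functional.batch_norm(input, running_mean, running_var, weight, bias, ...)` has weight at positional index 3 and bias at index 4 (matching the `weight_port_ids = [3]` / `bias_port_id = 4` snippet in this thread), while the low-level `torch.batch_norm(input, weight, bias, running_mean, running_var, ...)` has them at indices 1 and 2. The helper below is a hypothetical sketch of the port selection, not NNCF's actual code:

```python
# Hypothetical sketch: weight/bias "ports" (positional argument indices) for
# the batch_norm op, depending on which function the tracer captures.
#
# torch.nn.functional.batch_norm(input, running_mean, running_var, weight, bias, ...)
#   -> weight at 3, bias at 4 (patched / legacy tracing)
# torch.batch_norm(input, weight, bias, running_mean, running_var, ...)
#   -> weight at 1, bias at 2 (experimental tracing)
def batch_norm_ports(experimental_tracing: bool) -> tuple:
    """Return (weight_port_ids, bias_port_id) for the traced batch_norm op."""
    if experimental_tracing:
        return [1], 2  # torch.batch_norm argument layout
    return [3], 4      # torch.nn.functional.batch_norm argument layout
```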

Contributor commented:

@anzr299, could you handle this case in your PR?

Collaborator replied:

Yes, I can handle this case in my PR

AlexanderDokuchaev (Collaborator, Author) commented Feb 9, 2025

| Model | Metric value | Compr. time | Bias correction time | RAM, MiB |
|---|---|---|---|---|
| torchvision/resnet18 | 0.69152 | 0:00:19 | 0:00:00 | 1384 |
| timm/wide_resnet50_2 | 0.81164 | 0:01:01 | 0:00:01 | 2703 |
| timm/vgg11 | 0.68754 | 0:00:53 | 0:00:00 | 4072 |
| timm/tf_inception_v3 | 0.77624 | 0:01:27 | 0:00:01 | 2061 |
| timm/resnest14d | 0.74876 | 0:00:28 | 0:00:00 | 1447 |
| timm/regnetx_002 | 0.68476 | 0:00:35 | 0:00:00 | 1314 |
| timm/mobilenetv3_small_050 | 0.4261 | 0:00:37 | 0:00:00 | 1252 |
| timm/mobilenetv2_050 | 0.6528 | 0:00:42 | 0:00:00 | 1286 |
| timm/inception_resnet_v2 | 0.803 | 0:03:57 | 0:00:04 | 3399 |
| timm/hrnet_w18 | 0.7725 | 0:04:46 | 0:00:01 | 2867 |
| timm/efficientnet_lite0 | 0.75186 | 0:00:42 | 0:00:01 | 1393 |
| timm/efficientnet_b0 | 0.7711 | 0:01:02 | 0:00:00 | 1511 |
| timm/dpn68 | 0.77158 | 0:01:09 | 0:00:00 | 1679 |
  1. Found a bug in the batch_norm extractor on the develop branch (gradients need to be disabled); the fix will be in another PR.
  2. dla34: the quantization solver runs endlessly; tracing of the concat layer needs to be aligned. The fix will be in another PR.

alexsu52 (Contributor) left a comment:

LGTM


@alexsu52 alexsu52 merged commit aa3cc95 into openvinotoolkit:develop Feb 13, 2025
17 checks passed
shumaari pushed a commit to shumaari/nncf that referenced this pull request Feb 17, 2025