
Commit 677cb0d

xmfan authored and facebook-github-bot committed
expose option to collect sizes as dynamic (#141153) (#2594)
Summary:
Pull Request resolved: #2594

This is to address recompiles from eager nodes that saved dynamic activations.

X-link: pytorch/pytorch#141153
Approved by: https://github.com/jansel
ghstack dependencies: #141152

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/db4e8a1d8a888bc3c407c13ba227f3f859349be3

Reviewed By: izaitsevfb
Differential Revision: D66384254
Pulled By: xmfan
fbshipit-source-id: 89d439353219252c9f5be0b44cf67957092860e3
1 parent 87f72eb commit 677cb0d

File tree

1 file changed (+1, -1 lines)


torchrec/distributed/train_pipeline/train_pipelines.py

Lines changed: 1 addition & 1 deletion
@@ -1624,7 +1624,7 @@ def get_compiled_autograd_ctx(
             self.initialized = True
             return contextlib.nullcontext()

-        return torch._dynamo.compiled_autograd.enable(
+        return torch._dynamo.compiled_autograd._enable(
            # pyre-ignore
            torch.compile(**self.compiled_autograd_options)
        )
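For context, `get_compiled_autograd_ctx` follows a common pattern: return `contextlib.nullcontext()` when the feature is off so callers can always use `with`, and return the real enabling context manager otherwise. A minimal, self-contained sketch of that dispatch pattern (the `Pipeline` class, `compiled_autograd` flag, and `_enable` stand-in below are hypothetical simplifications; the real method wraps `torch._dynamo.compiled_autograd._enable`):

```python
import contextlib


# Hypothetical stand-in for torch._dynamo.compiled_autograd._enable;
# the real one enables compiled autograd with the given compiler_fn.
@contextlib.contextmanager
def _enable(compiler_fn):
    yield compiler_fn


class Pipeline:
    def __init__(self, compiled_autograd: bool = False) -> None:
        self.compiled_autograd = compiled_autograd

    def get_compiled_autograd_ctx(self):
        # When the feature is disabled, return a no-op context manager
        # so the caller can use `with` unconditionally.
        if not self.compiled_autograd:
            return contextlib.nullcontext()
        return _enable(lambda fn: fn)


# Either way, the caller's code is identical:
with Pipeline().get_compiled_autograd_ctx():
    pass  # backward pass would run here, uncompiled
with Pipeline(compiled_autograd=True).get_compiled_autograd_ctx():
    pass  # backward pass would run here, with compiled autograd enabled
```

The benefit of the `nullcontext` fallback is that the training loop needs no `if` branches: it always enters the returned context, and only the enabled path pays any cost.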
