
Commit f3dd556

wanchaol authored and facebook-github-bot committed
fix test_jit canonicalize_tensor_iterator
Summary: Pull Request resolved: pytorch#17104
Differential Revision: D14089928
Pulled By: wanchaol
fbshipit-source-id: 8b288514ab9ee8d24a11d39b75eef95783f28f20
1 parent 65e06df commit f3dd556

File tree

1 file changed: +3 −3 lines changed


test/test_jit.py (+3 −3)
@@ -886,9 +886,9 @@ def f(x):
         traced = torch.jit.trace(f, (x,))
         f(x)
         graph = traced.graph_for(x)
-        # There should be 4 int constants for the right sides of operators, plus two
-        # for alpha arguments for add and sub
-        self.assertTrue(str(traced.graph_for(x)).count(': int = prim::Constant'), 6)
+        # There should be 4 int constants for the right sides of operators, plus one
+        # for the alpha argument for add and sub
+        self.assertTrue(str(traced.graph_for(x)).count(': int = prim::Constant') == 5)
 
         # TODO: adapt this test to check that GraphExecutor treats them differently
         @unittest.skip("Need to be adjusted to Graph Executor")
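The shape of the fix is worth noting: `unittest`'s `assertTrue(expr, msg)` treats its second argument as a failure *message*, not an expected value, so the old `assertTrue(count, 6)` passed for any nonzero count. The corrected line builds the comparison (`count == 5`) before asserting. A minimal sketch of the counting the fixed line performs; the graph dump below is hypothetical, not taken from this test:

```python
# Hypothetical TorchScript-style graph dump: four int constants for the
# right-hand sides of four operators, plus one shared alpha constant.
graph_str = """
%1 : int = prim::Constant[value=2]()
%2 : int = prim::Constant[value=3]()
%3 : int = prim::Constant[value=4]()
%4 : int = prim::Constant[value=5]()
%5 : int = prim::Constant[value=1]()
%6 : Float(3, 4) = aten::add(%0, %1, %5)
"""

# Same substring count the fixed assertion uses.
count = graph_str.count(': int = prim::Constant')

# The buggy form: the 6 here is silently treated as a failure message,
# so this "passes" for any nonzero count.
assert count, 6

# The fixed form: an explicit comparison that can actually fail.
assert count == 5
```

The lesson generalizes: prefer `assertEqual(a, b)` over `assertTrue(a == b)` where possible, since it also produces a more informative failure message.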
