
Add option to print inner graphs in debugprint function #1293

Open · wants to merge 4 commits into main from print_inner_graphs_1285
Conversation

Aarsh-Wankar (Contributor) commented Mar 13, 2025

I added a print_inner_graphs option to control whether the debugprint function prints inner graphs.

Description

A boolean argument print_inner_graphs, defaulting to True, is added to the debugprint function. Set it to False to suppress printing of the inner graphs. For example:

import pytensor
import pytensor.tensor as pt
from pytensor.compile.mode import get_default_mode

n = pt.iscalar("n")
x0 = pt.vector("x0")
xs, _ = pytensor.scan(lambda xtm1: xtm1 + 1, outputs_info=[x0], n_steps=n)

mode = get_default_mode().including("scan_save_mem")
fn = pytensor.function([n, x0], xs, mode=mode, on_unused_input="ignore")
fn.dprint()
# Output:
  Subtensor{start:stop} [id A] 9
   ├─ Scan{scan_fn, while_loop=False, inplace=all} [id B] 8
   │  ├─ n [id C]
   │  └─ SetSubtensor{:stop} [id D] 7
   │     ├─ AllocEmpty{dtype='float64'} [id E] 6
   │     │  ├─ Composite{...}.2 [id F] 0
   │     │  │  └─ n [id C]
   │     │  └─ Shape_i{0} [id G] 5
   │     │     └─ x0 [id H]
   │     ├─ Unbroadcast{0} [id I] 4
   │     │  └─ ExpandDims{axis=0} [id J] 3
   │     │     └─ x0 [id H]
   │     └─ 1 [id K]
   ├─ ScalarFromTensor [id L] 2
   │  └─ Composite{...}.1 [id F] 0
   │     └─ ···
   └─ ScalarFromTensor [id M] 1
      └─ Composite{...}.0 [id F] 0
         └─ ···
  
  Inner graphs:
  
  Scan{scan_fn, while_loop=False, inplace=all} [id B]
   ← Add [id N]
      ├─ [1.] [id O]
  ...
      │     ├─ maximum [id X] 't8'
      │     │  └─ ···
      │     └─ 1 [id W]
      └─ 1 [id U]
fn.dprint(print_inner_graphs=False)
# Output:
Subtensor{start:stop} [id A] 9
 ├─ Scan{scan_fn, while_loop=False, inplace=all} [id B] 8
 │  ├─ n [id C]
 │  └─ SetSubtensor{:stop} [id D] 7
 │     ├─ AllocEmpty{dtype='float64'} [id E] 6
 │     │  ├─ Composite{...}.2 [id F] 0
 │     │  │  └─ n [id C]
 │     │  └─ Shape_i{0} [id G] 5
 │     │     └─ x0 [id H]
 │     ├─ Unbroadcast{0} [id I] 4
 │     │  └─ ExpandDims{axis=0} [id J] 3
 │     │     └─ x0 [id H]
 │     └─ 1 [id K]
 ├─ ScalarFromTensor [id L] 2
 │  └─ Composite{...}.1 [id F] 0
 │     └─ ···
 └─ ScalarFromTensor [id M] 1
    └─ Composite{...}.0 [id F] 0
       └─ ···

Related Issue

Checklist

Type of change

  • New feature / enhancement
  • Bug fix
  • Documentation
  • Maintenance
  • Other (please specify):

📚 Documentation preview 📚: https://pytensor--1293.org.readthedocs.build/en/1293/

codecov bot commented Mar 13, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.03%. Comparing base (c822a8e) to head (04ebe0a).

Additional details and impacted files


@@           Coverage Diff           @@
##             main    #1293   +/-   ##
=======================================
  Coverage   82.03%   82.03%           
=======================================
  Files         188      188           
  Lines       48567    48567           
  Branches     8675     8675           
=======================================
  Hits        39841    39841           
  Misses       6574     6574           
  Partials     2152     2152           
Files with missing lines Coverage Δ
pytensor/printing.py 51.43% <100.00%> (ø)
@@ -322,7 +325,7 @@ def debugprint(
print_view_map=print_view_map,
)

if len(inner_graph_vars) > 0:
if len(inner_graph_vars) > 0 and print_inner_graphs:
Member:
There's some logic above that collects inner_graph_vars; can we avoid doing that work when print_inner_graphs=False?

Also, I wonder if we could instead specify the "depth" of the inner graphs we are interested in, like the existing depth argument for the depth of the graph. So if inner_graphs_depth=-1 we get the default behavior (show all inner graphs); if 0, we don't show any; and if 1 we show one level of inner graphs, but not inner graphs inside other inner graphs (and so on).

Can you spot if that would be feasible?
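The proposed semantics can be sketched with a small, self-contained toy printer. Everything below (the Node class, the dprint helper, the example tree) is a hypothetical illustration, not pytensor's actual implementation; the real debugprint traverses apply nodes, but the counter logic would be analogous:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """Toy stand-in for an apply node; `inner` holds an inner graph's outputs."""
    name: str
    children: list["Node"] = field(default_factory=list)
    inner: list["Node"] = field(default_factory=list)


def dprint(node: Node, inner_depth: int = -1, indent: int = 0) -> list[str]:
    """Render a node tree; inner_depth counts inner-graph levels to enter.

    -1 means unlimited, 0 skips inner graphs entirely, 1 enters one level, etc.
    """
    lines = [" " * indent + node.name]
    for child in node.children:
        lines.extend(dprint(child, inner_depth, indent + 2))
    if node.inner and inner_depth != 0:
        # Only decrement when actually crossing into an inner graph.
        next_depth = inner_depth if inner_depth == -1 else inner_depth - 1
        lines.append(" " * indent + "Inner graph:")
        for out in node.inner:
            lines.extend(dprint(out, next_depth, indent + 2))
    return lines


# A Scan-like node whose inner graph itself contains another inner graph.
scan = Node("Scan", inner=[Node("Add", inner=[Node("Mul")])])
root = Node("Subtensor", children=[scan])

print("\n".join(dprint(root, inner_depth=1)))
```

With inner_depth=1 the Add inside Scan is shown, but the Mul nested one level deeper is not; -1 would show both and 0 neither.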

Aarsh-Wankar (Contributor, Author):
I have added an extra argument inner_depth to the debugprint function, which controls how deeply nested inner graphs are printed. For example:

import pytensor
import pytensor.tensor as pt
from pytensor.compile.mode import get_default_mode

n = pt.iscalar("n")
x0 = pt.vector("x0")
xs, _ = pytensor.scan(lambda xtm1: xtm1 + 1, outputs_info=[x0], n_steps=n)

mode = get_default_mode().including("scan_save_mem")
fn = pytensor.function([n, x0], xs, mode=mode, on_unused_input="ignore")
fn.dprint(inner_depth=1)

# Subtensor{start:stop} [id A] 9
#  ├─ Scan{scan_fn, while_loop=False, inplace=all} [id B] 8
#  │  ├─ n [id C]
#  │  └─ SetSubtensor{:stop} [id D] 7
#  │     ├─ AllocEmpty{dtype='float64'} [id E] 6
#  │     │  ├─ Composite{...}.2 [id F] 0
#  │     │  │  └─ n [id C]
#  │     │  └─ Shape_i{0} [id G] 5
#  │     │     └─ x0 [id H]
#  │     ├─ Unbroadcast{0} [id I] 4
#  │     │  └─ ExpandDims{axis=0} [id J] 3
#  │     │     └─ x0 [id H]
#  │     └─ 1 [id K]
#  ├─ ScalarFromTensor [id L] 2
#  │  └─ Composite{...}.1 [id F] 0
#  │     └─ ···
#  └─ ScalarFromTensor [id M] 1
#     └─ Composite{...}.0 [id F] 0
#        └─ ···

# Inner graphs:

# Scan{scan_fn, while_loop=False, inplace=all} [id B]
#  ← Add [id N]

# Composite{...} [id F]
#  ← add [id O] 'o0'
#  ← add [id P] 'o1'
#  ← add [id Q] 'o2'

fn.dprint(inner_depth=2)

# ...
# Inner graphs:

# Scan{scan_fn, while_loop=False, inplace=all} [id B]
#  ← Add [id N]
#     ├─ [1.] [id O]
#     └─ *0-<Vector(float64, shape=(?,))> [id P] -> [id D]

# Composite{...} [id F]
#  ← add [id Q] 'o0'
#     ├─ sub [id R]
#     └─ maximum [id S] 't13'
#  ← add [id T] 'o1'
#     ├─ sub [id U]
#     └─ maximum [id S] 't13'
#        └─ ···
#  ← add [id V] 'o2'
#     ├─ Switch [id W]
#     └─ 1 [id X]

Are these changes fine?

Aarsh-Wankar force-pushed the print_inner_graphs_1285 branch from 86dec25 to c7d0a54 on March 14, 2025, 10:47.
@@ -374,7 +383,7 @@ def debugprint(
_debugprint(
ig_var,
prefix=prefix,
depth=depth,
depth=inner_depth,
Member:
depth and inner_depth are not the same; depth still makes sense alongside inner_depth. inner_depth tells how many inner graphs to step into, while depth tells how many ops in a graph (or inner graph) to step into.
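One way to see the distinction is a toy walker with two independent counters. This is again a hypothetical sketch (walk and the dict-based tree are made up for illustration, not pytensor's real code): depth decrements on every step within a graph, while inner_depth decrements only when crossing into an inner graph.

```python
def walk(node, depth=-1, inner_depth=-1, indent=0, lines=None):
    """Render a nested-dict tree.

    depth limits recursion within one graph; inner_depth limits how many
    nested inner graphs are entered. Both use -1 for 'unlimited'.
    """
    if lines is None:
        lines = []
    if depth == 0:
        return lines
    lines.append(" " * indent + node["name"])
    next_depth = depth if depth == -1 else depth - 1
    for child in node.get("children", []):
        # Stepping to a child op consumes depth, not inner_depth.
        walk(child, next_depth, inner_depth, indent + 2, lines)
    if node.get("inner") and inner_depth != 0:
        next_inner = inner_depth if inner_depth == -1 else inner_depth - 1
        for out in node["inner"]:
            # Crossing into an inner graph consumes inner_depth; here depth
            # is reset to unlimited for the inner graph (one possible design
            # choice, since each inner graph is printed as its own tree).
            walk(out, depth=-1, inner_depth=next_inner,
                 indent=indent + 2, lines=lines)
    return lines


tree = {
    "name": "A",
    "children": [{"name": "B", "children": [{"name": "C"}]}],
    "inner": [{"name": "I1", "inner": [{"name": "I2"}]}],
}
print("\n".join(walk(tree, depth=2, inner_depth=1)))
```

With depth=2 the grandchild C is cut off inside the outer graph, while inner_depth=1 independently admits I1 but not the doubly nested I2.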

Successfully merging this pull request may close these issues:

Add option not to print inner graphs in debug_print