Commit 0813ec0

Merge pull request #158 from brandonwillard/remove-fgraph-property

    Remove Variable.fgraph and Variable.clients properties

2 parents: 48ac71e + 825b307

106 files changed: +1728 / -1891 lines. (This is a large commit; only a subset of the changed files is shown below.)

doc/extending/cop.txt (+5, -5)

@@ -177,13 +177,13 @@ There are less methods to define for an Op than for a Type:
 .. method:: c_cleanup_code_struct(node, name)

 Allows you to specify code that will be inserted in the struct
-destructor of the Op. This is for cleaninp up allocations and
+destructor of the `Op`. This is for cleaninp up allocations and
 stuff like this when the thunk is released (when you "free" a
 compiled function using this op).

-.. method:: infer_shape(node, (i0_shapes,i1_shapes,...))
+.. method:: infer_shape(fgraph, node, (i0_shapes,i1_shapes,...))

-Allow optimizations to lift the Shape op over this op. An
+Allow optimizations to lift the `Shape` `Op` over this `Op`. An
 example of why this is good is when we only need the shape of a
 variable: we will be able to obtain it without computing the
 variable itself.
@@ -192,8 +192,8 @@ There are less methods to define for an Op than for a Type:
 the shape of one output.

 For example, for the matrix-matrix product ``infer_shape`` will
-have as inputs (node, ((x0,x1), (y0,y1))) and should return
-[(x0, y1)]. Both the inputs and the return value may be Theano
+have as inputs ``(fgraph, node, ((x0,x1), (y0,y1)))`` and should return
+``[(x0, y1)]``. Both the inputs and the return value may be Theano
 variables.

 .. method:: c_code_cache_version()

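For reference, a minimal sketch of an Op that follows the updated documentation above. The Op itself (a hypothetical matrix-product-like Op) is purely illustrative and not part of this commit; the point is the new ``infer_shape(self, fgraph, node, input_shapes)`` signature.

    import numpy as np
    import theano.tensor as tt
    from theano.gof import Apply, Op


    class MatMulLike(Op):
        """Hypothetical Op, shown only to illustrate the new
        ``infer_shape(self, fgraph, node, input_shapes)`` signature."""

        __props__ = ()

        def make_node(self, x, y):
            x = tt.as_tensor_variable(x)
            y = tt.as_tensor_variable(y)
            return Apply(self, [x, y], [tt.matrix(dtype=x.dtype)])

        def perform(self, node, inputs, output_storage):
            x, y = inputs
            output_storage[0][0] = np.dot(x, y)

        def infer_shape(self, fgraph, node, input_shapes):
            # Called as (fgraph, node, ((x0, x1), (y0, y1))) and returns
            # [(x0, y1)]: rows of the first input, columns of the second.
            (x0, x1), (y0, y1) = input_shapes
            return [(x0, y1)]
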
doc/extending/extending_theano.txt (+11, -10)

@@ -114,7 +114,7 @@ possibilities you may encounter or need. For that refer to
 def R_op(self, inputs, eval_points):
 pass

-def infer_shape(node, input_shapes):
+def infer_shape(self, fgraph, node, input_shapes):
 pass

 An op has to implement some methods defined in the the interface of
@@ -237,9 +237,9 @@ There are other methods that can be optionally defined by the op:
 :attr:`__props__` will also generate a suitable :func:`__str__` for your op.
 This requires development version after September 1st, 2014 or version 0.7.

-The :func:`infer_shape` method allows to infer the shape of the op
-output variables, without actually computing the outputs.
-It takes as input ``node``, a reference to the op Apply node,
+The :func:`infer_shape` method allows an `Op` to infer the shape of its
+output variables without actually computing them.
+It takes as input ``fgraph``, a `FunctionGraph`; ``node``, a reference to the op Apply node;
 and a list of Theano symbolic Varables (``i0_shape``, ``i1_shape``, ...)
 which are the shape of the op input Variables.
 :func:`infer_shape` returns a list where each element is a tuple representing
@@ -302,7 +302,7 @@ Example: Op definition
 z = output_storage[0]
 z[0] = x * 2

-def infer_shape(self, node, i0_shapes):
+def infer_shape(self, fgraph, node, i0_shapes):
 return i0_shapes

 def grad(self, inputs, output_grads):
@@ -333,7 +333,7 @@ Example: Op definition
 z = output_storage[0]
 z[0] = x * 2

-def infer_shape(self, node, i0_shapes):
+def infer_shape(self, fgraph, node, i0_shapes):
 return i0_shapes

 def grad(self, inputs, output_grads):
@@ -508,7 +508,7 @@ and ``b`` are equal.
 z = output_storage[0]
 z[0] = self.a * x + self.b

-def infer_shape(self, node, i0_shapes):
+def infer_shape(self, fgraph, node, i0_shapes):
 return i0_shapes

 def grad(self, inputs, output_grads):
@@ -750,12 +750,13 @@ signature:

 .. code-block:: none

-def infer_shape(node, input_shapes):
+def infer_shape(fgraph, node, input_shapes):
 # ...
 return output_shapes

 - `input_shapes` and `output_shapes` are lists of tuples that
-represent the shape of the corresponding inputs/outputs.
+represent the shape of the corresponding inputs/outputs, and `fgraph`
+is a `FunctionGraph`.

 .. note::
@@ -788,7 +789,7 @@ as_op Example
 from theano import function
 from theano.compile.ops import as_op

-def infer_shape_numpy_dot(node, input_shapes):
+def infer_shape_numpy_dot(fgraph, node, input_shapes):
 ashp, bshp = input_shapes
 return [ashp[:-1] + bshp[-1:]]

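With this change, the documentation's ``as_op`` example looks roughly as follows. This is a sketch based on the doc's own ``numpy_dot`` example; the float32 types and the final compile line are illustrative, only the ``(fgraph, node, input_shapes)`` calling convention comes from this commit.

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.compile.ops import as_op


    # Shape helper with the new calling convention: `fgraph` comes first.
    def infer_shape_numpy_dot(fgraph, node, input_shapes):
        ashp, bshp = input_shapes
        return [ashp[:-1] + bshp[-1:]]


    @as_op(itypes=[tt.fmatrix, tt.fmatrix],
           otypes=[tt.fmatrix],
           infer_shape=infer_shape_numpy_dot)
    def numpy_dot(a, b):
        return np.dot(a, b)


    x = tt.fmatrix()
    y = tt.fmatrix()
    # Compiling only the shape should not require computing the product itself.
    f = theano.function([x, y], numpy_dot(x, y).shape)
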
doc/extending/extending_theano_solution_1.py (+4, -4)

@@ -34,7 +34,7 @@ def perform(self, node, inputs, output_storage):
 z = output_storage[0]
 z[0] = x * y

-def infer_shape(self, node, i0_shapes):
+def infer_shape(self, fgraph, node, i0_shapes):
 return [i0_shapes[0]]

 def grad(self, inputs, output_grads):
@@ -71,7 +71,7 @@ def perform(self, node, inputs, output_storage):
 z1[0] = x + y
 z2[0] = x - y

-def infer_shape(self, node, i0_shapes):
+def infer_shape(self, fgraph, node, i0_shapes):
 return [i0_shapes[0], i0_shapes[0]]

 def grad(self, inputs, output_grads):
@@ -172,7 +172,7 @@ def test_infer_shape(self):
 from theano.compile.ops import as_op


-def infer_shape_numpy_dot(node, input_shapes):
+def infer_shape_numpy_dot(fgraph, node, input_shapes):
 ashp, bshp = input_shapes
 return [ashp[:-1] + bshp[-1:]]

@@ -183,7 +183,7 @@ def numpy_add(a, b):
 return np.add(a, b)


-def infer_shape_numpy_add_sub(node, input_shapes):
+def infer_shape_numpy_add_sub(fgraph, node, input_shapes):
 ashp, bshp = input_shapes
 # Both inputs should have that same shape, so we just return one of them.
 return [ashp[0]]

doc/extending/fibby.txt (+3, -3)

@@ -110,7 +110,7 @@ TODO: talk about OPTIMIZATION STAGES
 # Remove any fibby(zeros(...))
 @theano.tensor.opt.register_specialize
 @theano.gof.local_optimizer([fibby])
-def fibby_of_zero(node):
+def fibby_of_zero(fgraph, node):
 if node.op == fibby:
 x = node.inputs[0]
 try:
@@ -124,8 +124,8 @@ tells Theano to use it in the specialization stage.
 The ``local_optimizer`` decorator builds a class instance around our global
 function. The ``[fibby]`` argument is a hint that our optimizer works on nodes
 whose ``.op`` attribute equals ``fibby``.
-The function here (``fibby_of_zero``) expects an ``Apply`` instance as an
-argument for parameter ``node``. It tests using
+The function here (``fibby_of_zero``) expects a ``FunctionGraph`` and an ``Apply`` instance as
+arguments for the parameters ``fgraph`` and ``node``. It tests using
 function ``get_scalar_constant_value``, which determines if a
 Variable (``x``) is guaranteed to be a constant, and if so, what constant.

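A minimal sketch of a decorator-based local optimizer under the new convention. The ``exp(log(x)) -> x`` rewrite, its name, and its registration are purely illustrative (patterned on the ``fibby_of_zero`` example above); only the ``(fgraph, node)`` calling convention comes from this commit.

    import theano
    import theano.tensor as tt


    @theano.tensor.opt.register_specialize
    @theano.gof.local_optimizer([tt.exp])
    def local_exp_of_log(fgraph, node):
        # The optimizer function now receives the FunctionGraph being
        # rewritten as its first argument, followed by the Apply node.
        if node.op == tt.exp:
            arg = node.inputs[0]
            if arg.owner is not None and arg.owner.op == tt.log:
                # exp(log(x)) -> x
                return [arg.owner.inputs[0]]
        return False
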
doc/extending/graphstructures.txt (+8, -7)

@@ -344,7 +344,7 @@ pairs:
 function outputs this variable.

 In both types of pairs, the second element of the tuple is an index,
-such that: ``var.clients[*][0].inputs[index]`` or
+such that: ``fgraph.clients[var][*][0].inputs[index]`` or
 ``fgraph.outputs[index]`` is that variable.


@@ -357,15 +357,16 @@ Sum{acc_dtype=float64} [id A] '' 1
 |TensorConstant{(1,) of 1.0} [id C]
 |<TensorType(float64, vector)> [id D]
 >>> # Sorted list of all nodes in the compiled graph.
->>> topo = f.maker.fgraph.toposort()
->>> topo[0].outputs[0].clients
+>>> fgraph = f.maker.fgraph
+>>> topo = fgraph.toposort()
+>>> fgraph.clients[topo[0].outputs[0]]
 [(Sum{acc_dtype=float64}(Elemwise{add,no_inplace}.0), 0)]
->>> topo[1].outputs[0].clients
+>>> fgraph.clients[topo[1].outputs[0]]
 [('output', 0)]

 >>> # An internal variable
 >>> var = topo[0].outputs[0]
->>> client = var.clients[0]
+>>> client = fgraph.clients[var][0]
 >>> client
 (Sum{acc_dtype=float64}(Elemwise{add,no_inplace}.0), 0)
 >>> type(client[0])
@@ -374,10 +375,10 @@ Sum{acc_dtype=float64} [id A] '' 1

 >>> # An output of the graph
 >>> var = topo[1].outputs[0]
->>> client = var.clients[0]
+>>> client = fgraph.clients[var][0]
 >>> client
 ('output', 0)
->>> assert f.maker.fgraph.outputs[client[1]] is var
+>>> assert fgraph.outputs[client[1]] is var


 Automatic Differentiation

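In other words, after this change client information is looked up on the ``FunctionGraph`` rather than on the variables themselves. A small sketch (the toy graph is illustrative):

    import theano
    import theano.tensor as tt

    x = tt.dvector('x')
    f = theano.function([x], (x + 1).sum())

    fgraph = f.maker.fgraph
    for node in fgraph.toposort():
        for var in node.outputs:
            # Previously: var.clients; the mapping now lives on the graph.
            print(var, fgraph.clients[var])
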
doc/extending/op.txt (+16, -17)

@@ -215,7 +215,7 @@ Optional methods or attributes
 will use your Op and build the graphs that you want and call that
 instead of the Op instance directly.

-.. function:: infer_shape(node, shapes)
+.. function:: infer_shape(fgraph, node, shapes)

 This function is needed for shape optimization. ``shapes`` is a
 list with one tuple for each input of the Apply node (which corresponds
@@ -254,7 +254,7 @@ Optional methods or attributes
 If you set `__props__`, this will be automatically generated.
 You can still overide it for custom output.

-.. function:: do_constant_folding(node)
+.. function:: do_constant_folding(fgraph, node)

 *Default:* Return True

@@ -323,7 +323,7 @@ These are the function required to work with gradient.grad().
 return the gradient of the Op's output. theano.tensor.grad computes
 gradients; Op.grad is a helper function that computes terms that
 appear in gradients.
(whitespace-only change: a line of trailing whitespace is stripped here)
 If an Op has a single vector-valued output y and a single
 vector-valued input x, then the grad method will be passed x and a
 second vector z. Define J to be the Jacobian of y with respect to
@@ -393,7 +393,7 @@ These are the function required to work with gradient.grad().
 :math:`[grad_{x_1}(C), grad_{x_2}(C), ..., grad_{x_m}(C)]`, where
 :math:`(grad_{y}(Z))_i = \frac{\partial Z}{\partial y_i}` (and
 :math:`i` can stand for multiple dimensions).
(whitespace-only change here)
 In other words, :func:`grad` does not return :math:`\frac{d f_i}{d
 x_j}`, but instead the appropriate dot product specified by the
 chain rule: :math:`\frac{d C}{d x_j} = \frac{d C}{d f_i} \cdot
@@ -402,7 +402,7 @@ These are the function required to work with gradient.grad().

 Theano currently imposes the following constraints on the values
 returned by the grad method:
(whitespace-only change here)
 1) They must be Variable instances.
 2) When they are types that have dtypes, they must never have an integer dtype.

@@ -525,27 +525,27 @@ These are the function required to work with gradient.grad().
(This hunk only strips trailing whitespace from several lines; the text itself is unchanged.)

 This function implements the application of the R-operator on the
 function represented by your op. Let assume that function is :math:`f`,
 with input :math:`x`, applying the R-operator means computing the
 Jacobian of :math:`f` and right-multiplying it by :math:`v`, the evaluation
 point, namely: :math:`\frac{\partial f}{\partial x} v`.

 ``inputs`` are the symbolic variables corresponding to the value of
 the input where you want to evaluate the jacobian, and ``eval_points``
 are the symbolic variables corresponding to the value you want to
 right multiply the jacobian with.

 Same conventions as for the grad method hold. If your op is not
 differentiable, you can return None. Note that in contrast to
 the method :func:`grad`, for :func:`R_op` you need to return the
 same number of outputs as there are ouputs of the op. You can think
 of it in the following terms. You have all your inputs concatenated
 into a single vector :math:`x`. You do the same with the evaluation
 points (which are as many as inputs and of the shame shape) and obtain
 another vector :math:`v`. For each output, you reshape it into a vector,
 compute the jacobian of that vector with respect to :math:`x` and
 multiply it by :math:`v`. As a last step you reshape each of these
 vectors you obtained for each outputs (that have the same shape as
 the outputs) back to their corresponding shapes and return them as the
 output of the :func:`R_op` method.

 :ref:`List of op with r op support <R_op_list>`.
@@ -777,4 +777,3 @@ your disposal to create these objects as efficiently as possible.

 **Exercise**: Make a generic DoubleOp, where the number of
 arguments can also be given as a parameter.
(a trailing blank line at the end of the file is removed)

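A sketch of the updated ``do_constant_folding`` hook on a custom Op. The Op itself is hypothetical and abbreviated; only the ``(self, fgraph, node)`` signature is the point here.

    import theano.tensor as tt
    from theano.gof import Apply, Op


    class NoFoldDouble(Op):
        """Hypothetical, abbreviated Op used to show the new hook signature."""

        __props__ = ()

        def make_node(self, x):
            x = tt.as_tensor_variable(x)
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            output_storage[0][0] = inputs[0] * 2

        def do_constant_folding(self, fgraph, node):
            # The hook now also receives the FunctionGraph containing `node`;
            # returning False keeps this Op's applications out of constant folding.
            return False
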
doc/extending/optimization.txt (+12, -12)

@@ -57,7 +57,7 @@ Global optimization
 A global optimization (or optimizer) is an object which defines the following
 methods:

-.. class:: Optimizer
+.. class:: GlobalOptimizer

 .. method:: apply(fgraph)

@@ -75,7 +75,7 @@ methods:

 This is the interface function called by Theano.

-*Default:* this is defined by Optimizer as ``add_requirement(fgraph);
+*Default:* this is defined by GlobalOptimizer as ``add_requirement(fgraph);
 apply(fgraph)``.

 See the section about :class:`FunctionGraph` to understand how to define these
@@ -89,14 +89,14 @@ A local optimization is an object which defines the following methods:

 .. class:: LocalOptimizer

-.. method:: transform(node)
+.. method:: transform(fgraph, node)

-This method takes an :ref:`apply` node and returns either ``False`` to
-signify that no changes are to be done or a list of Variables which
-matches the length of the node's ``outputs`` list. When the
-LocalOptimizer is applied by a Navigator, the outputs of the node
-passed as argument to the LocalOptimizer will be replaced by the
-list returned.
+This method takes a :ref:`FunctionGraph` and an :ref:`Apply` node and
+returns either ``False`` to signify that no changes are to be done or a
+list of `Variable`s which matches the length of the node's ``outputs``
+list. When the `LocalOptimizer` is applied by a `Navigator`, the outputs
+of the node passed as argument to the `LocalOptimizer` will be replaced by
+the list returned.


@@ -125,7 +125,7 @@ simplification described above:
 from theano import gof
 from theano.gof import toolbox

-class Simplify(gof.Optimizer):
+class Simplify(gof.GlobalOptimizer):
 def add_requirements(self, fgraph):
 fgraph.attach_feature(toolbox.ReplaceValidate())
 def apply(self, fgraph):
@@ -252,7 +252,7 @@ The local version of the above code would be the following:
 .. testcode::

 class LocalSimplify(gof.LocalOptimizer):
-def transform(self, node):
+def transform(self, fgraph, node):
 if node.op == true_div:
 x, y = node.inputs
 if x.owner and x.owner.op == mul:
@@ -471,7 +471,7 @@ Here are a few examples of how to use a Query on optdb to produce an
 Optimizer:

 .. testcode::
(whitespace-only change: trailing whitespace after this line is stripped)

 from theano.gof import Query
 from theano.compile import optdb

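A sketch of a global optimizer against the renamed base class documented above, assuming ``theano.gof.GlobalOptimizer`` is the new name of ``Optimizer`` as the updated docs state; the no-op body and the class name are illustrative only.

    from theano import gof
    from theano.gof import toolbox


    class CountNodes(gof.GlobalOptimizer):
        """Illustrative global optimizer: attaches a feature, walks the graph,
        and would call ``fgraph.replace_validate(...)`` to rewrite it."""

        def add_requirements(self, fgraph):
            # Requirements are attached to the FunctionGraph before `apply`.
            fgraph.attach_feature(toolbox.ReplaceValidate())

        def apply(self, fgraph):
            # A real optimizer would inspect nodes and replace variables here.
            return len(fgraph.toposort())
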
doc/hpcs2011_tutorial/presentation.tex (+1, -1)

@@ -1390,7 +1390,7 @@ \subsection{Theano}
 {\color{gray}# optional:}
 def __init__(self, ...):
 def grad(self, inputs, g):
-def infer_shape(node, (i0_shapes, ...))
+def infer_shape(fgraph, node, (i0_shapes, ...))
 \end{Verbatim}
 \end{frame}

doc/tutorial/debug_faq.txt (+3, -3)

@@ -363,11 +363,11 @@ shows how to print all inputs and outputs:
 from __future__ import print_function
 import theano

-def inspect_inputs(i, node, fn):
+def inspect_inputs(fgraph, i, node, fn):
 print(i, node, "input(s) value(s):", [input[0] for input in fn.inputs],
 end='')

-def inspect_outputs(i, node, fn):
+def inspect_outputs(fgraph, i, node, fn):
 print(" output(s) value(s):", [output[0] for output in fn.outputs])

 x = theano.tensor.dscalar('x')
@@ -406,7 +406,7 @@ can be achieved as follows:
 # ``theano.compile.monitormode.detect_nan`` that will always
 # contain the current suggested version.

-def detect_nan(i, node, fn):
+def detect_nan(fgraph, i, node, fn):
 for output in fn.outputs:
 if (not isinstance(output[0], numpy.random.RandomState) and
 numpy.isnan(output[0]).any()):

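Putting the updated callback signature to use, a sketch along the lines of the debug FAQ's own ``MonitorMode`` example (the ``5 * x`` graph is illustrative):

    import theano


    def inspect_inputs(fgraph, i, node, fn):
        # MonitorMode callbacks now receive the FunctionGraph as their first
        # argument, before the node index, the node, and the thunk.
        print(i, node, "input(s):", [inp[0] for inp in fn.inputs])


    def inspect_outputs(fgraph, i, node, fn):
        print(i, node, "output(s):", [out[0] for out in fn.outputs])


    x = theano.tensor.dscalar('x')
    f = theano.function(
        [x], [5 * x],
        mode=theano.compile.MonitorMode(pre_func=inspect_inputs,
                                        post_func=inspect_outputs))
    f(3)
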