Framework ops #255

Closed · wants to merge 77 commits
Commits (77) · Changes shown from 38 commits
c57a2e7
Merge pull request #3 from tensorflow/master
JimClarke5 Oct 8, 2020
09fc07e
Merge pull request #4 from tensorflow/master
JimClarke5 Oct 27, 2020
a99dcb4
Merge pull request #5 from tensorflow/master
JimClarke5 Nov 17, 2020
ba294ea
Merge pull request #6 from tensorflow/master
JimClarke5 Nov 19, 2020
04f419a
Merge pull request #7 from tensorflow/master
JimClarke5 Dec 30, 2020
02e7ebf
Merge pull request #8 from tensorflow/master
JimClarke5 Jan 29, 2021
e0c9ed8
Merge pull request #9 from tensorflow/master
JimClarke5 Feb 1, 2021
5b0374b
Merge pull request #10 from tensorflow/master
JimClarke5 Feb 11, 2021
e038bbd
Merge pull request #11 from tensorflow/master
JimClarke5 Feb 23, 2021
def3051
Merge pull request #13 from tensorflow/master
JimClarke5 Mar 3, 2021
11748ae
Merge pull request #15 from tensorflow/master
JimClarke5 Mar 21, 2021
dc94953
Moved high level tf.nn ops to framework.
JimClarke5 Mar 26, 2021
1878b60
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 26, 2021
9225a48
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 27, 2021
caab79b
Move l2Normalize to MathOps
JimClarke5 Mar 27, 2021
bd072f4
Reformat code, fix javadocs
JimClarke5 Mar 27, 2021
a9412ea
Merge pull request #16 from tensorflow/master
JimClarke5 Apr 9, 2021
d29262b
Add confusionMatrix() method. add Unit test
JimClarke5 Apr 16, 2021
2ff8dfe
Merge pull request #17 from tensorflow/master
JimClarke5 Apr 22, 2021
ee5e38a
Merge pull request #18 from tensorflow/master
JimClarke5 May 1, 2021
26394d6
Merge pull request #19 from tensorflow/master
JimClarke5 May 2, 2021
e0a4a26
Moved high level tf.nn ops to framework.
JimClarke5 Mar 26, 2021
28db4df
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 26, 2021
ba24371
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 27, 2021
4d3f17c
Move l2Normalize to MathOps
JimClarke5 Mar 27, 2021
9e07483
Reformat code, fix javadocs
JimClarke5 Mar 27, 2021
790bf35
Add confusionMatrix() method. add Unit test
JimClarke5 Apr 16, 2021
b4ca97a
Added linalg methods for matmul
JimClarke5 May 2, 2021
e83d26b
add nn ops for sigmoidCrossEntropyWithLogits, softmaxCrossEntropyWith…
JimClarke5 May 2, 2021
e4e65f2
Moved SetOps to FrameworkOps
JimClarke5 May 2, 2021
a2ed723
Added tensordot and reduceLogSumExp
JimClarke5 May 2, 2021
be1fe66
Added frameworkOps for nn and linalg
JimClarke5 May 2, 2021
7b51e7f
Modified to use FrameworkOps
JimClarke5 May 2, 2021
f1c63c0
move nn.raw classes to nn in core, remove nn.raw
JimClarke5 May 2, 2021
f4b75b9
Merge remote-tracking branch 'origin/Framework_Ops' into Framework_Ops
JimClarke5 May 2, 2021
043654b
Update FrameworkOps.java
JimClarke5 May 2, 2021
06c28df
Fix unusual regression error in confustion matrix. Needed to reduceA…
JimClarke5 May 3, 2021
8f33d21
javadoc fixes
JimClarke5 May 3, 2021
a24b8ca
Setting all the optimizers to have useLocking = True (#310)
Craigacp May 4, 2021
94f5b15
Load TF library before computing TString size (#322)
karllessard May 17, 2021
743475d
Update README.md
karllessard May 19, 2021
3648a96
Fix sometimes generating Javadoc for scope param in Ops (#291)
rnett May 21, 2021
ceae489
Use spotless plugin for formating (#308)
rnett May 23, 2021
0f7274e
Quick fix for spotless (#324)
rnett May 24, 2021
ace917b
Temporarily disabling Linux MKL-GPU
karllessard May 26, 2021
3b4533c
Fix Scope name collisions (#248)
rnett May 28, 2021
daeb257
Native functions v2 (#233)
rnett May 31, 2021
19e1c8d
Spotless updates (#331)
rnett Jun 1, 2021
23d6f0b
activations, constraints, initializers, losses, regularizers: move Op…
JimClarke5 Jun 2, 2021
7b5a1ca
Skip tests in check-format job
karllessard Jun 8, 2021
cea76cd
Upgrade for TensorFlow 2.5.0 (#303)
saudet Jun 10, 2021
caed0e8
Skip implementation-less TF_InitKernel
karllessard Jun 11, 2021
b997f12
Upgrade TF version in current snapshots
karllessard Jun 11, 2021
031a0c1
SavedModelBundle leak fix (#335)
Craigacp Jun 11, 2021
b38cc04
Use OP_NAME constant instead of hard coding (#328)
rnett Jun 16, 2021
4d8d24f
Moved high level tf.nn ops to framework.
JimClarke5 Mar 26, 2021
e483792
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 26, 2021
9480126
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 27, 2021
074794b
Move l2Normalize to MathOps
JimClarke5 Mar 27, 2021
7526b7e
Reformat code, fix javadocs
JimClarke5 Mar 27, 2021
0a163c6
Add confusionMatrix() method. add Unit test
JimClarke5 Apr 16, 2021
c234b9a
Added linalg methods for matmul
JimClarke5 May 2, 2021
e024f4b
add nn ops for sigmoidCrossEntropyWithLogits, softmaxCrossEntropyWith…
JimClarke5 May 2, 2021
b108b06
Moved SetOps to FrameworkOps
JimClarke5 May 2, 2021
13b6f0f
Added tensordot and reduceLogSumExp
JimClarke5 May 2, 2021
f1dbb01
Added frameworkOps for nn and linalg
JimClarke5 May 2, 2021
6174a32
Modified to use FrameworkOps
JimClarke5 May 2, 2021
5523896
move nn.raw classes to nn in core, remove nn.raw
JimClarke5 May 2, 2021
b750dd2
Moved high level tf.nn ops to framework.
JimClarke5 Mar 26, 2021
4468be2
Added FrameworkOps analogous to Ops.
JimClarke5 Mar 26, 2021
eb64cd0
Move l2Normalize to MathOps
JimClarke5 Mar 27, 2021
134a11d
Reformat code, fix javadocs
JimClarke5 Mar 27, 2021
1f9626c
Update FrameworkOps.java
JimClarke5 May 2, 2021
7860a71
Fix unusual regression error in confustion matrix. Needed to reduceA…
JimClarke5 May 3, 2021
d967a99
javadoc fixes
JimClarke5 May 3, 2021
e84981f
Rebase with latest master
JimClarke5 Jun 17, 2021
f69e17e
Merge branch 'Framework_Ops' of https://github.com/JimClarke5/java in…
JimClarke5 Jun 17, 2021
Files changed
@@ -1,6 +1,6 @@
op {
graph_op_name: "SoftmaxCrossEntropyWithLogits"
endpoint {
name: "nn.raw.SoftmaxCrossEntropyWithLogits"
name: "nn.SoftmaxCrossEntropyWithLogits"
}
}
@@ -1,6 +1,6 @@
op {
graph_op_name: "SparseSoftmaxCrossEntropyWithLogits"
endpoint {
name: "nn.raw.SparseSoftmaxCrossEntropyWithLogits"
name: "nn.SparseSoftmaxCrossEntropyWithLogits"
}
}
@@ -83,7 +83,6 @@
import org.tensorflow.op.nn.Relu;
import org.tensorflow.op.nn.Relu6;
import org.tensorflow.op.nn.Selu;
import org.tensorflow.op.nn.SigmoidCrossEntropyWithLogits;
import org.tensorflow.op.nn.Softmax;
import org.tensorflow.op.nn.SoftmaxCrossEntropyWithLogits;
import org.tensorflow.op.nn.Softsign;
@@ -103,16 +102,13 @@
* @see {@link Ops}
*/
public final class NnOps {
public final NnRawOps raw;

private final Scope scope;

private final Ops ops;

NnOps(Ops ops) {
this.scope = ops.scope();
this.ops = ops;
raw = new NnRawOps(ops);
}

/**
@@ -1797,56 +1793,6 @@ public <T extends TNumber> Selu<T> selu(Operand<T> features) {
return Selu.create(scope, features);
}

/**
* Computes sigmoid cross entropy given <code>logits</code>.
*
* <p>Measures the probability error in discrete classification tasks in which each class is
* independent and not mutually exclusive. For instance, one could perform multilabel
* classification where a picture can contain both an elephant and a dog at the same time.
*
* <p>For brevity, let <code>x = logits</code>, <code>z = labels</code>. The logistic loss in
* pseudo-code is
*
* <pre>
* z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
* = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
* = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
* = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x))
* = (1 - z) * x + log(1 + exp(-x))
* = x - x * z + log(1 + exp(-x))
* </pre>
*
* <p>For <code>x < 0</code>, to avoid overflow in <code>exp(-x)</code>, we reformulate the above
*
* <pre>
* x - x * z + log(1 + exp(-x))
* = log(exp(x)) - x * z + log(1 + exp(-x))
* = - x * z + log(1 + exp(x))
* </pre>
*
* <p>Hence, to ensure stability and avoid overflow, the implementation uses this equivalent
* formulation
*
* <pre>
* max(x, 0) - x * z + log(1 + exp(-abs(x)))
* </pre>
*
* <p><code>logits</code> and <code>labels</code> must have the same type and shape.
*
* <p>
*
* @param scope The TensorFlow scope
* @param labels the labels
* @param logits the logits of type float32 or float64
* @param <T> the type of labels and logits
* @return the component-wise logistic losses.
* @throws IllegalArgumentException if logits and labels do not have the same shape
*/
public <T extends TNumber> Operand<T> sigmoidCrossEntropyWithLogits(Operand<T> labels,
Operand<T> logits) {
return SigmoidCrossEntropyWithLogits.sigmoidCrossEntropyWithLogits(scope, labels, logits);
}

/**
* Computes softmax activations.
* For each batch {@code i} and class {@code j} we have
@@ -1864,54 +1810,20 @@ public <T extends TNumber> Softmax<T> softmax(Operand<T> logits) {
}

/**
* Computes softmax cross entropy between <code>logits</code> and <code>labels</code>.
*
* <p>Measures the probability error in discrete classification tasks in which the classes are
* mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is
* labeled with one and only one label: an image can be a dog or a truck, but not both.
*
* <p><b>NOTE:</b>
*
* <p>While the classes are mutually exclusive, their probabilities need not be. All that is
* required is that each row of <code>labels</code> is a valid probability distribution. If they
* are not, the computation of the gradient will be incorrect.
*
* <p>If using exclusive <code>labels</code> (wherein one and only one class is true at a time),
* see {@link org.tensorflow.op.NnOps#sparseSoftmaxCrossEntropyWithLogits}
* Computes softmax cross entropy cost and gradients to backpropagate.
* Inputs are the logits, not probabilities.
*
* <p>Usage:
*
* <pre>
* Operand&lt;TFloat32&gt; logits =
* tf.constant(new float[][] {{4.0F, 2.0F, 1.0F}, {0.0F, 5.0F, 1.0F}} );
* Operand&lt;TFloat32&gt; labels =
* tf.constant(new float[][] {{1.0F, 0.0F, 0.0F}, {0.0F, 0.8F, 0.2F}} );
* Operand&lt;TFloat32&gt; output =
* tf.nn.softmaxCrossEntropyWithLogits(labels, logits, -1);
* // output Shape = [2]
* // dataType = FLOAT (1)
* // values { 0.169846, 0.824745 }
* </pre>
*
* <p>Backpropagation will happen into both <code>logits</code> and <code>labels</code>. To
* disallow backpropagation into <code>labels</code>, pass label tensors through <code>
* tf.stopGradient</code> before feeding it to this function.
*
* @param scope current scope
* @param labels Each vector along the class dimension should hold a valid probability
* distribution e.g. for the case in which labels are of shape <code>[batch_size, num_classes]
* </code>, each row of <code>labels[i]</code> must be a valid probability distribution.
* @param logits Per-label activations, typically a linear output. These activation energies are
* interpreted as unnormalized log probabilities.
* @param axis The class dimension. -1 is the last dimension.
* @param <T> the number type of the operands
* @return the softmax cross entropy loss. Its type is the same as <code>logits</code> and its
* shape is the same as <code>labels</code> except that it does not have the last dimension of
* <code>labels</code>.
* @param <T> data type for {@code loss} output
* @param features batch_size x num_classes matrix
* @param labels batch_size x num_classes matrix
* The caller must ensure that each batch of labels represents a valid
* probability distribution.
* @param <T> data type for {@code SoftmaxCrossEntropyWithLogits} output and operands
* @return a new instance of SoftmaxCrossEntropyWithLogits
*/
public <T extends TNumber, U extends TNumber> Operand<T> softmaxCrossEntropyWithLogits(
Operand<U> labels, Operand<T> logits, int axis) {
return SoftmaxCrossEntropyWithLogits.softmaxCrossEntropyWithLogits(scope, labels, logits, axis);
public <T extends TNumber> SoftmaxCrossEntropyWithLogits<T> softmaxCrossEntropyWithLogits(
Operand<T> features, Operand<T> labels) {
return SoftmaxCrossEntropyWithLogits.create(scope, features, labels);
}

/**
@@ -2098,51 +2010,23 @@ public <T extends TType> SpaceToDepth<T> spaceToDepth(Operand<T> input, Long blo
}

/**
* Computes sparse softmax cross entropy between <code>logits</code> and <code>labels</code>.
*
* <p>Measures the probability error in discrete classification tasks in which the classes are
* mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is
* labeled with one and only one label: an image can be a dog or a truck, but not both.
*
* <p><b>NOTE:</b>
*
* <p>For this operation, the probability of a given label is considered exclusive. That is, soft
* classes are not allowed, and the <code>labels</code> vector must provide a single specific
* index for the true class for each row of <code>logits</code> (each minibatch entry). For soft
* softmax classification with a probability distribution for each entry, {@link
* org.tensorflow.op.NnOps#softmaxCrossEntropyWithLogits}.
*
* <p><b>WARNING:</b>
*
* <p>This op expects unscaled logits, since it performs a <code>softmax</code> on <code>logits
* </code> internally for efficiency. Do not call this op with the output of <code>softmax</code>,
* as it will produce incorrect results.
*
* <p>A common use case is to have logits of shape <code>[batchSize, numClasses]</code> and have
* labels of shape <code>[batchSize]</code>, but higher dimensions are supported, in which case
* the <code>dim</code>-th dimension is assumed to be of size <code>numClasses</code>. <code>
* logits</code> must have the <code>dataType</code> of <code>TFloat16</code>, <code>TFloat32</code>
* , or <code>TFloat64</code>, and <code>labels</code> must have the dtype of <code>TInt32</code>
* or <code>TInt64</code>.
*
* @param scope current scope
* @param labels <code>Tensor</code> of shape <code>[d_0, d_1, ..., d_{r-1}]</code> (where <code>r
* </code> is rank of <code>labels</code> and result) and the dataType is <code>TInt32</code>
* or <code>TInt64</code>. Each entry in <code>labels</code> must be an index in <code>[0,
* numClasses)</code>. Other values will raise an exception when this op is run on CPU, and
* return <code>NaN</code> for corresponding loss and gradient rows on GPU.
* @param logits Per-label activations (typically a linear output) of shape <code>[d_0, d_1, ...,
* d_{r-1}, numClasses]</code> and dataType of <code>TFloat16</code>, <code>TFloat32</code>,
* or <code>TFloat64</code>. These activation energies are interpreted as unnormalized log
* probabilities.
* @return A <code>Tensor</code> of the same shape as <code>labels</code> and of the same type as
* <code>logits</code> with the softmax cross entropy loss.
* @throws IllegalArgumentException If logits are scalars (need to have rank >= 1) or if the rank
* of the labels is not equal to the rank of the logits minus one.
*/
public <T extends TNumber, U extends TNumber> Operand sparseSoftmaxCrossEntropyWithLogits(
Operand<T> labels, Operand<U> logits) {
return SparseSoftmaxCrossEntropyWithLogits.sparseSoftmaxCrossEntropyWithLogits(scope, labels, logits);
* Computes softmax cross entropy cost and gradients to backpropagate.
* Unlike {@code SoftmaxCrossEntropyWithLogits}, this operation does not accept
* a matrix of label probabilities, but rather a single label per row
* of features. This label is considered to have probability 1.0 for the
* given row.
* <p>Inputs are the logits, not probabilities.
*
* @param <T> data type for {@code loss} output
* @param features batch_size x num_classes matrix
* @param labels batch_size vector with values in [0, num_classes).
* This is the label for the given minibatch entry.
* @param <T> data type for {@code SparseSoftmaxCrossEntropyWithLogits} output and operands
* @return a new instance of SparseSoftmaxCrossEntropyWithLogits
*/
public <T extends TNumber> SparseSoftmaxCrossEntropyWithLogits<T> sparseSoftmaxCrossEntropyWithLogits(
Operand<T> features, Operand<? extends TNumber> labels) {
return SparseSoftmaxCrossEntropyWithLogits.create(scope, features, labels);
}

/**

This file was deleted.
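
For context on the reworked NnOps entry points above: with the nn.raw group removed, tf.nn.softmaxCrossEntropyWithLogits now returns the generated raw op rather than a plain Operand. Below is a minimal usage sketch, not part of this PR; it assumes the generated class exposes the op's two outputs through loss() and backprop() accessors, and that the Graph/Ops setup is the usual one.

import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.op.nn.SoftmaxCrossEntropyWithLogits;
import org.tensorflow.types.TFloat32;

public class SoftmaxXentExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);

      // Unscaled logits (features) and per-row probability distributions (labels),
      // both of shape [batch_size, num_classes].
      Operand<TFloat32> features =
          tf.constant(new float[][] {{4.0f, 2.0f, 1.0f}, {0.0f, 5.0f, 1.0f}});
      Operand<TFloat32> labels =
          tf.constant(new float[][] {{1.0f, 0.0f, 0.0f}, {0.0f, 0.8f, 0.2f}});

      // The op now lives directly under tf.nn (previously tf.nn.raw).
      SoftmaxCrossEntropyWithLogits<TFloat32> xent =
          tf.nn.softmaxCrossEntropyWithLogits(features, labels);

      Operand<TFloat32> loss = xent.loss();         // per-example loss, shape [batch_size]
      Operand<TFloat32> backprop = xent.backprop(); // gradient w.r.t. features
    }
  }
}

Callers that previously relied on the axis parameter or on the higher-level behaviour of the old NnOps helper would now look to the framework-level ops this PR adds (see the "Moved high level tf.nn ops to framework" commits above).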

@@ -15,7 +15,7 @@

// This class has been generated, DO NOT EDIT!

package org.tensorflow.op.nn.raw;
package org.tensorflow.op.nn;

import org.tensorflow.Operand;
import org.tensorflow.Operation;
@@ -34,7 +34,7 @@
* @param <T> data type for {@code loss} output
*/
@Operator(
group = "nn.raw"
group = "nn"
)
public final class SoftmaxCrossEntropyWithLogits<T extends TNumber> extends RawOp {
/**
@@ -15,7 +15,7 @@

// This class has been generated, DO NOT EDIT!

package org.tensorflow.op.nn.raw;
package org.tensorflow.op.nn;

import org.tensorflow.Operand;
import org.tensorflow.Operation;
@@ -38,7 +38,7 @@
* @param <T> data type for {@code loss} output
*/
@Operator(
group = "nn.raw"
group = "nn"
)
public final class SparseSoftmaxCrossEntropyWithLogits<T extends TNumber> extends RawOp {
/**
Binary file modified tensorflow-core/tensorflow-core-api/src/gen/resources/ops.pb
Binary file not shown.
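
The sparse variant regenerated above under the nn group takes a vector of class indices instead of a probability matrix. A companion sketch under the same assumptions (loss() and backprop() accessors mirroring the op's two outputs):

import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.op.nn.SparseSoftmaxCrossEntropyWithLogits;
import org.tensorflow.types.TFloat32;
import org.tensorflow.types.TInt32;

public class SparseSoftmaxXentExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);

      // Logits of shape [batch_size, num_classes]; labels are class indices in [0, num_classes).
      Operand<TFloat32> features =
          tf.constant(new float[][] {{4.0f, 2.0f, 1.0f}, {0.0f, 5.0f, 1.0f}});
      Operand<TInt32> labels = tf.constant(new int[] {0, 1});

      SparseSoftmaxCrossEntropyWithLogits<TFloat32> xent =
          tf.nn.sparseSoftmaxCrossEntropyWithLogits(features, labels);

      Operand<TFloat32> loss = xent.loss();         // shape [batch_size]
      Operand<TFloat32> backprop = xent.backprop(); // shape [batch_size, num_classes]
    }
  }
}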