## RELEASENOTES.md (9 additions, 5 deletions)
Releases, starting with 9/2/2021, are listed with the most recent release at the top.
# NuGet Version 0.104.0
This is a big change in implementation, but not as big in API surface area. Many of the builtin modules, but not all, were re-implemented in managed code calling into native code via the functional APIs. This has several advantages:

1. Align with the PyTorch implementations.<br/>
2. More easily expose module attributes as properties, as PyTorch does.<br/>
3. In some cases, avoid native code altogether.<br/>
4. The builtin modules can serve as "best practice" examples for custom module authors.<br/>
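The pattern described above — a thin module that holds parameters, exposes them as properties, and delegates the math to a functional API — can be sketched in plain Python. This is a hypothetical illustration, not TorchSharp or PyTorch code; the names `linear` and `ToyLinear` are invented for the example:

```python
# Hypothetical sketch of the "module wraps functional API" pattern.
# The functional layer does the computation; the module only holds state.

def linear(x, weight, bias=None):
    """Functional API: y[i] = dot(x, weight[i]) + bias[i], on plain lists."""
    y = [sum(xi * wi for xi, wi in zip(x, row)) for row in weight]
    if bias is not None:
        y = [yi + bi for yi, bi in zip(y, bias)]
    return y

class ToyLinear:
    """Module: stores parameters, exposes them as properties, and
    delegates forward() to the functional `linear` above."""
    def __init__(self, weight, bias=None):
        self._weight = weight
        self._bias = bias

    @property
    def weight(self):  # attribute exposed as a property
        return self._weight

    @property
    def bias(self):
        return self._bias

    def forward(self, x):
        return linear(x, self._weight, self._bias)

m = ToyLinear(weight=[[1.0, 0.0], [0.0, 2.0]], bias=[0.5, 0.5])
print(m.forward([3.0, 4.0]))  # -> [3.5, 8.5]
```

Because the module carries no logic of its own, the same functional entry point can be reused by custom module authors, which is what makes the builtins usable as "best practice" templates.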
14
13
15
14
__Breaking Changes__:
16
15
17
16
The names of several arguments have been changed to align better with PyTorch naming. This may break code that passes such arguments by name; such breakage will be caught at compile time.
The argument defaults for `torch.diagonal()` and `Tensor.diagonal()` have been corrected.
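Negative dimension indices count from the end of the shape, as in PyTorch. A small numpy sketch (numpy's `np.diagonal` uses `axis1`/`axis2` in place of `dim1`/`dim2`) shows what dims of `-2`/`-1` select on a batched input, and why swapping them changes off-diagonal results:

```python
import numpy as np

# Two stacked 2x2 matrices: [[0, 1], [2, 3]] and [[4, 5], [6, 7]].
batch = np.arange(8).reshape(2, 2, 2)

# dims -2/-1: the main diagonal of each matrix in the batch.
per_matrix = np.diagonal(batch, offset=0, axis1=-2, axis2=-1)
print(per_matrix.tolist())  # -> [[0, 3], [4, 7]]

# For the main diagonal the order of the two dims does not matter,
# but for offset != 0 it does (swapping dims transposes each matrix):
print(np.diagonal(batch, offset=1, axis1=-2, axis2=-1).tolist())  # -> [[1], [5]]
print(np.diagonal(batch, offset=1, axis1=-1, axis2=-2).tolist())  # -> [[2], [6]]
```

This is why code that relied on the previous, incorrect defaults with a nonzero `offset` may see different values after this fix.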
__Issues fixed__:
#1397 Look into whether parameter creation from a tensor leads to incorrect dispose scope statistics. This bug was discovered during testing of the PR.<br/>
#1210 Attribute omissions.<br/>
#1400 There may be an error in torchvision.transforms.GaussianBlur<br/>
## src/TorchSharp/LinearAlgebra.cs (2 additions, 2 deletions)
```diff
@@ -136,8 +136,8 @@ public static (Tensor, Tensor) slogdet(Tensor input)
 /// </summary>
 /// <param name="input">The input tensor</param>
 /// <param name="offset">Which diagonal to consider. Default: 0 (main diagonal).</param>
-/// <param name="dim1">First dimension with respect to which to take diagonal. Default: -1.</param>
-/// <param name="dim2">Second dimension with respect to which to take diagonal. Default: -2.</param>
+/// <param name="dim1">First dimension with respect to which to take diagonal. Default: -2.</param>
+/// <param name="dim2">Second dimension with respect to which to take diagonal. Default: -1.</param>
 /// <remarks>
 /// Applying torch.diag_embed() to the output of this function with the same arguments yields a diagonal matrix with the diagonal entries of the input.
 /// However, torch.diag_embed() has different default dimensions, so those need to be explicitly specified.
```
## src/TorchSharp/Tensor/Tensor.cs (2 additions, 1 deletion)
```diff
@@ -3399,8 +3399,9 @@ public Tensor diagflat(long offset = 0)
 /// Applying torch.diag_embed() to the output of this function with the same arguments yields a diagonal matrix with the diagonal entries of the input.
 /// However, torch.diag_embed() has different default dimensions, so those need to be explicitly specified.
```
## src/TorchSharp/Tensor/torch.OtherOperations.cs (1 addition, 1 deletion)
```diff
@@ -314,7 +314,7 @@ public static Tensor diag_embed(Tensor input, long offset = 0L, long dim1 = -2L,
 /// Applying torch.diag_embed() to the output of this function with the same arguments yields a diagonal matrix with the diagonal entries of the input.
 /// However, torch.diag_embed() has different default dimensions, so those need to be explicitly specified.
```
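The round trip described in the remark — extract a diagonal, then embed it back as a diagonal matrix — can be illustrated with numpy. The `diag_embed` helper below is a hypothetical, simplified stand-in for `torch.diag_embed` (numpy has no such function), covering only the default dims:

```python
import numpy as np

def diag_embed(d, dim1=-2, dim2=-1):
    """Hypothetical stand-in for torch.diag_embed: embeds the last
    axis of `d` as the diagonal of a new trailing matrix dimension.
    This sketch covers only the default dims (-2, -1)."""
    assert (dim1, dim2) == (-2, -1), "sketch covers the defaults only"
    n = d.shape[-1]
    out = np.zeros(d.shape + (n,), dtype=d.dtype)
    idx = np.arange(n)
    out[..., idx, idx] = d  # write d along the main diagonal
    return out

a = np.array([[1.0, 9.0], [9.0, 2.0]])
d = np.diagonal(a, axis1=-2, axis2=-1)  # extract: [1.0, 2.0]
print(diag_embed(d).tolist())           # -> [[1.0, 0.0], [0.0, 2.0]]
```

The result is the diagonal matrix with the diagonal entries of the input, as the remark states; because the extraction and embedding steps use different default dims in PyTorch-style APIs, the dims must be passed explicitly for the round trip to line up.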