
Commit b04170e

Documentation for AD Backends
1 parent c9f7d3c

5 files changed: +203 −2 lines changed


.github/workflows/Documentation.yml

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ jobs:
        JULIA_DEBUG: "Documenter"
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
        DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # For authentication with SSH deploy key
-      run: julia --project=docs/ --code-coverage=user docs/make.jl
+      run: julia --project=docs/ --code-coverage=user --color=yes docs/make.jl
      - uses: julia-actions/julia-processcoverage@v1
      - uses: codecov/codecov-action@v3
        with:

docs/pages.jl

Lines changed: 1 addition & 0 deletions
@@ -13,6 +13,7 @@ pages = [
     "basics/nonlinear_functions.md",
     "basics/solve.md",
     "basics/nonlinear_solution.md",
+    "basics/autodiff.md",
     "basics/termination_condition.md",
     "basics/diagnostics_api.md",
     "basics/sparsity_detection.md",

docs/src/basics/autodiff.md

Lines changed: 51 additions & 0 deletions
# Automatic Differentiation Backends

## Summary of Finite Differencing Backends

  - [`AutoFiniteDiff`](@ref): Finite differencing, not optimal but always applicable.
  - [`AutoSparseFiniteDiff`](@ref): Sparse version of [`AutoFiniteDiff`](@ref).

## Summary of Forward Mode AD Backends

  - [`AutoForwardDiff`](@ref): The best choice for dense problems.
  - [`AutoSparseForwardDiff`](@ref): Sparse version of [`AutoForwardDiff`](@ref).
  - [`AutoPolyesterForwardDiff`](@ref): Might be faster than [`AutoForwardDiff`](@ref) for
    large problems. Requires `PolyesterForwardDiff.jl` to be installed and loaded.

## Summary of Reverse Mode AD Backends

  - [`AutoZygote`](@ref): The fastest choice for non-mutating array-based (BLAS) functions.
  - [`AutoSparseZygote`](@ref): Sparse version of [`AutoZygote`](@ref).
  - [`AutoEnzyme`](@ref): Uses `Enzyme.jl` reverse mode and should be considered
    experimental.

!!! note

    If `PolyesterForwardDiff.jl` is installed and loaded, then `SimpleNonlinearSolve.jl`
    will automatically use `AutoPolyesterForwardDiff` as the default AD backend.
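
As a rough usage sketch (an editorial illustration, not part of this commit), a backend is
typically selected by passing it to a solver's `autodiff` keyword. This assumes the AD
types above are re-exported by `NonlinearSolve` and that solvers such as `NewtonRaphson`
accept an `autodiff` keyword:

```julia
using NonlinearSolve

# Out-of-place residual: find u with u^2 = p
f(u, p) = u .^ 2 .- p
prob = NonlinearProblem(f, [1.0, 1.0], 2.0)

# Let the solver pick its default AD backend
sol = solve(prob, NewtonRaphson())

# Or request a specific backend explicitly
sol = solve(prob, NewtonRaphson(; autodiff = AutoForwardDiff()))
```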
## API Reference

### Finite Differencing Backends

```@docs
AutoFiniteDiff
AutoSparseFiniteDiff
```
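
As a hedged sketch (an assumption for illustration, not part of this commit), the
differencing scheme can be adjusted through the constructor's keywords, e.g. switching to
central differences:

```julia
using NonlinearSolve

f(u, p) = u .^ 2 .- p
prob = NonlinearProblem(f, [1.0, 1.0], 2.0)

# Central differences: more function evaluations, but typically less truncation error
solve(prob, NewtonRaphson(; autodiff = AutoFiniteDiff(; fdtype = Val(:central))))
```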
### Forward Mode AD Backends

```@docs
AutoForwardDiff
AutoSparseForwardDiff
AutoPolyesterForwardDiff
```
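
A minimal sketch of fixing the chunk size, assuming the `chunksize` keyword of
`AutoForwardDiff` and the `autodiff` solver keyword shown above:

```julia
using NonlinearSolve

f(u, p) = u .^ 2 .- p
prob = NonlinearProblem(f, ones(4), 2.0)

# A positive, problem-appropriate chunksize keeps internal operations type-stable
solve(prob, NewtonRaphson(; autodiff = AutoForwardDiff(; chunksize = 4)))

# Sparse variant; how sparsity is declared/detected is covered on the sparsity detection page
solve(prob, NewtonRaphson(; autodiff = AutoSparseForwardDiff()))
```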
### Reverse Mode AD Backends

```@docs
AutoZygote
AutoSparseZygote
AutoEnzyme
NonlinearSolve.AutoSparseEnzyme
```
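
A hedged reverse-mode sketch; it assumes `Zygote.jl` is installed and loaded and that
`AutoZygote` can be passed through the same `autodiff` keyword. Remember that `AutoZygote`
supports only out-of-place functions:

```julia
using NonlinearSolve, Zygote

# Must be out-of-place (non-mutating) for AutoZygote
f(u, p) = u .^ 2 .- p
prob = NonlinearProblem(f, ones(3), 2.0)

solve(prob, NewtonRaphson(; autodiff = AutoZygote()))
```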

src/NonlinearSolve.jl

Lines changed: 2 additions & 1 deletion
@@ -24,7 +24,7 @@ import PrecompileTools: @recompile_invalidations, @compile_workload, @setup_work

 import SciMLBase: AbstractNonlinearAlgorithm, JacobianWrapper, AbstractNonlinearProblem,
     AbstractSciMLOperator, NLStats, _unwrap_val, has_jac, isinplace
-import SparseDiffTools: AbstractSparsityDetection
+import SparseDiffTools: AbstractSparsityDetection, AutoSparseEnzyme
 import StaticArraysCore: StaticArray, SVector, SArray, MArray, Size, SMatrix, MMatrix
 end

@@ -40,6 +40,7 @@ const True = Val(true)
 const False = Val(false)

 include("abstract_types.jl")
+include("adtypes.jl")
 include("timer_outputs.jl")
 include("internal/helpers.jl")
src/adtypes.jl

Lines changed: 148 additions & 0 deletions
# This just documents the AD types from ADTypes.jl

"""
    AutoFiniteDiff(; fdtype = Val(:forward), fdjtype = fdtype, fdhtype = Val(:hcentral))

This uses [FiniteDiff.jl](https://github.com/JuliaDiff/FiniteDiff.jl). While not
necessarily the most efficient, this is the only choice that doesn't require the `f`
function to be automatically differentiable, which means it applies to any choice of `f`.
However, because it uses finite differencing, one needs to be careful, as this procedure
introduces numerical error into the derivative estimates.

  - Compatible with GPUs
  - Can be used for Jacobian-Vector Products (JVPs)
  - Can be used for Vector-Jacobian Products (VJPs)
  - Supports both inplace and out-of-place functions

### Keyword Arguments

  - `fdtype`: the method used for defining the gradient
  - `fdjtype`: the method used for defining the Jacobian of constraints
  - `fdhtype`: the method used for defining the Hessian
"""
AutoFiniteDiff

"""
    AutoSparseFiniteDiff()

Sparse version of [`AutoFiniteDiff`](@ref) that uses
[FiniteDiff.jl](https://github.com/JuliaDiff/FiniteDiff.jl) and the column color vector of
the Jacobian matrix to efficiently compute the sparse Jacobian.

  - Supports both inplace and out-of-place functions
"""
AutoSparseFiniteDiff

"""
    AutoForwardDiff(; chunksize = nothing, tag = nothing)
    AutoForwardDiff{chunksize, tagType}(tag::tagType)

This uses the [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) package. It is
the fastest choice for square or wide systems. It is easy to use and compatible with most
Julia functions which have loose type restrictions.

  - Compatible with GPUs
  - Can be used for Jacobian-Vector Products (JVPs)
  - Supports both inplace and out-of-place functions

For type-stability of internal operations, a positive `chunksize` must be provided.

### Keyword Arguments

  - `chunksize`: Count of dual numbers that can be propagated simultaneously. Setting this
    number to a high value will lead to slowdowns. Use
    [`NonlinearSolve.pickchunksize`](@ref) to get a proper value.
  - `tag`: Used to avoid perturbation confusion. If set to `nothing`, we use a custom tag.
"""
AutoForwardDiff

"""
    AutoSparseForwardDiff(; chunksize = nothing, tag = nothing)
    AutoSparseForwardDiff{chunksize, tagType}(tag::tagType)

Sparse version of [`AutoForwardDiff`](@ref) that uses
[ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) and the column color vector of
the Jacobian matrix to efficiently compute the sparse Jacobian.

  - Supports both inplace and out-of-place functions

For type-stability of internal operations, a positive `chunksize` must be provided.

### Keyword Arguments

  - `chunksize`: Count of dual numbers that can be propagated simultaneously. Setting this
    number to a high value will lead to slowdowns. Use
    [`NonlinearSolve.pickchunksize`](@ref) to get a proper value.
  - `tag`: Used to avoid perturbation confusion. If set to `nothing`, we use a custom tag.
"""
AutoSparseForwardDiff

"""
    AutoPolyesterForwardDiff(; chunksize = nothing)

Uses [`PolyesterForwardDiff.jl`](https://github.com/JuliaDiff/PolyesterForwardDiff.jl)
to compute the Jacobian. This is essentially parallelized `ForwardDiff.jl`.

  - Supports both inplace and out-of-place functions

### Keyword Arguments

  - `chunksize`: Count of dual numbers that can be propagated simultaneously. Setting
    this number to a high value will lead to slowdowns. Use
    [`NonlinearSolve.pickchunksize`](@ref) to get a proper value.
"""
AutoPolyesterForwardDiff
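
#=
Hedged usage sketch (editorial addition, not part of this file): it assumes the solver's
`autodiff` keyword and requires PolyesterForwardDiff.jl to be installed and loaded.

    using NonlinearSolve, PolyesterForwardDiff

    f(u, p) = u .^ 2 .- p
    prob = NonlinearProblem(f, ones(8), 2.0)
    solve(prob, NewtonRaphson(; autodiff = AutoPolyesterForwardDiff(; chunksize = 8)))
=#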

"""
    AutoZygote()

Uses the [`Zygote.jl`](https://github.com/FluxML/Zygote.jl) package. This is the staple
reverse-mode AD that handles a large portion of Julia with good efficiency.

  - Compatible with GPUs
  - Can be used for Vector-Jacobian Products (VJPs)
  - Supports only out-of-place functions

For VJPs this is the current best choice. This is the most efficient method for long
Jacobians.
"""
AutoZygote

"""
    AutoSparseZygote()

Sparse version of [`AutoZygote`](@ref) that uses
[`Zygote.jl`](https://github.com/FluxML/Zygote.jl) and the row color vector of
the Jacobian matrix to efficiently compute the sparse Jacobian.

  - Supports only out-of-place functions

This is efficient only for long Jacobians or if the maximum value of the row color vector
is significantly lower than the maximum value of the column color vector.
"""
AutoSparseZygote

"""
    AutoEnzyme()

Uses reverse-mode [Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl). This is currently
experimental and not extensively tested on our end. We only support Jacobian construction;
VJP support is not currently implemented.

  - Supports both inplace and out-of-place functions
"""
AutoEnzyme
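
#=
Hedged usage sketch (editorial addition, not part of this file): Enzyme support is
experimental; this assumes Enzyme.jl is installed and loaded and that in-place residuals
work through the solver's `autodiff` keyword.

    using NonlinearSolve, Enzyme

    function f!(du, u, p)
        du .= u .^ 2 .- p
        return nothing
    end
    prob = NonlinearProblem(f!, ones(2), 2.0)
    solve(prob, NewtonRaphson(; autodiff = AutoEnzyme()))
=#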

"""
    AutoSparseEnzyme()

Sparse version of [`AutoEnzyme`](@ref) that uses
[Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl) and the row color vector of
the Jacobian matrix to efficiently compute the sparse Jacobian.

  - Supports both inplace and out-of-place functions

This is efficient only for long Jacobians or if the maximum value of the row color vector
is significantly lower than the maximum value of the column color vector.
"""
AutoSparseEnzyme
