@@ -53,17 +53,23 @@ expressions; that is, it cannot process the control flow.
 
 ### Automatic Differentiation in C++
 
-Automated Differentiation implementations are based on Operator Overloading or
-Source Code Transformation. C++ allows operator overloading, making it
-possible to implement Automatic Differentiation. The derivative of a function
-can be evaluated at the same time as the function itself. Automatic
-Differentiation exploits the fact that every computer calculation consists of
-elementary mathematical operations and functions, and by applying the chain
-rule recurrently, partial derivatives of arbitrary order can be computed
-accurately. Following are some of its highlights:
-
-- Automatic Differentiation can calculate derivatives without any additional
-  precision loss.
+Automatic Differentiation implementations are based on [two major techniques]:
+Operator Overloading and Source Code Transformation. The Compiler Research
+Group's focus has been on exploring the [Source Code Transformation]
+technique, which involves constructing the computation graph and producing a
+derivative at compile time.
+
+[The source code transformation approach] enables optimization by retaining
+all the complex knowledge of the original source code. The compute graph is
+constructed before compilation and then transformed and compiled. It
+typically uses a custom parser to build a code representation and produce the
+transformed code. It is difficult to implement (especially in C++), but it is
+very efficient, since many computations and optimizations are done ahead of
+time.
+
+### Advantages of using Automatic Differentiation
+
+- Automatic Differentiation can calculate derivatives without any [additional
+  precision loss].
 
 - It is not confined to closed-form expressions.
 
@@ -104,22 +110,7 @@ place.
 
 - Integration with Cling and ROOT for high-energy physics data analysis.
 
-### Basics of using Clad
-
-Clad provides five API functions:
-
-- `clad::differentiate` to use Forward Mode Automatic Differentiation.
-- `clad::gradient` to use Reverse Mode Automatic Differentiation.
-- `clad::hessian` to construct a Hessian matrix using a combination of Forward
-  Mode and Reverse Mode Automatic Differentiation.
-- `clad::jacobian` to construct a Jacobian matrix using Reverse Mode Automatic
-  Differentiation.
-- `clad::estimate_error` to calculate the Floating-Point Error of the
-  requested program using Reverse Mode Automatic Differentiation.
-
-These API functions label an existing function for differentiation and return
-a functor object that contains the generated derivative, which can be called
-by using the `.execute` method.
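As a usage sketch, the snippet below shows how `clad::differentiate` and `.execute` fit together. It requires Clang with the Clad plugin loaded, so it is not buildable with a plain compiler; the function `fn` is an illustrative placeholder.

```cpp
#include "clad/Differentiator/Differentiator.h"

double fn(double x, double y) { return x * x + y; }

int main() {
  // Label fn for forward-mode differentiation w.r.t. its parameter "x";
  // Clad generates the derivative code at compile time.
  auto d_fn = clad::differentiate(fn, "x");
  // Evaluate d(fn)/dx at (x, y) = (3, 4) via the returned functor.
  double dfdx = d_fn.execute(3.0, 4.0);
  return 0;
}
```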
+### Clad Benchmarks (while using Automatic Differentiation)
 
 [Benchmarks] show that Clad is numerically faster than the conventional
 Numerical Differentiation methods, providing Hessians that are 450x (~dim/25
@@ -149,5 +140,10 @@ For more information on Clad, please view:
 
 [General benchmarks]: https://indico.cern.ch/event/1005849/contributions/4227031/attachments/2221814/3762784/Clad%20--%20Automatic%20Differentiation%20in%20C%2B%2B%20and%20Clang%20.pdf
 
+[additional precision loss]: https://compiler-research.org/assets/presentations/CladInROOT_15_02_2020.pdf
+
+[Source Code Transformation]: https://compiler-research.org/assets/presentations/V_Vassilev-SNL_Accelerating_Large_Workflows_Clad.pdf
 
+[two major techniques]: https://compiler-research.org/assets/presentations/G_Singh-MODE3_Fast_Likelyhood_Calculations_RooFit.pdf
 
+[The source code transformation approach]: https://compiler-research.org/assets/presentations/I_Ifrim-EuroAD21_GPU_AD.pdf