# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE #src
# SOFTWARE. #src
- # # User-defined hessians
+ # # User-defined Hessians

- # In this tutorial, we explain how to write a user-defined function with an
- # explicit hessian.
+ # In this tutorial, we explain how to write a user-defined function (see [User-defined Functions](@ref))
+ # with an explicit Hessian matrix.

# This tutorial uses the following packages:

using JuMP
import Ipopt
+ # ## Motivation
+
# Our goal for this tutorial is to solve the bilevel optimization problem:
# ```math
@@ -46,7 +48,9 @@ import Ipopt
# involving variables ``y``. From the perspective of the lower-level, the
# values of ``x`` are fixed parameters, and so the model optimizes ``y`` given
# those fixed parameters. Simultaneously, the upper level is optimizing ``x``
- # given the response of ``yy``.
+ # given the response of ``y``.
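# Schematically, and purely for orientation (``f`` and ``g`` here are generic
# placeholders, not the tutorial's actual problem data), a bilevel program has
# the form:

# ```math
# \begin{aligned}
# \min_{x} \quad & f(x, y^\star(x)) \\
# \text{subject to} \quad & y^\star(x) \in \arg\max_{y} \; g(x, y),
# \end{aligned}
# ```

# where the upper level chooses ``x`` and the lower level responds with the
# optimizer ``y^\star(x)``.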
+
+ # ## Decomposition

# There are a few ways to solve this problem, but we are going to use a
# nonlinear decomposition method. The first step is to write a function to
@@ -145,8 +149,10 @@ value.(x)
_, y = solve_lower_level(value.(x)...)
y
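# The definition of ``solve_lower_level`` lives in an earlier hunk that is not
# shown here. For orientation, a minimal sketch of the shape such a function
# takes (the objective below is a placeholder, not the tutorial's actual
# lower-level model): it solves the subproblem with ``x`` held fixed and
# returns the optimal objective value and ``y``.

function solve_lower_level_sketch(x...)
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, y[1:2])
    ## `x` enters only as fixed data, so this is an ordinary NLP in `y`.
    @objective(model, Max, x[1] * y[1] + x[2] * y[2] - y[1]^4 - y[2]^4)
    optimize!(model)
    return objective_value(model), value.(y)
end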
- # This solution approach worked, but it has a performance problem: every time
- # we needed to compute the value, gradient, or hessian of ``V``, we had to
+ # ## Memoization
+
+ # Our solution approach works, but it has a performance problem: every time
+ # we need to compute the value, gradient, or hessian of ``V``, we have to
# re-solve the lower-level optimization problem! This is wasteful, because we
# will often call the gradient and hessian at the same point, and so solving the
# problem twice with the same input repeats work unnecessarily.
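# One standard fix is memoization: cache the most recent input and solution,
# and re-solve only when the input changes. A minimal sketch of the idea,
# assuming the ``solve_lower_level`` function from above (the struct and field
# names are illustrative, not the tutorial's actual implementation):

mutable struct Cache
    x::Any              ## input at which the lower level was last solved
    f::Float64          ## cached optimal objective value
    y::Vector{Float64}  ## cached optimal lower-level solution
end

function cached_solve_lower_level(cache::Cache, x...)
    if cache.x !== x
        ## Tuples of floats compare by value under `===`, so we re-solve only
        ## when `x` differs from the previous call.
        cache.f, cache.y = solve_lower_level(x...)
        cache.x = x
    end
    return cache.f, cache.y
end

## A sentinel input ensures the first call always solves:
cache = Cache(nothing, NaN, Float64[])

# With this wrapper, the value, gradient, and hessian callbacks for ``V`` can
# all query the same point while paying for only one lower-level solve.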