
Commit 544c487

Update user_defined_hessians.jl
1 parent: b23b925 · commit: 544c487

1 file changed: +8 −7 lines


docs/src/tutorials/nonlinear/user_defined_hessians.jl

Lines changed: 8 additions & 7 deletions
@@ -34,7 +34,7 @@ import Ipopt
 # \begin{array}{r l}
 # \min\limits_{x} & x_1^2 + x_2^2 + z \\
 # s.t. & \begin{array}{r l}
-#     z \ge \max\limits_{y} & x_1^2 y_1 + x_2^2 * y_2 - x_1 y_1^4 - 2 x_2 y_2^4 \\
+#     z \ge \max\limits_{y} & x_1^2 y_1 + x_2^2 y_2 - x_1 y_1^4 - 2 x_2 y_2^4 \\
 # s.t. & (y_1 - 10)^2 + (y_2 - 10)^2 \le 25
 # \end{array} \\
 # & x \ge 0.
@@ -54,7 +54,7 @@ import Ipopt
 
 # ```math
 # \begin{array}{r l}
-# V(x_1, x_z) = \max\limits_{y} & x_1^2 y_1 + x_2^2 * y_2 - x_1 y_1^4 - 2 x_2 y_2^4 \\
+# V(x_1, x_z) = \max\limits_{y} & x_1^2 y_1 + x_2^2 y_2 - x_1 y_1^4 - 2 x_2 y_2^4 \\
 # s.t. & (y_1 - 10)^2 + (y_2 - 10)^2 \le 25
 # \end{array}
 # ```
@@ -74,7 +74,7 @@ function solve_lower_level(x...)
     return objective_value(model), value.(y)
 end
 
-# This function takes a guess of ``x``, and returns the optimal lower-level
+# This function takes a guess of ``x`` and returns the optimal lower-level
 # objective-value and the optimal response ``y``. The reason why we need both
 # the objective and the optimal ``y`` will be made clear shortly, but for now
 # let us define:
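
The hunk above shows only the tail of solve_lower_level. As context for this commit, the full function is plausibly along the following lines, reconstructed from the lower-level problem stated in the first hunk; the choice of Ipopt as the inner solver and the set_silent call are assumptions, not shown in the diff:

using JuMP
import Ipopt

function solve_lower_level(x...)
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, y[1:2])
    # Inner problem from the formulation above:
    # max  x_1^2 y_1 + x_2^2 y_2 - x_1 y_1^4 - 2 x_2 y_2^4
    @objective(
        model,
        Max,
        x[1]^2 * y[1] + x[2]^2 * y[2] - x[1] * y[1]^4 - 2 * x[2] * y[2]^4,
    )
    # s.t. (y_1 - 10)^2 + (y_2 - 10)^2 <= 25
    @constraint(model, (y[1] - 10)^2 + (y[2] - 10)^2 <= 25)
    optimize!(model)
    # Return both the optimal objective and the maximizer y; the caller needs both.
    return objective_value(model), value.(y)
end

This sketch assumes a recent JuMP release in which nonlinear expressions are accepted directly by @objective and @constraint.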
@@ -84,7 +84,7 @@ function V(x...)
     return f
 end
 
-# We can substitute ``V`` into our full problem to create:
+# Then, we can substitute ``V`` into our full problem to create:
 
 # ```math
 # \begin{array}{r l}
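
For context, V simply discards the optimal y and returns the inner objective value; a minimal sketch consistent with the `return f` / `end` lines shown above:

function V(x...)
    # Keep only the objective value; the optimal y is used later for derivatives.
    f, _ = solve_lower_level(x...)
    return f
end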
@@ -96,7 +96,8 @@ end
 # This looks like a nonlinear optimization problem with a user-defined function
 # ``V``! However, because ``V`` solves an optimization problem internally, we
 # can't use automatic differentiation to compute the first and second
-# derivatives.
+# derivatives. Instead, we can use JuMP's ability to pass callback functions
+# for the gradient and hessian instead.
 
 # First up, we need to define the gradient of ``V`` with respect to ``x``. In
 # general, this may be difficult to compute, but because ``x`` appears only in
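
Because x appears only in the inner objective, the gradient of V can be evaluated by differentiating that objective at the optimal y. The following sketch shows what such gradient and Hessian callbacks might look like for this particular objective, together with one way to attach them via the @operator macro available in recent JuMP releases. The names ∇V, ∇²V, and op_V are illustrative, and the Hessian treats the optimal y as locally constant in x, a simplifying assumption not stated in the diff:

# Gradient callback: fills `g` in place with the partial derivatives of the
# inner objective, evaluated at the optimal y.
function ∇V(g::AbstractVector, x...)
    _, y = solve_lower_level(x...)
    g[1] = 2 * x[1] * y[1] - y[1]^4
    g[2] = 2 * x[2] * y[2] - 2 * y[2]^4
    return
end

# Hessian callback: fills the lower triangle of `H` in place, treating the
# optimal y as constant with respect to x.
function ∇²V(H::AbstractMatrix, x...)
    _, y = solve_lower_level(x...)
    H[1, 1] = 2 * y[1]
    H[2, 2] = 2 * y[2]
    return
end

# Hypothetical usage with the substituted problem min x_1^2 + x_2^2 + V(x_1, x_2);
# check the operator-registration API of your JuMP version.
model = Model(Ipopt.Optimizer)
@variable(model, x[1:2] >= 0)
@operator(model, op_V, 2, V, ∇V, ∇²V)
@objective(model, Min, x[1]^2 + x[2]^2 + op_V(x[1], x[2]))
optimize!(model)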
@@ -146,14 +147,14 @@ y
 
 # This solution approach worked, but it has a performance problem: every time
 # we needed to compute the value, gradient, or hessian of ``V``, we had to
-# resolve the lower-level optimization problem! This is wasteful, because we
+# re-solve the lower-level optimization problem! This is wasteful, because we
 # will often call the gradient and hessian at the same point, and so solving the
 # problem twice with the same input repeats work unnecessarily.
 
 # We can work around this by using memoization:
 
 function memoized_solve_lower_level()
-    last_x, f, y = nothing, 0.0, [NaN, NaN]
+    last_x, f, y = nothing, NaN, [NaN, NaN]
     function _update_if_needed(x...)
         if last_x != x
             f, y = solve_lower_level(x...)
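
The hunk ends before the body of memoized_solve_lower_level is complete. A completed sketch consistent with the opening shown above, and with the illustrative derivative formulas sketched earlier, could look like this; everything past the lines in the diff is an assumption:

function memoized_solve_lower_level()
    last_x, f, y = nothing, NaN, [NaN, NaN]
    # Re-solve the lower-level problem only when x changes; the value, gradient,
    # and Hessian callbacks then share one solve per input point.
    function _update_if_needed(x...)
        if last_x != x
            f, y = solve_lower_level(x...)
            last_x = x
        end
        return
    end
    function memoized_V(x...)
        _update_if_needed(x...)
        return f
    end
    function memoized_∇V(g::AbstractVector, x...)
        _update_if_needed(x...)
        g[1] = 2 * x[1] * y[1] - y[1]^4
        g[2] = 2 * x[2] * y[2] - 2 * y[2]^4
        return
    end
    function memoized_∇²V(H::AbstractMatrix, x...)
        _update_if_needed(x...)
        H[1, 1] = 2 * y[1]
        H[2, 2] = 2 * y[2]
        return
    end
    return memoized_V, memoized_∇V, memoized_∇²V
end

These memoized callbacks would then be passed to the operator registration in place of V, ∇V, and ∇²V.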
