
Commit 3d3b285

Update user_defined_hessians.jl

1 parent 544c487


docs/src/tutorials/nonlinear/user_defined_hessians.jl

Lines changed: 12 additions & 6 deletions
@@ -18,16 +18,18 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE #src
 # SOFTWARE. #src
 
-# # User-defined hessians
+# # User-defined Hessians
 
-# In this tutorial, we explain how to write a user-defined function with an
-# explicit hessian.
+# In this tutorial, we explain how to write a user-defined function (see [User-defined Functions](@ref))
+# with an explicit Hessian matrix.
 
 # This tutorial uses the following packages:
 
 using JuMP
 import Ipopt
 
+# ## Motivation
+
 # Our goal for this tutorial is to solve the bilevel optimization problem:
 
 # ```math
@@ -46,7 +48,9 @@ import Ipopt
 # involving variables ``y``. From the perspective of the lower-level, the
 # values of ``x`` are fixed parameters, and so the model optimizes ``y`` given
 # those fixed parameters. Simultaneously, the upper level is optimizing ``x``
-# given the response of ``yy``.
+# given the response of ``y``.
+
+# ## Decomposition
 
 # There are a few ways to solve this problem, but we are going to use a
 # nonlinear decomposition method. The first step is to write a function to
@@ -145,8 +149,10 @@ value.(x)
 _, y = solve_lower_level(value.(x)...)
 y
 
-# This solution approach worked, but it has a performance problem: every time
-# we needed to compute the value, gradient, or hessian of ``V``, we had to
+# ## Memoization
+
+# Our solution approach works, but it has a performance problem: every time
+# we need to compute the value, gradient, or hessian of ``V``, we have to
 # re-solve the lower-level optimization problem! This is wasteful, because we
 # will often call the gradient and hessian at the same point, and so solving the
 # problem twice with the same input repeats work unnecessarily.
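The fix that the new "Memoization" section sets up is to cache results between calls. As a rough sketch of the idea (illustrative only, not code from this commit), a Julia closure can remember the most recent input and re-solve only when the input changes; all names below are hypothetical:

# Illustrative sketch: cache the most recent input so that consecutive value,
# gradient, and Hessian queries at the same point trigger only one solve.
function memoize_last(foo::Function)
    last_x, last_result = nothing, nothing
    function memoized(x...)
        if x !== last_x  # tuples of numbers compare by value here
            last_x, last_result = x, foo(x...)
        end
        return last_result
    end
    return memoized
end

# Usage with a hypothetical expensive function standing in for the solve:
slow_square(x...) = (sleep(0.1); sum(xi^2 for xi in x))
cached_square = memoize_last(slow_square)
cached_square(1.0, 2.0)  # computes (slow)
cached_square(1.0, 2.0)  # same input: returns the cached result, no recompute

Wrapping `solve_lower_level` in such a cache means a gradient query followed by a Hessian query at the same point costs one lower-level solve instead of two.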

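For the tutorial's headline topic, registering a function with an explicit Hessian, here is a minimal sketch of the pattern, assuming a recent JuMP release where `@operator` accepts value, gradient, and Hessian callbacks (the Hessian callback fills only the lower triangle). The Rosenbrock function and all names are illustrative, not taken from this diff:

using JuMP
import Ipopt

# Rosenbrock function with hand-coded first and second derivatives.
f(x...) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

function ∇f(g::AbstractVector, x...)
    g[1] = -2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1]^2)
    g[2] = 200 * (x[2] - x[1]^2)
    return
end

function ∇²f(H::AbstractMatrix, x...)
    # Only the lower triangle needs to be filled in.
    H[1, 1] = 2 - 400 * x[2] + 1200 * x[1]^2
    H[2, 1] = -400 * x[1]
    H[2, 2] = 200.0
    return
end

model = Model(Ipopt.Optimizer)
@variable(model, x[1:2])
@operator(model, op_f, 2, f, ∇f, ∇²f)
@objective(model, Min, op_f(x[1], x[2]))
optimize!(model)

Supplying the explicit Hessian gives the solver exact second-order information for the operator, rather than leaving it to approximate that information itself.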