Commit a33e65f

Respond to feedback
1 parent a71f427 commit a33e65f

2 files changed: +18 −17 lines changed

docs/src/manual/nlp.md

+3-2
@@ -462,8 +462,9 @@ register(model, :rosenbrock, 2, f, ∇f, ∇²f)
 Make sure the first argument to `∇²f` supports an `AbstractMatrix`, and do
 not assume the input is `Float64`. You may assume the matrix is initialized
 with zeros, so you need only to fill in the non-zero terms. The matrix type
-passed in as `H` is very limited. You may assume only that it supports
-`size(H)` and `setindex!`.
+passed in as `H` depends on the automatic differentiation system, so it may
+be something other than `Matrix{Float64}`. You may assume only that it
+supports `size(H)` and `setindex!`.

 ### User-defined functions with vector inputs
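For illustration, a Hessian callback that satisfies this contract might look like the sketch below. It assumes the two-argument Rosenbrock function `f(x, y) = (1 - x)^2 + 100 * (y - x^2)^2` registered in the hunk header above, touches `H` only through `size(H)` and `setindex!`, and fills just the lower-triangular non-zero terms; it is a hedged sketch, not the manual's verbatim example.

```julia
# Sketch of a Hessian callback for the two-argument Rosenbrock function.
# It relies only on `size(H)` and `setindex!`, per the contract above, and
# does not assume that `H` is a `Matrix{Float64}`.
function ∇²f(H::AbstractMatrix, x...)
    @assert size(H) == (2, 2)
    H[1, 1] = 1200 * x[1]^2 - 400 * x[2] + 2  # ∂²f/∂x₁²
    H[2, 1] = -400 * x[1]                     # ∂²f/∂x₂∂x₁ (lower triangle)
    H[2, 2] = 200                             # ∂²f/∂x₂²
    return
end
```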

docs/src/tutorials/nonlinear/user_defined_hessians.jl

+15-15
@@ -21,7 +21,7 @@
 # # User-defined Hessians

 # In this tutorial, we explain how to write a user-defined function (see [User-defined Functions](@ref))
-# with an explicit Hessian matrix.
+# with a Hessian matrix explicitly provided by the user.

 # This tutorial uses the following packages:
@@ -34,7 +34,7 @@ import Ipopt

 # ```math
 # \begin{array}{r l}
-# \min\limits_{x} & x_1^2 + x_2^2 + z \\
+# \min\limits_{x,z} & x_1^2 + x_2^2 + z \\
 # s.t. & \begin{array}{r l}
 # z \ge \max\limits_{y} & x_1^2 y_1 + x_2^2 y_2 - x_1 y_1^4 - 2 x_2 y_2^4 \\
 # s.t. & (y_1 - 10)^2 + (y_2 - 10)^2 \le 25
@@ -45,16 +45,16 @@ import Ipopt

 # This bilevel optimization problem is composed of two nested optimization
 # problems. An _upper_ level, involving variables ``x``, and a _lower_ level,
-# involving variables ``y``. From the perspective of the lower-level, the
-# values of ``x`` are fixed parameters, and so the model optimizes ``y`` given
-# those fixed parameters. Simultaneously, the upper level is optimizing ``x``
-# given the response of ``y``.
+# involving variables ``y``. From the perspective of the lower-level problem,
+# the values of ``x`` are fixed parameters, and so the model optimizes ``y``
+# given those fixed parameters. Simultaneously, the upper-level problem
+# optimizes ``x`` and ``z`` given the response of ``y``.

 # ## Decomposition

 # There are a few ways to solve this problem, but we are going to use a
 # nonlinear decomposition method. The first step is to write a function to
-# compute:
+# compute the lower-level problem:

 # ```math
 # \begin{array}{r l}
@@ -78,7 +78,7 @@ function solve_lower_level(x...)
     return objective_value(model), value.(y)
 end

-# This function takes a guess of ``x`` and returns the optimal lower-level
+# The next function takes a value of ``x`` and returns the optimal lower-level
 # objective-value and the optimal response ``y``. The reason why we need both
 # the objective and the optimal ``y`` will be made clear shortly, but for now
 # let us define:
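For context while reading the hunks below, here is a hedged sketch of what the full `solve_lower_level` and the value function ``V`` might look like, reconstructed from the lower-level problem stated earlier and the `return` line visible in this hunk; the actual tutorial code may differ in its details.

```julia
# Sketch: solve the lower-level problem for a fixed x and return its optimal
# objective value together with the optimal y.
function solve_lower_level(x...)
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, y[1:2])
    @NLobjective(
        model,
        Max,
        x[1]^2 * y[1] + x[2]^2 * y[2] - x[1] * y[1]^4 - 2 * x[2] * y[2]^4,
    )
    @NLconstraint(model, (y[1] - 10)^2 + (y[2] - 10)^2 <= 25)
    optimize!(model)
    return objective_value(model), value.(y)
end

# Sketch: the value function V(x) is the optimal lower-level objective.
V(x...) = solve_lower_level(x...)[1]
```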
@@ -101,7 +101,7 @@ end
 # ``V``! However, because ``V`` solves an optimization problem internally, we
 # can't use automatic differentiation to compute the first and second
 # derivatives. Instead, we can use JuMP's ability to pass callback functions
-# for the gradient and hessian instead.
+# for the gradient and Hessian instead.

 # First up, we need to define the gradient of ``V`` with respect to ``x``. In
 # general, this may be difficult to compute, but because ``x`` appears only in
@@ -115,7 +115,7 @@ function ∇V(g::AbstractVector, x...)
     return
 end

-# Second, we need to define the hessian of ``V`` with respect to ``x``. This is
+# Second, we need to define the Hessian of ``V`` with respect to ``x``. This is
 # a symmetric matrix, but in our example only the diagonal elements are
 # non-zero:
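The bodies of the gradient and Hessian callbacks fall mostly outside this hunk. A plausible sketch, assuming the reasoning in the surrounding text (differentiate the lower-level objective with respect to ``x`` while holding the optimal ``y`` fixed), is:

```julia
# Sketch: gradient of V with respect to x, holding the optimal y fixed.
function ∇V(g::AbstractVector, x...)
    _, y = solve_lower_level(x...)
    g[1] = 2 * x[1] * y[1] - y[1]^4
    g[2] = 2 * x[2] * y[2] - 2 * y[2]^4
    return
end

# Sketch: Hessian of V; as noted above, only the diagonal entries are nonzero.
function ∇²V(H::AbstractMatrix, x...)
    _, y = solve_lower_level(x...)
    H[1, 1] = 2 * y[1]
    H[2, 2] = 2 * y[2]
    return
end
```

These callbacks could then be attached to the upper-level model with something like `register(model, :V, 2, V, ∇V, ∇²V)`, mirroring the `register` signature shown in the `nlp.md` diff above.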

@@ -152,17 +152,17 @@ y
 # ## Memoization

 # Our solution approach works, but it has a performance problem: every time
-# we need to compute the value, gradient, or hessian of ``V``, we have to
+# we need to compute the value, gradient, or Hessian of ``V``, we have to
 # re-solve the lower-level optimization problem! This is wasteful, because we
-# will often call the gradient and hessian at the same point, and so solving the
+# will often call the gradient and Hessian at the same point, and so solving the
 # problem twice with the same input repeats work unnecessarily.

 # We can work around this by using memoization:

 function memoized_solve_lower_level()
     last_x, f, y = nothing, NaN, [NaN, NaN]
     function _update_if_needed(x...)
-        if last_x != x
+        if last_x !== x
             f, y = solve_lower_level(x...)
             last_x = x
         end
@@ -191,8 +191,8 @@ f, ∇f, ∇²f = memoized_solve_lower_level()

 # The function above is a little confusing, but it returns three new functions
 # `f`, `∇f`, and `∇²f`, each of which call `_update_if_needed(x...)`. This
-# function only updates the cached values of `f` and `y` if the input `x` is
-# different to what is last saw.
+# function only updates the cached values of the objective `f` and lower-level
+# primal variables `y` if the input `x` is different to its previous value.

 model = Model(Ipopt.Optimizer)
 @variable(model, x[1:2] >= 0)
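To make the description of the three returned functions concrete, here is a hedged sketch of how the memoizing closure might be completed. The inner function names and the gradient/Hessian formulas are assumptions carried over from the sketches above, not lines from this commit.

```julia
# Sketch: completing the memoizing closure. Each returned function calls
# `_update_if_needed` first, so the lower-level problem is solved at most
# once per distinct input x.
function memoized_solve_lower_level()
    last_x, f, y = nothing, NaN, [NaN, NaN]
    function _update_if_needed(x...)
        if last_x !== x
            f, y = solve_lower_level(x...)
            last_x = x
        end
        return
    end
    # Value callback: returns the cached lower-level objective.
    function cached_f(x...)
        _update_if_needed(x...)
        return f
    end
    # Gradient callback: fills `g` in place using the cached y.
    function cached_∇f(g::AbstractVector, x...)
        _update_if_needed(x...)
        g[1] = 2 * x[1] * y[1] - y[1]^4
        g[2] = 2 * x[2] * y[2] - 2 * y[2]^4
        return
    end
    # Hessian callback: fills the nonzero diagonal of `H` using the cached y.
    function cached_∇²f(H::AbstractMatrix, x...)
        _update_if_needed(x...)
        H[1, 1] = 2 * y[1]
        H[2, 2] = 2 * y[2]
        return
    end
    return cached_f, cached_∇f, cached_∇²f
end

# Usage, matching the call shown in the hunk header above:
f, ∇f, ∇²f = memoized_solve_lower_level()
```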
