
Multiobjective support in JuMP #2099

Closed
odow opened this issue Nov 8, 2019 · 36 comments · Fixed by #3176

Comments

@odow
Member

odow commented Nov 8, 2019

This issue is for a discussion on introducing multiobjective support to JuMP (and
MOI).

Passing multiple objectives to the solver

There are two possibilities:

  1. Extending @objective to support vector-valued functions, so the user would
    write:
    @objective(model, Min, C * x)
  2. Adding an index keyword, so the user would write:
    @objective(model, Min, C[1, :] * x, index = 1)
    @objective(model, Min, C[2, :] * x, index = 2)

Option (1) is probably the easiest, since it fits quite well into the MOI
framework, and JuMP already has support for parsing vector-valued functions.

M.O. solvers would just declare:

function MOI.supports(
    ::Optimizer, ::MOI.ObjectiveFunction{MOI.VectorAffineFunction{Float64}}
)
    return true
end

The biggest downside with (1) is that it would only support linear and quadratic
objectives. We wouldn't be able to directly support nonlinear objectives. (You
could, of course, introduce a dummy variable with a nonlinear equality
constraint.)
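The dummy-variable workaround mentioned above can be sketched as follows (a hedged sketch; the nonlinear expression is illustrative, not from the thread):

```julia
# Sketch: fold a nonlinear objective term into an affine vector objective by
# introducing a dummy variable tied to the term via a nonlinear constraint.
using JuMP

model = Model()
@variable(model, x[1:2] >= 0)
@variable(model, t)
# Tie the dummy variable to the (illustrative) nonlinear term:
@NLconstraint(model, t == x[1] * exp(-x[2]))
# The vector-valued objective itself stays affine in (x, t):
@objective(model, Min, [x[1] + x[2], t])
```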

Querying results from the solver

I think the easiest way for M.O. solvers to return the Pareto frontier is for
them to slightly abuse the notion of ResultCount.

We should implement

function result_count(model::Model)
    return MOI.get(model, MOI.ResultCount())
end

And then add a result keyword argument to value and objective_value, so we have:

function value(x::VariableRef; result=1)
    return MOI.get(owner_model(x), MOI.VariablePrimal(result), index(x))
end

The only issue with objective_value (and objective_bound and
dual_objective_bound) is that they expect a Float64 return type. This would
be relaxed to depend on the type of the objective function.
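A minimal sketch of what the relaxed accessor could look like (hypothetical signature; MOI.ObjectiveValue already accepts a result index):

```julia
# Hedged sketch: `objective_value` with a `result` keyword. The return type
# follows the type of the objective function, so a vector-valued objective
# would yield a Vector{Float64} rather than a Float64.
function objective_value(model::Model; result::Int = 1)
    return MOI.get(model, MOI.ObjectiveValue(result))
end
```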

Example

If MultiJuMP implemented a MOI.Optimizer solver, we could write:

using JuMP
using MultiJuMP
using GLPK

model = Model(() -> MultiJuMP.Optimizer(GLPK.Optimizer))
@variable(model, x[1:2] >= 0)
@objective(model, Min, [2x[1] + x[2]; x[1] + 2x[2]])
@constraint(model, sum(x) >= 1)
optimize!(model)

pareto_frontier = []
for i in 1:result_count(model)
    @assert primal_status(model; result = i) == MOI.FEASIBLE_POINT
    push!(pareto_frontier, (
        x_val = value.(x; result = i),
        obj = objective_value(model; result = i),
    ))
end

cc @anriseth, @matbesancon

@blegat
Member

blegat commented Nov 8, 2019

Another option is to define

function MOI.set(model, ::ObjectiveFunction, func::AbstractVectorFunction)
    for i in 1:MOI.output_dimension(func)
        MOI.set(model, MultiObjectiveFunction(i), func[i])
    end
end

so that both syntaxes work for the user

@odow
Member Author

odow commented Nov 8, 2019

The more I've thought about this, the more sure I am that option (1) is the correct way. You really are declaring a vector-valued objective, instead of multiple different objectives. It gives a natural interpretation of the return type of ObjectiveValue and needs no change to MOI.

@anriseth
Contributor

anriseth commented Nov 9, 2019

Great to see a renewed and improved effort to bring vector optimization to JuMP :)
I've left academia and, therefore, sadly stopped using Julia. I don't think I can provide much useful input, but I look forward to trying these features at some point in the future :)

@xgandibleux

Thanks @odow for initiating this discussion; I am happy to see it.

We are developing vOptSolver, which is currently composed of two packages, vOptSpecific and vOptGeneric. vOptGeneric is devoted to multi-objective linear optimization problems with discrete variables, see https://github.com/vOptSolver (but we also have some beta versions of algorithms for problems with mixed integer and continuous variables).

The version online is compliant with Julia 1.x and JuMP 0.18. You can take a look at the notebook (here: https://github.com/vOptSolver/vOptSolver-notebook) that I prepared as support for a seminar held in November 2018. The version of vOptGeneric compliant with JuMP 0.19 is ready on a branch (https://github.com/vOptSolver/vOptGeneric.jl/branches). Updating all the documents is a matter of free time for me (for various reasons I have been much less available since last summer).

Here is an example with vOptGeneric + JuMP 0.19 for the bi-objective knapsack problem:

using vOptGeneric, JuMP, Cbc, LinearAlgebra
m = vModel(with_optimizer(Cbc.Optimizer, logLevel=0))

p1 = [77,94,71,63,96,82,85,75,72,91,99,63,84,87,79,94,90,60,69,62]
p2 = [65,90,90,77,95,84,70,94,66,92,74,97,60,60,65,97,93,60,69,74]
w = [80,87,68,72,66,77,99,85,70,93,98,72,100,89,67,86,91,79,71,99]
c = 900
size = length(p1)

@variable(m, x[1:size], Bin)
@addobjective(m, Max, dot(x, p1))
@addobjective(m, Max, dot(x, p2))
@constraint(m, dot(x, w) <= c)

vSolve(m, method=:dichotomy)

Y_N = getY_N(m)
for n in 1:length(Y_N)
    X = value.(x, n)
    print(findall(elt -> elt == 1, X))
    println("| z = ", Y_N[n])
end

As you can see:

  1. we have chosen to handle the objectives separately. Adding an index which denotes a specific objective, as suggested by @odow, is our preferred option because we would like in the near future to remove one objective without redefining the whole model.
  2. with vSolve, we specify the solving method used for computing the exact solutions (X_E) and the corresponding non-dominated points (Y_N). Currently four methods are available (epsilon-constraint; dichotomy; Chalmet; lexicographic), and these methods are currently limited to the bi-objective case (except for the lexicographic one).
  3. we have developed some functions for extracting Y_N and X_E.

Thus:

  • a discussion about how to pass the resolution method to invoke (and possibly the required parameters) is needed.

  • a discussion about how to retrieve Y_N and X_E is needed as soon as the problem is not discrete. Indeed, for the MO-MILP case, Y_N may be composed of points, segments (open or closed), and facets (full or not; open or closed). We have not yet settled on a protocol in the vOpt team. One option may be to return each object separately, with its properties and a description of their adjacency relations.

@blegat
Member

blegat commented Nov 9, 2019

a discussion about how to pass the resolution method to invoke (and possibly the required parameters) is needed.

vOpt could either create

  • one MOI optimizer per resolution method, so the user has to do model = Model(with_optimizer(vOptSolver.Chalmet.Optimizer)), or
  • one single MOI optimizer where the resolution method is given as a parameter, so the user has to do model = Model(with_optimizer(vOptSolver.Optimizer)) and then set_parameter(model, "method", "Chalmet").

a discussion about how to retrieve Y_N and X_E is needed as soon as the problem is not discrete

You can freely define any interface; JuMP will not restrict you. You just need to define new MOI attributes, so that the user can do MOI.get(model, vOptSolver.Y_N()), and then you can define convenience functions with the interface you like, e.g.: getY_N(model) = MOI.get(model, vOptSolver.Y_N()).
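For example, such a solver-specific attribute might be sketched like this (hedged: Y_N, Optimizer, and the field nondominated_points are illustrative names, not existing vOpt API):

```julia
# Hedged sketch of a solver-specific MOI attribute for the non-dominated set.
using MathOptInterface
const MOI = MathOptInterface

struct Y_N <: MOI.AbstractModelAttribute end

# The attribute is computed by the solver, not set by the user:
MOI.is_set_by_optimize(::Y_N) = true

function MOI.get(model::Optimizer, ::Y_N)
    return model.nondominated_points  # assumed solver-internal storage
end

# Convenience wrapper matching vOptGeneric's existing interface:
getY_N(model) = MOI.get(model, Y_N())
```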

@odow
Member Author

odow commented Nov 9, 2019

we have chosen to handle the objectives separately. Adding an index which denotes a specific objective, as suggested by @odow, is our preferred option because we would like in the near future to remove one objective without redefining the whole model.

Is it okay to just redefine the objective? (E.g., below.) Or are you keeping the pre-computed frontier and projecting it onto the lower-dimensional space when you remove an objective?

@objective(model, Min, C * x)
# Then later
@objective(model, Min, C[1, :] * x)

If X_E and Y_N are points, then you will be able to query them from JuMP using value(x, result=i) and objective_value(model, result=i) as I outlined. If you have more solver-specific solutions, it's easy to define MOI attributes as @blegat mentioned. For example:

model = Model()
@variable(model, x[1:2])
@objective(model, Min, C * x)
Y_N = MOI.get(model, vOpt.EfficientFacets())

Gurobi.jl uses this to query the IIS, for example.

There is one very big change for vOptSolver: you should re-write it as an MOI solver, rather than a JuMP extension. That means you don't have to worry about macros like @addobjective, or about defining types like vModel which wrap JuMP models. You would have first-class support for everything JuMP offers :)

@joaquimg
Member

From the user/modeler perspective, it seems that option 2 is better. Option one feels somewhat unnatural, especially if the two objectives are completely different.

@odow
Member Author

odow commented Nov 11, 2019

So my reasoning is this:

  • MathOptInterface can support multiple objectives via vector-valued objective functions
  • MO solvers should declare support for MOI.ObjectiveFunction{MOI.VectorOfVariables}, MOI.ObjectiveFunction{MOI.VectorAffineFunction} or MOI.ObjectiveFunction{MOI.VectorQuadraticFunction}.
  • If they do this, then things like bridges should JustWork™
  • MathOptInterface can interpret the Pareto frontier using the ResultCount attribute. We already have support for this, and all VariablePrimal etc are parameterized by the ResultCount.

So, at the MOI level, we don't need to change anything to support M.O.

One exception is if solvers want to return more complicated solution artifacts (e.g., facets). If so, they should define solver-specific MOI attributes.

So, given we don't need to change MOI, what is the closest mapping to JuMP? Vector-valued objectives.

If you want to give the objectives separately, you can go:

@expression(model, first_objective, ...)
@expression(model, second_objective, ...)
@objective(model, Min, [first_objective; second_objective])

The other approach, JuMP giving objectives separately, would require changes to MOI to parameterize the objective function based on the index. This raises some questions.

  • What does ObjectiveValue return?
  • What if users give ObjectiveFunction{1} and ObjectiveFunction{3}?
  • What if one of the objectives is a vector-valued function?
  • How do we delete an objective? MOI only has the concept of setting attributes, so we could set an objective to zero, but then does the user expect the dimension of the returned ObjectiveValue to change? Or do they expect a 0 in that element?

@joaquimg
Member

Expressions are reasonable as a quick fix, although handling objectives separately seems a good future goal.
Delete should delete, not set to zero; otherwise the model could grow a lot...

@matbesancon
Contributor

The inconvenience I see with the @objective(model, Min, C * x) syntax is that people can get confused when accidentally providing a vector-valued function, compared to having a multiobjective-specific syntax.

@odow
Member Author

odow commented Nov 11, 2019

The inconvenience I see with the @objective(model, Min, C * x) syntax is that people can get confused when accidentally providing a vector-valued function, compared to having a multiobjective-specific syntax.

To which I counter: there is already precedent for vector-valued functions in constraints; most solvers would throw "VectorAffineFunction in objective not supported"; and even if they did solve it, objective_value would return a vector not a Float64.

People could get equally confused with

@objective(model, Min, x)
@objective(model, Min, y)

And how would we handle multiple objective senses?

Here is @xgandibleux's example with what I am proposing:

using vOptGeneric, JuMP, Cbc, LinearAlgebra

p1 = [77,94,71,63,96,82,85,75,72,91,99,63,84,87,79,94,90,60,69,62]
p2 = [65,90,90,77,95,84,70,94,66,92,74,97,60,60,65,97,93,60,69,74]
w = [80,87,68,72,66,77,99,85,70,93,98,72,100,89,67,86,91,79,71,99]
c = 900
size = length(p1)

model = Model(with_optimizer(
    vOptGeneric.Optimizer(
        Cbc.Optimizer(logLevel=0), method=:dichotomy
    )
))
@variable(model, x[1:size], Bin)
@objective(model, Max, [dot(x, p1), dot(x, p2)])
@constraint(model, dot(x, w) <= c)
optimize!(model)
for n in 1:result_count(model)
    X = value.(x; result = n)
    print(findall(elt -> elt == 1, X))
    println("| z = ", objective_value(model; result = n))
end

@xgandibleux

In writing

@objective(model, Max, [dot(x, p1), dot(x, p2)])

the user cannot
(1) blend functions to minimize and to maximize in the model, and
(2) point out a function to delete without rebuilding all the model.

• Concerning (1): of course, it is trivial to rewrite a function from min to max, but having the choice here is natural (on the same level as asking why constraints should not be expressed only with <=).

• Concerning (2): MO-models are also solved within an interactive approach, where the user can adopt a "what-if" behavior with the objectives. This is a feedback that I have received from users of vOptGeneric.

I am working with my students on a case study (a car sequencing problem) with 2 or 3 objective functions. The (technical) constraints are static, but the objectives may change between production plants. This is another illustration where the preferences of a decision-maker may suggest deleting an objective or replacing an objective with another (with the idea of comparing different optimization policies) in a (large) MIP model.

Is it technically possible to imagine naming the objectives as we do the constraints:

@objective(model, obj1, Max, [dot(x, p1)])
@objective(model, obj2, Min, [dot(x, p2)])

or

@objective(model, cost[1], Min, [dot(x, p1)])
@objective(model, stability[2], Max, [dot(x, p2)])

or

@objective(model, [1], Min, [dot(x, p1)])
@objective(model, [2], Max, [dot(x, p2)])

or

@objective(model, obj[i = 1:3], [ dot(x, p[i]) ])

and then to refer to one objective by name (e.g. stability) or by its index (e.g. 2)?

@odow
Member Author

odow commented Nov 11, 2019

blend functions to minimize and to maximize in the model

This is a reasonable point to make. The question of "objective sense" isn't obvious. The hang-up we seem to be having is the standard form. Is it:

min: f_0(x)
s.t. f_i(x) in S_i

where f_0 can be vector-valued, or is it

sense_i: f_i(x), i=1,2,...
s.t. g_j(x) in S_j, j=1,2,...

where f_i must be scalar-valued.

I'm going to argue strongly that it's the first, since it is the simplest concept. More importantly, it's the design of MathOptInterface as it stands today. We wouldn't need to add any special features for multi objective problems. This is pretty important for actually implementing this, because I don't think we want to re-think the entire concept of objectives this close to the release of JuMP 1.0.

So at the expense of (i) providing a list of expressions not being the nicest syntax and (ii) not being able to specify different objective senses, perhaps we can table this idea until JuMP 2.0?

point out a function to delete without rebuilding all the model.

They can just reset the objective:

model = Model()
@variable(model, x[1:2])
@objective(model, Min, [x[1], x[2]])
optimize!(model)
@objective(model, Min, x[1] + x[2])
optimize!(model)

Is it technically possible to imagine of naming the objectives as the constraints:

This would be possible using @expression:

model = Model()
@variable(model, x[1:2])
@expression(model, obj[i=1:2], x[i])
@objective(model, Min, obj)
optimize!(model)
value(obj[1], result=1)

Relating to the changing objectives point:

model = Model()
@variable(model, x[1:3])
@expression(model, stability, x[1])
@expression(model, cost, x[2])
@expression(model, production, x[3])
@objective(model, Max, [stability, production, -cost])
optimize!(model)
@show value(stability; result=2)
@show value(cost; result=2)
@objective(model, Max, [stability, production])
optimize!(model)
# Note: we didn't have cost as an objective, but we can still query the value!
@show value(cost; result=2)

@odow
Member Author

odow commented Nov 12, 2019

Okay, so I started to implement this, and I found that one thing would have to change in MOI. Essentially, we just need to relax the AbstractScalarFunction aspect of this definition:

"""
    ObjectiveFunction{F<:AbstractScalarFunction}()

A model attribute for the objective function which has a type `F<:AbstractScalarFunction`.
`F` should be guaranteed to be equivalent but not necessarily identical to the function type provided by the user.
Throws an `InexactError` if the objective function cannot be converted to `F`,
e.g. the objective function is quadratic and `F` is `ScalarAffineFunction{Float64}` or
it has non-integer coefficient and `F` is `ScalarAffineFunction{Int}`.
"""
struct ObjectiveFunction{F<:AbstractScalarFunction} <: AbstractModelAttribute end

https://github.com/JuliaOpt/MathOptInterface.jl/blob/1d4f0d3fd0a545eb2c926731b7c428963092974d/src/attributes.jl#L804-L813

@odow
Member Author

odow commented Nov 12, 2019

So I talked to @mlubin, and we won't be pushing this into JuMP unless there is general agreement on the approach. We definitely don't want to force @xgandibleux et al. to implement our (edit) my approach when they are the subject matter experts.

However, since I require this for my personal work, I will move forward with an implementation. Once it's ready for review, we can have another discussion about the pros and cons of each approach, this time with some actual working code. But, to reiterate, we won't merge it unless there is general agreement.

@odow
Member Author

odow commented Dec 6, 2019

Okay, I now have a proof of concept multi-objective solver: https://github.com/odow/MOO.jl

The best place to look is the test, where we solve a trivial 2 variable BOLP:
https://github.com/odow/MOO.jl/blob/4a1efcbfe04802ca477932b2c1d7fca55eaccb16/test/nise.jl

Arguments for min VectorFunction as opposed to Vector[min ScalarFunction1, max ScalarFunction2]:

  • Aside from a one-line change in MOI (the rest of the diff is for MOI.Utilities), everything JustWorks™. Note in particular that we can use loadfromstring! without modifications.
  • The use of ResultCount to represent the solver returning a set of Pareto-optimal solutions
  • The interpretation of ObjectiveBound as the utopian point
  • It's nice that the entire implementation of the solver is only ~200 lines, and most of those are lines which forward methods onto the inner solver.
  • The MOP format, the VLP format, Gurobi, and CPLEX all assume min/max F(x)

If we want to move forward with this, now that JuMP has support for accessing multiple solutions (#2100), the only missing piece is JuMP passing vector-valued functions to solvers as the objective function.

@freemin7
Contributor

I recently had a conversation with Kaisa Miettinen, who leads the multiobjective optimization group at the University of Jyväskylä. (She made me aware of @xgandibleux's work using Julia.)
The reason for using multiobjective optimization, according to her, is that the proper weighting of the objectives depends on querying a decision maker. There are four ways this might be done:

  • A proper (method of) weighting is known ahead of time, and the problem can be solved as a single-objective optimization problem.
  • A surrogate model of the Pareto front is required so that a decision maker can judge/explore the space computationally cheaply.
  • After some sampling, the decision maker is queried to judge which objectives/parts of the parameter space are interesting to explore further, in case the space of options is really huge.
  • A sampling of the Pareto front is required such that a decision maker can choose between those samples, possibly with some software that helps them explore that space.

I see that 1 & 4 can be accomplished with the current framework.
3 could be accomplished using callbacks, although a standardized interface for user interaction would be really nice, so that many multiobjective solvers can share one exploratory interface (or multiple interfaces could compete in a far-away future).
I imagine that 2 could be accomplished using either callbacks or parsing the multiple results, but a solver might have more useful information for constructing a surrogate than just the set of Pareto-front points. (I can think right now of local gradient information on the Pareto manifold (think tangent space), or rejected points close to the Pareto front, so that the surrogate doesn't touch them.)

I consider those features non-essential but think they are low-hanging fruit on the way to JuMP becoming a best-in-class multiobjective framework. My knowledge of multiobjective optimization is second-hand only, though, and I am open to being corrected.

@manuelbb-upb

I would like to add my two cents to the discussion, mainly concerning the way objectives are declared.
(tl;dr: I side with @xgandibleux and am in favor of declaring them individually.)

Background: I have recently worked on a trust region solver for nonlinear multiobjective problems.
This solver is specifically aimed at expensive or heterogeneous problems where some objectives are considered black boxes and might not even be Julia functions. Hence, it is not necessarily something I would write a MOI interface for.

Despite this, the need for separate evaluation of the different objectives might arise in other approaches as well.
• There is, of course, the Pascoletti-Serafini scalarization, a very powerful tool that can be used on its own or be integrated into iterative schemes. It requires a utopia point in objective space that results from the minimization of the individual objectives.
• I guess that for the ε-constraint method, accessibility of the individual objectives would be of use too.
• Suppose you want to calculate the multiobjective Newton direction. This requires positive definite Hessians, which is why BFGS approximations are often used. A sophisticated solver could allow the user to specify (or automatically determine) how the Hessian should be calculated/approximated for each objective.
• I know of investigations into the hierarchical structure of the Pareto set/front, where reduced problems were used, i.e., where one or several objectives were dropped.

Hope this helps in evaluating how to proceed :)

@odow
Member Author

odow commented Mar 8, 2021

@/all: Thanks for the input.

If anyone watching this thread (or your students) is interested in progressing this further, I've proposed this as a Google Summer of Code (GSoC) project for this (Northern) summer:
https://github.com/jump-dev/GSOC2021#multiobjective-optimization

You can find more about GSOC here:

If we get selected, student applications run March 30 - April 14.

@jacktang

jacktang commented Jul 1, 2022

Hi @odow, how is progress on MOO going? Or do you recommend using https://github.com/anriseth/MultiJuMP.jl?

@odow
Member Author

odow commented Jul 2, 2022

No progress. MultiJuMP hasn't been updated to JuMP 1.0. Digging in to understand how it works and updating it to JuMP 1.0 would be a good project, if you have the time.

@xgandibleux

vOptGeneric solves generic linear problems with 2 objectives with, e.g., an epsilon-constraint method (+ any MIP solver available from JuMP). It is compliant with the latest versions of Julia and JuMP. A quick-starter is presented in recent slides here. Several examples are provided here.
Several extensions/new features are under development; I hope to come back soon with news. Feedback is welcome.

odow added a commit to jump-dev/MathOptInterface.jl that referenced this issue Jan 6, 2023
This is a re-hash of #968

The last PR lost steam because of the discussion in jump-dev/JuMP.jl#2099

My sense is that this is still the right API approach at the MOI-level, but
that it's the wrong approach at the JuMP level. I'm working on a PR to JuMP,
so let's hold off comments until things are working fully and we can discuss
the full API, rather than the pros and cons of treating multi-criteria as
vector optimization.

I have a proof-of-concept solver at https://github.com/odow/MOO.jl
and I'll also update Gurobi.jl to support the new API.
odow added a commit that referenced this issue Jan 6, 2023
To be read in conjunction with jump-dev/MathOptInterface.jl#2070

A proof-of-concept solver and JuMP examples are available at jump-dev/MultiObjectiveAlgorithms.jl#2

The issue #2099 discussed two approaches
for implementing MO in JuMP.

 1. Treat multicriteria as a vector of scalar objectives and a vector of
    scalar senses. This would let someone write [Min, Max], [f(x), g(x)].
    The main reason for this approach is that it matches what users want
    to do.
 2. Treat multicriteria as an optimization problem with a vector-valued
    objective function. Users could write only Min, f(x) where f(x) is a
    vector. The main reason for this approach is that it matches what
    MathOptInterface wants.

This PR implements option 2. The strongest reason in support of option 2
is that it requires very little code to implement, suggesting that it is
a natural extension of MOI.

The biggest downside is that it doesn't overcome the Min-Max issue; but
I think we can work around this with user-facing cosmetic tooling in JuMP;
solvers would be forced to accept a single sense.
@odow
Member Author

odow commented Jan 10, 2023

I'm back at this in #3176.

Here's a related project that we should look to for ideas: https://pymoo.org. It's a bit limited, though: it supports only minimization of a vector of objectives.

@odow
Member Author

odow commented Jan 11, 2023

Single objective sense

  • Gurobi, CPLEX, and Xpress support multiple objectives, but require weights (and/or priorities)
  • CPLEX's extension to the LP file format and MPS file format (can give negative weights to flip)
  • pymoo (minimize only)

Mixed objective sense

@rschwarz
Contributor

If I may comment on Gurobi's capability: you can specify both weights and priorities for the objectives. The objectives will then be grouped by priority, and each group will be "blended" with a weighted sum.

In addition, the user can specify tolerances (both relative and absolute?) to allow for some relaxation when going one level deeper into the hierarchy. We found this feature particularly useful in practice, as you'd otherwise quickly end up in extreme corners.

@odow
Member Author

odow commented Jan 11, 2023

You can specify both weights and priorities for the objectives

Yeah. I haven't tested, but this should work:

set_optimizer_attribute.(
    model, 
    Gurobi.MultiObjectivePriority.(1:3), 
    [2, 1, 1],
)
set_optimizer_attribute.(
    model, 
    Gurobi.MultiObjectiveWeight.(1:3), 
    [1.0, 0.25, 0.75],
)

I think these are attributes of the optimizer though. They're unrelated to how we model the objectives at the JuMP level.

@rschwarz
Contributor

I think these are attributes of the optimizer though. They're unrelated to how we model the objectives at the JuMP level.

Fair enough, I'm not sure about the scope of this issue. If this is just about defining attributes that some of the solvers may support, there's not much more to say about it.

But the algorithm for hierarchical objectives is easy enough to implement on top of any (LP/MIP) solver, so it could make sense to implement some kind of wrapper for it.
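A sketch of such a wrapper, assuming objectives are optimized one at a time in priority order (the function name and tolerance handling are illustrative, not an existing API):

```julia
# Hedged sketch of a hierarchical (lexicographic) wrapper over any
# single-objective LP/MIP solver: optimize each objective in priority order,
# then constrain its achieved level (with optional relative slack, in the
# spirit of Gurobi's per-objective tolerances) before moving to the next.
using JuMP

function solve_hierarchical!(model::Model, objectives::Vector; rel_tol::Float64 = 0.0)
    for f in objectives
        @objective(model, Min, f)
        optimize!(model)
        z = objective_value(model)
        # Lock in this level, allowing `rel_tol` relative degradation:
        @constraint(model, f <= z + rel_tol * abs(z))
    end
    return model
end
```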

@odow
Member Author

odow commented Jan 11, 2023

But the algorithm for hierarchical objectives is easy enough to implement on top of any (LP/MIP) solver, so it could make sense to implement some kind of wrapper for it.

Yeah. That's my plan for: https://github.com/odow/MOO.jl

Currently it just has a single algorithm, but the idea is to collate a few simple ones.

@xgandibleux

Before the integration of MOP features into CPLEX and Gurobi, we integrated into vOptGeneric two methods for solving the weighted-sum and lexicographic LP/MIP with any solver interfaced with JuMP. The methods to invoke are

:dicho or :dichotomy
:lex or :lexico

In recent months, we have extended HiGHS to 2LP (expected by several colleagues for research needs) and developed the epsilon-constraint method for 3 objectives based on the Kirlik-and-Sayin algorithm (today, vOptGeneric is mainly used for solving bi-objective problems declared with JuMP with the epsilon-constraint method). Both should be integrated into vOptGeneric shortly (it is a matter of free time for us).

BTW, FICO has also recently announced the integration of MOP features into Xpress.

@odow
Member Author

odow commented Jan 11, 2023

These last months, we have extended HiGHS to 2LP

Nice 😄 Is there a link to the code? Does it support independent objective senses?

@xgandibleux

The code is already on GitHub. We are now testing it. It will be available in the coming weeks (as soon as the tests are completed and the code is cleaned of French comments).

HiGHS2LP implements a third phase of the simplex algorithm. Starting from the optimal solution obtained with the simplex algorithm, it iterates to generate all the non-dominated points.

HiGHS2LP is used exactly as HiGHS (see hereafter); the only difference comes from the second objective that the user has to specify. Both objectives are given in maximization, or both in minimization.

The solver computes a minimum complete set of efficient solutions $X_E$ and the corresponding non-dominated points $Y_N$. It also returns the vector of critical weights $\lambda$. Here is an example (from the book of Matthias Ehrgott, pages 154-155):

The 2LP problem to solve:

min 3x1 + x2
min -x1 - 2x2
s/t  x2 <= 3
     3x1 - x2 <= 6
     x1, x2 >= 0

The description for HiGHS2LP:

model.lp_.num_col_ = 2;
model.lp_.num_row_ = 2;
model.lp_.sense_ = ObjSense::kMinimize;
model.lp_.col_cost_ = {3, 1};
model.lp_.col_cost2_ = {-1, -2};
model.lp_.col_lower_ = {0, 0};
model.lp_.col_upper_ = {inf, inf};
model.lp_.row_lower_ = {-inf, -inf};
model.lp_.row_upper_ = {3, 6};
model.lp_.a_matrix_.start_ = {0, 1, 3};
model.lp_.a_matrix_.index_ = {1, 0, 1};
model.lp_.a_matrix_.value_ = {3, 1, -1};

The output:

Running HiGHS 1.2.2 [date: 2022-09-27, git hash: n/a]
Copyright (c) 2022 ERGO-Code under MIT licence terms
Solving LP without presolve or with basis
  Iteration Objective Infeasibilities num(sum)
         0   0.0000000000e+00 Pr: 0(0) 0s
         2   1.2000000000e+01 Pr: 0(0); Du: 2(3) 0s
Model status : Optimal
Simplex iterations : 2
Objective value : 1.2000000000e+01
Objective value2 : -9.0000000000e+00
HiGHS run time : 0.00

Model status : Optimal
Simplex iteration count : 2
Objective function value for c1 : 12
Objective function value for c2 : -9
Primal solution status : Feasible
Dual solution status : Feasible
Basis : Valid
Column 0; value = 3; dual c1 = 0; dual c2 = 0; status: Basic
Column 1; value = 3; dual c1 = 0; dual c2 = 0; status: Basic
Row 0; value = 3; dual c1 = 2; dual c2 = -2.33333; status: At upper bound
Row 1; value = 6; dual c1 = 1; dual c2 = -0.333333; status: At upper bound
Basic feasible solutions :
(0 0) -> (0 0)
(0 3) -> (3 -6)
(3 3) -> (12 -9)
Weights :
1; 0.666667; 0.25;

In the output we get:

  • $X_E$, the efficient solutions (left), and $Y_N$, the non-dominated points (right); here (0 0) -> (0 0); (0 3) -> (3 -6); (3 3) -> (12 -9)
  • $\lambda$, the weight decomposition; here 1; 0.666667; 0.25;
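As a sanity check on the output above, the non-dominated set can be recomputed by hand from the feasible vertices of this tiny LP (a pure-Julia sketch; the vertex list is enumerated manually, not queried from a solver):

```julia
# Enumerate the feasible vertices of the Ehrgott example, map them to
# objective space, and keep only the non-dominated points (both objectives
# are minimized).
vertices = [(0.0, 0.0), (0.0, 3.0), (2.0, 0.0), (3.0, 3.0)]
f(x) = (3x[1] + x[2], -x[1] - 2x[2])
Y = map(f, vertices)
dominates(a, b) = all(a .<= b) && any(a .< b)
Y_N = [y for y in Y if !any(z -> dominates(z, y), Y)]
# Y_N == [(0.0, 0.0), (3.0, -6.0), (12.0, -9.0)], matching the solver output
```

The vertex (2, 0) maps to (6, -2), which is dominated by (3, -6), so it is correctly excluded from $Y_N$.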

To do:

  • ASAP, heavy tests on several 2LP problems
  • soon, develop a bridge from JuMP/MOI for calling the binary of HiGHS2LP
  • soon, update vOptGeneric for calling HiGHS2LP from JuMP syntax and collecting the outputs
  • later, merge HiGHS2LP into HiGHS (we are in touch with Julian about this prospect)

@odow
Member Author

odow commented Jan 12, 2023

Both objectives are given in maximization or in minimization

Okay. I think this settles it for me. Having a vector-valued objective function with a single objective sense is the way to go, since that's what all solvers supporting multiple objectives will support.

If, in future, we find that users keep asking for [Min, Max], we could add some sort of bridge to JuMP that would convert it into a pure minimization problem to pass to the solver.
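Such a bridge could be as simple as negating the maximization objectives before handing a single-sense vector to the solver (a hypothetical sketch; `to_min` is an invented name, and plain numbers stand in for objective expressions):

```julia
# Flip Max entries so the whole vector can be passed with a single Min sense;
# the same entries must be negated again when reporting objective values.
to_min(senses, fs) = [s == :Max ? -f : f for (s, f) in zip(senses, fs)]

to_min([:Min, :Max], [1.0, 2.0])  # returns [1.0, -2.0]
```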

So the PR for people to review if interested is jump-dev/MathOptInterface.jl#2070

soon, develop a bridge from JuMP/MOI for calling the binary of HiGHS2LP

This should be easy, since we can copy HiGHS.jl, but it will need jump-dev/MathOptInterface.jl#2070 first.

odow added a commit to jump-dev/MathOptInterface.jl that referenced this issue Jan 13, 2023
This is a re-hash of #968

The last PR lost steam because of the discussion in jump-dev/JuMP.jl#2099

My sense is that this is still the right API approach at the MOI-level, but
that it's the wrong approach at the JuMP level. I'm working on a PR to JuMP,
so let's hold off comments until things are working fully and we can discuss
the full API, rather than the pros and cons of treating multi-criteria as
vector optimization.

I have a proof-of-concept solver at https://github.com/odow/MOO.jl
and I'll also update Gurobi.jl to support the new API.
odow added a commit that referenced this issue Jan 25, 2023
To be read in conjunction with jump-dev/MathOptInterface.jl#2070

A proof-of-concept solver and JuMP examples are available at jump-dev/MultiObjectiveAlgorithms.jl#2

The issue #2099 discussed two approaches
for implementing MO in JuMP.

 1. Treat multicriteria as a vector of scalar objectives and a vector of
    scalar senses. This would let someone write [Min, Max], [f(x), g(x)].
    The main reason for this approach is that it matches what users want
    to do.
 2. Treat multicriteria as an optimization problem with a vector-valued
    objective function. Users could write only Min, f(x) where f(x) is a
    vector. The main reason for this approach is that it matches what
    MathOptInterface wants.

This PR implements option 2. The strongest reason in support of option 2
is that it requires very little code to implement, suggesting that it is
a natural extension of MOI.

The biggest downside is that it doesn't overcome the Min-Max issue; but
I think we can work around this with user-facing cosmetic tooling in JuMP;
solvers would be forced to accept a single sense.
odow added a commit to jump-dev/MathOptInterface.jl that referenced this issue Jan 25, 2023
@odow
Member Author

odow commented Jan 26, 2023

@xgandibleux here's the documentation for the PR: https://jump.dev/JuMP.jl/previews/PR3176/manual/objective/#Set-a-vector-valued-objective

Take a read. Any comments on the syntax? Are you happy to model problems with a vector-valued objective?

@xgandibleux

@odow, I installed and tested this code on my computer; it is great!
Having these two ways to set the objectives fits the expectations discussed here, matches the practice of vOptGeneric users, and is viewed as convenient by colleagues active in MOP (I did a quick tour of my close collaborators).

Shall we move on to
(1) the integration of 2D/3D MOP solving methods called by optimize!(), taking as parameters

  • the MOP method (lexicographic, e-constraint, weightedSum, Simplex, specific ones, etc.)
  • the MILP solver (HiGHS, GLPK, Gurobi, Cplex, etc.) when it is required by a method,
  • the specific parameters (e.g. the value of epsilon)

(2) the retrieval of outputs, at least for the 2- and 3-objective cases:

  • each non-dominated point $y \in Y_N$,
  • (at least) one corresponding efficient solution $x \in X_E$,
  • the specific information associated with a point when it is required, e.g. adjacent vertices in LP/MILP, open/closed edges/facets in MILP

Perhaps starting with a 2D e-constraint method (calling one of the 3 MILP solvers listed) to solve an IP will help design the coming steps. The code for this method is available in vOptGeneric, which can serve as a starting point. Another option would be to revisit Gurobi.jl so that it accepts a 2-IP and returns the $Y_{N}$ and $X_{E}$ corresponding to a lexicographic resolution.
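To illustrate the shape of such an e-constraint loop without a MILP solver in hand, here is a brute-force sketch over an enumerated set of attainable objective points (all names hypothetical; a real implementation would re-solve a JuMP model per epsilon instead of enumerating):

```julia
# e-constraint sweep for a bi-objective maximization problem:
# maximize f1 subject to f2 >= eps, for every attainable value of f2.
function epsilon_constraint(points)  # points = attainable (f1, f2) pairs
    frontier = Set{Tuple{Int,Int}}()
    for eps in unique(last.(points))
        feasible = filter(p -> p[2] >= eps, points)
        best_f1 = maximum(first.(feasible))
        # lexicographic tie-break: among max-f1 points, keep the max-f2 one
        best = maximum(filter(p -> p[1] == best_f1, feasible))
        push!(frontier, best)
    end
    return sort!(collect(frontier))
end

# toy bi-objective instance: each point is (profit1, profit2)
pts = [(0, 0), (3, 6), (5, 4), (4, 7), (6, 2), (2, 8)]
epsilon_constraint(pts)  # -> [(2, 8), (4, 7), (5, 4), (6, 2)]
```

The sweep visits dominated epsilon levels more than once, but the `Set` deduplicates, so only the non-dominated points survive.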

@odow
Member Author

odow commented Jan 30, 2023

I've already implemented support in Gurobi.jl, and MOO.jl contains a couple of simple algorithms that we can move to vOptGeneric once this is released.

Here's what it looks like with these branches:

(hgh) pkg> st
      Status `/private/tmp/hgh/Project.toml`
  [2e9cd046] Gurobi v0.11.5 `https://github.com/jump-dev/Gurobi.jl.git#od/vector-optimization`
  [87dc4568] HiGHS v1.4.2
  [4076af6c] JuMP v1.7.0 `https://github.com/jump-dev/JuMP.jl.git#od/vector-optimization`
  [0327d340] MOO v0.0.1 `https://github.com/odow/MOO.jl#master`
  [b8f27783] MathOptInterface v1.11.5 `https://github.com/jump-dev/MathOptInterface.jl.git#od/vector-optimization`
julia> using JuMP

julia> import Gurobi, HiGHS, MOO

julia> begin
           p1 = [77, 94, 71, 63, 96, 82, 85, 75, 72, 91, 99, 63, 84, 87, 79, 94, 90]
           p2 = [65, 90, 90, 77, 95, 84, 70, 94, 66, 92, 74, 97, 60, 60, 65, 97, 93]
           w = [80, 87, 68, 72, 66, 77, 99, 85, 70, 93, 98, 72, 100, 89, 67, 86, 91]
           model = Model()
           set_silent(model)
           @variable(model, x[1:length(w)], Bin)
           @objective(model, Max, [p1' * x, p2' * x])
           @constraint(model, w' * x <= 900)
           model
       end
A JuMP Model
Maximization problem with:
Variables: 17
Objective function type: Vector{AffExpr}
`AffExpr`-in-`MathOptInterface.LessThan{Float64}`: 1 constraint
`VariableRef`-in-`MathOptInterface.ZeroOne`: 17 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
Names registered in the model: x

julia> set_optimizer(model, Gurobi.Optimizer)

julia> optimize!(model)

julia> for i in 1:result_count(model)
           Y = objective_value(model; result = i)
           println("Result $i : Objective value = $Y")
           X = findall(xi -> xi > 0.9, value.(x; result = i))
           println("  Items = $X")
       end
Result 1 : Objective value = [934.0, 971.0]
  Items = [2, 3, 5, 6, 8, 10, 11, 12, 15, 16, 17]
Result 2 : Objective value = [905.0, 897.0]
  Items = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

julia> set_optimizer(model, () -> MOO.Optimizer(HiGHS.Optimizer))

julia> set_optimizer_attribute(model, MOO.Algorithm(), MOO.NISE())

julia> optimize!(model)

julia> for i in 1:result_count(model)
           Y = objective_value(model; result = i)
           println("Result $i : Objective value = $Y")
           X = findall(xi -> xi > 0.9, value.(x; result = i))
           println("  Items = $X")
       end
Result 1 : Objective value = [955.0, 906.0]
  Items = [2, 3, 5, 6, 9, 10, 11, 14, 15, 16, 17]
Result 2 : Objective value = [948.0, 939.0]
  Items = [1, 2, 3, 5, 6, 8, 10, 11, 15, 16, 17]
Result 3 : Objective value = [934.0, 971.0]
  Items = [2, 3, 5, 6, 8, 10, 11, 12, 15, 16, 17]
Result 4 : Objective value = [918.0, 983.0]
  Items = [2, 3, 4, 5, 6, 8, 10, 11, 12, 16, 17]

julia> set_optimizer_attribute(model, MOO.Algorithm(), MOO.Hierarchical())

julia> optimize!(model)

julia> for i in 1:result_count(model)
           Y = objective_value(model; result = i)
           println("Result $i : Objective value = $Y")
           X = findall(xi -> xi > 0.9, value.(x; result = i))
           println("  Items = $X")
       end
Result 1 : Objective value = [955.0, 906.0]
  Items = [2, 3, 5, 6, 9, 10, 11, 14, 15, 16, 17]

Note that Gurobi returns multiple results, but the second is not on the frontier; it is just an integer-feasible point.
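One way to guard against that is to post-filter the returned objective vectors for dominance before reporting them (a sketch with hypothetical helper names; maximization here, so larger is better):

```julia
# b dominates a under maximization: no worse everywhere, strictly better somewhere
dominated(a, b) = all(b .>= a) && any(b .> a)

# keep only results whose objective vector no other result dominates
pareto_filter(Ys) = [Y for Y in Ys if !any(dominated(Y, Z) for Z in Ys if Z !== Y)]

pareto_filter([[934.0, 971.0], [905.0, 897.0]])  # drops the dominated second point
```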

odow added a commit that referenced this issue Feb 10, 2023
@odow
Member Author

odow commented Feb 15, 2023

Merged! Thanks for all of the input over the last few years.

10 participants