Documentation

Paulo edited this page Jun 30, 2020 · 5 revisions

Table of Contents

  1. MOBayesianOpt
    1. Parameters
      1. Output
    2. Methods
      1. initialize
      2. maximize
      3. WriteSpace
      4. ReadSpace
    3. Attributes

MOBayesianOpt

class MOBayesianOpt(target, NObj, pbounds, constraints=[],
                    verbose=False, Picture=True, TPF=None,
                    n_restarts_optimizer=100, Filename=None, MetricsPS=True,
                    max_or_min='max', RandomSeed=None)

A MOBayesianOpt object represents a full multi-objective optimization problem.

Parameters

  • target: callable, the objective functions to be optimized. Given a point x of the search space, it must return the NObj objective values:
def target(x): # x is a np.array
    return [f_1, f_2, ..., f_NObj]
  • NObj: int, Number of objective functions

  • pbounds: numpy.ndarray with bounds for each parameter; pbounds.shape must be (NParam, 2), where NParam is the number of parameters of the objective function (in other words, the dimension of the search space).

    For example, for a problem with a three-dimensional search space with variables x, y and z, pbounds would be similar to:

pbounds = numpy.array([[x_min, x_max], [y_min, y_max], [z_min, z_max]])
The values `x_min` and `x_max` may be any real number or `None`;
a `None` entry means that variable has no lower and/or upper
bound on that side.
  • constraints: list of dictionaries of the form [{'type': 'ineq', 'fun': constr_fun}, ...]; only inequality constraints are implemented. constr_fun is the constraint function, which must be non-negative at feasible points:
def constr_fun(x):
    return g(x) # >= 0
  • verbose: bool, (default False) whether or not to print progress

  • Picture: bool, (default True) whether or not to plot PF convergence

  • TPF: numpy.ndarray, (default None). Array with the true Pareto front, used to calculate convergence metrics

  • n_restarts_optimizer: int, (default 100) GaussianProcessRegressor parameter, the number of restarts of the optimizer for finding the kernel's parameters which maximize the log-marginal likelihood. For complicated objective functions, larger values of n_restarts_optimizer are desirable.

  • Filename: str, (default None). Partial metrics will be saved to a file named Filename; if None, nothing is saved. If a file with the same name already exists, the output is appended to the existing file; otherwise a new file is created. See the description of what is saved in the Output section.

  • MetricsPS: bool, (default True), whether or not to calculate metrics with the Pareto Set points

  • max_or_min: str (default 'max'), whether the optimization problem is a maximization problem ('max'), or a minimization one ('min')

  • RandomSeed: {None, int, array_like}, optional. Random seed used to initialize the pseudo-random number generator. Can be any integer between 0 and 2**32 - 1 inclusive, an array (or other sequence) of such integers, or None (the default). If the seed is None, RandomState will try to read data from /dev/urandom (or the Windows analogue) if available, or seed from the clock otherwise. (Passed to numpy.random.RandomState)
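Putting the parameters above together, a minimal setup might look like the following sketch. The two objective functions, bounds, and constraint are illustrative only, and the commented-out constructor call assumes the package providing MOBayesianOpt is importable:

```python
import numpy as np

NObj = 2     # two objective functions
NParam = 3   # three-dimensional search space

def target(x):  # x is a np.array of shape (NParam,)
    # Two illustrative objectives: a sphere and a shifted sphere
    return [np.sum(x ** 2), np.sum((x - 1.0) ** 2)]

# One [min, max] row per search-space variable -> shape (NParam, 2)
pbounds = np.array([[-1.0, 1.0],
                    [-1.0, 1.0],
                    [-1.0, 1.0]])

def constr_fun(x):
    return 1.5 - np.sum(np.abs(x))  # feasible where g(x) >= 0

constraints = [{'type': 'ineq', 'fun': constr_fun}]

# With the library available, the optimizer would then be built as:
# optimizer = MOBayesianOpt(target, NObj, pbounds,
#                           constraints=constraints, max_or_min='min')
print(pbounds.shape, target(np.zeros(NParam)))
```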

Output

In the file Filename, each column corresponds to the following:

  • NDim: Dimensionality of the search space

  • Iter: Total number of objective function evaluations (including initial points)

  • N_init: Initial points in the current run of the code

  • NPF: Number of points of the Pareto Front used to calculate the metrics in this row

  • GD: Generational distance

  • SS: Spread

  • HV: Hypervolume

  • HausDist: Hausdorff distance

  • Cover: Coverage

  • GDPS: Generational distance calculated from points in the Pareto Set

  • SSPS: Spread calculated from points in the Pareto Set

  • HDPS: Hausdorff distance calculated from points in the Pareto Set

  • Prob: Probability of selecting the next iteration point at random

  • q: weight between Search space and objective space when selecting next iteration point

    • q = 1: objective space only
    • q = 0: search space only
  • Front: Filename where current Pareto front approximation is saved

The data in this file may be accumulated from several different runs of the code, with different parameters, and can be used for statistical purposes through Pandas filtering tools. Non-available values are replaced with numpy.nan.
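As a sketch of the Pandas filtering mentioned above, the snippet below parses rows in the documented column order. The sample rows are synthetic, and the exact separator and header layout of files written by the library may differ, so adjust `sep`/`names` to match your Filename file:

```python
import io

import pandas as pd

# Synthetic sample rows in the documented column order (NDim ... Front).
sample = io.StringIO(
    "NDim Iter N_init NPF GD SS HV HausDist Cover GDPS SSPS HDPS Prob q Front\n"
    "3 60 10 25 0.05 0.40 0.90 0.07 0.80 0.06 0.45 0.08 0.10 0.5 front_25.npz\n"
    "3 60 10 50 0.04 0.38 0.92 0.06 0.85 0.05 0.44 0.07 0.10 0.5 front_50.npz\n"
)
df = pd.read_csv(sample, sep=r"\s+")

# Example filter: rows whose hypervolume exceeded a threshold
good = df[df["HV"] > 0.91]
print(good[["NPF", "HV"]].to_string(index=False))
```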

Methods

initialize

initialize(self, init_points=None, Points=None, Y=None)

Initializes the optimization by evaluating the objective functions at selected locations.

Parameters

  • init_points: int, number of random points at which to probe the objective functions
  • Points: (optional) list of points at which to sample the objective functions
  • Y: (optional) list of values of the objective functions at the points in Points. If not provided, the method evaluates the objectives at the points in Points
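The two initialization styles can be sketched as follows. The target function and bounds are illustrative, and the commented-out calls assume an already-constructed optimizer object:

```python
import numpy as np

def target(x):  # illustrative two-objective function
    return [np.sum(x ** 2), np.sum((x - 1.0) ** 2)]

pbounds = np.array([[-1.0, 1.0], [-1.0, 1.0], [-1.0, 1.0]])
rng = np.random.default_rng(0)

# Style 1: let the method draw init_points random points itself
#   optimizer.initialize(init_points=10)        # (requires the library)

# Style 2: supply explicit points, optionally with precomputed objectives
Points = [rng.uniform(pbounds[:, 0], pbounds[:, 1]) for _ in range(5)]
Y = [target(p) for p in Points]                 # one NObj-vector per point
#   optimizer.initialize(Points=Points, Y=Y)    # (requires the library)

print(len(Points), len(Y), len(Y[0]))
```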

maximize

maximize(self, n_iter=100,
         prob=0.1,
         ReduceProb=False,
         q=0.5,
         n_pts=100,
         SaveInterval=10,
         FrontSampling=[10, 25, 50, 100],
         **gp_params)

The main method of the class, it actually runs the optimizer.

Parameters

  • n_iter: int (default 100), number of iterations of the method. Each iteration corresponds to a call of the objective function.

  • prob: float ( 0 < prob < 1, default 0.1), probability of choosing the next iteration point randomly

  • ReduceProb: bool (default False), if True, prob is linearly reduced to zero over the iterations of the method

  • q: float ( 0 < q < 1.0, default 0.5 ), weight between search space and objective space when selecting the next iteration point

    • q = 1: objective space only
    • q = 0: search space only

  • n_pts: int, effective size of the Pareto front (len(front) = n_pts)

  • SaveInterval: int, every SaveInterval iterations an npz file with the full Pareto front at that iteration is saved. The Pareto front data can be retrieved with the numpy.load command, which returns a dictionary-like object with the following keys:

    • "Front": numpy.array with current Pareto front approximation
    • "Pop": numpy.array with coordinates in search space of current Pareto front approximation
    • "PF": list of current probed non-dominated points in objective space
    • "PS": list of current probed non-dominated points in search space
  • FrontSampling: list of ints, numbers of points at which to sample the Pareto front when computing the metrics
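The npz checkpoint format above can be exercised without running the optimizer; the snippet below writes and reads a synthetic stand-in with the documented keys (the real files are produced by maximize, and their array contents will of course differ):

```python
import os
import tempfile

import numpy as np

# Synthetic stand-in for a SaveInterval checkpoint with the documented keys
front = np.random.rand(25, 2)    # Pareto front approximation, (n_pts, NObj)
pop = np.random.rand(25, 3)      # matching search-space coordinates, (n_pts, NParam)
pf = list(np.random.rand(7, 2))  # probed non-dominated points, objective space
ps = list(np.random.rand(7, 3))  # probed non-dominated points, search space

path = os.path.join(tempfile.mkdtemp(), "checkpoint.npz")
np.savez(path, Front=front, Pop=pop, PF=pf, PS=ps)

# numpy.load returns a dictionary-like NpzFile keyed by the names above
data = np.load(path)
print(sorted(data.files))
print(data["Front"].shape)
```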

Returns

The method returns a tuple (front, pop), where:

  • front: numpy.array with Pareto front approximation. front.shape=(n_pts, NObj)
  • pop: numpy.array with Pareto set approximation (in search space). pop.shape=(n_pts, NParam)
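Since front and pop are plain arrays, a common follow-up is selecting a single compromise solution from them. The sketch below uses a synthetic front in place of the real call `front, pop = optimizer.maximize(...)`, which requires the library; the selection rule (distance to the ideal point) is one illustrative choice among many:

```python
import numpy as np

# Synthetic stand-ins for the arrays returned by maximize():
#   front.shape == (n_pts, NObj), pop.shape == (n_pts, NParam)
n_pts, NObj, NParam = 101, 2, 3
t = np.linspace(0.0, 1.0, n_pts)
front = np.column_stack([t, 1.0 - t])   # toy linear Pareto front
pop = np.random.rand(n_pts, NParam)     # matching search-space points

# Pick the compromise solution closest to the ideal point
# (per-objective maximum, for a maximization problem).
ideal = front.max(axis=0)
best = np.argmin(np.linalg.norm(front - ideal, axis=1))
print(front[best], pop[best])
```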

WriteSpace

WriteSpace(self, filename="space")

Saves the currently probed points (the optimizer's space) to a compressed file named filename.npz, so that a later run can restore them with ReadSpace.

ReadSpace

ReadSpace(self, filename="space.npz")

Loads a previously saved space from filename; can be used in place of initialize.

Attributes

The object has the following attributes:

  • y_Pareto: list of non-dominated points in objective space
  • x_Pareto: list of non-dominated points in search space
  • space: TargetSpace object with information of the history of iterations:
    • print(obj.space): pretty-prints the probed points of the simulation
    • x, f = obj.space[:]: getitem from space; x and f are the design variables and objective function values, respectively. Slices can be used to retrieve subsets of the values