
# hposuite

A lightweight framework for benchmarking HPO algorithms.

## Minimal example to run hposuite

```python
from hposuite import create_study

study = create_study(
    name="hposuite_demo",
    output_dir="./hposuite-output",
    optimizers=[...],   # e.g. "RandomSearch"
    benchmarks=[...],   # e.g. "ackley"
    num_seeds=5,
    budget=100,         # number of iterations
)

study.optimize()
```
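For reference, the `"ackley"` benchmark named above is the classic Ackley test function from the optimization literature. Below is a minimal sketch of its standard 2D form, for intuition only; hposuite's own benchmark may parameterize or generalize it differently:

```python
import math

# Hypothetical sketch of the standard 2D Ackley function; the benchmark's
# actual implementation in hposuite may differ in dimensionality and bounds.
def ackley(x: float, y: float, a: float = 20.0, b: float = 0.2,
           c: float = 2 * math.pi) -> float:
    term1 = -a * math.exp(-b * math.sqrt(0.5 * (x**2 + y**2)))
    term2 = -math.exp(0.5 * (math.cos(c * x) + math.cos(c * y)))
    return term1 + term2 + a + math.e

# The global minimum lies at the origin, where f(0, 0) = 0.
print(round(ackley(0.0, 0.0), 6))  # → 0.0
```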


## Installation

### Create a virtual environment using venv

```shell
python -m venv hposuite_env
source hposuite_env/bin/activate
```

### Installing from PyPI

```shell
pip install hposuite
```

> [!TIP]
> * `pip install "hposuite[notebook]"` - for usage in a notebook
> * `pip install "hposuite[all]"` - to install hposuite with all available optimizers and benchmarks
> * `pip install "hposuite[optimizers]"` - to install hposuite with all available optimizers only
> * `pip install "hposuite[benchmarks]"` - to install hposuite with all available benchmarks only
>
> Quoting the requirement (`"hposuite[all]"`) avoids shell glob expansion of the square brackets.

> [!NOTE]
> We recommend `pip install "hposuite[all]"` to install all available benchmarks and optimizers, along with `ipykernel` for running the notebook examples.

### Installation from source

```shell
git clone https://github.com/automl/hposuite.git
cd hposuite

pip install -e .  # -e for an editable install
```

## Simple example: running multiple optimizers on multiple benchmarks

```python
from hposuite import create_study
from hposuite.benchmarks import BENCHMARKS
from hposuite.optimizers import OPTIMIZERS

study = create_study(
    name="smachb_dehb_mfh3good_pd1",
    output_dir="./hposuite-output",
    optimizers=[
        OPTIMIZERS["SMAC_Hyperband"],
        OPTIMIZERS["DEHB_Optimizer"],
    ],
    benchmarks=[
        BENCHMARKS["mfh3_good"],
        BENCHMARKS["pd1-imagenet-resnet-512"],
    ],
    num_seeds=5,
    budget=100,
)

study.optimize()
```

## Command-line usage

```shell
python -m hposuite \
    --optimizer RandomSearch Scikit_Optimize \
    --benchmark ackley \
    --num_seeds 3 \
    --budget 50 \
    --study_name test_study
```
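Conceptually, the `RandomSearch` optimizer used above amounts to sampling configurations uniformly from the search space and keeping the best one seen so far. The following is a hypothetical, self-contained sketch of that idea on a toy objective, not hposuite's actual implementation, which works over arbitrary configuration spaces and routes seeding and budgets through the `Study` machinery:

```python
import random

# Hypothetical sketch: uniform random search over box-bounded variables,
# minimizing a given objective. Names and signature are illustrative only.
def random_search(objective, bounds, budget, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility, like num_seeds
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        y = objective(x)
        if y < best_y:           # keep the incumbent (best-so-far) config
            best_x, best_y = x, y
    return best_x, best_y

# Toy objective: sphere function, minimized at the origin.
best_x, best_y = random_search(
    lambda x: sum(v * v for v in x),
    bounds=[(-5.0, 5.0), (-5.0, 5.0)],
    budget=100,
)
print(best_y)
```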

## View all available optimizers and benchmarks

```python
from hposuite.benchmarks import BENCHMARKS
from hposuite.optimizers import OPTIMIZERS

print(OPTIMIZERS.keys())
print(BENCHMARKS.keys())
```

## Results

hposuite saves each Study by default to `../hposuite-output/` (relative to the current working directory). Results are saved as `.parquet` files in the Run subdirectories within the main Study directory. The Study directory and the individual Run directory paths are logged when running `Study.optimize()`.

To view a result dataframe, use the following code snippet:

```python
import pandas as pd

df = pd.read_parquet("<full path to the result .parquet file>")
print(df)
print(df.columns)
```
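The incumbent trace plotted in the next section is a best-so-far curve over the objective values reported in these results. As a minimal sketch of that computation on a plain Python list (column names and minimization/maximization handling in the actual `.parquet` files may differ):

```python
# Hypothetical sketch: best-so-far (incumbent) trace over a run's objective
# values, assuming lower is better. This mirrors what an incumbent-trace
# plot draws on its y-axis.
def incumbent_trace(values):
    best = float("inf")
    trace = []
    for v in values:
        best = min(best, v)  # incumbent only improves, never worsens
        trace.append(best)
    return trace

print(incumbent_trace([3.0, 2.5, 2.7, 1.9, 2.0]))
# → [3.0, 2.5, 2.5, 1.9, 1.9]
```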

## Plotting

```shell
python -m hposuite.plotting.incumbent_trace \
    --study_dir <study directory name> \
    --output_dir <abspath of dir where study dir is stored> \
    --save_dir <path relative to study_dir to store the plots> \
    --plot_file_name <file_name for saving the plot>
```

`--save_dir` and `--plot_file_name` are optional: `--save_dir` defaults to `study_dir/plots`, and `--output_dir` defaults to `../hposuite-output`.

## Overview of available optimizers

For a more detailed overview, including each optimizer's package and its Blackbox, Multi-Fidelity (MF), Multi-Objective (MO), and Expert Priors support, check here.

* RandomSearch
* RandomSearch with priors
* SMAC
* DEHB
* HEBO
* Nevergrad
* Optuna
* Scikit-Optimize

## Overview of available benchmarks

For a more detailed overview, including each benchmark's package and its Multi-Fidelity and Multi-Objective support, check here.

| Benchmark           | Type                 |
|---------------------|----------------------|
| Ackley              | Synthetic            |
| Branin              | Synthetic            |
| mf-prior-bench      | Synthetic, Surrogate |
| MF-Hartmann Tabular | Tabular              |
| LCBench-Tabular     | Tabular              |
| Pymoo               | Synthetic            |
| IOH (BBOB)          | Synthetic            |
| BBOB Tabular        | Tabular              |