
Commit 045593f

Merge pull request #136 from automl/development
Development
2 parents d314edf + 8a70d01

14 files changed: +570 -251 lines

README.md (+21 -10)
@@ -2,39 +2,50 @@
 
 HPOBench is a library for providing benchmarks for (multi-fidelity) hyperparameter optimization and with a focus on reproducibility.
 
-A list of benchmarks can be found in the [wiki](https://github.com/automl/HPOBench/wiki/Available-Containerized-Benchmarks) and a guide on howto contribute benchmarks is avaiable [here](https://github.com/automl/HPOBench/wiki/How-to-add-a-new-benchmark-step-by-step)
+Further info:
+* list of [benchmarks](https://github.com/automl/HPOBench/wiki/Available-Containerized-Benchmarks)
+* [how-to](https://github.com/automl/HPOBench/wiki/How-to-add-a-new-benchmark-step-by-step) contribute benchmarks
 
 ## Status
 
 Status for Master Branch:
-[![Build Status](https://github.com/automl/HPOBench/workflows/Test%20Pull%20Requests/badge.svg?branch=master)](https://https://github.com/automl/HPOBench/actions)
+[![Build Status](https://github.com/automl/HPOBench/workflows/Test%20Pull%20Requests/badge.svg?branch=master)](https://github.com/automl/HPOBench/actions)
 [![codecov](https://codecov.io/gh/automl/HPOBench/branch/master/graph/badge.svg)](https://codecov.io/gh/automl/HPOBench)
 
 Status for Development Branch:
-[![Build Status](https://github.com/automl/HPOBench/workflows/Test%20Pull%20Requests/badge.svg?branch=development)](https://https://github.com/automl/HPOBench/actions)
+[![Build Status](https://github.com/automl/HPOBench/workflows/Test%20Pull%20Requests/badge.svg?branch=development)](https://github.com/automl/HPOBench/actions)
 [![codecov](https://codecov.io/gh/automl/HPOBench/branch/development/graph/badge.svg)](https://codecov.io/gh/automl/HPOBench)
 
 ## In 4 lines of code
 
 Evaluate a random configuration using a singularity container
 ```python
-from hpobench.container.benchmarks.ml.xgboost_benchmark import XGBoostBenchmark
-b = XGBoostBenchmark(task_id=167149, container_source='library://phmueller/automl', rng=1)
+from hpobench.container.benchmarks.nas.tabular_benchmarks import SliceLocalizationBenchmark
+b = SliceLocalizationBenchmark(rng=1)
 config = b.get_configuration_space(seed=1).sample_configuration()
-result_dict = b.objective_function(configuration=config, fidelity={"n_estimators": 128, "dataset_fraction": 0.5}, rng=1)
+result_dict = b.objective_function(configuration=config, fidelity={"budget": 100}, rng=1)
 ```
 
 All benchmarks can also be queried with fewer or no fidelities:
 
 ```python
-from hpobench.container.benchmarks.ml.xgboost_benchmark import XGBoostBenchmark
-b = XGBoostBenchmark(task_id=167149, container_source='library://phmueller/automl', rng=1)
+from hpobench.container.benchmarks.nas.tabular_benchmarks import SliceLocalizationBenchmark
+b = SliceLocalizationBenchmark(rng=1)
 config = b.get_configuration_space(seed=1).sample_configuration()
-result_dict = b.objective_function(configuration=config, fidelity={"n_estimators": 128,}, rng=1)
+result_dict = b.objective_function(configuration=config, fidelity={"budget": 50}, rng=1)
+# returns results on the highest budget
 result_dict = b.objective_function(configuration=config, rng=1)
 ```
 
-For more examples see `/example/`.
+For each benchmark, further info on the search space and fidelity space can be obtained:
+
+```python
+from hpobench.container.benchmarks.nas.tabular_benchmarks import SliceLocalizationBenchmark
+b = SliceLocalizationBenchmark(task_id=167149, rng=1)
+cs = b.get_configuration_space(seed=1)
+fs = b.get_fidelity_space(seed=1)
+meta = b.get_meta_information()
+```
 
 ## Installation
 
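The README examples above note that a call without a fidelity dict "returns results on the highest budget". That defaulting behaviour can be sketched in plain Python; everything here (`MAX_FIDELITY`, the toy loss) is a hypothetical stand-in, not the HPOBench API:

```python
# Toy stand-in for a benchmark whose fidelities default to their maximum,
# mimicking the README's "no fidelity -> highest budget" behaviour.
MAX_FIDELITY = {"budget": 100}  # placeholder highest budget

def objective_function(configuration, fidelity=None):
    # Merge the caller's (possibly partial) fidelity dict over the defaults.
    fid = {**MAX_FIDELITY, **(fidelity or {})}
    # Toy objective: pretend more budget means lower loss.
    loss = 1.0 / fid["budget"]
    return {"function_value": loss, "fidelity": fid}

full = objective_function({"x": 0.5}, fidelity={"budget": 100})
partial = objective_function({"x": 0.5}, fidelity={"budget": 50})
default = objective_function({"x": 0.5})  # no fidelity given

print(default["fidelity"])  # falls back to {'budget': 100}
```

The merge-over-defaults pattern is the whole trick: any fidelity key the caller omits keeps its maximum value.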

changelog.md (+9)
@@ -1,3 +1,12 @@
+# 0.0.10
+* Cartpole Benchmark Version 0.0.4:
+  Fix: Pass the hp `entropy_regularization` to the PPO Agent.
+  Increase the lower limit of `likelihood_ratio_clipping` from 0 to 10e-7 (0 is invalid).
+* Tabular Benchmark (NAS) Version 0.0.5 + NAS201 Version 0.0.5:
+  We add for each benchmark in nas/tabular_benchmarks and nas/nasbench_201 a new benchmark class with a modified fidelity space. The new benchmarks are called _Original_, e.g. _SliceLocalizationBenchmarkOriginal_ compared to _SliceLocalizationBenchmark_.
+  These new benchmarks have the same fidelity space as used in previous experiments by [DEHB](https://ml.informatik.uni-freiburg.de/wp-content/uploads/papers/21-IJCAI-DEHB.pdf) and [BOHB](http://proceedings.mlr.press/v80/falkner18a/falkner18a.pdf).
+  Specifically, we increase the lowest fidelity from 1 to 3 for nas/tabular_benchmarks and from 1 to 12 for nas/nasbench_201. The upper fidelity and the old benchmarks remain unchanged.
+
 # 0.0.9
 * Add new Benchmarks: Tabular Benchmarks.
   Provided by @Neeratyoy.
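Raising the lowest fidelity, as the changelog entry describes, shifts the cheapest rung of any geometric budget schedule built on top of these benchmarks. A plain-Python sketch (not HPOBench code; the maximum of 100 is a placeholder, real maxima are benchmark-specific) shows the effect of moving the minimum from 1 to 3:

```python
def budget_ladder(min_budget, max_budget, eta=3):
    # Geometric rungs as used by successive-halving style optimizers:
    # start at max_budget and divide by eta until falling below min_budget.
    rungs = []
    budget = float(max_budget)
    while budget >= min_budget:
        rungs.append(round(budget, 2))
        budget /= eta
    return sorted(rungs)

old_ladder = budget_ladder(min_budget=1, max_budget=100)  # lowest fidelity 1
new_ladder = budget_ladder(min_budget=3, max_budget=100)  # lowest fidelity 3

print(old_ladder)
print(new_ladder)
```

The new minimum simply drops the cheapest rung; all higher rungs, and the maximum fidelity, stay the same, which matches "The upper fidelity and the old benchmarks remain unchanged."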
New file: "Multiple Optimizers on SVMSurrogate" example (+149)

@@ -0,0 +1,149 @@
+"""
+Multiple Optimizers on SVMSurrogate
+===================================
+
+This example shows how to run SMAC-HB and SMAC-random-search on SVMSurrogate.
+
+Please install the necessary dependencies via ``pip install .`` and singularity (v3.5).
+https://sylabs.io/guides/3.5/user-guide/quick_start.html#quick-installation-steps
+"""
+import logging
+from pathlib import Path
+from time import time
+
+import numpy as np
+from smac.facade.smac_mf_facade import SMAC4MF
+from smac.facade.roar_facade import ROAR
+from smac.intensification.hyperband import Hyperband
+from smac.scenario.scenario import Scenario
+from smac.callbacks import IncorporateRunResultCallback
+
+from hpobench.container.benchmarks.surrogates.svm_benchmark import SurrogateSVMBenchmark
+from hpobench.abstract_benchmark import AbstractBenchmark
+from hpobench.util.example_utils import set_env_variables_to_use_only_one_core
+
+logger = logging.getLogger("minicomp")
+logging.basicConfig(level=logging.INFO)
+set_env_variables_to_use_only_one_core()
+
+
+class Callback(IncorporateRunResultCallback):
+    def __init__(self):
+        self.budget = 10
+
+    def __call__(self, smbo, run_info, result, time_left) -> None:
+        self.budget -= run_info.budget
+        if self.budget < 0:
+            # No budget left
+            raise ValueError
+
+def create_smac_rs(benchmark, output_dir: Path, seed: int):
+    # Set up SMAC-random-search (ROAR)
+    cs = benchmark.get_configuration_space(seed=seed)
+
+    scenario_dict = {"run_obj": "quality",  # we optimize quality (alternative to runtime)
+                     "wallclock-limit": 60,
+                     "cs": cs,
+                     "deterministic": "true",
+                     "runcount-limit": 200,
+                     "limit_resources": True,  # Uses pynisher to limit memory and runtime
+                     "cutoff": 1800,  # runtime limit for target algorithm
+                     "memory_limit": 10000,  # adapt this to reasonable value for your hardware
+                     "output_dir": output_dir,
+                     "abort_on_first_run_crash": True,
+                     }
+
+    scenario = Scenario(scenario_dict)
+
+    def optimization_function_wrapper(cfg, seed, **kwargs):
+        """ Helper-function: simple wrapper to use the benchmark with smac """
+        result_dict = benchmark.objective_function(cfg, rng=seed)
+        cs.sample_configuration()
+        return result_dict['function_value']
+
+    smac = ROAR(scenario=scenario,
+                rng=np.random.RandomState(seed),
+                tae_runner=optimization_function_wrapper,
+                )
+    return smac
+
+def create_smac_hb(benchmark, output_dir: Path, seed: int):
+    # Set up SMAC-HB
+    cs = benchmark.get_configuration_space(seed=seed)
+
+    scenario_dict = {"run_obj": "quality",  # we optimize quality (alternative to runtime)
+                     "wallclock-limit": 60,
+                     "cs": cs,
+                     "deterministic": "true",
+                     "runcount-limit": 200,
+                     "limit_resources": True,  # Uses pynisher to limit memory and runtime
+                     "cutoff": 1800,  # runtime limit for target algorithm
+                     "memory_limit": 10000,  # adapt this to reasonable value for your hardware
+                     "output_dir": output_dir,
+                     "abort_on_first_run_crash": True,
+                     }
+
+    scenario = Scenario(scenario_dict)
+
+    def optimization_function_wrapper(cfg, seed, instance, budget):
+        """ Helper-function: simple wrapper to use the benchmark with smac """
+        result_dict = benchmark.objective_function(cfg, rng=seed,
+                                                   fidelity={"dataset_fraction": budget})
+        cs.sample_configuration()
+        return result_dict['function_value']
+
+    smac = SMAC4MF(scenario=scenario,
+                   rng=np.random.RandomState(seed),
+                   tae_runner=optimization_function_wrapper,
+                   intensifier=Hyperband,
+                   intensifier_kwargs={'initial_budget': 0.1, 'max_budget': 1, 'eta': 3}
+                   )
+    return smac
+
+
+def run_experiment(out_path: str, on_travis: bool = False):
+
+    out_path = Path(out_path)
+    out_path.mkdir(exist_ok=True)
+
+    hb_res = []
+    rs_res = []
+    for i in range(4):
+        benchmark = SurrogateSVMBenchmark(rng=i)
+        smac = create_smac_hb(benchmark=benchmark, seed=i, output_dir=out_path)
+        callback = Callback()
+        smac.register_callback(callback)
+        try:
+            smac.optimize()
+        except ValueError:
+            print("Done")
+        incumbent = smac.solver.incumbent
+        inc_res = benchmark.objective_function(configuration=incumbent)
+        hb_res.append(inc_res["function_value"])
+
+        benchmark = SurrogateSVMBenchmark(rng=i)
+        smac = create_smac_rs(benchmark=benchmark, seed=i, output_dir=out_path)
+        callback = Callback()
+        smac.register_callback(callback)
+        try:
+            smac.optimize()
+        except ValueError:
+            print("Done")
+        incumbent = smac.solver.incumbent
+        inc_res = benchmark.objective_function(configuration=incumbent)
+        rs_res.append(inc_res["function_value"])
+
+    print("SMAC-HB", hb_res, np.median(hb_res))
+    print("SMAC-RS", rs_res, np.median(rs_res))
+
+
+if __name__ == "__main__":
+    import argparse
+    parser = argparse.ArgumentParser(prog='HPOBench - SVM comp',
+                                     description='Run different opts on SVM Surrogate',
+                                     usage='%(prog)s --out_path <string>')
+    parser.add_argument('--out_path', default='./svm_comp', type=str)
+    parser.add_argument('--on_travis', action='store_true',
+                        help='Flag to speed up the run on the continuous integration tool "travis". '
+                             'This flag can be ignored by the user')
+    args = parser.parse_args()
+
+    run_experiment(out_path=args.out_path, on_travis=args.on_travis)
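The `Callback` in the new example stops SMAC by subtracting each finished run's fidelity budget from a total and raising once it goes negative; `run_experiment` then catches the exception around `smac.optimize()`. The pattern in isolation, with hypothetical names (`BudgetCallback`, `BudgetExhausted`); the real example raises a bare `ValueError` from an `IncorporateRunResultCallback`:

```python
class BudgetExhausted(Exception):
    """Raised to abort the optimization loop once the total fidelity budget is spent."""

class BudgetCallback:
    # Mirrors the example's Callback: each finished run's budget is deducted
    # from a total; dropping below zero aborts the optimizer.
    def __init__(self, total_budget=10):
        self.budget = total_budget

    def __call__(self, run_budget):
        self.budget -= run_budget
        if self.budget < 0:
            raise BudgetExhausted

# Toy loop standing in for smac.optimize(); budgets of successive runs are made up.
callback = BudgetCallback(total_budget=10)
finished = 0
try:
    for run_budget in [1, 3, 3, 3, 3]:
        callback(run_budget)
        finished += 1
except BudgetExhausted:
    pass
print(finished)  # runs completed before the budget ran out
```

Note the asymmetry inherited from the original: the run that overdraws the budget has already executed when the callback fires, so the exception only prevents further runs.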

examples/w_optimizer/cartpole_bohb.py (+7 -11)
@@ -1,11 +1,13 @@
 """
-SMAC on Cartpole with BOHB
+BOHB on Cartpole
 ==========================
 
 This example shows the usage of an Hyperparameter Tuner, such as BOHB on the cartpole benchmark.
 BOHB is a combination of Bayesian optimization and Hyperband.
 
-Please install the necessary dependencies via ``pip install .`` and singularity (v3.5).
+**Note**: This is a raw benchmark, i.e. it actually runs an algorithm, and will take some time.
+
+Please install the necessary dependencies via ``pip install .[examples]`` and singularity (v3.5).
 https://sylabs.io/guides/3.5/user-guide/quick_start.html#quick-installation-steps
 
 """
@@ -48,16 +50,15 @@ def compute(self, config, budget, **kwargs):
 def run_experiment(out_path, on_travis):
 
     settings = {'min_budget': 1,
-                'max_budget': 5,  # Number of Agents, which are trained to solve the cartpole experiment
-                'num_iterations': 10,  # Number of HB brackets
+                'max_budget': 9,  # number of repetitions; this is the fidelity for this bench
+                'num_iterations': 10,  # Set this to a low number for demonstration
                 'eta': 3,
                 'output_dir': Path(out_path)
                 }
     if on_travis:
         settings.update(get_travis_settings('bohb'))
 
-    b = Benchmark(container_source='library://phmueller/automl',
-                  container_name='cartpole')
+    b = Benchmark(rng=1)
 
     b.get_configuration_space(seed=1)
     settings.get('output_dir').mkdir(exist_ok=True)
@@ -105,11 +106,6 @@ def run_experiment(out_path, on_travis):
 
     if not on_travis:
         benchmark = Benchmark(container_source='library://phmueller/automl')
-        # Old API ---- NO LONGER SUPPORTED ---- This will simply ignore the fidelities
-        # incumbent_result = benchmark.objective_function_test(configuration=inc_cfg,
-        #                                                      budget=settings['max_budget'])
-
-        # New API ---- Use this
         incumbent_result = benchmark.objective_function_test(configuration=inc_cfg,
                                                              fidelity={"budget": settings['max_budget']})
         print(incumbent_result)
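With the settings used in these examples (`min_budget=1`, `max_budget=9`, `eta=3`), Hyperband runs three bracket types. A quick sketch of the standard bracket arithmetic; this follows Li et al.'s Hyperband formulas, and HpBandSter's or SMAC's internals may differ in rounding:

```python
import math

def hyperband_brackets(min_budget, max_budget, eta=3):
    # Bracket s starts eta**s cheaper than max_budget and receives roughly
    # (s_max + 1) / (s + 1) * eta**s initial configurations.
    s_max = 0
    while min_budget * eta ** (s_max + 1) <= max_budget:
        s_max += 1
    brackets = []
    for s in range(s_max, -1, -1):
        n_configs = math.ceil((s_max + 1) / (s + 1) * eta ** s)
        start_budget = max_budget / eta ** s
        brackets.append((n_configs, start_budget))
    return brackets

# (configs, starting budget) per bracket for min_budget=1, max_budget=9, eta=3
schedule = hyperband_brackets(min_budget=1, max_budget=9, eta=3)
print(schedule)  # [(9, 1.0), (5, 3.0), (3, 9.0)]
```

So the most aggressive bracket evaluates 9 configurations starting at budget 1, while the last bracket is plain full-budget evaluation of 3 configurations; `num_iterations` controls how many such brackets are run in total.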

examples/w_optimizer/cartpole_hyperband.py renamed to examples/w_optimizer/cartpole_smachb.py (+14 -18)
@@ -1,26 +1,28 @@
 """
-SMAC on Cartpole with Hyperband
+SMAC-HB on Cartpole with Hyperband
 ===============================
 
 This example shows the usage of an Hyperparameter Tuner, such as SMAC on the cartpole benchmark.
 We use SMAC with Hyperband.
 
-Please install the necessary dependencies via ``pip install .`` and singularity (v3.5).
+**Note**: This is a raw benchmark, i.e. it actually runs an algorithm, and will take some time.
+
+Please install the necessary dependencies via ``pip install .[examples]`` and singularity (v3.5).
 https://sylabs.io/guides/3.5/user-guide/quick_start.html#quick-installation-steps
 """
 import logging
 from pathlib import Path
 from time import time
 
 import numpy as np
-from smac.facade.smac_hpo_facade import SMAC4HPO
+from smac.facade.smac_mf_facade import SMAC4MF
 from smac.intensification.hyperband import Hyperband
 from smac.scenario.scenario import Scenario
 
 from hpobench.container.benchmarks.rl.cartpole import CartpoleReduced as Benchmark
 from hpobench.util.example_utils import get_travis_settings, set_env_variables_to_use_only_one_core
 
-logger = logging.getLogger("HB on cartpole")
+logger = logging.getLogger("SMAC-HB on cartpole")
 logging.basicConfig(level=logging.INFO)
 set_env_variables_to_use_only_one_core()
 
@@ -30,15 +32,11 @@ def run_experiment(out_path: str, on_travis: bool = False):
     out_path = Path(out_path)
     out_path.mkdir(exist_ok=True)
 
-    benchmark = Benchmark(container_source='library://phmueller/automl',
-                          container_name='cartpole',
-                          rng=1)
-
-    cs = benchmark.get_configuration_space(seed=1)
+    benchmark = Benchmark(rng=1)
 
-    scenario_dict = {"run_obj": "quality",  # we optimize quality (alternative to runtime)
+    scenario_dict = {"run_obj": "quality",
                      "wallclock-limit": 5 * 60 * 60,  # max duration to run the optimization (in seconds)
-                     "cs": cs,  # configuration space
+                     "cs": benchmark.get_configuration_space(seed=1),
                      "deterministic": "true",
                      "runcount-limit": 200,
                      "limit_resources": True,  # Uses pynisher to limit memory and runtime
@@ -53,16 +51,14 @@ def run_experiment(out_path: str, on_travis: bool = False):
     scenario = Scenario(scenario_dict)
 
     # Number of Agents, which are trained to solve the cartpole experiment
-    max_budget = 5 if not on_travis else 2
+    max_budget = 9 if not on_travis else 2
 
     def optimization_function_wrapper(cfg, seed, instance, budget):
        """ Helper-function: simple wrapper to use the benchmark with smac"""
 
-        # Now that we have already downloaded the conatiner,
+        # Now that we have already downloaded the container,
        # we only have to start a new instance. This is a fast operation.
-        b = Benchmark(container_source='library://phmueller/automl',
-                      container_name='cartpole',
-                      rng=seed)
+        b = Benchmark(rng=seed)
 
        # Old API ---- NO LONGER SUPPORTED ---- This will simply ignore the fidelities
        # result_dict = b.objective_function(cfg, budget=int(budget))
@@ -71,10 +67,10 @@ def optimization_function_wrapper(cfg, seed, instance, budget):
        result_dict = b.objective_function(cfg, fidelity={"budget": int(budget)})
        return result_dict['function_value']
 
-    smac = SMAC4HPO(scenario=scenario,
+    smac = SMAC4MF(scenario=scenario,
                     rng=np.random.RandomState(42),
                     tae_runner=optimization_function_wrapper,
-                    intensifier=Hyperband,  # you can also change the intensifier to use like this!
+                    intensifier=Hyperband,
                     intensifier_kwargs={'initial_budget': 1, 'max_budget': max_budget, 'eta': 3}
                     )
 
