Commit 9b38abf ("docs")
1 parent cd8dc57

2 files changed: 199 additions & 121 deletions

File: docs/source/algorithms.md (44 additions & 82 deletions)
@@ -4710,14 +4710,7 @@ package are available in optimagic. To use it, you need to have
 [gradient_free_optimizers](https://pypi.org/project/gradient_free_optimizers) installed.
 
 ```{eval-rst}
-.. dropdown:: Common options across all optimizers
-
-  .. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOCommonOptions
-
-```
-
-```{eval-rst}
-.. dropdown:: gfo_hillclimbing
+.. dropdown:: gfo_pso
 
   **How to use this algorithm.**
 
@@ -4727,7 +4720,7 @@ package are available in optimagic. To use it, you need to have
     om.minimize(
         fun=lambda x: x @ x,
         params=[1.0, 2.0, 3.0],
-        algorithm=om.algos.gfo_hillclimbing(stopping_maxiter=1_000, ...),
+        algorithm=om.algos.gfo_pso(stopping_maxiter=1_000, ...),
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
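The hunks above rename the first example to `gfo_pso` without saying what particle swarm optimization does. Below is a rough, self-contained sketch of the global-best PSO idea on the docs' own toy problem (`x @ x` with bounds between 1 and 5); the function name `pso` and all default values are invented for illustration and this is not the gradient_free_optimizers implementation:

```python
import numpy as np

def pso(fun, lower, upper, population_size=15, inertia=0.7,
        cognitive=1.5, social=1.5, maxiter=200, seed=0):
    """Global-best particle swarm: illustrative sketch only."""
    rng = np.random.default_rng(seed)
    dim = lower.size
    pos = rng.uniform(lower, upper, (population_size, dim))
    vel = np.zeros((population_size, dim))
    pbest, pbest_f = pos.copy(), np.array([fun(x) for x in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    gbest_f = pbest_f.min()
    for _ in range(maxiter):
        r1 = rng.random((population_size, dim))
        r2 = rng.random((population_size, dim))
        # blend previous velocity with pulls toward personal and global bests
        vel = inertia * vel + cognitive * r1 * (pbest - pos) + social * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lower, upper)  # keep particles inside the bounds
        f = np.array([fun(x) for x in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        if pbest_f.min() < gbest_f:
            gbest_f = pbest_f.min()
            gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, gbest_f

lower, upper = np.array([1.0, 1.0, 1.0]), np.array([5.0, 5.0, 5.0])
x_best, f_best = pso(lambda x: x @ x, lower, upper)
```

On this convex problem the swarm should collapse onto the bound corner at [1, 1, 1], where `x @ x` equals 3.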
@@ -4738,62 +4731,31 @@ package are available in optimagic. To use it, you need to have
     om.minimize(
         fun=lambda x: x @ x,
         params=[1.0, 2.0, 3.0],
-        algorithm="gfo_hillclimbing",
+        algorithm="gfo_pso",
         algo_options={"stopping_maxiter": 1_000, ...},
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
   **Description and available options:**
 
-  .. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOHillClimbing
+  .. autoclass:: optimagic.optimizers.gfo_optimizers.GFOParticleSwarmOptimization
 
 ```
 
 ```{eval-rst}
-.. dropdown:: gfo_stochastichillclimbing
-
-  **How to use this algorithm.**
-
-  .. code-block:: python
-
-    import optimagic as om
-    om.minimize(
-        fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm=om.algos.gfo_stochastichillclimbing(stopping_maxiter=1_000, ...),
-        bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
-    )
-
-  or using the string interface:
-
-  .. code-block:: python
-
-    om.minimize(
-        fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm="gfo_stochastichillclimbing",
-        algo_options={"stopping_maxiter": 1_000, ...},
-        bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
-    )
-
-  **Description and available options:**
-
-  .. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOStochasticHillClimbing
 
-```
-
-```{eval-rst}
-.. dropdown:: gfo_repulsinghillclimbing
+.. dropdown:: gfo_parallel_tempering
 
   **How to use this algorithm.**
 
   .. code-block:: python
 
     import optimagic as om
+    import numpy as np
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm=om.algos.gfo_repulsinghillclimbing(stopping_maxiter=1_000, ...),
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm=om.algos.gfo_parallel_tempering(population_size=15, n_iter_swap=5),
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
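`gfo_parallel_tempering` appears above with `population_size` and `n_iter_swap` options. As a minimal numpy sketch of the idea (Metropolis random walks at several temperatures, with periodic swaps between neighbouring temperature levels); the temperature ladder, step size, and function name are invented for this sketch and this is not the gradient_free_optimizers implementation:

```python
import numpy as np

def parallel_tempering(fun, lower, upper, n_replicas=5, n_iter_swap=5,
                       step=0.3, maxiter=400, seed=0):
    """Replica-exchange Metropolis search: illustrative sketch only."""
    rng = np.random.default_rng(seed)
    temps = np.geomspace(0.01, 2.0, n_replicas)  # cold (greedy) to hot (exploratory)
    x = rng.uniform(lower, upper, (n_replicas, lower.size))
    f = np.array([fun(xi) for xi in x])
    best_x, best_f = x[np.argmin(f)].copy(), f.min()
    for it in range(maxiter):
        # one Metropolis step per replica at its own temperature
        prop = np.clip(x + rng.normal(0.0, step, x.shape), lower, upper)
        fp = np.array([fun(p) for p in prop])
        accept = rng.random(n_replicas) < np.exp(np.minimum(0.0, (f - fp) / temps))
        x[accept], f[accept] = prop[accept], fp[accept]
        if f.min() < best_f:
            best_x, best_f = x[np.argmin(f)].copy(), f.min()
        if it % n_iter_swap == 0:
            # propose swapping the states of neighbouring temperature levels
            for i in range(n_replicas - 1):
                delta = (f[i] - f[i + 1]) * (1.0 / temps[i] - 1.0 / temps[i + 1])
                if rng.random() < np.exp(min(0.0, delta)):
                    x[[i, i + 1]] = x[[i + 1, i]]
                    f[[i, i + 1]] = f[[i + 1, i]]
    return best_x, best_f

lower, upper = np.array([1.0, 1.0, 1.0]), np.array([5.0, 5.0, 5.0])
x_best, f_best = parallel_tempering(lambda x: x @ x, lower, upper)
```

Cold replicas refine the incumbent; hot replicas explore; swaps let good states migrate down the temperature ladder.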
@@ -4803,30 +4765,30 @@ package are available in optimagic. To use it, you need to have
 
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm="gfo_repulsinghillclimbing",
-        algo_options={"stopping_maxiter": 1_000, ...},
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm="gfo_parallel_tempering",
+        algo_options={"population_size": 15, "n_iter_swap": 5},
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
   **Description and available options:**
 
-  .. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFORepulsingHillClimbing
-
+  .. autoclass:: optimagic.optimizers.gfo_optimizers.GFOParallelTempering
 ```
 
 ```{eval-rst}
-.. dropdown:: gfo_randomrestarthillclimbing
+.. dropdown:: gfo_spiral_optimization
 
   **How to use this algorithm.**
 
   .. code-block:: python
 
     import optimagic as om
+    import numpy as np
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm=om.algos.gfo_randomrestarthillclimbing(stopping_maxiter=1_000, ...),
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm=om.algos.gfo_spiral_optimization(population_size=15, decay_rate=0.95),
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
@@ -4836,30 +4798,30 @@ package are available in optimagic. To use it, you need to have
 
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm="gfo_randomrestarthillclimbing",
-        algo_options={"stopping_maxiter": 1_000, ...},
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm="gfo_spiral_optimization",
+        algo_options={"population_size": 15, "decay_rate": 0.95},
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
   **Description and available options:**
 
-  .. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFORandomRestartHillClimbing
-
+  .. autoclass:: optimagic.optimizers.gfo_optimizers.GFOSpiralOptimization
 ```
 
 ```{eval-rst}
-.. dropdown:: gfo_simulatedannealing
+.. dropdown:: gfo_genetic_algorithm
 
   **How to use this algorithm.**
 
   .. code-block:: python
 
     import optimagic as om
+    import numpy as np
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm=om.algos.gfo_simulatedannealing(stopping_maxiter=1_000, ...),
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm=om.algos.gfo_genetic_algorithm(population_size=20, mutation_rate=0.6),
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
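`gfo_genetic_algorithm` is shown above with `population_size` and `mutation_rate`. As one common real-coded GA variant (tournament selection, uniform crossover, Gaussian mutation, elitist replacement), sketched in numpy; `mutation_scale`, `n_generations`, and the operator choices are invented for this sketch and this is not the gradient_free_optimizers implementation:

```python
import numpy as np

def genetic_algorithm(fun, lower, upper, population_size=20, mutation_rate=0.6,
                      mutation_scale=0.3, n_generations=150, seed=0):
    """Real-coded genetic algorithm: illustrative sketch only."""
    rng = np.random.default_rng(seed)
    dim = lower.size
    pop = rng.uniform(lower, upper, (population_size, dim))
    fvals = np.array([fun(x) for x in pop])
    for _ in range(n_generations):
        children = []
        for _ in range(population_size):
            # binary tournament selection of two parents
            i, j = rng.integers(population_size, size=2)
            p1 = pop[i] if fvals[i] < fvals[j] else pop[j]
            i, j = rng.integers(population_size, size=2)
            p2 = pop[i] if fvals[i] < fvals[j] else pop[j]
            # uniform crossover: each gene comes from either parent
            mask = rng.random(dim) < 0.5
            child = np.where(mask, p1, p2)
            # Gaussian mutation with probability mutation_rate
            if rng.random() < mutation_rate:
                child = child + rng.normal(0.0, mutation_scale, dim)
            children.append(np.clip(child, lower, upper))
        child_f = np.array([fun(c) for c in children])
        # elitist replacement: keep the best population_size of parents + children
        merged = np.vstack([pop, children])
        merged_f = np.concatenate([fvals, child_f])
        keep = np.argsort(merged_f)[:population_size]
        pop, fvals = merged[keep], merged_f[keep]
    return pop[np.argmin(fvals)], fvals.min()

lower, upper = np.array([1.0, 1.0, 1.0]), np.array([5.0, 5.0, 5.0])
x_best, f_best = genetic_algorithm(lambda x: x @ x, lower, upper)
```

Elitism makes the best fitness monotone, so the sketch reliably converges on this convex toy problem.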
@@ -4869,30 +4831,30 @@ package are available in optimagic. To use it, you need to have
 
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm="gfo_simulatedannealing",
-        algo_options={"stopping_maxiter": 1_000, ...},
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm="gfo_genetic_algorithm",
+        algo_options={"population_size": 20, "mutation_rate": 0.6},
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
   **Description and available options:**
 
-  .. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOSimulatedAnnealing
-
+  .. autoclass:: optimagic.optimizers.gfo_optimizers.GFOGeneticAlgorithm
 ```
 
 ```{eval-rst}
-.. dropdown:: gfo_downhillsimplex
+.. dropdown:: gfo_evolution_strategy
 
   **How to use this algorithm.**
 
   .. code-block:: python
 
     import optimagic as om
+    import numpy as np
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm=om.algos.gfo_downhillsimplex(stopping_maxiter=1_000, ...),
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm=om.algos.gfo_evolution_strategy(population_size=15, crossover_rate=0.4),
        bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
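`gfo_evolution_strategy` above exposes `population_size` and `crossover_rate`. The numpy sketch below is a mutation-only (mu + lambda) evolution strategy, so it deliberately ignores crossover; `mu`, `lam`, `sigma`, and the decay schedule are invented for this sketch and this is not the gradient_free_optimizers implementation:

```python
import numpy as np

def evolution_strategy(fun, lower, upper, mu=5, lam=15, sigma=0.5,
                       sigma_decay=0.98, n_generations=200, seed=0):
    """(mu + lambda) ES with Gaussian mutation: illustrative sketch only."""
    rng = np.random.default_rng(seed)
    dim = lower.size
    parents = rng.uniform(lower, upper, (mu, dim))
    parent_f = np.array([fun(x) for x in parents])
    for _ in range(n_generations):
        # each offspring is a mutated copy of a randomly chosen parent
        idx = rng.integers(mu, size=lam)
        offspring = np.clip(parents[idx] + rng.normal(0.0, sigma, (lam, dim)),
                            lower, upper)
        off_f = np.array([fun(x) for x in offspring])
        # plus-selection: the best mu of parents and offspring survive
        merged = np.vstack([parents, offspring])
        merged_f = np.concatenate([parent_f, off_f])
        keep = np.argsort(merged_f)[:mu]
        parents, parent_f = merged[keep], merged_f[keep]
        sigma *= sigma_decay  # shrink the mutation step over time
    return parents[0], parent_f[0]

lower, upper = np.array([1.0, 1.0, 1.0]), np.array([5.0, 5.0, 5.0])
x_best, f_best = evolution_strategy(lambda x: x @ x, lower, upper)
```

Plus-selection keeps the incumbent, so the best value is monotone while the decaying step size moves the search from exploration to refinement.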
@@ -4902,30 +4864,30 @@ package are available in optimagic. To use it, you need to have
 
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm="gfo_downhillsimplex",
-        algo_options={"stopping_maxiter": 1_000, ...},
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm="gfo_evolution_strategy",
+        algo_options={"population_size": 15, "crossover_rate": 0.4},
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
   **Description and available options:**
 
-  .. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFODownhillSimplex
-
+  .. autoclass:: optimagic.optimizers.gfo_optimizers.GFOEvolutionStrategy
 ```
 
 ```{eval-rst}
-.. dropdown:: gfo_pso
+.. dropdown:: gfo_differential_evolution
 
   **How to use this algorithm.**
 
   .. code-block:: python
 
     import optimagic as om
+    import numpy as np
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm=om.algos.gfo_pso(stopping_maxiter=1_000, ...),
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm=om.algos.gfo_differential_evolution(population_size=20, mutation_rate=0.8),
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
@@ -4935,15 +4897,15 @@ package are available in optimagic. To use it, you need to have
 
     om.minimize(
         fun=lambda x: x @ x,
-        params=[1.0, 2.0, 3.0],
-        algorithm="gfo_pso",
-        algo_options={"stopping_maxiter": 1_000, ...},
+        params=np.array([1.0, 2.0, 3.0]),
+        algorithm="gfo_differential_evolution",
+        algo_options={"population_size": 20, "mutation_rate": 0.8},
         bounds = om.Bounds(lower = np.array([1,1,1]), upper=np.array([5,5,5]))
     )
 
   **Description and available options:**
 
-  .. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOParticleSwarmOptimization
+  .. autoclass:: optimagic.optimizers.gfo_optimizers.GFODifferentialEvolution
 
 ```
 
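The last hunks introduce `gfo_differential_evolution` with `population_size` and `mutation_rate`. As a closing illustration, here is a numpy sketch of the classic DE/rand/1/bin scheme on the docs' toy problem; `crossover_rate`, `maxiter`, and the function name are invented for this sketch and this is not the gradient_free_optimizers implementation:

```python
import numpy as np

def differential_evolution(fun, lower, upper, population_size=20, mutation_rate=0.8,
                           crossover_rate=0.7, maxiter=200, seed=0):
    """DE/rand/1/bin: illustrative sketch only."""
    rng = np.random.default_rng(seed)
    dim = lower.size
    pop = rng.uniform(lower, upper, (population_size, dim))
    fvals = np.array([fun(x) for x in pop])
    for _ in range(maxiter):
        for i in range(population_size):
            # mutate: combine three distinct members other than i
            others = [j for j in range(population_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + mutation_rate * (b - c), lower, upper)
            # binomial crossover, forcing at least one mutant coordinate
            cross = rng.random(dim) < crossover_rate
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = fun(trial)
            if f_trial <= fvals[i]:  # greedy one-to-one replacement
                pop[i], fvals[i] = trial, f_trial
    best = np.argmin(fvals)
    return pop[best], fvals[best]

lower, upper = np.array([1.0, 1.0, 1.0]), np.array([5.0, 5.0, 5.0])
x_best, f_best = differential_evolution(lambda x: x @ x, lower, upper)
```

The scaled difference of two random members sets the step size, so steps shrink automatically as the population converges.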