
Wrap gradient_free_optimizers (local) #624


Open · wants to merge 23 commits into main

Conversation

@gauravmanmode (Collaborator) commented Jul 27, 2025


codecov bot commented Aug 2, 2025

Codecov Report

❌ Patch coverage is 74.71264% with 44 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
...c/optimagic/optimizers/gradient_free_optimizers.py 59.25% 44 Missing ⚠️

❌ Your patch check has failed because the patch coverage (74.71%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Files with missing lines Coverage Δ
src/optimagic/algorithms.py 87.94% <100.00%> (+0.24%) ⬆️
src/optimagic/config.py 100.00% <100.00%> (ø)
...c/optimagic/optimizers/gradient_free_optimizers.py 59.25% <59.25%> (ø)

... and 5 files with indirect coverage changes


@gauravmanmode marked this pull request as ready for review August 4, 2025 14:21
@gauravmanmode (Collaborator, Author) commented:

Hi @janosg
There are a few points:

  1. Tests in test_many_algorithms fail for the stochastic algorithms: all GFO optimizers have needs_bounds = True, but those tests do not pass bounds.
  2. Writing tests for the helper functions is a bit difficult since I have exposed the converter. Should I pass arrays only after converting?

@janosg (Member) commented Aug 5, 2025

Hi @gauravmanmode

  1. Is this because we currently use the is_global flag to decide which tests get called with bounds? If so, we can simply use the new flags in test_many_algorithms, either adding a new test case for non-global optimizers that require bounds or restructuring what we have; see the sketch after this list.
  2. I don't understand what the problem is and what you mean by "Should I pass arrays only after converting?". Is the problem that SphereExampleInternalOptimizationProblem does not have a converter?
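A minimal sketch of what that restructuring could look like. This is an assumption, not the PR's actual test code: it presumes an ALL_ALGORITHMS name-to-class mapping in optimagic.algorithms and that each algorithm exposes the is_global and needs_bounds flags on its algo_info.

```python
# Hypothetical restructured test case: local optimizers that require bounds.
import numpy as np
import pytest

import optimagic as om
from optimagic.algorithms import ALL_ALGORITHMS  # assumed registry

LOCAL_BOUNDED = [
    name
    for name, algo in ALL_ALGORITHMS.items()
    if algo.algo_info.needs_bounds and not algo.algo_info.is_global
]


@pytest.mark.parametrize("algorithm", LOCAL_BOUNDED)
def test_local_algorithms_that_need_bounds(algorithm):
    res = om.minimize(
        fun=lambda x: x @ x,
        params=np.array([0.5, 0.5]),
        algorithm=algorithm,
        bounds=om.Bounds(lower=np.full(2, -1.0), upper=np.full(2, 1.0)),
    )
    np.testing.assert_allclose(res.params, np.zeros(2), atol=1e-3)
```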

@gauravmanmode (Collaborator, Author) commented:

Hi @janosg,
Could you please review the changes?

Changes

  • Add a new example problem with a converter in internal_optimization_problem.
  • Add functions and a converter for dealing with dict input (see the sketch after the list below).
  • Refactor test_many_algorithms (this is minimal and is just enough to pass the tests for this PR).

  1. For now, the new tests in test_many_algorithms_new cover all of the tests in the original test_many_algorithms.
  2. The only failing test is scipy_trust_constr with binding bounds, which fails narrowly. To pass this one test, I have set the accuracy for local algorithms to 3 for now.
  3. Why does the test_with_binding_bounds test not use finite bounds? Should that go in a separate PR?
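A minimal sketch of the array-to-dict conversion idea, assuming GFO's dict-based API (search spaces and candidate parameters are dicts keyed by parameter name, and GFO maximizes its objective). The helper names are hypothetical, not necessarily what this PR implements.

```python
import numpy as np


def params_to_dict(x: np.ndarray) -> dict[str, float]:
    """Map an internal parameter vector to GFO's dict representation."""
    return {f"x{i}": float(v) for i, v in enumerate(x)}


def dict_to_params(para: dict[str, float], n_params: int) -> np.ndarray:
    """Map GFO's dict representation back to a flat array."""
    return np.array([para[f"x{i}"] for i in range(n_params)])


def wrap_objective(fun, n_params):
    """Adapt an array-based criterion to GFO's dict-based objective."""

    def gfo_objective(para):
        x = dict_to_params(para, n_params)
        return -fun(x)  # GFO maximizes, optimagic minimizes: flip the sign

    return gfo_objective
```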

gfo

  1. Many algorithms in GFO do not pass the tests with their default option values. How should I go about that?
  2. Some algorithms in GFO, particularly population-based ones, do not advance (n_iterations_performed stays low) if convergence_iter_noimprove is set to something low like < 100. I am not sure, but the reason might be that some optimizers jump randomly to new positions to explore the search space, so the best value rarely improves; see the sketch below.
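A minimal sketch of the behavior in question, using GFO directly. It assumes the early_stopping dict documented by gradient_free_optimizers; treating convergence_iter_noimprove as this PR's name for n_iter_no_change is an assumption about the mapping.

```python
import numpy as np
from gradient_free_optimizers import ParticleSwarmOptimizer

search_space = {"x0": np.arange(-5, 5, 0.01), "x1": np.arange(-5, 5, 0.01)}


def neg_sphere(para):
    # GFO maximizes, so return the negative of the sphere function.
    return -(para["x0"] ** 2 + para["x1"] ** 2)


opt = ParticleSwarmOptimizer(search_space, population=10)
opt.search(
    neg_sphere,
    n_iter=100_000,
    # With a small n_iter_no_change, the exploratory jumps of
    # population-based optimizers rarely improve the best score,
    # so the search stops long before n_iter is reached.
    early_stopping={"n_iter_no_change": 1000, "tol_abs": None, "tol_rel": None},
)
```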

stopping_funval: float | None = None
"""Stop the optimization if the objective function value is less than this value."""

convergence_iter_noimprove: PositiveInt = 1000  # needs to be set high
@janosg (Member) commented:

Should we set this to None or a really high value instead? Is there another convergence criterion we could set to a non-None value instead? We don't want all optimizers to just run until max_iter, but we also don't want premature stopping, of course.
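One option, as a hedged sketch reusing opt and neg_sphere from the sketch above (and again assuming GFO's documented early_stopping semantics): keep the no-improvement window large but make a tolerance-based criterion non-None, so runs can still stop on genuine stagnation.

```python
# Hypothetical default: a long window plus an absolute-improvement tolerance.
early_stopping = {
    "n_iter_no_change": 10_000,  # effectively "really high"
    "tol_abs": 1e-8,             # stop once improvements fall below this
    "tol_rel": None,
}
opt.search(neg_sphere, n_iter=100_000, early_stopping=early_stopping)
```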

@gauravmanmode (Collaborator, Author) commented Aug 12, 2025:

I am a bit confused about this.
Most of the time convergence_iter_noimprove behaves like stopping_maxiter, and the other convergence criteria don't seem to be respected. Even after more than 100,000 iterations the algorithm does not converge to a good solution. I might be missing something, but I hope to get a solution here.

@janosg (Member) commented:

Thanks for creating the issue over there.
