Add automatic benchmarking #1062
base: develop
Conversation
Do you have an example of the results generated by this? Also, how will it be accessed?
benchmarks/data/input_full.yaml
Is it necessary to add another input file?
We have a small chicken-and-egg situation with the default input file; the plan is to move to it once it catches up to main.
.github/workflows/benchmark.yaml
Since this would ultimately go into the docs, you might consider combining this whole job into the deploy-pages workflow.
You know, that would work! Nice idea, @misi9170 ok with you?
Yeah no issues for me
Yes, it makes a nice interactive chart. I think we have to do this first merge for it to exist, right? Then I can do a second PR linking it into the docs.
Is it intentional to merge into main?
If you have it locally, a screenshot would be nice. It's just tough to review not knowing what it produces.
Intentional, it'll be a process pointed at main, which now reminds me why I can't combine it with the publish-pages workflow, which points at develop. It has to do with the fact that it can run from a fork, so only pushes to main work as a trigger.
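For context, a minimal sketch of the kind of trigger configuration being described (scheduled runs plus pushes to main, so the job never depends on pull requests coming from forks). The cron schedule and branch names are illustrative assumptions, not the actual contents of .github/workflows/benchmark.yaml:

```yaml
# Illustrative trigger block only; the real benchmark.yaml may differ.
name: Automatic benchmarking

on:
  push:
    branches:
      - main             # forks cannot push here, so tokens/permissions are available
  schedule:
    - cron: "0 5 * * *"  # hypothetical daily run

permissions:
  contents: write        # needed to push benchmark results to gh-pages
```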
@rafmudaf, ok, @misi9170 and I talked and made some changes based on your comments and the discussion here:
destination_dir: docs  # Publishes to the docs folder
jobs:
Does this work? I would have guessed you can't have two "job" blocks
@paulf81 Some things that would help tidy this up and make a review easier are:
Add automatic benchmarking to FLORIS
This PR will supersede #1060 (which itself superseded #992). It seeks to avoid issues with running some of the GitHub Actions from a fork, and it makes the changes directly to main to make sure the changes to gh-pages work together. This shouldn't be an issue going forward, since the action is specified to run on develop at a specific time and is not connected to pulls or pushes.
Adds automatic code benchmarking to FLORIS. The proposed solution is to use pytest-benchmark to implement a set of timing tests:
https://pytest-benchmark.readthedocs.io/en/latest/
https://github.com/ionelmc/pytest-benchmark
The idea is then to schedule regular (roughly daily) execution of these tests with logged performance results so we can track changes over time, focusing here on:
https://github.com/benchmark-action/github-action-benchmark
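As a rough illustration of how github-action-benchmark could consume pytest-benchmark output and publish an interactive chart to gh-pages, a sketch follows. The job name, Python version, test path, and output file are assumptions for illustration, not necessarily what this PR's workflow uses:

```yaml
# Sketch of a benchmark job; names and paths are placeholders.
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install FLORIS and benchmark dependencies
        run: pip install -e . pytest pytest-benchmark
      - name: Run benchmarks and write results to JSON
        run: pytest tests/floris_benchmark_test.py --benchmark-json=benchmark_results.json
      - name: Store results and publish chart to gh-pages
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: pytest
          output-file-path: benchmark_results.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
```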
To this end, I added a first benchmarking test to the tests/ folder and confirmed it runs from the command line with: pytest floris_benchmark_test.py
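For readers unfamiliar with pytest-benchmark, a timing test in that file might look roughly like the following. The input file path and the FLORIS calls (FlorisModel, set, run) are assumptions based on the current public API and may not match the actual test added here:

```python
# Sketch of a pytest-benchmark timing test; paths and FLORIS calls are illustrative.
import numpy as np
from floris import FlorisModel


def test_benchmark_full_run(benchmark):
    # Build a model from a (hypothetical) benchmark input file
    fmodel = FlorisModel("benchmarks/data/input_full.yaml")
    fmodel.set(
        wind_directions=np.array([270.0]),
        wind_speeds=np.array([8.0]),
        turbulence_intensities=np.array([0.06]),
    )
    # The benchmark fixture runs the callable repeatedly and records timing statistics
    benchmark(fmodel.run)
```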
This first version includes only a few tests, with the idea that once the framework is up and running it will be possible to add more in future improvements. This PR is a bit in the way of other PRs, so it is descoped a bit to get it done.