Add automatic benchmarking #1062

Merged: 30 commits, Feb 7, 2025
Changes from 4 commits
51 changes: 51 additions & 0 deletions .github/workflows/benchmark.yaml
Collaborator:
Since this would ultimately go into the docs, you might consider combining this whole job in the deploy-pages workflow

Collaborator (Author):
You know, that would work! Nice idea. @misi9170, OK with you?

Collaborator:
Yeah, no issues for me.

@@ -0,0 +1,51 @@
name: Floris Benchmark
# on:

on:
schedule:
- cron: '0 3 * * *' # Runs daily at 3am UTC
push:
branches:
- main
pull_request:
branches:
- main
workflow_dispatch: # Allows manual triggering of the workflow


permissions:
# deployments permission to deploy GitHub pages website
deployments: write
# contents permission to update benchmark contents in gh-pages branch
contents: write

jobs:
benchmark:
name: Run FLORIS benchmarks
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install project
run: |
python -m pip install --upgrade pip
pip install -e ".[develop]"
- name: Run benchmark
run: |
cd benchmarks
pytest bench.py --benchmark-json output.json
- name: Store benchmark result
uses: benchmark-action/github-action-benchmark@v1
with:
name: Python Benchmark with pytest-benchmark
tool: 'pytest'
output-file-path: benchmarks/output.json
# Use personal access token instead of GITHUB_TOKEN due to https://github.community/t/github-action-not-triggering-gh-pages-upon-push/16096
github-token: ${{ secrets.GITHUB_TOKEN }}
auto-push: true
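The "Store benchmark result" step hands `benchmarks/output.json` to github-action-benchmark. As a rough sketch of what downstream tooling would see, the snippet below builds and reads a fragment shaped like pytest-benchmark's JSON output; the field names (`benchmarks`, `name`, `stats`, `mean`) follow that format as I understand it, and the numbers are made up.

```python
import json

# Hedged sketch of the pytest-benchmark JSON schema; the exact fields
# are an assumption, and the timing values below are invented.
sample = {
    "benchmarks": [
        {"name": "test_timing_small_farm_run", "stats": {"mean": 0.012}},
        {"name": "test_timing_large_farm_run", "stats": {"mean": 0.250}},
    ]
}

# Round-trip through JSON, as the workflow does via output.json.
data = json.loads(json.dumps(sample))

names = [b["name"] for b in data["benchmarks"]]
means = {b["name"]: b["stats"]["mean"] for b in data["benchmarks"]}
for name in names:
    print(f"{name}: {means[name]:.3f} s mean")
```

github-action-benchmark then tracks these per-test means across commits and publishes charts to the gh-pages branch.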
2 changes: 2 additions & 0 deletions .github/workflows/deploy-pages.yaml
@@ -6,6 +6,7 @@ on:
- develop
paths:
- docs/**
workflow_dispatch: # Allows manual triggering of the workflow

# This job installs dependencies, builds the book, and pushes it to `gh-pages`
jobs:
@@ -55,3 +56,4 @@ jobs:
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./docs/_build/html
destination_dir: docs # Publishes to the docs folder
160 changes: 160 additions & 0 deletions benchmarks/bench.py
@@ -0,0 +1,160 @@
from pathlib import Path

import numpy as np
import pytest

from floris import (
FlorisModel,
TimeSeries,
)
from floris.core.turbine.operation_models import POWER_SETPOINT_DEFAULT
from floris.heterogeneous_map import HeterogeneousMap


TEST_DATA = Path(__file__).resolve().parent / "data"
YAML_INPUT = TEST_DATA / "input_full.yaml"

N_Conditions = 100


def test_timing_small_farm_set(benchmark):
"""Timing test for setting up a small farm"""
fmodel = FlorisModel(configuration=YAML_INPUT)
wind_directions = np.linspace(0, 360, N_Conditions)
wind_speeds = np.ones(N_Conditions) * 8
turbulence_intensities = np.ones(N_Conditions) * 0.06

benchmark(
Collaborator:
It would be good to mention how this benchmark function is introduced into the namespace since it isn't imported

Collaborator (Author):
Added a comment above; it is a confusing practice, but it's a pattern I've seen before.

fmodel.set,
wind_directions=wind_directions,
wind_speeds=wind_speeds,
turbulence_intensities=turbulence_intensities,
)
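As the thread above notes, `benchmark` is never imported: pytest-benchmark registers it as a fixture, and pytest injects it by matching the test function's argument name. The stand-in below (not the real fixture) illustrates the calling convention the tests rely on; `setup_farm` is a hypothetical workload standing in for `fmodel.set`/`fmodel.run`.

```python
import time

def fake_benchmark(func, *args, **kwargs):
    # Simplified stand-in for pytest-benchmark's fixture: the real one
    # calls func repeatedly and records statistics, while this version
    # times a single call. Like the real fixture, it returns the
    # function's result.
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"took {elapsed:.6f} s")
    return result

def setup_farm(n):
    # Hypothetical workload; nothing to do with the actual FLORIS model.
    return sum(i * i for i in range(n))

result = fake_benchmark(setup_farm, 1_000)
```

In the real tests, pytest sees the `benchmark` parameter name on `test_timing_small_farm_set` and passes in the plugin's fixture automatically.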


def test_timing_small_farm_run(benchmark):
"""Timing test for running a small farm"""
fmodel = FlorisModel(configuration=YAML_INPUT)
wind_directions = np.linspace(0, 360, N_Conditions)
wind_speeds = np.ones(N_Conditions) * 8
turbulence_intensities = np.ones(N_Conditions) * 0.06

fmodel.set(
wind_directions=wind_directions,
wind_speeds=wind_speeds,
turbulence_intensities=turbulence_intensities,
)

benchmark(fmodel.run)


def test_timing_large_farm_set(benchmark):
"""Timing test for setting up a large farm"""
fmodel = FlorisModel(configuration=YAML_INPUT)
wind_directions = np.linspace(0, 360, N_Conditions)
wind_speeds = np.ones(N_Conditions) * 8
turbulence_intensities = np.ones(N_Conditions) * 0.06

benchmark(
fmodel.set,
wind_directions=wind_directions,
wind_speeds=wind_speeds,
turbulence_intensities=turbulence_intensities,
layout_x=np.linspace(0, 1000, 100),
layout_y=np.linspace(0, 1000, 100),
)


def test_timing_large_farm_run(benchmark):
"""Timing test for running a large farm"""
fmodel = FlorisModel(configuration=YAML_INPUT)
wind_directions = np.linspace(0, 360, N_Conditions)
wind_speeds = np.ones(N_Conditions) * 8
turbulence_intensities = np.ones(N_Conditions) * 0.06

fmodel.set(
wind_directions=wind_directions,
wind_speeds=wind_speeds,
turbulence_intensities=turbulence_intensities,
layout_x=np.linspace(0, 1000, 100),
layout_y=np.linspace(0, 1000, 100),
)

benchmark(fmodel.run)
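One detail of the large-farm tests worth noting: pairing two identical `linspace` calls for `layout_x` and `layout_y` places the 100 turbines along a single diagonal line, since point `i` is `(x_i, x_i)`. That may well be intentional for timing purposes; if a rectangular grid were wanted instead, a `meshgrid` construction with the same turbine count would look like this sketch:

```python
import numpy as np

# Diagonal layout, as in the benchmark above: turbine i sits at (x_i, x_i).
diag_x = np.linspace(0, 1000, 100)
diag_y = np.linspace(0, 1000, 100)

# Alternative: a 10 x 10 rectangular grid, also 100 turbines.
side = np.linspace(0, 1000, 10)
xx, yy = np.meshgrid(side, side)
grid_x = xx.flatten()
grid_y = yy.flatten()

print(diag_x.shape, grid_x.shape)  # both hold 100 coordinates
```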


def test_timing_het_set(benchmark):
"""Timing test for setting up a farm with a heterogeneous map"""

# The side of the flow which is accelerated reverses for east versus west
het_map = HeterogeneousMap(
x=np.array([0.0, 0.0, 500.0, 500.0]),
y=np.array([0.0, 500.0, 0.0, 500.0]),
speed_multipliers=np.array(
[
[1.0, 2.0, 1.0, 2.0], # Top accelerated
[2.0, 1.0, 2.0, 1.0], # Bottom accelerated
]
),
wind_directions=np.array([270.0, 90.0]),
wind_speeds=np.array([8.0, 8.0]),
)

# Get the FLORIS model
fmodel = FlorisModel(configuration=YAML_INPUT)

time_series = TimeSeries(
wind_directions=np.linspace(0, 360, N_Conditions),
wind_speeds=8.0,
turbulence_intensities=0.06,
heterogeneous_map=het_map,
)

# Set the model to two turbines perpendicular to
# east/west flow, with turbine 0 closer to the bottom
# and turbine 1 closer to the top
benchmark(
fmodel.set,
wind_data=time_series,
layout_x=[250.0, 250.0],
layout_y=[100.0, 400.0],
)
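The `speed_multipliers` rows above are tied to the map's `wind_directions` (270 and 90 degrees), which is how the accelerated side flips between westerly and easterly flow. A rough sketch of the direction-dependent selection — not FLORIS's actual interpolation, which also interpolates spatially between the map points:

```python
import numpy as np

# Reference directions and multiplier rows copied from the test above.
map_wind_directions = np.array([270.0, 90.0])
speed_multipliers = np.array([
    [1.0, 2.0, 1.0, 2.0],  # top accelerated (westerly)
    [2.0, 1.0, 2.0, 1.0],  # bottom accelerated (easterly)
])

def multipliers_for(direction):
    # Pick the row whose reference direction is closest, measuring
    # distance on the circle so 350 and 10 degrees are 20 apart.
    diff = np.abs((map_wind_directions - direction + 180.0) % 360.0 - 180.0)
    return speed_multipliers[np.argmin(diff)]

print(multipliers_for(265.0))  # near 270: top accelerated
print(multipliers_for(100.0))  # near 90: bottom accelerated
```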


def test_timing_het_run(benchmark):
"""Timing test for running a farm with a heterogeneous map"""

# The side of the flow which is accelerated reverses for east versus west
het_map = HeterogeneousMap(
x=np.array([0.0, 0.0, 500.0, 500.0]),
y=np.array([0.0, 500.0, 0.0, 500.0]),
speed_multipliers=np.array(
[
[1.0, 2.0, 1.0, 2.0], # Top accelerated
[2.0, 1.0, 2.0, 1.0], # Bottom accelerated
]
),
wind_directions=np.array([270.0, 90.0]),
wind_speeds=np.array([8.0, 8.0]),
)

# Get the FLORIS model
fmodel = FlorisModel(configuration=YAML_INPUT)

time_series = TimeSeries(
wind_directions=np.linspace(0, 360, N_Conditions),
wind_speeds=8.0,
turbulence_intensities=0.06,
heterogeneous_map=het_map,
)

# Set the model to two turbines perpendicular to
# east/west flow, with turbine 0 closer to the bottom
# and turbine 1 closer to the top
fmodel.set(
wind_data=time_series,
layout_x=[250.0, 250.0],
layout_y=[100.0, 400.0],
)

benchmark(fmodel.run)
90 changes: 90 additions & 0 deletions benchmarks/data/input_full.yaml
Collaborator:
Is it necessary to add another input file?

Collaborator (Author):
We have a small chicken-and-egg situation with the default inputs; the plan is to move to those once they catch up to main.

@@ -0,0 +1,90 @@

name: test_input
description: Single turbine for testing
floris_version: v4

logging:
console:
enable: false
level: WARNING
file:
enable: false
level: WARNING

solver:
type: turbine_grid
turbine_grid_points: 3

farm:
layout_x:
- 0.0
layout_y:
- 0.0
turbine_type:
- nrel_5MW

flow_field:
air_density: 1.225
reference_wind_height: 90.0
turbulence_intensities:
- 0.06
wind_directions:
- 270.0
wind_shear: 0.12
wind_speeds:
- 8.0
wind_veer: 0.0

wake:
model_strings:
combination_model: sosfs
deflection_model: gauss
turbulence_model: crespo_hernandez
velocity_model: gauss

enable_secondary_steering: true
enable_yaw_added_recovery: true
enable_active_wake_mixing: true
enable_transverse_velocities: true

wake_deflection_parameters:
gauss:
ad: 0.0
alpha: 0.58
bd: 0.0
beta: 0.077
dm: 1.0
ka: 0.38
kb: 0.004
jimenez:
ad: 0.0
bd: 0.0
kd: 0.05

wake_velocity_parameters:
cc:
a_s: 0.179367259
b_s: 0.0118889215
c_s1: 0.0563691592
c_s2: 0.13290157
a_f: 3.11
b_f: -0.68
c_f: 2.41
alpha_mod: 1.0
gauss:
alpha: 0.58
beta: 0.077
ka: 0.38
kb: 0.004
jensen:
we: 0.05
turboparkgauss:
A: 0.04
include_mirror_wake: True

wake_turbulence_parameters:
crespo_hernandez:
initial: 0.01
constant: 0.9
ai: 0.83
downstream: -0.25
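Since the benchmark input above is ordinary YAML, its sections can be inspected or tweaked programmatically before handing the path to `FlorisModel`. The sketch below loads a fragment of the file with PyYAML (assumed available) rather than reading it from disk.

```python
import yaml  # PyYAML; an assumed dependency for this sketch

# A fragment reproducing part of input_full.yaml above.
snippet = """
farm:
  layout_x: [0.0]
  layout_y: [0.0]
  turbine_type: [nrel_5MW]
flow_field:
  wind_speeds: [8.0]
  wind_directions: [270.0]
  turbulence_intensities: [0.06]
"""
config = yaml.safe_load(snippet)
print(config["farm"]["turbine_type"])       # single NREL 5 MW turbine
print(config["flow_field"]["wind_speeds"])  # single 8 m/s condition
```

The benchmark tests then override the single-condition flow field with 100 conditions via `fmodel.set`, so only the farm and wake sections of the file matter for timing.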
4 changes: 3 additions & 1 deletion pyproject.toml
@@ -50,6 +50,7 @@ docs = [
]
develop = [
"pytest",
"pytest-benchmark~=5.1",
"pre-commit",
"ruff",
"isort"
@@ -75,7 +76,8 @@ branch = true
source = "floris/*"
omit = [
"setup.py",
"tests/*"
"tests/*",
"benchmarks/*"
]

