31 ci benchmarking #32

Open · wants to merge 86 commits into base: main
5c46d04
Fix mesh definitions in benchmarks
lorenzovarese Jul 18, 2024
1eea839
Attempt to fix the benchmark PR
lorenzovarese Jul 18, 2024
02ee8b7
Fix pip install gt4py
lorenzovarese Jul 18, 2024
2ae9b77
Fix PyCall installation in the benchmark_pr.yml
lorenzovarese Jul 18, 2024
8686326
Fix the PyCall invoke
lorenzovarese Jul 18, 2024
e978b6c
Add reference to julia env in benchmark_pr config
lorenzovarese Jul 18, 2024
ea7d32d
Add cache for python, and fix the pycall (again :/)
lorenzovarese Jul 18, 2024
3d12f85
Activate the env in the benchmark CI
lorenzovarese Jul 19, 2024
2ae25f2
Include the Cell and K definitions in the benchmark
lorenzovarese Jul 19, 2024
5412e44
Add readme to run benchmark example
lorenzovarese Jul 22, 2024
978e6b5
Add commented benchmark for field operations
lorenzovarese Jul 24, 2024
49687ce
Merge remote-tracking branch 'origin' into 16-setup-benchmarking-infr…
lorenzovarese Jul 24, 2024
8046538
fix benchmarks
lorenzovarese Jul 26, 2024
8fea0ca
Add benchmark comparison between Julia's broadcast addition and the f…
lorenzovarese Jul 31, 2024
8d0296b
Update the Benchmark suite. Provide comparison between broadcast on a…
lorenzovarese Aug 2, 2024
b9d4e2e
Improve naming and type checking
lorenzovarese Aug 2, 2024
2ab1421
Add draft of neighbour_sum benchmark
lorenzovarese Aug 13, 2024
3062bdf
Add the benchmarks for sine and cosine field operators
lorenzovarese Aug 13, 2024
323c269
Add benchmarks for remapping
lorenzovarese Aug 14, 2024
9c138c2
Add draft mpdata
lorenzovarese Aug 14, 2024
e3ba8ec
Merge remote-tracking branch 'origin' into 16-setup-benchmarking-infr…
lorenzovarese Aug 16, 2024
700a545
Clear benchmarks.jl and add remapping
lorenzovarese Aug 16, 2024
3590280
Add neighbor sum benchmark to the suite
lorenzovarese Aug 16, 2024
0be7ec1
Fix dependencies in benchmarks
lorenzovarese Aug 16, 2024
62276b0
Add verbose flag to avoid printing
lorenzovarese Aug 16, 2024
1bbb1e6
Quick fix to the unary/binary operation support
lorenzovarese Aug 16, 2024
17a55f8
Use the ExampleMeshes in Atlas miniapp (with workaround on the offset…
lorenzovarese Aug 16, 2024
fcb44cd
Add benchmark for mp_data
lorenzovarese Aug 16, 2024
22d1257
Fix slicing operation in the advection_miniapp
lorenzovarese Aug 19, 2024
30d92c1
Fix benchmarking size
lorenzovarese Aug 19, 2024
9928e14
Remove deprecated benchmark files
lorenzovarese Aug 19, 2024
177f8ba
Fix advection benchmarks and place them in a separate script
lorenzovarese Aug 19, 2024
391dc0a
Fix K dimension in advection meshes
lorenzovarese Aug 20, 2024
9048666
Add multi-threads optimization on broadcasting operation
lorenzovarese Aug 20, 2024
e2ce601
Change benchmark SUITE for compatibility with AirSpeedVelocity
lorenzovarese Aug 20, 2024
45cf97a
Add multi-threads optimization
lorenzovarese Aug 20, 2024
6fb4870
Merge branch '16-setup-benchmarking-infrastructure' of https://github…
lorenzovarese Aug 20, 2024
8b8a68f
Restoring SIMD loop in broadcast
lorenzovarese Aug 20, 2024
be385b7
Add benchmark readme on how to run benchmarks on separate revisions
lorenzovarese Aug 20, 2024
b9ebf8e
Create an AtlasMeshes module and resolve issues with atlas4py import
lorenzovarese Aug 20, 2024
085877d
Fix embedded test with the new K dimension definition in example meshes
lorenzovarese Aug 20, 2024
89572e1
Separate the simulation loop from the Advection Setup of the miniapp
lorenzovarese Aug 20, 2024
4d71e0b
Small changes in benchmark documentation
lorenzovarese Aug 20, 2024
26fe900
Fix the names retrieval of the modules automatically generated by Air…
lorenzovarese Aug 20, 2024
7cac41c
Ignore plot files by AirSpeedVelocity
lorenzovarese Aug 21, 2024
6cb5585
Add Polyester to the dependencies
lorenzovarese Aug 21, 2024
96f416f
Increase the size of the Atlas Mesh for benchmarking purposes
lorenzovarese Aug 22, 2024
426f936
Add script to automate the benchmark comparison between the last two …
lorenzovarese Aug 22, 2024
c8a08bb
Add utilis for benchmark/profiling in the interactive REPL
lorenzovarese Aug 22, 2024
d964221
Move autorun in the utils folder
lorenzovarese Aug 22, 2024
182dd6d
Update autorun script
lorenzovarese Aug 22, 2024
b1f539e
Fix the autorun script to use hashes instead of tags
lorenzovarese Aug 23, 2024
1930bdb
Update basic benchmarks for CI usage
lorenzovarese Sep 10, 2024
b2693a1
Add benchmarks.jl to scripts in the cscs ci
lorenzovarese Sep 10, 2024
86fae38
Test commit
lorenzovarese Sep 10, 2024
0588d7f
Retrigger CI
tehrengruber Sep 10, 2024
4f6d89b
Retrigger CI
tehrengruber Sep 10, 2024
518749b
Triggers CI
lorenzovarese Sep 10, 2024
6b4fba8
Retrigger CI
tehrengruber Sep 10, 2024
3b97d78
Merge branch 'main' into 31-ci-benchmarking
tehrengruber Sep 10, 2024
0ff763e
Merge branch '16-setup-benchmarking-infrastructure' into 31-ci-benchm…
lorenzovarese Sep 12, 2024
13a8ca2
Merge branch '31-ci-benchmarking' of https://github.com/GridTools/Gri…
lorenzovarese Sep 12, 2024
70a0aa8
Add atlas4py dependency into the CI
lorenzovarese Sep 12, 2024
38f2db7
Add documentation to access ci container
lorenzovarese Sep 12, 2024
f18450d
Add multithread optimization to custom broadcast
lorenzovarese Sep 12, 2024
25e9a7c
Merge remote-tracking branch 'origin' into 31-ci-benchmarking
lorenzovarese Sep 12, 2024
57772c2
Fix basic benchmark ci
lorenzovarese Sep 12, 2024
b8f6d64
Add atlast4py to the basic ci for benchmarks
lorenzovarese Sep 12, 2024
47bd242
Add atlas4py dependency to documentation ci
lorenzovarese Sep 12, 2024
54204f6
Improve basic ci
lorenzovarese Sep 12, 2024
329a5f2
Fix benchmark pr basic
lorenzovarese Sep 12, 2024
54fbd7d
Change basic benchmark pr
lorenzovarese Sep 12, 2024
01b1188
Fix commit comparison in benchmark pr basic
lorenzovarese Sep 12, 2024
f1c720f
Fix commit hash in benchmark pr
lorenzovarese Sep 12, 2024
c79c03b
Add package name to benchpkgplot in benchmark pr
lorenzovarese Sep 12, 2024
c82571b
Retrigger CI
lorenzovarese Sep 13, 2024
bf0884b
Fix benchmark pr base with the correct comparison between main and cu…
lorenzovarese Sep 13, 2024
086462b
Add AirSpeedVelocity.jl to CSCS CI
lorenzovarese Sep 13, 2024
4311017
Remove plots generation in cscs ci
lorenzovarese Sep 13, 2024
daa0f41
Fix script format in cscs ci
lorenzovarese Sep 13, 2024
223d79d
Add escaping chars in the cscs ci script
lorenzovarese Sep 13, 2024
2b22e78
Fix formatting script cscs ci
lorenzovarese Sep 13, 2024
187d795
Fix formatting only when required cscs ci script
lorenzovarese Sep 13, 2024
e827f75
Encapsulate airspeedvelocity logic in a script
lorenzovarese Sep 13, 2024
40783c0
Merge remote-tracking branch 'origin' into 31-ci-benchmarking
lorenzovarese Sep 13, 2024
148c3b6
Merge remote-tracking branch 'origin' into 31-ci-benchmarking
lorenzovarese Sep 13, 2024
180 changes: 112 additions & 68 deletions .github/workflows/benchmark_pr.yml
Original file line number Diff line number Diff line change
@@ -1,78 +1,122 @@
name: Benchmark a pull request
name: GridTools Benchmark CI Pipeline

on:
pull_request:
push:
branches:
- main
tags: ['*']
pull_request:

permissions:
pull-requests: write
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: ${{ startsWith(github.ref, 'refs/pull/') }}

jobs:
generate_plots:
runs-on: ubuntu-latest
benchmark:
name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ github.event_name }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
julia_version:
- '1.8'
python_version:
- '3.10'
os:
- ubuntu-latest
arch:
- x64

steps:
- uses: actions/checkout@v3

- name: Set up Python ${{ matrix.python_version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python_version }}

- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install -y libboost-all-dev
python -m pip install --upgrade pip

- name: Install GT4Py and atlas4py
run: |
git clone --branch fix_python_interp_path_in_cmake https://github.com/tehrengruber/gt4py.git
cd gt4py
pip install -r requirements-dev.txt
pip install .
pip install -i https://test.pypi.org/simple/ atlas4py

- uses: julia-actions/setup-julia@v1
with:
version: ${{ matrix.julia_version }}
arch: ${{ matrix.arch }}

- uses: julia-actions/cache@v1

- name: Install and Build Benchmarking Tools
run: |
julia -e 'using Pkg; Pkg.add("AirspeedVelocity"); Pkg.build("AirspeedVelocity")'
echo "PATH=$PATH:$HOME/.julia/bin" >> $GITHUB_ENV
ls $HOME/.julia/bin

steps:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@v1
with:
version: "1.8"
- uses: julia-actions/cache@v1
- name: Extract Package Name from Project.toml
id: extract-package-name
run: |
PACKAGE_NAME=$(grep "^name" Project.toml | sed 's/^name = "\(.*\)"$/\1/')
echo "::set-output name=package_name::$PACKAGE_NAME"
- name: Build AirspeedVelocity
env:
JULIA_NUM_THREADS: 2
run: |
# Lightweight build step, as sometimes the runner runs out of memory:
julia -e 'ENV["JULIA_PKG_PRECOMPILE_AUTO"]=0; import Pkg; Pkg.add(;url="https://github.com/MilesCranmer/AirspeedVelocity.jl.git")'
julia -e 'ENV["JULIA_PKG_PRECOMPILE_AUTO"]=0; import Pkg; Pkg.build("AirspeedVelocity")'
- name: Add ~/.julia/bin to PATH
run: |
echo "$HOME/.julia/bin" >> $GITHUB_PATH
- name: Run benchmarks
run: |
echo $PATH
ls -l ~/.julia/bin
mkdir results
benchpkg ${{ steps.extract-package-name.outputs.package_name }} --rev="${{github.event.repository.default_branch}},${{github.event.pull_request.head.sha}}" --url=${{ github.event.repository.clone_url }} --bench-on="${{github.event.repository.default_branch}}" --output-dir=results/ --tune
- name: Create plots from benchmarks
run: |
mkdir -p plots
benchpkgplot ${{ steps.extract-package-name.outputs.package_name }} --rev="${{github.event.repository.default_branch}},${{github.event.pull_request.head.sha}}" --npart=10 --format=png --input-dir=results/ --output-dir=plots/
- name: Upload plot as artifact
uses: actions/upload-artifact@v2
with:
name: plots
path: plots
- name: Create markdown table from benchmarks
run: |
benchpkgtable ${{ steps.extract-package-name.outputs.package_name }} --rev="${{github.event.repository.default_branch}},${{github.event.pull_request.head.sha}}" --input-dir=results/ --ratio > table.md
echo '### Benchmark Results' > body.md
echo '' >> body.md
echo '' >> body.md
cat table.md >> body.md
echo '' >> body.md
echo '' >> body.md
echo '### Benchmark Plots' >> body.md
echo 'A plot of the benchmark results have been uploaded as an artifact to the workflow run for this PR.' >> body.md
echo 'Go to "Actions"->"Benchmark a pull request"->[the most recent run]->"Artifacts" (at the bottom).' >> body.md
- name: Run Benchmarks
run: |
echo "Current PATH: $PATH"
mkdir results
git fetch origin main:refs/remotes/origin/main
CURRENT_COMMIT=$(git rev-parse HEAD)
LAST_MAIN_COMMIT=$(git rev-parse origin/main)
echo "Benchmarking current commit ($CURRENT_COMMIT) in the current branch and ($LAST_MAIN_COMMIT) in the main branch"
benchpkg --rev="$LAST_MAIN_COMMIT,$CURRENT_COMMIT" --bench-on="$CURRENT_COMMIT" --output-dir=results/ --tune

- name: Generate and Upload Benchmark Plots
run: |
mkdir -p plots
CURRENT_COMMIT=$(git rev-parse HEAD)
LAST_MAIN_COMMIT=$(git rev-parse origin/main)
echo "Generating plots comparing current commit ($CURRENT_COMMIT) against ($LAST_MAIN_COMMIT) in the main branch"
benchpkgplot GridTools --rev="$LAST_MAIN_COMMIT,$CURRENT_COMMIT" --npart=10 --format=png --input-dir=results/ --output-dir=plots/

- name: Upload Plots as Artifacts
uses: actions/upload-artifact@v4
with:
name: benchmark-plots
path: plots

- name: Create and Display Benchmark Table
run: |
mkdir -p results
CURRENT_COMMIT=$(git rev-parse HEAD)
LAST_MAIN_COMMIT=$(git rev-parse origin/main) # Fetch the latest commit from main for comparison
echo "Creating benchmark table comparing current commit ($CURRENT_COMMIT) against ($LAST_MAIN_COMMIT) in the main branch"
benchpkgtable --rev="$LAST_MAIN_COMMIT,$CURRENT_COMMIT" --input-dir=results/ --ratio > table.md
echo '### Benchmark Results' > body.md
cat table.md >> body.md
echo '### Benchmark Plots' >> body.md
echo 'A plot of the benchmark results has been uploaded as an artifact to this workflow run.' >> body.md
cat body.md # Print the markdown table to the log for review.

- name: Upload Benchmark Results Table
uses: actions/upload-artifact@v4
with:
name: benchmark-results-table
path: body.md

# - name: Find Comment
# uses: peter-evans/find-comment@v2
# id: fcbenchmark
# with:
# issue-number: ${{ github.event.pull_request.number }}
# comment-author: 'github-actions[bot]'
# body-includes: Benchmark Results
- name: Find and Comment Benchmark Results
uses: peter-evans/find-comment@v2
id: fcbenchmark
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: Benchmark Results

- name: Comment on PR
uses: peter-evans/create-or-update-comment@v3
with:
# comment-id: ${{ steps.fcbenchmark.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
body-path: body.md
edit-mode: replace
- name: Comment on PR with Benchmark Results
uses: peter-evans/create-or-update-comment@v3
with:
comment-id: ${{ steps.fcbenchmark.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
body-path: body.md
edit-mode: replace
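The workflow above derives the package name by grepping `Project.toml` before handing it to `benchpkg`. That extraction step can be reproduced locally; a minimal sketch, assuming a `Project.toml` with a top-level `name` entry (the sample file contents here are illustrative):

```shell
# Write a sample Project.toml shaped like the one at the repository root (assumed layout).
printf 'name = "GridTools"\nuuid = "00000000-0000-0000-0000-000000000000"\nversion = "0.1.0"\n' > /tmp/Project.toml

# Same extraction the workflow performs: take the `name = "..."` line and strip the TOML quoting.
PACKAGE_NAME=$(grep "^name" /tmp/Project.toml | sed 's/^name = "\(.*\)"$/\1/')
echo "$PACKAGE_NAME"  # → GridTools
```

Note that the extracted name is what the later `benchpkgplot`/`benchpkgtable` steps expect as their first positional argument.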
33 changes: 25 additions & 8 deletions .github/workflows/documentation.yml
@@ -1,10 +1,10 @@
name: Documentation
name: Documentation Build

on:
push:
branches:
- main # update to match your development branch (master, main, dev, trunk, ...)
tags: '*'
- main # Reflects your active development branch
tags: ['*']
pull_request:

jobs:
@@ -13,14 +13,31 @@ jobs:
contents: write
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3

- name: Set up Python environment
run: |
sudo apt-get update
sudo apt-get install python3-pip
python3 -m pip install --upgrade pip

- name: Install GT4Py and atlas4py
run: |
git clone --branch fix_python_interp_path_in_cmake https://github.com/tehrengruber/gt4py.git
cd gt4py
pip install -r requirements-dev.txt
pip install .
pip install -i https://test.pypi.org/simple/ atlas4py

- uses: julia-actions/setup-julia@v1
with:
version: '1.8'
- name: Install dependencies

- name: Install Julia dependencies
run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
- name: Build and deploy

- name: Build and deploy documentation
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # If authenticating with GitHub Actions token
DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # If authenticating with SSH deploy key
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Used for Git operations by Documenter.jl
DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # If using SSH deploy keys for secure operations
run: julia --project=docs/ docs/make.jl
14 changes: 14 additions & 0 deletions .gitignore
@@ -18,3 +18,17 @@ docs/build/
.DS_Store

Manifest.toml

# Python Env
.venv
env_setup.sh
.python-version

# Misc
**/.DS_Store
.vscode

# Ignore benchmark (benchpkg) results
results_GridTools@*
plot_*.png
plot_*.pdf
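The new `.gitignore` entries can be verified with `git check-ignore`, which prints a path and exits 0 when a pattern matches it. A sketch in a throwaway repository (the file names are illustrative, matching the benchpkg output patterns added above):

```shell
# Set up a scratch repository carrying the benchmark-related ignore patterns.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
printf 'results_GridTools@*\nplot_*.png\nplot_*.pdf\n.venv\n' > .gitignore

# An ignored path is echoed back; the path does not need to exist on disk.
git check-ignore plot_main_vs_pr.png      # → plot_main_vs_pr.png
git check-ignore results_GridTools@1.2.3  # matches the benchpkg results-directory pattern
```

This keeps locally generated `benchpkg` results and plots out of commits without affecting tracked sources.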
1 change: 1 addition & 0 deletions Project.toml
@@ -12,6 +12,7 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
JuliaFormatter = "98e50ef6-434e-11e9-1051-2b60c6c9e899"
MacroTools = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
OffsetArrays = "6fe1bfb0-de20-5000-8ca7-80f57d26f881"
Polyester = "f517fe37-dbe3-4b94-8317-1923a5111588"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
Profile = "9abbd945-dff8-562f-b5e8-e1ebf5ef1b79"
PyCall = "438e738f-606a-5dbb-bf0a-cddfbfd45ab0"
20 changes: 10 additions & 10 deletions advection/README.md
@@ -1,6 +1,6 @@
### README for Running `advection_miniapp.jl`
### README for Running `advection_setup.jl` using `run_simulation_loop.jl`

This README provides instructions on how to run the `advection_miniapp.jl` script for simulating advection using the Atlas library. The script allows for terminal visualization, which can be enabled as described below.
This README provides instructions on how to run the `run_simulation_loop.jl` script for simulating advection using the Atlas library. The script allows for terminal visualization, which can be enabled as described below.

#### Prerequisites

@@ -15,23 +15,23 @@ This README provides instructions on how to run the `advection_miniapp.jl` scrip
```

2. **Enabling Visualization** (optional):
- The script has a `VISUALIZATION_FLAG` that can be set to enable or disable visualization on the terminal. Ensure that this flag is set to `true` in the `advection_miniapp.jl` script if you wish to enable visualization.
- Note: Other parameters such as the number of iterations can be changed in the `# Simulation Parameters` section of the script.
- The script has a `VISUALIZATION_FLAG` that can be set to enable or disable visualization on the terminal. Ensure that this flag is set to `true` in the `run_simulation_loop.jl` script if you wish to enable visualization.
- Note: Other parameters such as the number of iterations can be changed in the `# Simulation Parameters` section of the `advection_setup.jl` script.

#### Running the Simulation

1. **Running the Script**:
- Use the following command to run the `advection_miniapp.jl` script with Julia:
- Use the following command to run the `run_simulation_loop.jl` script with Julia:
```sh
julia --color=yes --project=$GRIDTOOLS_JL_PATH/GridTools.jl $GRIDTOOLS_JL_PATH/GridTools.jl/src/examples/advection/advection_miniapp.jl
julia --color=yes --project=$GRIDTOOLS_JL_PATH/GridTools.jl $GRIDTOOLS_JL_PATH/GridTools.jl/src/examples/advection/run_simulation_loop.jl
```

#### Example

Here is an example of how to set the `VISUALIZATION_FLAG` in the `advection_miniapp.jl` script and run the simulation:
Here is an example of how to set the `VISUALIZATION_FLAG` in the `run_simulation_loop.jl` script and run the simulation:

1. **Setting the Visualization Flag**:
- Open the `advection_miniapp.jl` script.
- Open the `run_simulation_loop.jl` script.
- Set the `VISUALIZATION_FLAG` to `true`:
```julia
const VISUALIZATION_FLAG = true
@@ -42,7 +42,7 @@ Here is an example of how to set the `VISUALIZATION_FLAG` in the `advection_mini
- Run the script with the following command:
```sh
export GRIDTOOLS_JL_PATH=...
julia --color=yes --project=. $GRIDTOOLS_JL_PATH/src/examples/advection/advection_miniapp.jl
julia --color=yes --project=. $GRIDTOOLS_JL_PATH/src/examples/advection/run_simulation_loop.jl
```

By following these steps, you should be able to run the `advection_miniapp.jl` script and visualize the advection simulation results on your terminal.
By following these steps, you should be able to run the `run_simulation_loop.jl` script and visualize the advection simulation results on your terminal.
17 changes: 4 additions & 13 deletions advection/advection.jl
@@ -6,11 +6,10 @@
level_indices::Field{Tuple{K_}, Int64},
num_level::Int64
)::Field{Tuple{Vertex_, K_}, Float64}

return where(
level_indices .== num_level - 1,
level_indices .== 0,
lower,
where(slice(level_indices .== 0, 1:29), upper, interior)
where(slice(level_indices .== 29, 2:30), upper, interior)
)
end

@@ -149,7 +148,8 @@ end
)::Field{Tuple{Vertex_, K_}, Float64}
zrhin =
(1.0 ./ vol) .* neighbor_sum(
-min.(0.0, flux(V2E)) .* max.(0.0, dual_face_orientation) -
# TODO: fix the 0-min workaround due to the binary/unary operation issue
(broadcast(0., (Vertex, V2EDim, K)) .- min.(0.0, flux(V2E))) .* max.(0.0, dual_face_orientation) -
max.(0.0, flux(V2E)) .* min.(0.0, dual_face_orientation),
axis = V2EDim,
)
@@ -227,15 +227,6 @@ end
dual_face_orientation::Field{Tuple{Vertex_, V2EDim_}, Float64},
dual_face_normal_weighted_x::Field{Tuple{Edge_}, Float64},
dual_face_normal_weighted_y::Field{Tuple{Edge_}, Float64},
tmp_vertex_1::Field{Tuple{Vertex_, K_}, Float64},
tmp_vertex_2::Field{Tuple{Vertex_, K_}, Float64},
tmp_vertex_3::Field{Tuple{Vertex_, K_}, Float64},
tmp_vertex_4::Field{Tuple{Vertex_, K_}, Float64},
tmp_vertex_5::Field{Tuple{Vertex_, K_}, Float64},
tmp_vertex_6::Field{Tuple{Vertex_, K_}, Float64},
tmp_edge_1::Field{Tuple{Edge_, K_}, Float64},
tmp_edge_2::Field{Tuple{Edge_, K_}, Float64},
tmp_edge_3::Field{Tuple{Edge_, K_}, Float64},
)

tmp_edge_1 = advector_normal(