ci[cartesian]: Thread safe parallel stencil tests #1849

Conversation

@romanc commented Feb 6, 2025

Description

To avoid repeating boilerplate code in testing, `StencilTestSuite` provides a convenient interface to test `gtscript` stencils.

Within that `StencilTestSuite` base class, generating the stencil is separated from running and validating the stencil code. Each deriving test class ends up with two tests: one for stencil generation and a second one that tests the implementation by running the generated code with defined inputs and expected outputs.

The base class was written such that the implementation test reuses the generated stencil code from the first test. This introduces an implicit test-order dependency. To save time and avoid unnecessary test-failure output, failing to generate the stencil code would automatically skip the implementation/validation test.

Running tests in parallel (with `xdist`) breaks the expected test execution order (in the default configuration). This leads to automatically skipped validation tests whenever the stencil code wasn't generated yet. On the CI, we only run with 2 threads, so usually only a couple of tests were skipped. Locally, I was running with 16 threads and got ~30 skipped validation tests.

This PR proposes to address the issue by setting an `xdist_group` mark on the generation/implementation tests that belong together. In combination with `--dist loadgroup`, this keeps the expected order where necessary. Only tests with `xdist_group` markers are affected by `--dist loadgroup`; tests without that marker are distributed normally, as in `--dist load` mode (the default so far). By grouping on `cls_name` and backend, we keep maximal parallelization, grouping only the two tests that depend on each other.

Further reading: see the `--dist` section in the pytest-xdist documentation.
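For illustration, a minimal sketch of the mechanism (the test and group names below are made up; the actual suite derives the group name from `cls_name` and the backend):

```python
import pytest

# Both tests carry the same xdist_group name, so `--dist loadgroup`
# schedules them on the same worker and preserves their relative order.


@pytest.mark.xdist_group(name="TestExampleStencil-numpy")
def test_generation():
    ...  # generate the stencil and cache it in the test context


@pytest.mark.xdist_group(name="TestExampleStencil-numpy")
def test_implementation():
    ...  # run the cached stencil with defined inputs and expected outputs
```

Run with, e.g., `pytest -n 4 --dist loadgroup`; ungrouped tests are still distributed as in `--dist load` mode.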

Requirements

  • All fixes and/or new features come with corresponding tests.
    Existing tests are still green; no more skipped tests \o/. Works as expected locally.
  • Important design decisions have been documented in the appropriate ADR inside the docs/development/ADRs/ folder.
    N/A

@romanc force-pushed the romanc/cartesian-thread-safe-parallel-tests branch from 6900575 to 8e8e497 on February 6, 2025 14:21
@romanc left a comment:


some details inline

```diff
@@ -167,7 +167,7 @@ def get_globals_combinations(dtypes):
             generation_strategy=composite_strategy_factory(
                 d, generation_strategy_factories
             ),
-            implementations=[],
+            implementation=None,
```
This might have been different in the past. The way we cache implementations now, there's at most one implementation per test context.

Comment on lines -437 to +451
```diff
- The generated implementations are cached in a :class:`utils.ImplementationsDB`
- instance, to avoid duplication of (potentially expensive) compilations.
+ The generated implementation is cached in the test context, to avoid duplication
+ of (potentially expensive) compilation.
+ Note: This caching introduces a dependency between tests, which is captured by an
+ `xdist_group` marker in combination with `--dist loadgroup` to ensure safe parallel
+ test execution.
```
This comment was out of date; there's no `utils.ImplementationsDB` (anymore).

Comment on lines -464 to +478
test["implementations"].append(implementation)
assert test["implementation"] is None
test["implementation"] = implementation
Assert our assumption that we only ever cache one implementation per test context.

Comment on lines 591 to 614
```diff
-        implementation_list = test["implementations"]
-        if not implementation_list:
-            pytest.skip(
-                "Cannot perform validation tests, since there are no valid implementations."
-            )
-        for implementation in implementation_list:
-            if not isinstance(implementation, StencilObject):
-                raise RuntimeError("Wrong function got from implementations_db cache!")
-            cls._run_test_implementation(parameters_dict, implementation)
+        implementation = test["implementation"]
+        assert (
+            implementation is not None
+        ), "Stencil not yet generated. Did you attempt to run stencil tests in parallel?"
+        assert isinstance(implementation, StencilObject)
+
+        cls._run_test_implementation(parameters_dict, implementation)
```
Simplified since we don't have an array of implementations (anymore). Assert that the stencil code has been generated; if not, fail instead of skip. This leads to more errors in case code generation fails. IMO, the best way to handle this is to get rid of the idea of having two tests per class (one for codegen and one for validation). We could achieve the same level of parallelization with less glue code if we had just one test per class (codegen and validation inside the same test), as sketched below.
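A rough sketch of that one-test-per-class idea (hypothetical; `make_stencil` and `validate` stand in for the suite's real codegen and validation helpers):

```python
import pytest


def make_stencil(backend):
    """Stand-in for stencil code generation (hypothetical helper)."""
    return lambda: None


def validate(implementation):
    """Stand-in for running the stencil against reference data (hypothetical helper)."""
    assert implementation is not None


@pytest.mark.parametrize("backend", ["numpy", "gt:cpu_ifirst"])
def test_stencil(backend):
    # Codegen and validation in one test: a codegen failure fails this
    # test directly, with no ordering or caching between separate tests.
    implementation = make_stencil(backend)
    validate(implementation)
```

With this shape there is nothing to cache across tests, so no `xdist_group` marker would be needed at all.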

@romanc marked this pull request as ready for review on February 6, 2025 14:34
@romanc commented Feb 6, 2025

/cc @egparedes @havogt FYI

The previous one was from when I thought we had to run these tests
on one thread only.
@FlorianDeconinck left a comment:


LGTM.

@havogt left a comment:


lgtm

@FlorianDeconinck merged commit 4b566d7 into GridTools:main on Feb 7, 2025
31 checks passed
@romanc deleted the romanc/cartesian-thread-safe-parallel-tests branch on February 8, 2025 12:17