Conversation
📝 Walkthrough

Adds VTK‑HDF5 diagnostic writer/reader support, runtime diagnostic format selection, a patch-centric pharesee hierarchy/compute refactor, new VTK plotting utilities and tests, and multiple C++ and Python API/utility extensions and helpers.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Sim as Simulation
    participant Resolver as DiagnosticsManagerResolver
    participant Writer as vtkh5::H5Writer
    participant TypeW as H5TypeWriter
    participant Init as VTKFileInitializer
    participant FileW as VTKFileWriter
    participant MV as ModelView
    Sim->>Resolver: make_unique(dict with format)
    Resolver->>Writer: instantiate H5Writer(hier, model, path, flags)
    Sim->>Writer: dump(diagnostics, timestamp)
    Writer->>TypeW: getDiagnosticWriterForType(type)
    TypeW->>Writer: setup(DiagnosticProperties)
    Writer->>Init: initialize per-level files (compute offsets)
    TypeW->>Writer: write(DiagnosticProperties)
    Writer->>FileW: create VTKFileWriter(level offset)
    FileW->>MV: read fields/tensors per-box
    FileW->>FileW: writeField / writeTensorField into HDF5
```

```mermaid
sequenceDiagram
    participant User as Caller
    participant HF as hierarchy_from
    participant Loader as fromvtkhdf5.VtkFile
    participant Builder as PatchBuilder
    participant Hier as PatchHierarchy
    User->>HF: hierarchy_from(filename)
    alt filename endsWith(".h5")
        HF->>HF: call hierarchy_fromh5(...)
    else filename endsWith(".vtkhdf")
        HF->>Loader: hierarchy_fromvtkhdf(...)
        Loader->>Loader: parse times, levels, offsets
        Loader->>Builder: patch_levels_from_h5(...)
        Builder->>Hier: PatchHierarchy(patch_levels, times, domain)
    end
    Hier-->>User: return PatchHierarchy
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
39d7491 to 0c9462d
Actionable comments posted: 11
🧹 Nitpick comments (3)
src/python3/patch_data.hpp (1)

35-36: Consider passing string parameters by const reference.

The `patchID` and `origin` parameters in `setPatchData`, `setPatchDataFromGrid`, and `setPatchDataFromField` are passed by value, creating unnecessary copies. Since these functions don't require ownership, passing them as `std::string const&` would be more efficient. Apply this pattern to avoid copies:

```diff
-void setPatchData(PatchData& data, std::string const patchID, std::string const origin,
+void setPatchData(PatchData& data, std::string const& patchID, std::string const& origin,
                   Container lower, Container upper)

-void setPatchDataFromGrid(PatchData& pdata, GridLayout& grid, std::string patchID)
+void setPatchDataFromGrid(PatchData& pdata, GridLayout& grid, std::string const& patchID)

-void setPatchDataFromField(PatchData& pdata, Field const& field, GridLayout& grid,
-                           std::string patchID)
+void setPatchDataFromField(PatchData& pdata, Field const& field, GridLayout& grid,
+                           std::string const& patchID)
```

Also applies to: 46-46, 54-55
src/core/data/ndarray/ndarray_vector.hpp (1)

38-38: Simplify unused parameter.

The inline comment `/*nCells*/` for the unused parameter is non-standard. Modern C++ convention is to simply omit the parameter name. Apply this diff:

```diff
-    static inline std::uint32_t idx(auto const /*nCells*/, Idx i) { return i; }
+    static inline std::uint32_t idx(auto const, Idx i) { return i; }
```

src/amr/resources_manager/amr_utils.hpp (1)

247-247: Preserve perfect forwarding into resman.setOnPatch

Now that `visitHierarchy` forwards `args`, we should propagate the perfect forwarding down into `setOnPatch`; otherwise rvalues (e.g., move-only helpers) devolve to lvalues when named inside `visitLevel`, preventing those call-sites from compiling or forcing extra copies. Applying `std::forward` here keeps the chain intact.

```diff
-        auto guard = resman.setOnPatch(*patch, args...);
+        auto guard = resman.setOnPatch(*patch, std::forward<Args>(args)...);
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (34)
- pyphare/pyphare/pharein/__init__.py (1 hunks)
- pyphare/pyphare/pharein/simulation.py (1 hunks)
- pyphare/pyphare/pharesee/__init__.py (1 hunks)
- pyphare/pyphare/pharesee/tovtk.py (1 hunks)
- src/amr/physical_models/hybrid_model.hpp (1 hunks)
- src/amr/resources_manager/amr_utils.hpp (1 hunks)
- src/core/data/grid/gridlayout.hpp (3 hunks)
- src/core/data/grid/gridlayoutimplyee.hpp (1 hunks)
- src/core/data/ndarray/ndarray_vector.hpp (5 hunks)
- src/core/data/tensorfield/tensorfield.hpp (1 hunks)
- src/core/utilities/algorithm.hpp (2 hunks)
- src/core/utilities/box/box.hpp (2 hunks)
- src/core/utilities/mpi_utils.hpp (1 hunks)
- src/core/utilities/types.hpp (3 hunks)
- src/diagnostic/detail/h5writer.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/electromag.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/fluid.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/info.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/meta.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/particle.hpp (1 hunks)
- src/diagnostic/detail/vtkh5_type_writer.hpp (1 hunks)
- src/diagnostic/detail/vtkh5_writer.hpp (1 hunks)
- src/diagnostic/diagnostic_manager.hpp (1 hunks)
- src/diagnostic/diagnostic_model_view.hpp (6 hunks)
- src/diagnostic/diagnostics.hpp (2 hunks)
- src/hdf5/detail/h5/h5_file.hpp (2 hunks)
- src/phare/CMakeLists.txt (0 hunks)
- src/phare/phare_init.py (0 hunks)
- src/phare/phare_init_small.py (0 hunks)
- src/python3/patch_data.hpp (1 hunks)
- tests/simulator/CMakeLists.txt (1 hunks)
- tests/simulator/__init__.py (1 hunks)
- tests/simulator/refinement/test_2d_10_core.py (1 hunks)
- tests/simulator/test_vtk_diagnostics.py (1 hunks)
💤 Files with no reviewable changes (3)
- src/phare/CMakeLists.txt
- src/phare/phare_init_small.py
- src/phare/phare_init.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.hpp
⚙️ CodeRabbit configuration file
Review the C++ code, point out issues relative to principles of clean code, expressiveness, and performance.
Files:
- src/diagnostic/diagnostics.hpp
- src/python3/patch_data.hpp
- src/amr/physical_models/hybrid_model.hpp
- src/amr/resources_manager/amr_utils.hpp
- src/diagnostic/detail/vtk_types/particle.hpp
- src/diagnostic/detail/vtk_types/fluid.hpp
- src/core/utilities/box/box.hpp
- src/core/utilities/mpi_utils.hpp
- src/diagnostic/detail/h5writer.hpp
- src/diagnostic/detail/vtk_types/info.hpp
- src/core/data/grid/gridlayout.hpp
- src/diagnostic/diagnostic_manager.hpp
- src/core/data/grid/gridlayoutimplyee.hpp
- src/core/utilities/types.hpp
- src/core/data/ndarray/ndarray_vector.hpp
- src/hdf5/detail/h5/h5_file.hpp
- src/diagnostic/detail/vtk_types/meta.hpp
- src/core/utilities/algorithm.hpp
- src/diagnostic/diagnostic_model_view.hpp
- src/diagnostic/detail/vtkh5_writer.hpp
- src/core/data/tensorfield/tensorfield.hpp
- src/diagnostic/detail/vtk_types/electromag.hpp
- src/diagnostic/detail/vtkh5_type_writer.hpp
🧠 Learnings (7)
📓 Common learnings
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 784
File: tests/simulator/test_restarts.py:333-339
Timestamp: 2024-10-09T08:32:15.667Z
Learning: PhilipDeegan has indicated a preference for minimal and efficient code, even after making changes to get a function working. There may be opportunities to remove or optimize parts of the code such as sleep/wait/advances.
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 784
File: tests/simulator/test_restarts.py:333-339
Timestamp: 2024-07-26T22:04:34.160Z
Learning: PhilipDeegan has indicated a preference for minimal and efficient code, even after making changes to get a function working. There may be opportunities to remove or optimize parts of the code such as sleep/wait/advances.
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1068
File: src/amr/data/field/coarsening/electric_field_coarsener.hpp:1-2
Timestamp: 2025-09-17T13:35:11.533Z
Learning: PhilipDeegan prefers header guard names that include the full directory path structure, following the pattern PHARE_[PATH_WITH_UNDERSCORES]_HPP. For example, a file at src/amr/data/field/coarsening/electric_field_coarsener.hpp should use PHARE_AMR_DATA_FIELD_COARSENING_ELECTRIC_FIELD_COARSENER_HPP as the header guard.
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: src/hdf5/detail/h5/h5_file.hpp:78-79
Timestamp: 2024-10-09T08:32:15.667Z
Learning: Avoid commenting on draft PRs for the user PhilipDeegan.
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: src/hdf5/detail/h5/h5_file.hpp:78-79
Timestamp: 2024-09-05T17:02:58.784Z
Learning: Avoid commenting on draft PRs for the user PhilipDeegan.
📚 Learning: 2025-02-07T14:35:14.630Z
Learnt from: nicolasaunai
Repo: PHAREHUB/PHARE PR: 591
File: res/cmake/options.cmake:73-75
Timestamp: 2025-02-07T14:35:14.630Z
Learning: In CMake configuration for PHARE, `lowResourceTests` and `highResourceTests` options must be mutually exclusive and cannot be enabled simultaneously.
Applied to files:
tests/simulator/CMakeLists.txt
📚 Learning: 2025-07-09T17:18:05.771Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: src/amr/CMakeLists.txt:103-105
Timestamp: 2025-07-09T17:18:05.771Z
Learning: In the PHARE project, `HighFive` is a CMake option defined in `res/cmake/options.cmake` as `option(HighFive "Build with highfive usage" ON)`, not a target that would be created by find_package(). The condition `if (HighFive)` correctly checks this option.
Applied to files:
src/diagnostic/detail/h5writer.hpp
📚 Learning: 2024-07-26T22:04:34.160Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 0
File: :0-0
Timestamp: 2024-07-26T22:04:34.160Z
Learning: The variable `c_ordering` in `ConcreteLoadBalancerHybridStrategyNPPC<PHARE_T>::compute` is used to create an array view `lb_view` for handling load balancing data.
Applied to files:
src/core/utilities/algorithm.hpp
📚 Learning: 2025-09-17T13:35:11.533Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1068
File: src/amr/data/field/coarsening/electric_field_coarsener.hpp:1-2
Timestamp: 2025-09-17T13:35:11.533Z
Learning: PhilipDeegan prefers header guard names that include the full directory path structure, following the pattern PHARE_[PATH_WITH_UNDERSCORES]_HPP. For example, a file at src/amr/data/field/coarsening/electric_field_coarsener.hpp should use PHARE_AMR_DATA_FIELD_COARSENING_ELECTRIC_FIELD_COARSENER_HPP as the header guard.
Applied to files:
src/core/utilities/algorithm.hpp
📚 Learning: 2024-09-09T13:57:02.285Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:566-568
Timestamp: 2024-09-09T13:57:02.285Z
Learning: Avoid commenting on draft PRs for PhilipDeegan unless explicitly requested.
Applied to files:
src/diagnostic/detail/vtkh5_writer.hpp
src/diagnostic/detail/vtkh5_type_writer.hpp
📚 Learning: 2025-09-17T13:50:16.038Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1068
File: src/amr/data/tensorfield/tensor_field_data_factory.hpp:112-112
Timestamp: 2025-09-17T13:50:16.038Z
Learning: PhilipDeegan indicated that getSizeOfMemory returning a placeholder value of 1 in TensorFieldDataFactory is acceptable for their use case, suggesting it may not be critical for PHARE's memory management needs.
Applied to files:
src/core/data/tensorfield/tensorfield.hpp
🧬 Code graph analysis (20)
src/diagnostic/diagnostics.hpp (3)
- src/diagnostic/diagnostic_manager.hpp (5): dict(83-83), dict(86-86), dict(86-86), make_unique(74-80), make_unique(74-75)
- src/diagnostic/detail/vtkh5_writer.hpp (2): make_unique(67-74), make_unique(67-67)
- src/diagnostic/detail/h5writer.hpp (2): make_unique(61-68), make_unique(61-61)

src/amr/resources_manager/amr_utils.hpp (1)
- src/core/data/field/field.hpp (4): args(75-81), args(75-75), args(83-86), args(83-83)

src/diagnostic/detail/vtk_types/particle.hpp (4)
- src/diagnostic/detail/vtk_types/electromag.hpp (2): write(30-55), write(30-30)
- src/diagnostic/detail/vtkh5_type_writer.hpp (4): diagnostic(87-92), diagnostic(87-87), ilvl(385-414), ilvl(385-385)
- src/diagnostic/detail/vtkh5_writer.hpp (2): diagnostic(97-101), diagnostic(97-97)
- src/diagnostic/detail/h5writer.hpp (4): diagnostic(94-98), diagnostic(94-94), iLevel(210-214), iLevel(210-210)

src/diagnostic/detail/vtk_types/fluid.hpp (3)
- src/diagnostic/detail/vtkh5_type_writer.hpp (6): diagnostic(87-92), diagnostic(87-87), ilvl(385-414), ilvl(385-385), boxes(57-71), boxes(57-57)
- src/diagnostic/detail/vtkh5_writer.hpp (2): diagnostic(97-101), diagnostic(97-97)
- src/diagnostic/detail/vtk_types/electromag.hpp (2): write(30-55), write(30-30)

src/core/utilities/box/box.hpp (1)
- src/core/utilities/types.hpp (2): product(265-268), product(265-265)

tests/simulator/refinement/test_2d_10_core.py (2)
- pyphare/pyphare/cpp/__init__.py (1): cpp_lib(6-9)
- tests/simulator/test_advance.py (1): AdvanceTestBase(33-788)

src/diagnostic/detail/vtk_types/info.hpp (7)
- src/diagnostic/detail/vtk_types/electromag.hpp (2): write(30-55), write(30-30)
- src/diagnostic/detail/vtk_types/fluid.hpp (4): write(38-71), write(38-38), diagnostic(29-33), diagnostic(29-30)
- src/diagnostic/detail/vtk_types/meta.hpp (2): write(29-31), write(29-29)
- src/diagnostic/detail/vtk_types/particle.hpp (2): write(31-54), write(31-31)
- src/diagnostic/detail/vtkh5_type_writer.hpp (2): diagnostic(87-92), diagnostic(87-87)
- src/diagnostic/detail/vtkh5_writer.hpp (2): diagnostic(97-101), diagnostic(97-97)
- src/diagnostic/detail/h5writer.hpp (2): diagnostic(94-98), diagnostic(94-94)

src/core/data/grid/gridlayout.hpp (2)
- src/core/utilities/types.hpp (5): auto(151-156), auto(491-494), auto(497-500), for_N_make_array(484-487), for_N_make_array(484-484)
- src/core/utilities/box/box.hpp (6): grow(88-93), grow(88-88), grow(323-328), grow(323-323), i(96-96), i(96-96)

tests/simulator/test_vtk_diagnostics.py (7)
- pyphare/pyphare/cpp/__init__.py (1): cpp_lib(6-9)
- pyphare/pyphare/simulator/simulator.py (2): startMPI(40-44), reset(254-265)
- tests/simulator/__init__.py (4): SimulatorTest(201-267), tearDown(204-205), unique_diag_dir_for_test_case(250-254), run(246-248)
- tests/diagnostic/__init__.py (1): dump_all_diags(15-61)
- pyphare/pyphare/pharein/simulation.py (2): simulation_domain(978-979), Simulation(745-1080)
- pyphare/pyphare/pharein/maxwellian_fluid_model.py (1): MaxwellianFluidModel(8-303)
- pyphare/pyphare/pharesee/tovtk.py (1): convert(269-416)

tests/simulator/__init__.py (1)
- pyphare/pyphare/core/box.py (1): Box(5-69)

src/core/data/grid/gridlayoutimplyee.hpp (1)
- tests/core/data/gridlayout/test_linear_combinaisons_yee.py (1): dualToPrimal(27-35)

src/core/utilities/types.hpp (1)
- src/core/data/tensorfield/tensorfield.hpp (2): for_N(83-87), for_N(88-92)

src/core/data/ndarray/ndarray_vector.hpp (1)
- src/core/utilities/types.hpp (2): product(265-268), product(265-265)

src/diagnostic/detail/vtk_types/meta.hpp (5)
- src/diagnostic/detail/vtk_types/electromag.hpp (2): write(30-55), write(30-30)
- src/diagnostic/detail/vtk_types/fluid.hpp (4): write(38-71), write(38-38), diagnostic(29-33), diagnostic(29-30)
- src/diagnostic/detail/vtk_types/info.hpp (2): write(31-33), write(31-31)
- src/diagnostic/detail/vtkh5_type_writer.hpp (2): diagnostic(87-92), diagnostic(87-87)
- src/diagnostic/detail/types/meta.hpp (1): PHARE(7-143)

pyphare/pyphare/pharesee/tovtk.py (1)
- src/diagnostic/detail/vtkh5_type_writer.hpp (8): shape(114-121), shape(114-114), shape(123-131), shape(123-123), ilvl(385-414), ilvl(385-385), resize(417-423), resize(417-417)

src/core/utilities/algorithm.hpp (2)
- src/core/utilities/types.hpp (2): all(355-358), all(355-355)
- src/core/data/ndarray/ndarray_vector.hpp (4): make_array_view(214-217), make_array_view(214-214), make_array_view(220-223), make_array_view(220-220)

src/diagnostic/diagnostic_model_view.hpp (5)
- src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp (12): patch(137-145), patch(137-137), ions(410-420), ions(410-411), ions(424-450), ions(424-425), ions(452-486), ions(452-453), ions(498-537), ions(498-499), ions(545-551), ions(545-546)
- src/amr/resources_manager/amr_utils.hpp (2): visitHierarchy(257-266), visitHierarchy(257-258)
- src/core/data/electrons/electrons.hpp (16): layout(107-133), layout(107-107), layout(285-285), layout(285-285), layout(286-286), layout(286-286), layout(309-319), layout(309-309), forward_as_tuple(47-50), forward_as_tuple(52-55), forward_as_tuple(181-184), forward_as_tuple(186-186), forward_as_tuple(254-257), forward_as_tuple(259-262), forward_as_tuple(330-333), forward_as_tuple(335-338)
- src/core/data/ions/ion_population/ion_population.hpp (1): forward_as_tuple(109-113)
- src/core/data/ions/ions.hpp (1): forward_as_tuple(225-229)

src/diagnostic/detail/vtkh5_writer.hpp (4)
- src/diagnostic/diagnostic_model_view.hpp (1): dimension(30-61)
- src/diagnostic/detail/h5writer.hpp (15): H5Writer(51-56), H5Writer(58-58), H5Writer(194-194), H5Writer(195-195), make_unique(61-68), make_unique(61-61), level(72-73), timestamp(101-104), timestamp(101-101), diagnostic(94-98), diagnostic(94-94), modelView_(160-160), timestamp_(161-161), dump(224-250), dump(224-225)
- src/diagnostic/diagnostics.hpp (2): make_unique(50-65), make_unique(50-51)
- src/diagnostic/detail/vtkh5_type_writer.hpp (6): level(309-331), level(309-309), level(333-337), level(333-333), diagnostic(87-92), diagnostic(87-87)

src/diagnostic/detail/vtk_types/electromag.hpp (2)
- src/diagnostic/detail/vtk_types/fluid.hpp (4): write(38-71), write(38-38), diagnostic(29-33), diagnostic(29-30)
- src/diagnostic/detail/vtkh5_type_writer.hpp (6): diagnostic(87-92), diagnostic(87-87), ilvl(385-414), ilvl(385-385), boxes(57-71), boxes(57-57)

src/diagnostic/detail/vtkh5_type_writer.hpp (7)
- src/core/utilities/types.hpp (5): get_env_as(249-249), sum_from(277-285), sum_from(277-277), sum(271-274), sum(271-271)
- src/hdf5/detail/h5/h5_file.hpp (9): data(66-66), s(248-248), s(248-248), s(249-253), s(249-249), path(267-273), path(267-267), path(275-280), path(275-275)
- src/core/utilities/box/box.hpp (11): box(50-50), i(96-96), i(96-96), grow(88-93), grow(88-88), grow(323-328), grow(323-323), s(209-234), s(209-209), s(235-240), s(235-235)
- src/core/data/tensorfield/tensorfield.hpp (6): i(178-178), i(178-178), i(179-179), i(179-179), tensor_field_dim_from_rank(18-24), tensor_field_dim_from_rank(18-18)
- src/core/utilities/mpi_utils.hpp (4): rank(25-25), collect(17-17), collect(214-229), collect(214-214)
- src/diagnostic/detail/vtkh5_writer.hpp (2): diagnostic(97-101), diagnostic(97-97)
- src/core/utilities/algorithm.hpp (2): convert_to_fortran_primal(137-140), convert_to_fortran_primal(137-137)
🪛 GitHub Check: CodeQL
pyphare/pyphare/pharesee/tovtk.py
[notice] 163-163: Unused local variable
Variable cellData_g is not used.
[notice] 165-165: Unused local variable
Variable fieldData_g is not used.
[notice] 166-166: Unused local variable
Variable cellDataOffset_g is not used.
[notice] 167-167: Explicit returns mixed with implicit (fall through) returns
Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.
[notice] 168-168: Unused local variable
Variable FieldDataOffset_g is not used.
[failure] 361-361: Wrong number of arguments in a call
Call to function primalScalarToFlatPrimal with too many arguments; should be no more than 5.
[failure] 387-387: Wrong number of arguments in a call
Call to function BtoFlatPrimal with too few arguments; should be no fewer than 6.
Call to function EtoFlatPrimal with too few arguments; should be no fewer than 6.
Call to function primalVectorToFlatPrimal with too few arguments; should be no fewer than 6.
src/core/utilities/algorithm.hpp
[notice] 105-105: Unused local variable
Variable c_ordering is not used.
src/diagnostic/detail/vtkh5_type_writer.hpp
[notice] 41-41: Commented-out code
This comment appears to contain commented-out code.
🪛 Ruff (0.14.2)
tests/simulator/test_vtk_diagnostics.py
1-1: Shebang is present but file is not executable
(EXE001)
27-27: Unused function argument: xyz
(ARG001)
34-34: Do not assign a lambda expression, use a def
Rewrite _ as a def
(E731)
41-41: Do not assign a lambda expression, use a def
Rewrite _ as a def
(E731)
44-44: Unused function argument: xyz
(ARG001)
47-47: Unused function argument: xyz
(ARG001)
54-54: Do not assign a lambda expression, use a def
Rewrite _ as a def
(E731)
61-61: Do not assign a lambda expression, use a def
Rewrite _ as a def
(E731)
64-64: Unused function argument: xyz
(ARG001)
67-67: Unused function argument: xyz
(ARG001)
70-70: Unused function argument: xyz
(ARG001)
121-121: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
pyphare/pyphare/pharesee/tovtk.py
1-1: Shebang is present but file is not executable
(EXE001)
17-19: Avoid specifying long messages outside the exception class
(TRY003)
204-204: Avoid specifying long messages outside the exception class
(TRY003)
232-232: Avoid specifying long messages outside the exception class
(TRY003)
263-263: Consider [*root_spacing, 0, 0] instead of concatenation
Replace with [*root_spacing, 0, 0]
(RUF005)
265-265: Consider [*root_spacing, 0] instead of concatenation
Replace with [*root_spacing, 0]
(RUF005)
321-321: Local variable cellData_g is assigned to but never used
Remove assignment to unused variable cellData_g
(F841)
323-323: Local variable fieldData_g is assigned to but never used
Remove assignment to unused variable fieldData_g
(F841)
324-324: Local variable cellDataOffset_g is assigned to but never used
Remove assignment to unused variable cellDataOffset_g
(F841)
326-326: Local variable FieldDataOffset_g is assigned to but never used
Remove assignment to unused variable FieldDataOffset_g
(F841)
384-384: Prefer next(iter(patch.keys())) over single element slice
Replace with next(iter(patch.keys()))
(RUF015)
🔇 Additional comments (8)
src/python3/patch_data.hpp (1)
26-30: Critical fix: Correct variadic template forwarding.

The change from `std::forward<Args...>(args...)` to `std::forward<Args>(args)...` fixes invalid syntax. The ellipsis must appear outside the `std::forward` call to properly expand each argument in the parameter pack individually, ensuring correct perfect forwarding semantics.

src/hdf5/detail/h5/h5_file.hpp (1)
98-99: LGTM! The defensive existence check prevents duplicate dataset creation and returns the existing dataset if found.
src/core/utilities/types.hpp (2)
248-254: Verify error handling for invalid environment variable values.

This function depends on `from_string<T>`, which silently returns a default-initialized value on conversion failure. If an environment variable exists but contains an invalid value for type `T`, this function will return an unexpected result without any indication of failure. Ensure that upstream callers can handle or validate the returned values, especially for critical configuration parameters.
483-487: LGTM! This is a clean convenience wrapper around `for_N` with the `make_array` mode. The implementation properly forwards the function parameter and maintains constexpr correctness.
5-7: LGTM!The added includes are necessary and properly support the file's dependencies:
phare_mpi.hppfor MPI definitions used throughoutspan.hppforSpanSettemplate (lines 169, 174, 204)types.hppfor type traits likeis_std_vector_v,is_std_array_v, anddisplacementFrom(lines 77, 154, 221)src/core/data/ndarray/ndarray_vector.hpp (3)
55-70: LGTM! The unified index computation via `idx(...)` followed by bounds-checked access through `data[i]` provides a clean, consistent access pattern across all dimensionalities.
101-101: LGTM! Consistent template parameter updates.

All usage sites have been correctly updated to use `NdArrayViewer` without the `DataType` template parameter, maintaining consistency with the simplified template signature.

Also applies to: 138-138, 271-271, 283-283
22-23: Reconsider index type constraint: no evidence of practical limitation.

The `std::uint16_t` index type appears to be an intentional design constraint for this physics simulation codebase. Shape values are stored as `std::uint32_t` arrays (lines 124, 209, 307), but indices are constrained to `uint16_t`, a deliberate trade-off likely for memory efficiency or simulation requirements. No codebase search found array dimensions exceeding 65K in any single dimension, and type promotion in index calculations prevents overflow. This design aligns with typical physics grid constraints for plasma simulations.
9371cab to da3b967
Actionable comments posted: 4
♻️ Duplicate comments (7)
src/hdf5/detail/h5/h5_file.hpp (3)

105-114: Missing parent group creation will cause runtime failure.

Unlike `create_data_set` (line 100), this function doesn't call `createGroupsToDataSet(path)` before creating the dataset. If parent groups don't exist, `h5file_.createDataSet` will fail at runtime. Apply this diff to add parent group creation:

```diff
 template<typename Type>
 auto create_chunked_data_set(auto const& path, auto const chunk, auto const& dataspace)
 {
     if (exist(path))
         return h5file_.getDataSet(path);
+    createGroupsToDataSet(path);
     HighFive::DataSetCreateProps props;
     props.add(HighFive::Chunking{chunk});
     return h5file_.createDataSet(path, dataspace, HighFive::create_datatype<Type>(), props);
 }
```
249-253: Replace assertion with exception for production safety.

The `assert(exist(s))` is compiled out in release builds (when `NDEBUG` is defined), leading to undefined behavior if a non-existent dataset is accessed. This could cause crashes or silent data corruption in production. Apply this diff:

```diff
 auto getDataSet(std::string const& s)
 {
-    assert(exist(s));
+    if (!exist(s))
+        throw std::runtime_error("Dataset does not exist: " + s);
     return h5file_.getDataSet(s);
 }
```
255-258: Fix incorrect special member function signatures.

Two critical issues:
- Line 256: Move constructor declared with `const&&` defeats move semantics since you cannot transfer ownership from a const object.
- Line 257: Copy assignment operator missing `const` qualifier on parameter, which is non-standard and could allow accidental modification of the source object.

Apply this diff to correct both signatures:

```diff
 HighFiveFile(HighFiveFile const&) = delete;
-HighFiveFile(HighFiveFile const&&) = delete;
-HighFiveFile& operator=(HighFiveFile&) = delete;
+HighFiveFile(HighFiveFile&&) = delete;
+HighFiveFile& operator=(HighFiveFile const&) = delete;
 HighFiveFile& operator=(HighFiveFile&&) = delete;
```

src/core/utilities/types.hpp (1)
224-231: Missing include and error handling already flagged.

The issues with missing `<sstream>` include and silent failure on conversion errors have already been identified in a previous review.

pyphare/pyphare/pharesee/tovtk.py (1)
80-91: Correct 3D By primalization averaging.

The 3D branch reuses `ph_bz` and averages along the Y axis, so By mixes the wrong component and skips the X-direction dual pair. The exported magnetic field is therefore incorrect. Replace the averaging with this fix:

```diff
-        by[:, :, :] = 0.5 * (
-            0.5
-            * (
-                ph_by[domain, domainP1, domain][:, 1:, :]
-                + ph_by[domain, domainP1, domain][:, :-1, :]
-            )
-            + 0.5
-            * (
-                ph_bz[domain, domain, domainP1][:, :, 1:]
-                + ph_bz[domain, domain, domainP1][:, :, :-1]
-            )
-        )
+        by[:, :, :] = 0.5 * (
+            0.5
+            * (
+                ph_by[domainP1, domain, domain][1:, :, :]
+                + ph_by[domainP1, domain, domain][:-1, :, :]
+            )
+            + 0.5
+            * (
+                ph_by[domain, domain, domainP1][:, :, 1:]
+                + ph_by[domain, domain, domainP1][:, :, :-1]
+            )
+        )
```

src/diagnostic/detail/vtk_types/particle.hpp (1)
37-54: Particle VTK writer is still a no-op

`write_quantity` never writes anything, so `visitHierarchy` produces empty particle datasets and every particle diagnostic is silently lost. This is the same blocker that was raised earlier; please port the actual write logic (e.g., iterate populations/layouts and feed the initializer just like the existing H5 ParticlesDiagnosticWriter does) before landing this format.

src/diagnostic/detail/vtkh5_writer.hpp (1)
177-183: Restore post-write flag downgrade to prevent data loss.

Line 179 sets `file_flags[diagnostic->type + diagnostic->quantity] = this->flags`, which in overwrite mode includes `HiFile::Truncate`. This flag persists across subsequent `dump()` calls, causing each dump to reopen the file in truncate mode and erase earlier time steps. Apply this diff to reset flags after writing:

```diff
     for (auto* diagnostic : diagnostics)
         typeWriters_.at(diagnostic->type)->write(*diagnostic);
+
+    for (auto* diagnostic : diagnostics)
+        file_flags[diagnostic->type + diagnostic->quantity] = READ_WRITE;
```
🧹 Nitpick comments (6)
src/amr/resources_manager/amr_utils.hpp (1)

229-239: Consider forwarding args for consistency with `visitHierarchy`.

While `visitHierarchy` now properly forwards the variadic arguments (line 251), `visitLevel` receives forwarding references but passes them to `setOnPatch` without forwarding (line 234). For consistency and to preserve move semantics, consider applying the same forwarding pattern. Apply this diff to forward the arguments:

```diff
 template<typename GridLayout, typename ResMan, typename Action, typename... Args>
 void visitLevel(SAMRAI_Types::level_t& level, ResMan& resman, Action&& action, Args&&... args)
 {
     for (auto& patch : level)
     {
-        auto guard = resman.setOnPatch(*patch, args...);
+        auto guard = resman.setOnPatch(*patch, std::forward<Args>(args)...);
         GridLayout layout = layoutFromPatch<GridLayout>(*patch);
         action(layout, to_string(patch->getGlobalId()),
                static_cast<std::size_t>(level.getLevelNumber()));
     }
 }
```

tests/simulator/refinement/test_2d_10_core.py (1)
17-24: Consider refactoring for test isolation.

Module-level initialization of `cpp`, `test`, `L0_diags`, and `L0L1_diags` creates global mutable state that persists across imports. While this works for single-use test scripts, it could impact test isolation if this module is later integrated into a larger test suite. Consider lazy initialization or moving these to `main()` to improve test isolation and initialization order control. Example refactor moving initialization to main():

```diff
-cpp = cpp_lib()
-test = AdvanceTestBase(rethrow=True)  # change to False for debugging images
-L0_diags = "phare_outputs/test_x_homo_0"
-L0L1_diags = "phare_outputs/test_x_homo_1"
-
 ph.NO_GUI()
+# Module-level variables to be initialized in main()
+cpp = None
+test = None
+L0_diags = "phare_outputs/test_x_homo_0"
+L0L1_diags = "phare_outputs/test_x_homo_1"


 def config(diag_outputs, model_init={}, refinement_boxes=None):
```

Then initialize in main():

```diff
 def main():
+    global cpp, test
     import random

     startMPI()
+    cpp = cpp_lib()
+    test = AdvanceTestBase(rethrow=True)
     rando = random.randint(0, int(1e10))
```
main():def main(): + global cpp, test import random startMPI() + cpp = cpp_lib() + test = AdvanceTestBase(rethrow=True) rando = random.randint(0, int(1e10))tests/simulator/test_vtk_diagnostics.py (2)
34-34: Consider replacing lambda assignments with def statements.

PEP 8 recommends using `def` statements instead of assigning lambdas to variables for better traceability and debugging. Example refactor for one occurrence:

```diff
-    _ = lambda i: 0.1 * np.sin(2 * np.pi * xyz[i] / L[i])
+    def component_value(i):
+        return 0.1 * np.sin(2 * np.pi * xyz[i] / L[i])
```

Also applies to: 41-41, 54-54, 61-61
169-169: Clarify the status of EM_B conversion.

The commented-out line suggests `tovtk` conversion fails for `EM_B.h5`. Should this be tracked as a TODO/FIXME, or is this a known limitation that's acceptable for this PR? If it's a bug, consider opening an issue to track it.

Do you want me to help create an issue for tracking the EM_B conversion problem?
src/diagnostic/detail/vtkh5_writer.hpp (1)
130-130: Hard-coded maxLevel limit.

The hard-coded `maxLevel = 10` may be insufficient for simulations requiring deeper AMR hierarchies. Consider making this configurable via the `PHAREDict` in `make_unique`, e.g.:

```cpp
auto maxLevel = cppdict::get_value(dict, "max_level", std::size_t{10});
```
361-361: Document the X_TIMES multiplier logic.

The dimension-dependent multiplier `X_TIMES` (4 for 1D, 2 for 2D, 1 for 3D) implements an important conversion factor but lacks explanation. Add a comment explaining why the multiplier is needed:

```cpp
// VTK stores data per cube face: 2D data requires 2 faces, 1D requires 4 faces, 3D is 1:1
constexpr static auto X_TIMES = std::array{4, 2, /* 3d noop */ 1}[dimension - 1];
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (33)
- pyphare/pyphare/pharein/__init__.py (1 hunks)
- pyphare/pyphare/pharein/simulation.py (1 hunks)
- pyphare/pyphare/pharesee/__init__.py (1 hunks)
- pyphare/pyphare/pharesee/tovtk.py (1 hunks)
- src/amr/physical_models/hybrid_model.hpp (1 hunks)
- src/amr/resources_manager/amr_utils.hpp (1 hunks)
- src/core/data/grid/gridlayout.hpp (3 hunks)
- src/core/data/grid/gridlayoutdefs.hpp (1 hunks)
- src/core/data/grid/gridlayoutimplyee.hpp (27 hunks)
- src/core/data/ndarray/ndarray_vector.hpp (5 hunks)
- src/core/data/tensorfield/tensorfield.hpp (1 hunks)
- src/core/utilities/algorithm.hpp (2 hunks)
- src/core/utilities/box/box.hpp (2 hunks)
- src/core/utilities/types.hpp (3 hunks)
- src/diagnostic/detail/h5writer.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/electromag.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/fluid.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/info.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/meta.hpp (1 hunks)
- src/diagnostic/detail/vtk_types/particle.hpp (1 hunks)
- src/diagnostic/detail/vtkh5_type_writer.hpp (1 hunks)
- src/diagnostic/detail/vtkh5_writer.hpp (1 hunks)
- src/diagnostic/diagnostic_manager.hpp (1 hunks)
- src/diagnostic/diagnostic_model_view.hpp (7 hunks)
- src/diagnostic/diagnostics.hpp (2 hunks)
- src/hdf5/detail/h5/h5_file.hpp (2 hunks)
- src/phare/CMakeLists.txt (0 hunks)
- src/phare/phare_init.py (0 hunks)
- src/phare/phare_init_small.py (0 hunks)
- tests/simulator/CMakeLists.txt (2 hunks)
- tests/simulator/__init__.py (1 hunks)
- tests/simulator/refinement/test_2d_10_core.py (1 hunks)
- tests/simulator/test_vtk_diagnostics.py (1 hunks)
💤 Files with no reviewable changes (3)
- src/phare/CMakeLists.txt
- src/phare/phare_init_small.py
- src/phare/phare_init.py
🚧 Files skipped from review as they are similar to previous changes (12)
- src/diagnostic/diagnostic_manager.hpp
- pyphare/pyphare/pharein/simulation.py
- tests/simulator/__init__.py
- pyphare/pyphare/pharein/__init__.py
- src/core/data/grid/gridlayout.hpp
- pyphare/pyphare/pharesee/__init__.py
- src/diagnostic/detail/vtk_types/meta.hpp
- src/diagnostic/detail/h5writer.hpp
- src/amr/physical_models/hybrid_model.hpp
- src/core/utilities/algorithm.hpp
- tests/simulator/CMakeLists.txt
- src/core/utilities/box/box.hpp
🧰 Additional context used
📓 Path-based instructions (1)
**/*.hpp
⚙️ CodeRabbit configuration file
Review the C++ code, point out issues relative to principles of clean code, expressiveness, and performance.
Files:
- src/core/data/grid/gridlayoutdefs.hpp
- src/core/data/tensorfield/tensorfield.hpp
- src/amr/resources_manager/amr_utils.hpp
- src/diagnostic/detail/vtk_types/electromag.hpp
- src/diagnostic/detail/vtk_types/particle.hpp
- src/diagnostic/detail/vtk_types/fluid.hpp
- src/diagnostic/diagnostics.hpp
- src/core/utilities/types.hpp
- src/hdf5/detail/h5/h5_file.hpp
- src/core/data/grid/gridlayoutimplyee.hpp
- src/core/data/ndarray/ndarray_vector.hpp
- src/diagnostic/detail/vtkh5_writer.hpp
- src/diagnostic/detail/vtk_types/info.hpp
- src/diagnostic/diagnostic_model_view.hpp
- src/diagnostic/detail/vtkh5_type_writer.hpp
🧠 Learnings (2)
📓 Common learnings
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 784
File: tests/simulator/test_restarts.py:333-339
Timestamp: 2024-10-09T08:32:15.667Z
Learning: PhilipDeegan has indicated a preference for minimal and efficient code, even after making changes to get a function working. There may be opportunities to remove or optimize parts of the code such as sleep/wait/advances.
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 784
File: tests/simulator/test_restarts.py:333-339
Timestamp: 2024-07-26T22:04:34.160Z
Learning: PhilipDeegan has indicated a preference for minimal and efficient code, even after making changes to get a function working. There may be opportunities to remove or optimize parts of the code such as sleep/wait/advances.
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1068
File: src/amr/data/field/coarsening/electric_field_coarsener.hpp:1-2
Timestamp: 2025-09-17T13:35:11.533Z
Learning: PhilipDeegan prefers header guard names that include the full directory path structure, following the pattern PHARE_[PATH_WITH_UNDERSCORES]_HPP. For example, a file at src/amr/data/field/coarsening/electric_field_coarsener.hpp should use PHARE_AMR_DATA_FIELD_COARSENING_ELECTRIC_FIELD_COARSENER_HPP as the header guard.
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: src/hdf5/detail/h5/h5_file.hpp:78-79
Timestamp: 2024-10-09T08:32:15.667Z
Learning: Avoid commenting on draft PRs for the user PhilipDeegan.
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: src/hdf5/detail/h5/h5_file.hpp:78-79
Timestamp: 2024-09-05T17:02:58.784Z
Learning: Avoid commenting on draft PRs for the user PhilipDeegan.
📚 Learning: 2024-09-09T13:57:02.285Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:566-568
Timestamp: 2024-09-09T13:57:02.285Z
Learning: Avoid commenting on draft PRs for PhilipDeegan unless explicitly requested.
Applied to files:
- src/diagnostic/detail/vtkh5_writer.hpp
- src/diagnostic/detail/vtkh5_type_writer.hpp
🧬 Code graph analysis (13)
src/amr/resources_manager/amr_utils.hpp (1)
- src/core/data/field/field.hpp (4): args(75-81), args(75-75), args(83-86), args(83-83)
src/diagnostic/detail/vtk_types/electromag.hpp (3)
- src/diagnostic/detail/vtk_types/fluid.hpp (4): write(38-71), write(38-38), diagnostic(29-33), diagnostic(29-30)
- src/diagnostic/detail/vtkh5_type_writer.hpp (8): diagnostic(87-92), diagnostic(87-87), lvl(267-272), lvl(267-267), ilvl(388-417), ilvl(388-388), boxes(57-71), boxes(57-57)
- src/diagnostic/detail/vtkh5_writer.hpp (2): diagnostic(97-101), diagnostic(97-97)
tests/simulator/test_vtk_diagnostics.py (5)
- pyphare/pyphare/cpp/__init__.py (1): cpp_lib(6-9)
- pyphare/pyphare/simulator/simulator.py (2): startMPI(41-45), reset(258-269)
- tests/simulator/__init__.py (3): SimulatorTest(201-267), tearDown(204-205), unique_diag_dir_for_test_case(250-254)
- tests/diagnostic/__init__.py (1): dump_all_diags(15-62)
- pyphare/pyphare/pharesee/tovtk.py (1): convert(269-416)
src/diagnostic/detail/vtk_types/particle.hpp (3)
- src/diagnostic/detail/vtk_types/electromag.hpp (2): write(30-55), write(30-30)
- src/diagnostic/detail/vtk_types/fluid.hpp (4): write(38-71), write(38-38), diagnostic(29-33), diagnostic(29-30)
- src/diagnostic/detail/vtkh5_type_writer.hpp (6): diagnostic(87-92), diagnostic(87-87), lvl(267-272), lvl(267-267), ilvl(388-417), ilvl(388-388)
src/diagnostic/detail/vtk_types/fluid.hpp (2)
- src/diagnostic/detail/vtkh5_type_writer.hpp (8): diagnostic(87-92), diagnostic(87-87), lvl(267-272), lvl(267-267), ilvl(388-417), ilvl(388-388), boxes(57-71), boxes(57-57)
- src/diagnostic/detail/vtk_types/electromag.hpp (2): write(30-55), write(30-30)
src/diagnostic/diagnostics.hpp (3)
- src/diagnostic/diagnostic_manager.hpp (5): dict(83-83), dict(86-86), dict(86-86), make_unique(74-80), make_unique(74-75)
- src/diagnostic/detail/vtkh5_writer.hpp (2): make_unique(67-74), make_unique(67-67)
- src/diagnostic/detail/h5writer.hpp (2): make_unique(61-68), make_unique(61-61)
src/core/data/ndarray/ndarray_vector.hpp (1)
- src/core/utilities/types.hpp (2): product(265-268), product(265-265)
tests/simulator/refinement/test_2d_10_core.py (2)
- pyphare/pyphare/cpp/__init__.py (1): cpp_lib(6-9)
- tests/simulator/test_advance.py (1): AdvanceTestBase(33-869)
src/diagnostic/detail/vtkh5_writer.hpp (4)
- src/diagnostic/diagnostic_model_view.hpp (1): dimension(30-60)
- src/diagnostic/diagnostics.hpp (2): make_unique(50-65), make_unique(50-51)
- src/diagnostic/detail/vtkh5_type_writer.hpp (6): level(298-334), level(298-298), level(336-340), level(336-336), diagnostic(87-92), diagnostic(87-87)
- src/hdf5/detail/h5/h5_file.hpp (4): path(267-273), path(267-267), path(275-280), path(275-275)
src/diagnostic/detail/vtk_types/info.hpp (5)
- src/diagnostic/detail/vtk_types/electromag.hpp (2): write(30-55), write(30-30)
- src/diagnostic/detail/vtk_types/fluid.hpp (4): write(38-71), write(38-38), diagnostic(29-33), diagnostic(29-30)
- src/diagnostic/detail/vtk_types/meta.hpp (2): write(29-31), write(29-29)
- src/diagnostic/detail/vtk_types/particle.hpp (2): write(31-54), write(31-31)
- src/diagnostic/detail/vtkh5_type_writer.hpp (2): diagnostic(87-92), diagnostic(87-87)
src/diagnostic/diagnostic_model_view.hpp (4)
- src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp (10): patch(146-154), patch(146-146), ions(407-416), ions(407-407), ions(420-445), ions(420-420), ions(447-480), ions(447-447), ions(492-531), ions(492-493)
- src/amr/resources_manager/amr_utils.hpp (2): visitHierarchy(244-253), visitHierarchy(244-245)
- src/core/data/ions/ion_population/ion_population.hpp (1): forward_as_tuple(109-113)
- src/core/data/ions/ions.hpp (1): forward_as_tuple(225-229)
src/diagnostic/detail/vtkh5_type_writer.hpp (5)
- src/core/utilities/types.hpp (7): get_env_as(249-249), for_N_make_array(504-507), for_N_make_array(504-504), sum_from(277-285), sum_from(277-277), sum(271-274), sum(271-271)
- src/core/utilities/box/box.hpp (7): box(51-51), i(97-97), i(97-97), grow(89-94), grow(89-89), grow(275-280), grow(275-275)
- src/core/data/tensorfield/tensorfield.hpp (6): i(181-181), i(181-181), i(182-182), i(182-182), tensor_field_dim_from_rank(17-24), tensor_field_dim_from_rank(17-17)
- src/diagnostic/detail/vtkh5_writer.hpp (2): diagnostic(97-101), diagnostic(97-97)
- src/core/utilities/algorithm.hpp (2): convert_to_fortran_primal(137-140), convert_to_fortran_primal(137-137)
pyphare/pyphare/pharesee/tovtk.py (2)
- pyphare/pyphare/core/box.py (1): shape(32-34)
- src/diagnostic/detail/vtkh5_type_writer.hpp (6): ilvl(388-417), ilvl(388-388), lvl(267-272), lvl(267-267), resize(420-426), resize(420-420)
🪛 GitHub Check: CodeQL
pyphare/pyphare/pharesee/tovtk.py
[failure] 361-361: Wrong number of arguments in a call
Call to function primalScalarToFlatPrimal with too many arguments; should be no more than 5.
[failure] 387-387: Wrong number of arguments in a call
Call to function BtoFlatPrimal with too few arguments; should be no fewer than 6.
Call to function EtoFlatPrimal with too few arguments; should be no fewer than 6.
Call to function primalVectorToFlatPrimal with too few arguments; should be no fewer than 6.
[notice] 163-163: Unused local variable
Variable cellData_g is not used.
[notice] 165-165: Unused local variable
Variable fieldData_g is not used.
[notice] 166-166: Unused local variable
Variable cellDataOffset_g is not used.
[notice] 167-167: Explicit returns mixed with implicit (fall through) returns
Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.
[notice] 168-168: Unused local variable
Variable FieldDataOffset_g is not used.
🪛 Ruff (0.14.3)
tests/simulator/test_vtk_diagnostics.py
1-1: Shebang is present but file is not executable
(EXE001)
27-27: Unused function argument: xyz
(ARG001)
34-34: Do not assign a lambda expression, use a def
Rewrite _ as a def
(E731)
41-41: Do not assign a lambda expression, use a def
Rewrite _ as a def
(E731)
44-44: Unused function argument: xyz
(ARG001)
47-47: Unused function argument: xyz
(ARG001)
54-54: Do not assign a lambda expression, use a def
Rewrite _ as a def
(E731)
61-61: Do not assign a lambda expression, use a def
Rewrite _ as a def
(E731)
64-64: Unused function argument: xyz
(ARG001)
67-67: Unused function argument: xyz
(ARG001)
70-70: Unused function argument: xyz
(ARG001)
121-121: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
pyphare/pyphare/pharesee/tovtk.py
1-1: Shebang is present but file is not executable
(EXE001)
17-19: Avoid specifying long messages outside the exception class
(TRY003)
204-204: Avoid specifying long messages outside the exception class
(TRY003)
232-232: Avoid specifying long messages outside the exception class
(TRY003)
263-263: Consider [*root_spacing, 0, 0] instead of concatenation
Replace with [*root_spacing, 0, 0]
(RUF005)
265-265: Consider [*root_spacing, 0] instead of concatenation
Replace with [*root_spacing, 0]
(RUF005)
321-321: Local variable cellData_g is assigned to but never used
Remove assignment to unused variable cellData_g
(F841)
323-323: Local variable fieldData_g is assigned to but never used
Remove assignment to unused variable fieldData_g
(F841)
324-324: Local variable cellDataOffset_g is assigned to but never used
Remove assignment to unused variable cellDataOffset_g
(F841)
326-326: Local variable FieldDataOffset_g is assigned to but never used
Remove assignment to unused variable FieldDataOffset_g
(F841)
384-384: Prefer next(iter(patch.keys())) over single element slice
Replace with next(iter(patch.keys()))
(RUF015)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: build (macos-13)
- GitHub Check: build (macos-latest)
- GitHub Check: build (macos-14)
- GitHub Check: Analyze (cpp)
- GitHub Check: Analyze (python)
- GitHub Check: build (ubuntu-latest, clang)
- GitHub Check: build (ubuntu-latest, gcc)
🔇 Additional comments (4)
src/hdf5/detail/h5/h5_file.hpp (1)
98-99: LGTM! Good defensive check. The existence check makes dataset creation idempotent, allowing safe repeated calls.
src/amr/resources_manager/amr_utils.hpp (1)
251-251: LGTM! Critical correctness fix for perfect forwarding.
The corrected forwarding syntax std::forward<Args>(args)... properly expands the parameter pack, forwarding each argument individually with its original value category. This matches the pattern shown in the relevant code snippets from field.hpp.
tests/simulator/refinement/test_2d_10_core.py (1)
20-23: Initialization order with MPI startup is a valid concern—verify actual runtime behavior.
cpp_lib() is imported at module load time (line 20), but MPI initialization via SamraiLifeCycle() doesn't occur until startMPI() is called in main() (line 188) or via SimulatorTest.setUp(). The C++ bindings module (pybindlibs.cpp) may require an MPI context to be initialized at import time. Additionally, instantiating test = AdvanceTestBase(rethrow=True) at module level (line 21) before MPI is set up could cause issues if the class initialization has MPI dependencies, though setUp() is only called by the test framework before test methods run.
Consider moving module-level initialization inside main() after startMPI() is called, or implement lazy initialization (defer cpp_lib() and AdvanceTestBase instantiation until after MPI setup). This is especially important for MPI-based tests that initialize at module scope.
src/diagnostic/detail/vtkh5_type_writer.hpp (1)
138-139: Verify duplicate write() calls in 2D writers.
Both VTKFileFieldWriter::write2D and VTKFileTensorFieldWriter::write2D call write() twice consecutively without any intervening state change. This pattern is unusual and may indicate either:
- An intentional handling of 2D data peculiarities
- A copy-paste error
Please confirm whether the duplicate write() calls are intentional. If intentional, add a comment explaining why 2D data requires two identical write passes. If this is an error, remove the duplicate call:
     };
     write();
-    write();
 }
Also applies to: 187-188
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/simulator/initialize/density_check.py`:
- Around line 2-16: The matplotlib backend call is placed after importing pyplot
which makes mpl.use("Agg") ineffective; move the mpl.use("Agg") call to occur
before any import of matplotlib.pyplot in this module (i.e., before the current
"import matplotlib.pyplot as plt") so headless CI works reliably—reorder the
imports in tests/simulator/initialize/density_check.py so mpl.use("Agg") is
executed immediately after importing matplotlib as mpl and before the symbol plt
is imported.
♻️ Duplicate comments (27)
pyphare/pyphare/pharesee/run/run.py (2)
307-314: VTKHDF fallback pattern only applied to GetVi.
The _get_hier_for method provides a .h5 → .vtkhdf fallback, but only GetVi uses it. Other accessor methods (GetB, GetE, GetNi, GetMassDensity, GetPi, etc.) still hardcode .h5 filenames and will fail if only .vtkhdf files are present.
Consider extending this pattern to other Get* methods for consistent VTKHDF support.
227-232: Close the HDF5 file handle to prevent resource leak.
h5py.File(...) is opened but never closed, which can leak file descriptors.
🐛 Proposed fix
 def GetDomainSize(self, **kwargs):
     import h5py

-    data_file = h5py.File(
-        self.available_diags[0], "r"
-    )  # That is the first file in th available diags
-    root_cell_width = np.asarray(data_file.attrs["cell_width"])
-
-    return (data_file.attrs["domain_box"] + 1) * root_cell_width
+    with h5py.File(self.available_diags[0], "r") as data_file:
+        root_cell_width = np.asarray(data_file.attrs["cell_width"])
+        return (data_file.attrs["domain_box"] + 1) * root_cell_width
pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (3)
31-38: Typo: ghost_nbr should be ghosts_nbr.
The parent FieldData.__init__ expects ghosts_nbr (with an 's') as the keyword argument. Using ghost_nbr will cause the parent to compute ghost numbers from the layout instead of using the provided zeros.
🐛 Proposed fix
 def __init__(self, layout, cmp_idx, data_offset, *args, **kwargs):
+    import numpy as np
     super().__init__(
         layout,
         *args,
         centering=["primal"] * layout.box.ndim,
-        ghost_nbr=[0] * layout.box.ndim,
+        ghosts_nbr=np.zeros(layout.box.ndim, dtype=int),
         **kwargs
     )
49-50: Bug: the Box class has no size() method.
Based on the Box class definition, it has a shape property and an nCells() method, but no size() method. This will raise an AttributeError at runtime.
🐛 Proposed fix
 self.field_box = boxm.Box(self.box.lower, self.box.upper + 1)
-self.data_size = self.field_box.size()
+self.data_size = self.field_box.nCells()
17-23: __getitem__ ignores the requested slice.
The slice parameter is accepted but never applied—any slice currently returns the full array. Callers expecting subset selection will get incorrect results.
🐛 Proposed fix
 def __getitem__(self, slice):
     # todo finish slice/box lookup
     cmp_idx = self.field_data.cmp_idx
     off_set = self.field_data.data_offset
     data_size = self.field_data.data_size
-    return self.dataset[:, cmp_idx][off_set : off_set + data_size].reshape(
+    arr = self.dataset[:, cmp_idx][off_set : off_set + data_size].reshape(
         self.field_data.field_box.shape, order="F"
     )
+    return arr[slice]
pyphare/pyphare/pharesee/hierarchy/for_vtk/patch.py (1)
21-22: Incomplete __call__ implementation raises a placeholder error.
This method raises RuntimeError("finish"), which suggests it's unfinished. Either implement the method or use NotImplementedError with a descriptive message.
🐛 Proposed fix
 def __call__(self, qty, **kwargs):
-    raise RuntimeError("finish")
+    raise NotImplementedError("VTK Patch.__call__ not yet implemented")
tests/simulator/test_vtk_diagnostics.py (1)
183-194: Strict tolerance atol=0 may cause flaky tests.
Using atol=0 for MPI size ≤ 2 requires exact binary equality, which can fail due to floating-point non-associativity even in deterministic scenarios. Consider using a small tolerance like 1e-15 instead.
Proposed fix
     # not binary == with more than 2 cores
-    atol = 0 if cpp.mpi_size() <= 2 else 1e-17
+    atol = 1e-15 if cpp.mpi_size() <= 2 else 1e-17
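The underlying point—that exact equality is fragile even for deterministic arithmetic—is easy to reproduce in isolation. The sketch below is generic Python, not PHARE code: it shows that floating-point addition is not associative, which is exactly what a different reduction order across MPI ranks can introduce.

```python
# Regrouping the same three terms changes the last bits of the result,
# so a different summation order can break atol=0 comparisons.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)      # False
print(abs(a - b))  # ~1.1e-16, comfortably within atol=1e-15
```

This is why a tolerance on the order of a few ULPs (e.g. 1e-15 for doubles near 1.0) is a safer floor than exact binary equality.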
362-370: 1D case in _get_rank returns None implicitly.
The pass statement in the 1D branch causes the function to return None, which will break downstream code expecting a tuple of dictionaries like the 2D case returns.
🐛 Proposed fix
 if ndim == 1:
-    pass
-
-elif ndim == 2:
+    raise NotImplementedError("_get_rank not implemented for 1D")
+if ndim == 2:
     data = np.zeros(shape) + int(patch.id.strip("p").split("#")[0])
     pd = reference_pd.copy_as(data, centering=centering)
     return ({"name": "rank", "data": pd},)
426-431: Division by zero when density N is zero.
Lines 426-431 divide by N (particle density) without guarding against zero values. Per retrieved learnings, "velocity calculations must account for per-population densities that can be strictly zero in given cells."
🐛 Proposed fix using safe division
 mass = kwargs["mass"]
+# Guard against zero density
+inv_N = np.divide(mass, N, out=np.zeros_like(N), where=N > 0)
-Pxx = Mxx[:] - Fx * Fx * mass / N
-Pxy = Mxy[:] - Fx * Fy * mass / N
-Pxz = Mxz[:] - Fx * Fz * mass / N
-Pyy = Myy[:] - Fy * Fy * mass / N
-Pyz = Myz[:] - Fy * Fz * mass / N
-Pzz = Mzz[:] - Fz * Fz * mass / N
+Pxx = Mxx[:] - Fx * Fx * inv_N
+Pxy = Mxy[:] - Fx * Fy * inv_N
+Pxz = Mxz[:] - Fx * Fz * inv_N
+Pyy = Myy[:] - Fy * Fy * inv_N
+Pyz = Myz[:] - Fy * Fz * inv_N
+Pzz = Mzz[:] - Fz * Fz * inv_N
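The `where=` pattern used in this fix can be verified in isolation. The sketch below is a standalone example (the density array is hypothetical, not from PHARE): `np.divide` leaves the `out` buffer untouched wherever the mask is False, so zero-density cells yield 0 instead of inf/nan.

```python
import numpy as np

N = np.array([0.0, 2.0, 4.0])  # hypothetical per-cell density with a zero
mass = 2.0
# Cells with N == 0 keep the value from `out` (here 0.0) rather than inf.
inv_N = np.divide(mass, N, out=np.zeros_like(N), where=N > 0)
print(inv_N)  # [0.  1.  0.5]
```

Note that the `out=` buffer must be pre-initialized; `where=` alone leaves masked-out entries undefined if `out` is not supplied.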
68-103: Missing explicit error for unsupported dimensions.
The function implicitly returns None when ndim is neither 1 nor 2. This is inconsistent with _compute_divB (line 129), which raises RuntimeError. Add an explicit error for unsupported dimensions.
🐛 Proposed fix
 return (
     {"name": "Jx", "data": Bx.copy_as(Jx, centering=centering["Jx"])},
     {"name": "Jy", "data": By.copy_as(Jy, centering=centering["Jy"])},
     {"name": "Jz", "data": Bz.copy_as(Jz, centering=centering["Jz"])},
 )
+
+raise RuntimeError("_compute_current: dimension not implemented")
7-16: Verify that hierarchies passed to ScalarField always have single patch_data per patch.The
compute_renamecall withnew_names=("value",)silently takes only the first patch_data if multiple exist. This works correctly for intended use cases (scalar quantity files), but consider adding an assertion to catch unexpected multi-patch-data inputs for easier debugging.pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
188-193: Replace assert with explicit validation for production robustness.
The assert on line 191 is stripped under Python's -O flag, allowing malformed compute output to slip through and cause unclear errors later. Use explicit validation with a descriptive exception.
✅ Proposed fix
 def new_patchdatas_from(compute, patch, **kwargs):
     new_patch_datas = {}
     for data in compute(patch, **kwargs):
-        assert len(data.keys()) == 2
+        if not isinstance(data, dict) or set(data.keys()) != {"name", "data"}:
+            raise ValueError(
+                f"compute() must yield dicts with exactly {{'name', 'data'}} keys, got: {data.keys() if isinstance(data, dict) else type(data)}"
+            )
         new_patch_datas[data["name"]] = data["data"]
     return new_patch_datas
pyphare/pyphare/pharesee/hierarchy/patchdata.py (2)
73-78: Potential infinite recursion and missing atol pass-through in compare.
When types differ, compare delegates to that.compare(self) without passing atol. If both types implement this same delegation pattern, it could recurse indefinitely. Additionally, the tolerance is lost in the delegation.
🐛 Proposed fix
 def compare(self, that, atol=1e-16):
     if type(that).__name__ != type(self).__name__:
-        return that.compare(self)
+        return that.compare(self, atol=atol)
     return self.field_name == that.field_name and phut.fp_any_all_close(
         self.dataset[:], that.dataset[:], atol=atol
     )
163-166: Bug: name variable referenced before assignment.
On line 165, name is used in the conditional expression name if "name" in kwargs else self.field_name, but name is never defined. This will raise a NameError at runtime.
🐛 Proposed fix
 def copy_as(self, data=None, **kwargs):
     data = data if data is not None else self.dataset
-    name = name if "name" in kwargs else self.field_name
+    name = kwargs.get("name", self.field_name)
     return type(self)(self.layout, name, data, **kwargs)
3-3: Remove unused import.The
renameimport frompyphare.pharesee.hierarchy.hierarchy_computeis not used in this file.🧹 Proposed fix
-from pyphare.pharesee.hierarchy.hierarchy_compute import renamepyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (12)
49-50: Fix ValueError message formatting.
Line 49-50 passes multiple args to ValueError, producing a tuple-like message. Use a single formatted string.
Proposed fix
-raise ValueError("ERROR: VTKHDF file does not exist: ", path)
+raise ValueError(f"ERROR: VTKHDF file does not exist: {path}")
79-86: Index boxes with a relative offset.
Line 79-86 writes boxes[bi] using the absolute dataset index. When box_offset > 0, this can throw IndexError. Use a local index (the inner loop variable is renamed to d below so it does not shadow the new outer index).
Proposed fix
-for bi in range(box_offset, box_offset + num_boxes):
+for i, bi in enumerate(range(box_offset, box_offset + num_boxes)):
     box_vals = amr_box_ds[bi]
     lo = np.zeros(dim)
     up = np.zeros(dim)
-    for i, di in enumerate(range(0, dim * 2, 2)):
-        lo[i] = box_vals[di + 0]
-        up[i] = box_vals[di + 1]
+    for d, di in enumerate(range(0, dim * 2, 2)):
+        lo[d] = box_vals[di + 0]
+        up[d] = box_vals[di + 1]
-    boxes[bi] = Box(lo, up)
+    boxes[i] = Box(lo, up)
97-100: Guard against missing time matches.
Line 97-100 assumes np.where finds a match; if not, ret[0][0] raises IndexError. Return None or raise a clear error.
Proposed fix
 ret = np.where(np.isclose(times, float(time), 1e-10))
+if ret[0].size == 0:
+    return None
 return ret[0][0]
116-122: Derive dimension/interp_order from file metadata.
Line 116-122 hardcodes 2/1, which breaks 1D/3D and non-first-order files. Prefer reading from HDF5 metadata (with a fallback). Please verify attribute names in the VTKHDF spec.
Proposed fix (example)
 @property
 def dimension(self):
-    return 2  # inject into file?
+    return len(self.level_spacing(0))

 @property
 def interp_order(self):
-    return 1  # inject into file?
+    level_group = self._get_path_from(level_base_path + "0")
+    return int(level_group.attrs.get("InterpOrder", 1))
136-138: Make has_time a method, not a property.
Line 136-138 declares @property with an argument; calling has_time(time) will raise TypeError.
Proposed fix
-@property
-def has_time(self, time):
+def has_time(self, time):
     return self.time_idx(time) is not None
140-147: Return cached times on cache hit.
Line 143-144 evaluates self._times but doesn't return it, so cached reads return None.
Proposed fix
 if self._times is not None:
-    self._times
+    return self._times
163-167: Index the AMRBox dataset explicitly.
Line 163-167 indexes the Level group with an int key. Use ["AMRBox"] (as in boxes()).
Proposed fix
-box_vals = self.vtk_file.file["VTKHDF"][f"Level{self.ilvl}"][
-    self.box_offset + idx
-]
+amr_box_ds = self.vtk_file.file["VTKHDF"][f"Level{self.ilvl}"]["AMRBox"]
+box_vals = amr_box_ds[self.box_offset + idx]
197-203: Avoid a mutable default for exclude.
Line 197 uses a list default, which is shared across calls. Use None and initialize inside. Based on learnings, please avoid mutable defaults.
Proposed fix
-def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+    if exclude is None:
+        exclude = ["tags"]
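For reference, the pitfall only bites once some callee mutates the shared default; the current code may never do so, making this fix defensive. A self-contained sketch (hypothetical function names, not the PHARE API) of how the shared list leaks state:

```python
def collect(item, exclude=[]):  # BAD: one list object shared by every call
    exclude.append(item)
    return exclude

print(collect("tags"))   # ['tags']
print(collect("ranks"))  # ['tags', 'ranks']  <- state leaked between calls

def collect_fixed(item, exclude=None):
    if exclude is None:  # fresh list per call
        exclude = []
    exclude.append(item)
    return exclude

print(collect_fixed("tags"))   # ['tags']
print(collect_fixed("ranks"))  # ['ranks']
```

Default argument expressions are evaluated once, at function definition time, which is why the same list object survives across calls.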
214-215: Particle files are never detected.
Line 214-215 always returns False, so particle diagnostics will be treated as fields and can crash later. Please confirm whether particle support is intentionally deferred, or implement detection.
218-219: Use suffix-safe removal for .vtkhdf.
Line 218-219 uses .strip(".vtkhdf"), which treats its argument as a character set and removes any of those characters from both ends. Use Path(...).stem or removesuffix.
Proposed fix
 def pop_name(basename):
-    return basename.strip(".vtkhdf").split("_")[2]
+    return Path(basename).stem.split("_")[2]
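The strip-vs-suffix distinction is worth seeing concretely; the filename below is made up for illustration. `str.strip` removes any character from the set `{. v t k h d f}` at either end, so it also eats letters that belong to the stem, while `str.removesuffix` (Python 3.9+) removes exactly one occurrence of the literal suffix:

```python
name = "fluid.vtkhdf"
# strip() drops the leading 'f' and the stem's trailing 'd' along with the extension.
print(name.strip(".vtkhdf"))         # 'lui'
# removesuffix() removes only the exact trailing ".vtkhdf".
print(name.removesuffix(".vtkhdf"))  # 'fluid'
```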
470-472: Call times() when loading all times.
Line 471 iterates over the method object instead of its return value.
Proposed fix
-for t in VtkFile(h5_filename).times:
+for t in VtkFile(h5_filename).times():
     add_time_from_h5(hier, h5_filename, t, **kwargs)
475-475: Replace assert False with an explicit exception.
Line 475 is stripped under python -O, which can hide invalid-branch failures.
Proposed fix
-assert False
+raise AssertionError("Invalid combination of time/hier arguments")
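The claim that -O removes assert statements can be confirmed directly by running a child interpreter both ways; this is a standalone sketch, independent of the PHARE code:

```python
import subprocess
import sys

# `assert False` raises AssertionError (exit code 1) normally, but under -O
# asserts are compiled out, so the optimized child exits cleanly.
normal = subprocess.run([sys.executable, "-c", "assert False"]).returncode
optimized = subprocess.run([sys.executable, "-O", "-c", "assert False"]).returncode
print(normal, optimized)  # 1 0
```

A raised exception (as in the proposed fix) survives -O, which is why it is the safer choice for invariants that must hold in production.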
🧹 Nitpick comments (3)
pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (1)
52-67: Non-standard __eq__ return type may cause issues.
__eq__ returns either True or phut.EqualityCheck(False, ...), but Python's equality protocol expects a boolean. This can cause unexpected behavior in contexts like if obj1 == obj2 or collection membership tests.
Also, all([that.dataset.shape == self.dataset.shape]) wrapping a single comparison in a list is redundant.
♻️ Suggested improvement
 def compare(self, that, atol=1e-16):
     """VTK Diagnostics do not have ghosts values!"""
     try:
         that_data = (
             that[:]
-            if all([that.dataset.shape == self.dataset.shape])
+            if that.dataset.shape == self.dataset.shape
             else that[that.box]
         )
         phut.assert_fp_any_all_close(self.dataset[:], that_data, atol=atol)
         return True
     except AssertionError as e:
         return phut.EqualityCheck(False, str(e))

 def __eq__(self, that):
-    return self.compare(that)
+    result = self.compare(that)
+    return result if result is True else bool(result)
43-68: Gradient computation with dimension limitation.The
_compute_gradfunction correctly computes gradients for 2D cases. The TODO comment about z-direction in 2D is appropriate. Consider tracking this limitation.Would you like me to open an issue to track the 3D gradient implementation?
pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
343-352: Make the aggregated test loop deterministic and callable-safe. Filtering for callables and sorting the names avoids accidental non-callable attributes and stabilizes execution order across Python implementations.
♻️ Proposed refactor
-checks = 0
-for test in [method for method in dir(self) if method.startswith("_test_")]:
-    getattr(self, test)()
-    checks += 1
+checks = 0
+tests = [
+    name
+    for name in dir(self)
+    if name.startswith("_test_") and callable(getattr(self, name))
+]
+for test in sorted(tests):
+    getattr(self, test)()
+    checks += 1
There was a problem hiding this comment.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharesee/hierarchy/for_vtk/patch.py`:
- Around line 9-12: The placeholder self.attrs = {"TO": "DO"} in class
Patch.__init__ is overwriting attrs set by the parent; change it to either
remove the placeholder or merge VTK-specific attributes into the existing attrs
(e.g. read existing attrs via getattr(self, "attrs", {}) and then update/merge
with a vtk_specific_attrs dict) so parent attributes are preserved and only VTK
additions or overrides are applied; update Patch.__init__ in patch.py
accordingly.
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py`:
- Around line 432-436: The ghost-slicing uses any_ghosts but then always slices
pd.x with nbrGhosts[0], which fails when the first dimension has zero ghosts;
update the slice for x to check the first-dimension ghost count explicitly (use
nbrGhosts[0] > 0) so x is sliced only when the first dimension actually has
ghosts; locate the variables pd, nbrGhosts, any_ghosts and the x assignment in
the hierarchy logic and replace the conditional to use nbrGhosts[0] > 0 instead
of any_ghosts.
In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 161-181: The _resolve_ghost_nbr method can return a scalar or
non-array ghosts_nbr which later breaks any(self.ghosts_nbr) in yeeCoordsFor;
update _resolve_ghost_nbr to normalize incoming kwargs["ghosts_nbr"] into a
numpy integer array of length self.ndim (e.g., if scalar or not array-like,
broadcast it to np.full(self.ndim, int(value))), ensure dtype=int and shape
(self.ndim,), then fill per-dimension defaults using
layout.nbrGhosts(layout.interp_order, centering) only when "ghosts_nbr" was not
explicitly given or when elements are zero/unspecified so any(self.ghosts_nbr)
works reliably; keep references to _resolve_ghost_nbr, yeeCoordsFor, ghosts_nbr,
centerings, and layout.nbrGhosts to locate changes.
In `@pyphare/pyphare/pharesee/run/run.py`:
- Around line 270-275: The property default_time currently passes only the
basename to default_time_from which fails when CWD != run dir; update the logic
in default_time to pass the full path from self.available_diags (use
self.available_diags[0] directly instead of splitting to basename) so
default_time_from receives the full filename/path; keep setting and returning
self.default_time_ as before.
- Around line 215-218: _get_any_hierarchy is passing a filename with extension
(e.g., "EM_B.h5") into _get_hier_for which expects a quantity stem, causing
searches for "EM_B.h5.h5"; fix by stripping the extension before calling
_get_hier_for (use os.path.splitext or pathlib.Path(...).stem) so _get_hier_for
receives "EM_B" instead of "EM_B.h5".
♻️ Duplicate comments (20)
pyphare/pyphare/pharesee/phare_vtk/base.py (4)
76-87: Critical: Multiple initialization issues.
- phases=_phases() - At class definition time, _phases is a staticmethod descriptor and calling it raises TypeError. Even if it worked, this creates a shared mutable default.
- time if time on line 87 - Treats 0.0 as falsy, causing incorrect behavior if time=0.0 is a valid timestep.
🐛 Proposed fix
-def __init__(self, filename, time=None, array_name="data", phases=_phases()):
+def __init__(self, filename, time=None, array_name="data", phases=None):
+    if phases is None:
+        phases = self._phases()
     if len(phases) == 0:
         raise RuntimeError("Error: Zero phases!")
     ...
     self.times = all_times_in(self.reader)
-    self.reader.UpdateTimeStep(time if time else self.times[-1])
+    self.reader.UpdateTimeStep(self.times[-1] if time is None else time)
98-100: Guard against a null dataset.
GetDataSet(0, 0) can return None on empty or partial files, causing GetSpacing() to fail.
🐛 Proposed fix
 self.mapper = _in
 self.bounds = bounds_in(self.reader)
-self.spacing = self.reader.GetOutput().GetDataSet(0, 0).GetSpacing()
+ds = self.reader.GetOutput().GetDataSet(0, 0)
+if ds is None:
+    raise RuntimeError("VTK Error: dataset (0,0) not available")
+self.spacing = ds.GetSpacing()
16-27: Critical: @staticmethod at module scope makes functions non-callable.
Decorating module-level functions with @staticmethod wraps them in a staticmethod descriptor, which is not directly callable before Python 3.10 (3.10 made staticmethod objects callable). Calls like phases[i](...) will fail with TypeError: 'staticmethod' object is not callable on older interpreters, and even where they work, the decorator is meaningless outside a class body.
🐛 Proposed fix
-@staticmethod
 def surface_filter(output, **kwargs):
     surface = vtk.vtkDataSetSurfaceFilter()
     surface.SetInputConnection(output.GetOutputPort())
     return PhaseOutput(surface=surface)

-@staticmethod
 def composite_data_geometry_filter(output, **kwargs):
     geom = vtk.vtkCompositeDataGeometryFilter()
     geom.SetInputConnection(output.GetOutputPort())
     return PhaseOutput(geom=geom)
30-39: Add null-safety checks for array lookup.
`GetArray(array_name)` can return `None` if the array doesn't exist, causing `GetRange(0)` to fail. Consider adding a guard or a fallback to cell data.

🐛 Proposed fix

```diff
 def poly_data_mapper(output, **kwargs):
     array_name = kwargs.get("array_name", "data")
     mapper = vtk.vtkPolyDataMapper()
     mapper.SetInputConnection(output.GetOutputPort())
     mapper.SelectColorArray(array_name)
-    mapper.SetScalarModeToUsePointData()
-    mapper.SetScalarRange(
-        output.GetOutput().GetPointData().GetArray(array_name).GetRange(0)
-    )
+    out = output.GetOutput()
+    arr = out.GetPointData().GetArray(array_name) if out else None
+    if arr is not None:
+        mapper.SetScalarModeToUsePointData()
+        mapper.SetScalarRange(arr.GetRange(0))
+    else:
+        raise KeyError(f"Array '{array_name}' not found in point data")
     return PhaseOutput(mapper=mapper)
```

pyphare/pyphare/pharesee/hierarchy/hierarchy_compute.py (1)
84-90: Critical: Lambda captures loop variable by reference, causing incorrect results.

The lambda `lambda ds: _(ds, data[name])` captures `name` by reference. Since this is inside a generator expression, all lambdas will reference the final value of `name` from the loop, leading to incorrect data associations.

🐛 Proposed fix: capture by value using a default argument

```diff
     data = DataAccessor(hinfo, other)
     return tuple(
         {
             "name": name,
-            "data": _compute_copy_do(pd, lambda ds: _(ds, data[name])),
+            "data": _compute_copy_do(pd, lambda ds, n=name: _(ds, data[n])),
         }
         for name, pd in patch.patch_datas.items()
     )
```

pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (1)
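The late-binding behavior behind the `hierarchy_compute.py` lambda comment above can be demonstrated standalone, without any PHARE code:

```python
# All three lambdas close over the *same* variable `n`; by the time any of
# them runs, the generator has finished and n == 2.
buggy = tuple(lambda: n for n in range(3))

# Binding n as a default argument snapshots its value at definition time.
fixed = tuple(lambda n=n: n for n in range(3))

print([f() for f in buggy])  # every lambda sees the final value of n
print([f() for f in fixed])  # each lambda keeps its own value
```

This is why the proposed fix adds `n=name` to the lambda signature: default values are evaluated once, when the lambda is created, not when it is called.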
17-23: `__getitem__` ignores the requested slice.

The method accepts a `slice` parameter but returns the full reshaped array regardless. This can cause incorrect results for callers expecting subset selection.

🐛 Proposed fix

```diff
     def __getitem__(self, slice):
         # todo finish slice/box lookup
         cmp_idx = self.field_data.cmp_idx
         off_set = self.field_data.data_offset
         data_size = self.field_data.data_size
-        return self.dataset[:, cmp_idx][off_set : off_set + data_size].reshape(
+        arr = self.dataset[:, cmp_idx][off_set : off_set + data_size].reshape(
             self.field_data.field_box.shape, order="F"
         )
+        return arr[slice]
```

tests/simulator/test_vtk_diagnostics.py (1)
183-195: Use a small nonzero tolerance for low MPI sizes.

`atol = 0` can make tests flaky due to floating-point non-associativity.

✅ Suggested tweak

```diff
-        atol = 0 if cpp.mpi_size() <= 2 else 1e-17
+        atol = 1e-15 if cpp.mpi_size() <= 2 else 1e-17
```

pyphare/pyphare/pharesee/run/utils.py (3)
99-103: Fix Jx/Jy metadata mapping to match computed shapes.

`_current2d` produces `Jx` with `By`'s shape and `Jy` with `Bx`'s shape, but the current `copy_as` uses the opposite layouts.

🐛 Proposed fix

```diff
     return (
-        {"name": "Jx", "data": Bx.copy_as(Jx, centering=centering["Jx"])},
-        {"name": "Jy", "data": By.copy_as(Jy, centering=centering["Jy"])},
+        {"name": "Jx", "data": By.copy_as(Jx, centering=centering["Jx"])},
+        {"name": "Jy", "data": Bx.copy_as(Jy, centering=centering["Jy"])},
         {"name": "Jz", "data": Bz.copy_as(Jz, centering=centering["Jz"])},
     )
```
369-375: 1D branch in `_get_rank` silently returns None.

`pass` causes an implicit `None`, which breaks downstream expectations. Either implement 1D or raise explicitly.

🐛 Suggested fix (explicitly unsupported)

```diff
     if ndim == 1:
-        pass
+        raise NotImplementedError("_get_rank not implemented for 1D")
```
433-438: Guard population pressure when `N == 0`.

The `mass / N` division can produce inf/NaN. Apply a mask or safe divide and choose a physically meaningful fallback for zero-density cells.

🐛 One possible pattern (choose fill policy per physics)

```diff
-    Pxx = Mxx[:] - Fx * Fx * mass / N
-    Pxy = Mxy[:] - Fx * Fy * mass / N
-    Pxz = Mxz[:] - Fx * Fz * mass / N
-    Pyy = Myy[:] - Fy * Fy * mass / N
-    Pyz = Myz[:] - Fy * Fz * mass / N
-    Pzz = Mzz[:] - Fz * Fz * mass / N
+    mask = N > 0
+    Pxx = Mxx[:].copy()
+    Pxy = Mxy[:].copy()
+    Pxz = Mxz[:].copy()
+    Pyy = Myy[:].copy()
+    Pyz = Myz[:].copy()
+    Pzz = Mzz[:].copy()
+    Pxx[mask] = Mxx[mask] - Fx[mask] * Fx[mask] * mass / N[mask]
+    Pxy[mask] = Mxy[mask] - Fx[mask] * Fy[mask] * mass / N[mask]
+    Pxz[mask] = Mxz[mask] - Fx[mask] * Fz[mask] * mass / N[mask]
+    Pyy[mask] = Myy[mask] - Fy[mask] * Fy[mask] * mass / N[mask]
+    Pyz[mask] = Myz[mask] - Fy[mask] * Fz[mask] * mass / N[mask]
+    Pzz[mask] = Mzz[mask] - Fz[mask] * Fz[mask] * mass / N[mask]
```

Based on learnings, zero-density handling should follow the physics model rather than a blanket numeric fallback.
pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
188-193: Replace `assert` with explicit validation.

The `assert` statement is stripped when Python runs with `-O` optimization, allowing malformed `compute` output to pass through silently.

🐛 Proposed fix

```diff
 def new_patchdatas_from(compute, patch, **kwargs):
     new_patch_datas = {}
     for data in compute(patch, **kwargs):
-        assert len(data.keys()) == 2
-        new_patch_datas[data["name"]] = data["data"]
+        if len(data) != 2 or "name" not in data or "data" not in data:
+            raise ValueError("compute(...) must yield dicts with keys {'name','data'}")
+        new_patch_datas[data["name"]] = data["data"]
     return new_patch_datas
```

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (9)
49-50: Fix `ValueError` message construction.

The `ValueError` is constructed with comma-separated arguments instead of string formatting. This will display as a tuple rather than a formatted message.

🐛 Proposed fix

```diff
     if not Path(path).exists():
-        raise ValueError("ERROR: VTKHDF file does not exist: ", path)
+        raise ValueError(f"ERROR: VTKHDF file does not exist: {path}")
```
77-86: Fix AMR box indexing when `box_offset` > 0.

`boxes` is sized to `num_boxes`, but indexed with the absolute dataset index `bi`. When `box_offset` is non-zero, this causes an `IndexError`. The `VtkPatchLevelInfo.boxes()` method at lines 163-176 handles this correctly.

🐛 Proposed fix

```diff
-        for bi in range(box_offset, box_offset + num_boxes):
+        for i, bi in enumerate(range(box_offset, box_offset + num_boxes)):
             box_vals = amr_box_ds[bi]
             lo = np.zeros(dim)
             up = np.zeros(dim)
-            for i, di in enumerate(range(0, dim * 2, 2)):
-                lo[i] = box_vals[di + 0]
-                up[i] = box_vals[di + 1]
-            boxes[bi] = Box(lo, up)
+            for j, di in enumerate(range(0, dim * 2, 2)):
+                lo[j] = box_vals[di + 0]
+                up[j] = box_vals[di + 1]
+            boxes[i] = Box(lo, up)
```
97-100: Missing bounds check on time lookup result.

If the requested `time` does not exist in the file, `np.where` returns an empty array and `ret[0][0]` will raise an `IndexError`. Add a check and return `None` or raise a descriptive exception.

🐛 Proposed fix

```diff
     def time_idx(self, time):
         times = self.times()
         ret = np.where(np.isclose(times, float(time), 1e-10))
+        if len(ret[0]) == 0:
+            return None
         return ret[0][0]
```
116-122: Hardcoded `dimension` and `interp_order` break multi-dimensional support.

The `dimension` property always returns `2` and `interp_order` always returns `1`, which will cause incorrect behavior for 1D/3D simulations or different interpolation schemes. These values should be read from HDF5 metadata or derived from the data structure (e.g., `len(self.level_spacing())`).

🐛 Proposed fix

```diff
     @property
     def dimension(self):
-        return 2  # inject into file?
+        return len(self.level_spacing())

     @property
     def interp_order(self):
-        return 1  # inject into file?
+        # TODO: Read from HDF5 file attributes when available
+        return 1
```
136-138: Property `has_time` incorrectly accepts an argument.

Properties cannot accept arguments. This will raise a `TypeError` when called as `vtk_file.has_time(time)` and will always return `True` when accessed as `vtk_file.has_time`.

🐛 Proposed fix: convert to a method

```diff
-    @property
-    def has_time(self, time):
+    def has_time(self, time):
         return self.time_idx(time) is not None
```
140-147: Missing `return` statement causes `times()` to re-read on cache hit.

Line 144 evaluates `self._times` but doesn't return it. When the cache is valid, the method falls through and re-reads from the file, defeating the caching purpose.

🐛 Proposed fix

```diff
     def times(self, reset=False):
         if reset:
             self._times = None
         if self._times is not None:
-            self._times
+            return self._times
         ds = self._get_path_from(h5_time_ds_path)
         self._times = ds[:]
         return self._times
```
185-191: Mutable default argument will cause unexpected behavior.

Using a mutable list as a default argument means all calls share the same list instance. Based on learnings, use `None` and initialize inside the function.

🐛 Proposed fix

```diff
-def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+    if exclude is None:
+        exclude = ["tags"]
     time = format_timestamp(time)
```
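The shared-state hazard behind this comment is easy to show without any PHARE code; a minimal sketch with hypothetical function names:

```python
def collect_buggy(item, exclude=["tags"]):
    # the default list is created ONCE, at function definition time
    exclude.append(item)
    return exclude

def collect_fixed(item, exclude=None):
    # a fresh list per call when the caller omits the argument
    if exclude is None:
        exclude = ["tags"]
    exclude.append(item)
    return exclude

print(collect_buggy("a"))  # state starts leaking here
print(collect_buggy("b"))  # second call still sees the first call's append
print(collect_fixed("a"))
print(collect_fixed("b"))  # each call starts from a clean default
```

The bug only bites when callers rely on the default, which is exactly the common case for convenience parameters like `exclude`.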
198-207: `is_particle_file` always returns `False` and `pop_name` uses incorrect suffix removal.

- `is_particle_file` ignores its argument and always returns `False`; this appears to be placeholder code.
- `pop_name` uses `strip(".vtkhdf")`, which removes the individual characters `{'.', 'v', 't', 'k', 'h', 'd', 'f'}` from both ends rather than the suffix. Use `removesuffix()` or `Path.stem`.

🐛 Proposed fix for pop_name

```diff
 def pop_name(basename):
-    return basename.strip(".vtkhdf").split("_")[2]
+    return Path(basename).stem.split("_")[2]
```
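The `strip` vs `removesuffix` distinction called out above can be seen directly; the filename below is illustrative, not an actual PHARE output name:

```python
from pathlib import Path

basename = "EM_B_field.vtkhdf"

# str.strip removes *characters* from both ends, not a suffix string:
# the trailing 'd' of "field" is in the character set and gets eaten too.
print(basename.strip(".vtkhdf"))

# These two remove exactly the extension and nothing more.
print(basename.removesuffix(".vtkhdf"))
print(Path(basename).stem)
```

`str.removesuffix` requires Python 3.9+; `Path.stem` works on any supported version and also handles paths with directories.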
456-461: Bug: `times` is a method, and `assert False` should be replaced.

- Line 457: `VtkFile(h5_filename).times` should be `VtkFile(h5_filename).times()` since `times` is a method, not a property.
- Line 461: `assert False` is removed when Python runs with `-O` optimization. Use `raise AssertionError()` or a more specific exception.

🐛 Proposed fix

```diff
     if load_all_times(time, hier):
-        for t in VtkFile(h5_filename).times:
+        for t in VtkFile(h5_filename).times():
             add_time_from_h5(hier, h5_filename, t, **kwargs)
         return hier
-    assert False
+    raise RuntimeError("Invalid combination of time and hier arguments")
```
🧹 Nitpick comments (7)
pyphare/pyphare/pharesee/hierarchy/hierarchy_compute.py (1)
19-19: Consider adding `strict=True` to zip for safety.

Adding `strict=True` will raise `ValueError` if `new_names` and `patch.patch_datas` have different lengths, catching potential mismatches early.

♻️ Suggested improvement

```diff
-    for new_name, pd_name in zip(new_names, patch.patch_datas):
+    for new_name, pd_name in zip(new_names, patch.patch_datas, strict=True):
```

pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
343-352: Hardcoded test count is fragile.

The assertion `self.assertEqual(checks, 18)` requires manual updates whenever tests are added or removed. Consider computing the expected count dynamically:

♻️ Proposed refactor

```diff
     def test_all(self):
         """
         DO NOT RUN MULTIPLE SIMULATIONS!
         """
+        test_methods = [method for method in dir(self) if method.startswith("_test_")]
         checks = 0
-        for test in [method for method in dir(self) if method.startswith("_test_")]:
+        for test in test_methods:
             getattr(self, test)()
             checks += 1
-        self.assertEqual(checks, 18)  # update if you add new tests
+        self.assertGreater(checks, 0, "No _test_ methods found")
```

pyphare/pyphare/pharesee/hierarchy/fromfunc.py (1)
17-18: Consider adding `strict=True` to `zip()` calls.

While the preceding `assert` statements validate equal lengths, using `strict=True` (Python 3.10+) makes the intent explicit and provides a clearer error if the assertion is removed later.

♻️ Example fix

```diff
-    for i, (mass, density) in enumerate(zip(masses, densities)):
+    for i, (mass, density) in enumerate(zip(masses, densities, strict=True)):
```

Also applies to: 31-32, 62-63, 77-78

pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (1)
pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (1)
55-61: Redundant `all()` around a single comparison.

Wrapping a single boolean comparison in `all([...])` is unnecessary and slightly obscures the intent.

♻️ Proposed simplification

```diff
         try:
             that_data = (
                 that[:]
-                if all([that.dataset.shape == self.dataset.shape])
+                if that.dataset.shape == self.dataset.shape
                 else that[that.box]
             )
```

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (3)
210-210: Unused function argument `patch_idx`.

The `patch_idx` parameter is declared but never used within the function. Either remove it or document its intended future use.

♻️ Proposed fix

```diff
-def add_to_patchdata(vtk_file, lvl_info, patch_idx, patch_datas, basename, layout):
+def add_to_patchdata(vtk_file, lvl_info, patch_datas, basename, layout):
```

Note: If removed, update the call site at line 348-355.
336-341: Consider adding `strict=True` to the `zip()` call.

Adding `strict=True` ensures that `origin`, `patch_box.shape`, and `lvl_cell_width` have equal lengths, catching mismatches early rather than silently truncating.

♻️ Proposed fix

```diff
     pos_upper = [
         orig + shape * dl
-        for orig, shape, dl in zip(origin, patch_box.shape, lvl_cell_width)
+        for orig, shape, dl in zip(origin, patch_box.shape, lvl_cell_width, strict=True)
     ]
```

Note: `strict=True` requires Python 3.10+. Verify the project's minimum Python version supports this.
436-436: Unused function argument `silent`.

The `silent` parameter is declared but never used. Either implement silent-mode behavior (e.g., suppressing log output) or remove the parameter.
```python
class Patch(patch.Patch):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.attrs = {"TO": "DO"}
```
🧩 Analysis chain
🏁 Script executed:
find . -name "patch.py" -path "*/hierarchy/*" | head -10

Repository: PHAREHUB/PHARE
Length of output: 157
🏁 Script executed:
cat -n pyphare/pyphare/pharesee/hierarchy/patch.py

Repository: PHAREHUB/PHARE
Length of output: 2662
Remove or complete the placeholder attrs assignment.
The self.attrs = {"TO": "DO"} is placeholder code that unconditionally overwrites the attrs parameter passed to the parent Patch.__init__(). This loses any attributes set by the parent. Either complete this with proper VTK-specific attributes or preserve parent attrs: self.attrs = {**getattr(self, 'attrs', {}), **vtk_specific_attrs}.
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/hierarchy/for_vtk/patch.py` around lines 9 - 12, The
placeholder self.attrs = {"TO": "DO"} in class Patch.__init__ is overwriting
attrs set by the parent; change it to either remove the placeholder or merge
VTK-specific attributes into the existing attrs (e.g. read existing attrs via
getattr(self, "attrs", {}) and then update/merge with a vtk_specific_attrs dict)
so parent attributes are preserved and only VTK additions or overrides are
applied; update Patch.__init__ in patch.py accordingly.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
204-207: Avoid float bounds in `Box` selection.

Line 206 passes `12.5`, but `Box` coerces to `int`, which silently truncates to `12`. Use integer indices or explicit rounding.

🐛 Proposed fix

```diff
-        box = Box((10, 5), (18, 12.5))
+        box = Box((10, 5), (18, 13))  # or int(np.ceil(...))
```

pyphare/pyphare/pharesee/run/run.py (1)
82-101: Inconsistent dual-format support in `GetPressure` and `GetPi`.

These methods still hardcode `.h5` file extensions for momentum tensor files (lines 83-84, 97), which will fail if only `.vtkhdf` files are present. Consider updating them to use `_get_hier_for` for consistency.

✅ Proposed fix for GetPi

```diff
     def GetPi(self, time, merged=False, interp="nearest", **kwargs):
-        M = self._get_hierarchy(time, "ions_momentum_tensor.h5", **kwargs)
+        M = self._get_hier_for(time, "ions_momentum_tensor", **kwargs)
         massDensity = self.GetMassDensity(time, **kwargs)
         Vi = self._get_hier_for(time, "ions_bulkVelocity", **kwargs)
```
🤖 Fix all issues with AI agents
In `@tests/simulator/test_run.py`:
- Around line 192-199: The call passes all_primal=False to plot() but it should
be passed to GetJ() like the pattern used for GetB; change
run.GetJ(time).plot(...) to run.GetJ(time, all_primal=False).plot(...) and
remove all_primal=False from the plot(...) arguments so GetJ (not plot) receives
that flag; locate the call to GetJ and the subsequent .plot invocation to make
this adjustment.
♻️ Duplicate comments (20)
pyphare/pyphare/pharesee/hierarchy/for_vtk/patch.py (1)
10-12: Preserve parent `attrs` instead of overwriting.

`self.attrs = {"TO": "DO"}` discards attributes set by `patch.Patch.__init__`. Either remove this placeholder or merge VTK-specific keys into the existing attrs.

🔧 Minimal fix

```diff
-        self.attrs = {"TO": "DO"}
+        vtk_specific_attrs = {}
+        self.attrs = {**getattr(self, "attrs", {}), **vtk_specific_attrs}
```

pyphare/pyphare/pharesee/run/utils.py (3)
99-102: Fix Jx/Jy `copy_as` to match computed layouts.

Jx is computed on `By`'s shape while Jy is computed on `Bx`'s shape, but lines 100-101 tag them with the opposite layout. This mislabels centering/shape metadata and can break downstream consumers.

🐛 Proposed fix

```diff
-    return (
-        {"name": "Jx", "data": Bx.copy_as(Jx, centering=centering["Jx"])},
-        {"name": "Jy", "data": By.copy_as(Jy, centering=centering["Jy"])},
-        {"name": "Jz", "data": Bz.copy_as(Jz, centering=centering["Jz"])},
-    )
+    return (
+        {"name": "Jx", "data": By.copy_as(Jx, centering=centering["Jx"])},
+        {"name": "Jy", "data": Bx.copy_as(Jy, centering=centering["Jy"])},
+        {"name": "Jz", "data": Bz.copy_as(Jz, centering=centering["Jz"])},
+    )
```
369-375: Handle 1D `_get_rank` explicitly.

Line 370 still falls through, so the 1D case returns `None` while 2D returns a tuple. This breaks API consistency and is hard to diagnose.

🐛 Proposed fix (match 2D output shape)

```diff
-    if ndim == 1:
-        pass
-
-    elif ndim == 2:
+    if ndim == 1:
+        data = np.zeros(shape) + int(patch.id.strip("p").split("#")[0])
+        pd = reference_pd.copy_as(data, centering=centering)
+        return ({"name": "rank", "data": pd},)
+    elif ndim == 2:
         data = np.zeros(shape) + int(patch.id.strip("p").split("#")[0])
         pd = reference_pd.copy_as(data, centering=centering)
         return ({"name": "rank", "data": pd},)
```
433-438: Guard division by zero in population pressure.

Lines 433-438 divide by `N` without a zero guard, producing inf/NaN for zero-density cells. This needs a physics-consistent handling strategy rather than a silent numerical fix.

🐛 Proposed fix (safe divide scaffold)

```diff
-    Pxx = Mxx[:] - Fx * Fx * mass / N
-    Pxy = Mxy[:] - Fx * Fy * mass / N
-    Pxz = Mxz[:] - Fx * Fz * mass / N
-    Pyy = Myy[:] - Fy * Fy * mass / N
-    Pyz = Myz[:] - Fy * Fz * mass / N
-    Pzz = Mzz[:] - Fz * Fz * mass / N
+    inv = np.divide(mass, N, out=np.zeros_like(N), where=N > 0)
+    Pxx = Mxx[:] - Fx * Fx * inv
+    Pxy = Mxy[:] - Fx * Fy * inv
+    Pxz = Mxz[:] - Fx * Fz * inv
+    Pyy = Myy[:] - Fy * Fy * inv
+    Pyz = Myz[:] - Fy * Fz * inv
+    Pzz = Mzz[:] - Fz * Fz * inv
```

Based on learnings, the N==0 behavior should be aligned with the intended physics, not just a numerical fallback.
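The `np.divide(..., out=..., where=...)` idiom used in the scaffold above is worth seeing in isolation; a minimal sketch with made-up values:

```python
import numpy as np

N = np.array([0.0, 2.0, 4.0])  # density with an empty cell
mass = 1.0

# A plain `mass / N` would warn and yield inf at N == 0. With `where=`,
# the division only runs where the condition holds; elsewhere the `out`
# buffer (pre-filled with zeros) is left untouched.
inv = np.divide(mass, N, out=np.zeros_like(N), where=N > 0)
print(inv)  # zero-density cell maps to 0.0, others to mass / N
```

Whether 0.0 is the right fill value for empty cells is exactly the physics question the review raises; the idiom only guarantees finite output.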
pyphare/pyphare/pharesee/phare_vtk/base.py (4)
16-27: Blocker: `@staticmethod` at module scope makes these functions non-callable.

These module-level functions are decorated with `@staticmethod`, which wraps them in a `staticmethod` descriptor object. Before Python 3.10 this makes them uncallable: `surface_filter(...)` will raise `TypeError: 'staticmethod' object is not callable`. Remove the `@staticmethod` decorators.

🐛 Proposed fix

```diff
-@staticmethod
 def surface_filter(output, **_):
     surface = vtk.vtkDataSetSurfaceFilter()
     surface.SetInputConnection(output.GetOutputPort())
     return PhaseOutput(surface=surface)

-@staticmethod
 def composite_data_geometry_filter(output, **_):
     geom = vtk.vtkCompositeDataGeometryFilter()
     geom.SetInputConnection(output.GetOutputPort())
     return PhaseOutput(geom=geom)
```
30-39: Guard against `None` from `GetArray()` to prevent a runtime crash.

`GetArray(array_name)` returns `None` when the array is missing. Calling `.GetRange(0)` on `None` will raise `AttributeError`. Add a guard or fall back to cell data.

🐛 Proposed fix

```diff
 def poly_data_mapper(output, **kwargs):
     array_name = kwargs.get("array_name", "data")
     mapper = vtk.vtkPolyDataMapper()
     mapper.SetInputConnection(output.GetOutputPort())
     mapper.SelectColorArray(array_name)
-    mapper.SetScalarModeToUsePointData()
-    mapper.SetScalarRange(
-        output.GetOutput().GetPointData().GetArray(array_name).GetRange(0)
-    )
+    out = output.GetOutput()
+    arr = out.GetPointData().GetArray(array_name) if out else None
+    if arr is not None:
+        mapper.SetScalarModeToUsePointData()
+        mapper.SetScalarRange(arr.GetRange(0))
+    elif out and out.GetCellData().GetArray(array_name):
+        arr = out.GetCellData().GetArray(array_name)
+        mapper.SetScalarModeToUseCellData()
+        mapper.SetScalarRange(arr.GetRange(0))
+    else:
+        raise KeyError(f"Array '{array_name}' not found in point or cell data")
     return PhaseOutput(mapper=mapper)
```
71-88: Multiple issues: mutable default, staticmethod call, and falsy-zero bug.

- `phases=_phases()`: `_phases` is a `@staticmethod`; calling it directly at class definition time fails. Even if it worked, a mutable list as default leads to shared state.
- `time if time else self.times[-1]`: treats `time=0.0` as falsy, incorrectly defaulting to the last timestep.

🐛 Proposed fix

```diff
-    def __init__(self, filename, time=None, array_name="data", phases=_phases()):
+    def __init__(self, filename, time=None, array_name="data", phases=None):
+        if phases is None:
+            phases = self._phases()
         if len(phases) == 0:
             raise RuntimeError("Error: Zero phases!")
         self.array_name = array_name
         self.reader = vtk.vtkHDFReader()
         self.reader.SetFileName(filename)
         self.reader.Update()
         self.reader.UpdateInformation()
         self.times = all_times_in(self.reader)
-        self.reader.UpdateTimeStep(time if time else self.times[-1])
+        self.reader.UpdateTimeStep(self.times[-1] if time is None else time)
```

Based on learnings, mutable default arguments should be replaced with `None` and initialized within the function.
89-101: Guard `GetDataSet(0, 0)` and simplify the loop.

`GetDataSet(0, 0)` can return `None` on empty/partial files, causing `.GetSpacing()` to crash. Also, prefer direct iteration over `for i in range(len(phases))`.

🐛 Proposed fix

```diff
         _in = self.reader
-        for i in range(len(phases)):
-            ret = phases[i](_in, array_name=array_name)
+        for phase in phases:
+            ret = phase(_in, array_name=array_name)
             for key, val in ret:
                 if hasattr(val, "Update"):
                     val.Update()
                 setattr(self, key, val)
                 _in = val
         self.mapper = _in
         self.bounds = bounds_in(self.reader)
-        self.spacing = self.reader.GetOutput().GetDataSet(0, 0).GetSpacing()
+        ds00 = self.reader.GetOutput().GetDataSet(0, 0)
+        if ds00 is None:
+            raise RuntimeError("VTK Error: missing dataset (level=0, index=0)")
+        self.spacing = ds00.GetSpacing()
```

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (8)
67-87: Fix AMR box indexing when `AMRBoxOffset` > 0.

Line 79 indexes `boxes[bi]` with the absolute dataset index, which will go out of bounds when `box_offset` is non-zero. Use a local index.

🐛 Proposed fix

```diff
-        for bi in range(box_offset, box_offset + num_boxes):
+        for i, bi in enumerate(range(box_offset, box_offset + num_boxes)):
             box_vals = amr_box_ds[bi]
             lo = np.zeros(dim)
             up = np.zeros(dim)
             for i, di in enumerate(range(0, dim * 2, 2)):
                 lo[i] = box_vals[di + 0]
                 up[i] = box_vals[di + 1]
-            boxes[bi] = Box(lo, up)
+            boxes[i] = Box(lo, up)
```
97-100: Handle missing time values gracefully.

Line 100 assumes a match; `np.where` can return empty, leading to `IndexError`.

🐛 Proposed fix

```diff
     def time_idx(self, time):
         times = self.times()
         ret = np.where(np.isclose(times, float(time), 1e-10))
+        if ret[0].size == 0:
+            return None
         return ret[0][0]
```
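The empty-match guard proposed here can be exercised standalone; this sketch mirrors the lookup pattern with a free function instead of the class method:

```python
import numpy as np

def time_idx(times, time):
    # np.where on an all-False mask returns an empty index array;
    # ret[0][0] would then raise IndexError without the size check.
    hits = np.where(np.isclose(times, float(time), 1e-10))
    if hits[0].size == 0:
        return None
    return int(hits[0][0])

times = np.array([0.0, 0.1, 0.2])
print(time_idx(times, 0.1))  # index of the matching timestamp
print(time_idx(times, 0.5))  # None: timestamp absent from the file
```

Note the positional third argument of `np.isclose` is `rtol`; callers wanting an absolute tolerance for timestamps near zero should pass `atol=` explicitly.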
116-122: Derive dimension/interp_order from file metadata.

Hardcoding `dimension=2` and `interp_order=1` will mishandle 1D/3D or non-default interpolation.

🐛 Proposed fix

```diff
     @property
     def dimension(self):
-        return 2  # inject into file?
+        spacing = self.level_spacing(0)
+        return len(spacing)

     @property
     def interp_order(self):
-        return 1  # inject into file?
+        level_group = self._get_path_from(level_base_path + "0")
+        return level_group.attrs.get("InterpOrder", 1)
```
136-138: `@property` with parameters will raise `TypeError`.

`has_time` must be a method, not a property.

🐛 Proposed fix

```diff
-    @property
-    def has_time(self, time):
+    def has_time(self, time):
         return self.time_idx(time) is not None
```
140-147: Return cached times on cache hit.

Line 144 evaluates `self._times` but doesn't return it, so the cache is never used.

🐛 Proposed fix

```diff
     def times(self, reset=False):
         if reset:
             self._times = None
         if self._times is not None:
-            self._times
+            return self._times
         ds = self._get_path_from(h5_time_ds_path)
         self._times = ds[:]
         return self._times
```
185-191: Avoid a mutable default for `exclude`.

Using a list as a default argument shares state across calls. Based on learnings, avoid mutable defaults.

🐛 Proposed fix

```diff
-def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+    if exclude is None:
+        exclude = ["tags"]
     time = format_timestamp(time)
```
202-207: Fix filename parsing and clarify particle support.

`strip(".vtkhdf")` removes characters, not the suffix, and `is_particle_file` is a silent stub. Consider implementing `is_particle_file` detection or raising a clear `NotImplementedError` to avoid silent misclassification.

🐛 Proposed fix (suffix handling)

```diff
 def pop_name(basename):
-    return basename.strip(".vtkhdf").split("_")[2]
+    return Path(basename).stem.split("_")[2]
```
456-461: Fix the `times()` call and replace `assert False`.

Line 457 treats `times` as an attribute; it's a method. Also `assert False` can be stripped with `-O`.

🐛 Proposed fix

```diff
     if load_all_times(time, hier):
-        for t in VtkFile(h5_filename).times:
+        for t in VtkFile(h5_filename).times():
             add_time_from_h5(hier, h5_filename, t, **kwargs)
         return hier
-    assert False
+    raise RuntimeError("Invalid combination of time and hier arguments")
```

pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (1)
17-23: Apply the requested slice in `VtkFieldDatasetAccessor.__getitem__`.

Right now the slice argument is ignored, so callers receive the full array even when a subset is requested.

🩹 Proposed fix

```diff
     def __getitem__(self, slice):
         # todo finish slice/box lookup
         cmp_idx = self.field_data.cmp_idx
         off_set = self.field_data.data_offset
         data_size = self.field_data.data_size
-        return self.dataset[:, cmp_idx][off_set : off_set + data_size].reshape(
-            self.field_data.field_box.shape, order="F"
-        )
+        arr = self.dataset[:, cmp_idx][off_set : off_set + data_size].reshape(
+            self.field_data.field_box.shape, order="F"
+        )
+        return arr[slice]
```

pyphare/pyphare/pharesee/hierarchy/patchdata.py (1)
174-181: Normalize scalar `ghosts_nbr` to avoid an `any(int)` `TypeError`.

If callers pass a scalar, `_resolve_ghost_nbr` returns it unchanged and `any(self.ghosts_nbr)` will fail.

🩹 Proposed fix

```diff
     def _resolve_ghost_nbr(self, **kwargs):
         layout = self.layout
         ghosts_nbr = kwargs.get("ghosts_nbr", np.zeros(self.ndim, dtype=int))
+        if phut.is_scalar(ghosts_nbr):
+            ghosts_nbr = np.full(self.ndim, ghosts_nbr, dtype=int)
         if "ghosts_nbr" not in kwargs:
             if self.field_name != "tags":
                 for i, centering in enumerate(self.centerings):
                     ghosts_nbr[i] = layout.nbrGhosts(layout.interp_order, centering)
         return ghosts_nbr
```

pyphare/pyphare/pharesee/hierarchy/hierarchy_compute.py (1)
79-90: Bug: Lambda captures loop variable by reference, causing incorrect results.

The lambda on line 87 captures `name` by reference. Since this is inside a generator expression consumed by `tuple()`, all lambdas will use the final value of `name` from the loop, causing incorrect data associations.

🐛 Proposed fix: capture by value using a default argument

```diff
 def _compute_copy_op(patch, op, hinfo, other, reverse=False):
     def _(a, b):
         return op(b, a) if reverse else op(a, b)

     data = DataAccessor(hinfo, other)
     return tuple(
         {
             "name": name,
-            "data": _compute_copy_do(pd, lambda ds: _(ds, data[name])),
+            "data": _compute_copy_do(pd, lambda ds, n=name: _(ds, data[n])),
         }
         for name, pd in patch.patch_datas.items()
     )
```

pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
188-193: Replace the `assert` with explicit validation.

`assert` is stripped under `-O`, so malformed `compute` output can slip through and fail later with unclear errors.

✅ Proposed fix

```diff
 def new_patchdatas_from(compute, patch, **kwargs):
     new_patch_datas = {}
     for data in compute(patch, **kwargs):
-        assert len(data.keys()) == 2
+        if len(data) != 2 or "name" not in data or "data" not in data:
+            raise ValueError("compute(...) must yield dicts with keys {'name','data'}")
         new_patch_datas[data["name"]] = data["data"]
     return new_patch_datas
```
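The validation pattern proposed in this comment can be checked in isolation; the sketch below simplifies the function to take the compute output directly instead of a patch and callback:

```python
def new_patchdatas_from(compute_output):
    # Explicit validation survives `python -O`, unlike `assert`.
    out = {}
    for data in compute_output:
        if len(data) != 2 or "name" not in data or "data" not in data:
            raise ValueError(
                "compute(...) must yield dicts with keys {'name','data'}"
            )
        out[data["name"]] = data["data"]
    return out

print(new_patchdatas_from([{"name": "rho", "data": [1, 2]}]))
try:
    new_patchdatas_from([{"name": "rho"}])  # missing "data" key
except ValueError as e:
    print("rejected:", e)
```

Unlike the bare length assertion, this also rejects a dict of the right size but wrong keys, e.g. `{"name": ..., "dat": ...}`.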
🧹 Nitpick comments (5)
pyphare/pyphare/pharesee/hierarchy/patchlevel.py (1)
25-27: Consider guarding empty patch levels for clearer errors.

`cell_width` assumes `self.patches[0]` exists; if an empty `PatchLevel` is possible, this will raise an `IndexError` with limited context. Consider validating in `__init__` or raising a clearer error here.

♻️ Possible guard in property

```diff
     @property
     def cell_width(self):
-        return self.patches[0].layout.dl
+        if not self.patches:
+            raise ValueError("PatchLevel has no patches")
+        return self.patches[0].layout.dl
```

tests/simulator/test_vtk_diagnostics.py (2)
23-24: Consider expanding the test matrix.Currently limited to 2D and interp_order=1. The comments indicate 3D is TODO and 1D is not supported. Consider tracking this as a follow-up task.
Do you want me to open an issue to track adding 3D support to the VTK diagnostics tests?
134-152: Consider using `assertEqual` for clearer failure messages.

Line 148 uses `assertTrue(len(simulation.cells) == ndim)`, which produces a generic failure message. Using `assertEqual` provides better diagnostics.

♻️ Proposed fix

```diff
-        self.assertTrue(len(simulation.cells) == ndim)
+        self.assertEqual(len(simulation.cells), ndim)
```

pyphare/pyphare/pharesee/run/run.py (2)
- self.assertTrue(len(simulation.cells) == ndim) + self.assertEqual(len(simulation.cells), ndim)pyphare/pyphare/pharesee/run/run.py (2)
251-255: Consider using `next()` for first-element access.

Using `next(iter(...))` is more idiomatic than indexing a list comprehension.

♻️ Suggested refactor

```diff
         # assumes all qties in the hierarchy have the same ghost width
         # so take the first patch data of the first patch of the first level....
-        nbrGhosts = list(hierarchy.level(0).patches[0].patch_datas.values())[
-            0
-        ].ghosts_nbr
+        nbrGhosts = next(
+            iter(hierarchy.level(0).patches[0].patch_datas.values())
+        ).ghosts_nbr
```
40-42: `GetTags` uses a hardcoded `.h5` extension.

For consistency with the other Get* methods, consider using `_get_hier_for(time, "tags", **kwargs)` if tags are also written in VTKHDF format.
```python
run.GetJ(time).plot(
    all_primal=False,
    filename=plot_file_for_qty(plot_dir, "jz", time),
    qty="z",
    plot_patches=True,
    vmin=-2,
    vmax=2,
)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find GetJ and GetB method definitions
echo "=== Searching for GetJ and GetB method definitions ==="
rg "def GetJ|def GetB" --type py -A 3
echo ""
echo "=== Searching for plot method definition ==="
rg "def plot" --type py -A 3 | head -50
echo ""
echo "=== All usages of all_primal in the codebase ==="
rg "all_primal" --type py -B 2 -A 2

Repository: PHAREHUB/PHARE
Length of output: 9702
🏁 Script executed:
#!/bin/bash
# Find plot method definition on field objects
# Look in the hierarchy file where Field classes are defined
echo "=== Searching for plot method in hierarchy classes ==="
rg "def plot\(" pyphare/pyphare/pharesee/hierarchy/hierarchy.py -A 5
echo ""
echo "=== Check file context around test_run.py line 188 (GetB call) ==="
sed -n '185,195p' tests/simulator/test_run.py

Repository: PHAREHUB/PHARE
Length of output: 733
Move all_primal=False from .plot() call to GetJ() method call.
On line 192, all_primal=False is incorrectly passed to .plot(). Following the pattern on line 188 where run.GetB(time, all_primal=False) is used, this parameter should be passed to GetJ() instead:
Suggested fix:
- run.GetJ(time).plot(
- all_primal=False,
+ run.GetJ(time, all_primal=False).plot(
filename=plot_file_for_qty(plot_dir, "jz", time),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```python
run.GetJ(time).plot(
    all_primal=False,
    filename=plot_file_for_qty(plot_dir, "jz", time),
    qty="z",
    plot_patches=True,
    vmin=-2,
    vmax=2,
)
```

After:

```python
run.GetJ(time, all_primal=False).plot(
    filename=plot_file_for_qty(plot_dir, "jz", time),
    qty="z",
    plot_patches=True,
    vmin=-2,
    vmax=2,
)
```
🤖 Prompt for AI Agents
In `@tests/simulator/test_run.py` around lines 192 - 199, The call passes
all_primal=False to plot() but it should be passed to GetJ() like the pattern
used for GetB; change run.GetJ(time).plot(...) to run.GetJ(time,
all_primal=False).plot(...) and remove all_primal=False from the plot(...)
arguments so GetJ (not plot) receives that flag; locate the call to GetJ and the
subsequent .plot invocation to make this adjustment.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py`:
- Around line 72-79: In copy_as, the expression "data if data is not None else
self.dataset," is a no-op so data remains None and the VtkFieldDatasetAccessor
branch never executes; replace that expression with an assignment that sets data
to self.dataset when None (e.g., data = self.dataset if data is None else data)
and consider using isinstance(data, VtkFieldDatasetAccessor) instead of
type(...) in the conditional so the accessor rewrap path (the block constructing
type(self)(self.layout, self.field_name, data, data.cmp_idx, data.offset,
**kwargs)) actually runs when given a VtkFieldDatasetAccessor; keep the fallback
to super().copy_as(data, **kwargs).
- Around line 25-27: The shape property dereferences a non-existent attribute
self.field_data; replace that reference with the actual attribute that holds the
field_box (e.g. self.field_box) so the property returns the real box shape —
update def shape to return self.field_box.shape (or the correct attribute name
used elsewhere in the class that stores field_box).
♻️ Duplicate comments (14)
pyphare/pyphare/pharesee/hierarchy/patchdata.py (1)
176-183: Normalize scalar `ghosts_nbr` to avoid `any(int)` crashes. If callers pass a scalar, `any(self.ghosts_nbr)` in `yeeCoordsFor` will raise `TypeError`. Normalize scalars to per-dimension arrays.
🛠️ Proposed fix

  def _resolve_ghost_nbr(self, **kwargs):
      layout = self.layout
      ghosts_nbr = kwargs.get("ghosts_nbr", np.zeros(self.ndim, dtype=int))
+     if np.isscalar(ghosts_nbr):
+         ghosts_nbr = np.full(self.ndim, ghosts_nbr, dtype=int)
      if "ghosts_nbr" not in kwargs:
          if self.field_name != "tags":
              for i, centering in enumerate(self.centerings):
                  ghosts_nbr[i] = layout.nbrGhosts(layout.interp_order, centering)
      return ghosts_nbr

pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (1)
19-23: Apply the requested slice in `VtkFieldDatasetAccessor.__getitem__`. Currently every slice returns the full array, breaking selection semantics.
🐛 Proposed fix

  def __getitem__(self, slice):
      # todo finish slice/box lookup
-     return self.dataset[:, self.cmp_idx][
-         self.offset : self.offset + self.box.size()
-     ].reshape(self.box.shape, order="F")
+     arr = self.dataset[:, self.cmp_idx][
+         self.offset : self.offset + self.box.size()
+     ].reshape(self.box.shape, order="F")
+     return arr[slice]

pyphare/pyphare/pharesee/run/utils.py (2)
372-378: 1D `_get_rank` still returns `None`. This silently breaks downstream code expecting a tuple. Prefer a real implementation or an explicit error.
🛠️ Proposed fix (explicit error)

  if ndim == 1:
-     pass
+     raise NotImplementedError("_get_rank not implemented for 1D")
436-441: Guard against division by zero in population pressure. `N` can be zero, which yields NaN/inf and propagates into diagnostics. Use a masked division (or physics-appropriate fallback) before computing `Pxx`..`Pzz`. Based on learnings, the zero-density handling should align with the physical meaning for empty cells rather than a naive numeric fix.
🛠️ Proposed fix (safe division)

- Pxx = Mxx[:] - Fx * Fx * mass / N
- Pxy = Mxy[:] - Fx * Fy * mass / N
- Pxz = Mxz[:] - Fx * Fz * mass / N
- Pyy = Myy[:] - Fy * Fy * mass / N
- Pyz = Myz[:] - Fy * Fz * mass / N
- Pzz = Mzz[:] - Fz * Fz * mass / N
+ inv = np.divide(mass, N, out=np.zeros_like(N), where=N > 0)
+ Pxx = Mxx[:] - Fx * Fx * inv
+ Pxy = Mxy[:] - Fx * Fy * inv
+ Pxz = Mxz[:] - Fx * Fz * inv
+ Pyy = Myy[:] - Fy * Fy * inv
+ Pyz = Myz[:] - Fy * Fz * inv
+ Pzz = Mzz[:] - Fz * Fz * inv

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (10)
67-87: `level_boxes` can index out of range when `AMRBoxOffset` > 0. You allocate `boxes` with `num_boxes` but write using absolute `bi`.
🛠️ Proposed fix

- for bi in range(box_offset, box_offset + num_boxes):
+ for i, bi in enumerate(range(box_offset, box_offset + num_boxes)):
      box_vals = amr_box_ds[bi]
      lo = np.zeros(dim)
      up = np.zeros(dim)
      for i, di in enumerate(range(0, dim * 2, 2)):
          lo[i] = box_vals[di + 0]
          up[i] = box_vals[di + 1]
-     boxes[bi] = Box(lo, up)
+     boxes[i] = Box(lo, up)

97-100: Missing guard when a requested time is absent. `ret[0][0]` will raise if the time isn't found.
🛠️ Proposed fix

  def time_idx(self, time):
      times = self.times()
      ret = np.where(np.isclose(times, float(time), 1e-10))
+     if len(ret[0]) == 0:
+         return None
      return ret[0][0]
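The failure mode is easy to reproduce outside the codebase: `np.where` on a no-match condition returns an empty index array, so indexing `[0][0]` raises `IndexError`. A standalone sketch of the guarded lookup (the `times` values are illustrative):

```python
import numpy as np

times = np.array([0.0, 0.5, 1.0])

def time_idx(time):
    # np.isclose with a tight relative tolerance mirrors the reviewed lookup
    ret = np.where(np.isclose(times, float(time), 1e-10))
    if len(ret[0]) == 0:  # guard: requested time absent
        return None
    return ret[0][0]

print(time_idx(0.5))  # 1
print(time_idx(2.0))  # None
```

Without the guard, `time_idx(2.0)` would raise `IndexError` instead of returning `None`, which is what `has_time` relies on.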
116-122: `dimension` and `interp_order` are hardcoded. This breaks 1D/3D or non-order-1 runs; read from file metadata instead.
🛠️ Proposed direction (confirm attribute names in VTKHDF spec)

  @property
  def dimension(self):
-     return 2  # inject into file?
+     return len(self.level_spacing(0))

  @property
  def interp_order(self):
-     return 1  # inject into file?
+     return int(self.file["VTKHDF"].attrs.get("InterpOrder", 1))

136-138: `has_time` cannot be a property with a parameter. This will raise a `TypeError` when called.
🛠️ Proposed fix

- @property
- def has_time(self, time):
+ def has_time(self, time):
      return self.time_idx(time) is not None
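Why the original fails: a `@property` getter is invoked with only `self`, so mere attribute access fires the call and the extra `time` parameter is never supplied. A minimal reproduction with a toy stand-in class (not the real `VtkFile`):

```python
class VtkFileSketch:
    """Toy stand-in for the reviewed class."""

    @property
    def has_time(self, time):  # bug: property getters take no extra arguments
        return time is not None

f = VtkFileSketch()
try:
    f.has_time  # the getter fires on attribute access, missing `time`
except TypeError as e:
    print("TypeError raised, as the review predicts")

# Fix: drop @property so callers write f.has_time(0.5)
```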
140-147: Cached `times()` path returns `None`. The cache-hit branch never returns the cached value.
🛠️ Proposed fix

  def times(self, reset=False):
      if reset:
          self._times = None
      if self._times is not None:
-         self._times
+         return self._times
      ds = self._get_path_from(h5_time_ds_path)
      self._times = ds[:]
      return self._times

185-191: Mutable default `exclude=["tags"]` risks cross-call leakage. Use `None` and initialize inside the function. Based on learnings, mutable default arguments should be avoided.
🛠️ Proposed fix

- def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+ def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+     if exclude is None:
+         exclude = ["tags"]
      time = format_timestamp(time)
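The pitfall behind this fix is that default values are evaluated once, at function definition, so every call shares the same list object. A self-contained demonstration (function names are illustrative):

```python
def bad(item, exclude=[]):       # one list object shared across all calls
    exclude.append(item)
    return exclude

def good(item, exclude=None):    # the recommended None-sentinel idiom
    if exclude is None:
        exclude = []             # fresh list per call
    exclude.append(item)
    return exclude

print(bad("tags"), bad("rank"))    # both names point at the same leaked list
print(good("tags"), good("rank"))  # each call starts from an empty list
```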
202-203: `is_particle_file` always returns `False`. If particle files are intentionally unsupported, consider an explicit error or docstring to avoid silent misdetection.

206-207: `.strip(".vtkhdf")` removes characters, not the suffix. This can corrupt names. Use `Path(...).stem` or `removesuffix`.
🛠️ Proposed fix

  def pop_name(basename):
-     return basename.strip(".vtkhdf").split("_")[2]
+     return Path(basename).stem.split("_")[2]
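`str.strip` treats its argument as a set of characters, not a suffix, so it can also eat leading characters of the stem. A quick demonstration of the corruption and both suggested fixes (the filename is made up for the example):

```python
from pathlib import Path

name = "fluid_density.vtkhdf"

# strip removes any of the characters {., v, t, k, h, d, f} from both ends:
# the leading "f" of "fluid" is eaten along with the extension.
print(name.strip(".vtkhdf"))         # 'luid_density' -- corrupted stem

# Either of these removes exactly the suffix:
print(Path(name).stem)               # 'fluid_density'
print(name.removesuffix(".vtkhdf"))  # 'fluid_density' (Python 3.9+)
```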
433-438: `times` is a method, and `assert False` should be an exception. This path will fail to iterate and can be stripped under `-O`.
🛠️ Proposed fix

  if load_all_times(time, hier):
-     for t in VtkFile(h5_filename).times:
+     for t in VtkFile(h5_filename).times():
          add_time_from_h5(hier, h5_filename, t, **kwargs)
      return hier
- assert False
+ raise RuntimeError("Invalid combination of time and hier arguments")
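Why `assert False` is not a safe sentinel: CPython compiles asserts away under `python -O`, so the "unreachable" marker silently becomes a fall-through that returns `None`. A sketch of the hardened shape, with hypothetical stand-ins for the real branch conditions:

```python
def load(time, hier):
    # hypothetical stand-ins for the real branch logic
    if time is not None and hier is None:
        return "new-hierarchy"
    if time is None and hier is not None:
        return "updated-hierarchy"
    # Unlike `assert False`, this survives `python -O`:
    raise RuntimeError("Invalid combination of time and hier arguments")
```

Under `python -O`, the `assert False` version would instead fall off the end of the function and hand callers a `None` to crash on later.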
49-50: `ValueError` message builds a tuple instead of a string. The comma-separated args render poorly and hide the path.
🛠️ Proposed fix

- if not Path(path).exists():
-     raise ValueError("ERROR: VTKHDF file does not exist: ", path)
+ if not Path(path).exists():
+     raise ValueError(f"ERROR: VTKHDF file does not exist: {path}")
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharesee/run/run.py`:
- Around line 194-201: The _available_diags method currently prefers .h5
silently; change it to detect both patterns (e.g., collect files_h5 =
glob.glob(os.path.join(self.path, "*.h5")) and files_vtkhdf =
glob.glob(os.path.join(self.path, "*.vtkhdf"))) and if both lists are non-empty
raise a clear RuntimeError indicating ambiguous formats and instructing the
caller to specify a format (suggest adding a parameter/attribute like
diag_format or format with allowed values 'h5'|'vtkhdf'), or alternatively
implement using that diag_format to return the chosen list; update
_available_diags and any constructor/usage to honor the new selector so the code
no longer silently picks .h5 when both exist.
♻️ Duplicate comments (13)
pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (1)
25-27: Bug: `shape` property references non-existent attribute `self.field_data`. The `VtkFieldDatasetAccessor` class only defines `self.box`, `self.cmp_idx`, `self.offset`, and `self.dataset` in `__init__`. The `shape` property attempts to access `self.field_data.field_box.shape`, which will raise an `AttributeError` at runtime.
🐛 Proposed fix

  @property
  def shape(self):
-     return self.field_data.field_box.shape
+     return self.box.shape

pyphare/pyphare/pharesee/run/utils.py (3)
372-374: Implement or explicitly reject 1D in `_get_rank`. The `pass` causes an implicit `None` return, which breaks the tuple-of-dicts contract used by the 2D path.
🔧 Suggested fix

  if ndim == 1:
-     pass
+     data = np.zeros(shape) + int(patch.id.strip("p").split("#")[0])
+     pd = reference_pd.copy_as(data, centering=centering)
+     return ({"name": "rank", "data": pd},)

436-441: Guard population pressure against zero density. `mass / N` will produce inf/NaN when `N == 0`. The handling should reflect the intended physics for empty cells.
🔧 Suggested fix (choose fallback per physics)

- Pxx = Mxx[:] - Fx * Fx * mass / N
- Pxy = Mxy[:] - Fx * Fy * mass / N
- Pxz = Mxz[:] - Fx * Fz * mass / N
- Pyy = Myy[:] - Fy * Fy * mass / N
- Pyz = Myz[:] - Fy * Fz * mass / N
- Pzz = Mzz[:] - Fz * Fz * mass / N
+ with np.errstate(divide="ignore", invalid="ignore"):
+     inv = np.divide(mass, N, out=np.full_like(N, np.nan, dtype=float), where=N > 0)
+
+ Pxx = Mxx[:] - Fx * Fx * inv
+ Pxy = Mxy[:] - Fx * Fy * inv
+ Pxz = Mxz[:] - Fx * Fz * inv
+ Pyy = Myy[:] - Fy * Fy * inv
+ Pyz = Myz[:] - Fy * Fz * inv
+ Pzz = Mzz[:] - Fz * Fz * inv

Based on learnings, the zero-density behavior should be defined by the physics requirements, not just a numerical fallback.
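The masked-division idiom can be sanity-checked in isolation. A minimal sketch using a zero fallback for empty cells (the review's variant above fills with NaN; the `out=`/`where=` mechanics are identical, and the array values here are illustrative, not from PHARE):

```python
import numpy as np

# Density N may contain zeros; plain mass / N would emit inf/NaN warnings there.
N = np.array([0.0, 2.0, 4.0])
mass = 1.0

# where=N > 0 skips zero-density cells; out= supplies their fallback value.
inv = np.divide(mass, N, out=np.zeros_like(N), where=N > 0)

print(inv)  # zero-density cell maps to 0.0 instead of inf
```

Cells where `N == 0` never execute the division, so no runtime warning fires and the downstream pressure terms stay finite.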
73-80: Use list centerings for 1D current outputs. `copy_as` elsewhere expects per-dimension centerings; passing `"primal"` risks being treated as an iterable of characters or inconsistent metadata. Use a list sized to `ndim` for consistency.
🔧 Suggested fix

- return (
-     {"name": "Jy", "data": By.copy_as(Jy, centering="primal")},
-     {"name": "Jz", "data": Bz.copy_as(Jz, centering="primal")},
- )
+ return (
+     {"name": "Jy", "data": By.copy_as(Jy, centering=["primal"] * ndim)},
+     {"name": "Jz", "data": Bz.copy_as(Jz, centering=["primal"] * ndim)},
+ )

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (4)
97-100: Guard `time_idx` when time is not found. `np.where` can return an empty result, which makes `ret[0][0]` raise `IndexError`.
🐛 Proposed fix

  def time_idx(self, time):
      times = self.times()
      ret = np.where(np.isclose(times, float(time), 1e-10))
+     if ret[0].size == 0:
+         return None
      return ret[0][0]

177-183: Avoid mutable default for `exclude`. A shared list across calls can lead to surprising behavior. Based on learnings, this should default to `None`.
🐛 Proposed fix

- def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+ def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+     if exclude is None:
+         exclude = ["tags"]
      time = format_timestamp(time)

198-199: `pop_name` ignores the input basename and will fail. `Path(".vtkhdf")` is constant, so `split("_")[2]` will almost always raise `IndexError`.
🐛 Proposed fix

  def pop_name(basename):
-     return Path(".vtkhdf").stem.split("_")[2]
+     return Path(basename).stem.split("_")[2]

425-430: Call `times()` and replace `assert False`. `times` is a method, and `assert False` is removed with `-O`.
🐛 Proposed fix

  if load_all_times(time, hier):
-     for t in VtkFile(h5_filename).times:
+     for t in VtkFile(h5_filename).times():
          add_time_from_h5(hier, h5_filename, t, **kwargs)
      return hier
- assert False
+ raise RuntimeError("Invalid combination of time and hier arguments")

src/diagnostic/detail/vtkh5_writer.hpp (2)
15-17: Particle diagnostics are currently dropped. Mapping `"particle"` to `NullTypeWriter` means particle diagnostics are silently ignored. If particle diagnostics are expected in VTKHDF, wire the real writer (and friend/include); otherwise, consider rejecting the type early with a clear error.
Suggested fix

+ #include "diagnostic/detail/vtk_types/particle.hpp"
  ...
- {"particle", make_writer<NullTypeWriter>()} //
+ {"particle", make_writer<ParticlesDiagnosticWriter<This>>()} //
  ...
  friend class H5TypeWriter<This>;
  friend class FluidDiagnosticWriter<This>;
  friend class ElectromagDiagnosticWriter<This>;
+ friend class ParticlesDiagnosticWriter<This>;

Also applies to: 127-133, 151-156

142-145: Guard against empty diagnostic names in `fileString`. Accessing `fileStr[0]` is undefined for empty strings.
Suggested fix

- if (fileStr[0] == '/')
+ if (!fileStr.empty() && fileStr[0] == '/')
      fileStr = fileStr.substr(1);

264-282: Guard Steps metadata updates to a single MPI rank. NSteps and Steps/Values are updated without a rank guard, which can corrupt metadata under MPI. Wrap those blocks with a rank-0 guard (or use collective HDF5 calls if required).
Suggested fix

- {
+ if (core::mpi::rank() == 0) {
      PHARE_LOG_SCOPE(3, "VTKFileInitializer::VTKFileInitializer::0");
      h5file.create_resizable_1d_data_set<FloatType>(base + "/Steps/Values");
      auto steps_group = h5file.file().getGroup(base + "/Steps");
      if (!steps_group.hasAttribute("NSteps"))
          steps_group.template createAttribute<int>("NSteps", HighFive::DataSpace::From(0))
              .write(0);
      auto steps_attr = steps_group.getAttribute("NSteps");
      steps_attr.write(steps_attr.template read<int>() + 1);
  }
- {
+ if (core::mpi::rank() == 0) {
      PHARE_LOG_SCOPE(3, "VTKFileInitializer::VTKFileInitializer::1");
      auto const& timestamp = typewriter->h5Writer_.timestamp();
      auto ds = h5file.getDataSet(base + "/Steps/Values");
      auto const old_size = ds.getDimensions()[0];
      ds.resize({old_size + 1});
      ds.select({old_size}, {1}).write(timestamp);
  }

364-368: Use 64-bit offsets for dataset indices. Offsets are stored as `int` but are derived from `std::size_t`; large runs can overflow. Prefer `std::int64_t` datasets and cast the values on write.
Suggested fix

- h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/AMRBoxOffset");
- h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/NumberOfAMRBox");
- h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/PointDataOffset/data");
+ h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/AMRBoxOffset");
+ h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/NumberOfAMRBox");
+ h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/PointDataOffset/data");
  ...
- ds.select({old_size}, {1}).write(data_offset);
+ ds.select({old_size}, {1}).write(static_cast<std::int64_t>(data_offset));
  ...
- ds.select({old_size}, {1}).write(box_offset);
+ ds.select({old_size}, {1}).write(static_cast<std::int64_t>(box_offset));

Also applies to: 399-405, 440-445
tests/simulator/test_run.py (1)
245-250: Avoid exact float equality for timestamps. Exact equality can be flaky due to serialization/rounding. Prefer `np.allclose` (or `np.testing.assert_allclose`) for stability.
Suggested fix

- self.assertTrue(all(run.times("EM_B") == timestamps))
+ self.assertTrue(np.allclose(run.times("EM_B"), timestamps))
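Exact `==` on floats can fail whenever a value is rebuilt by accumulation rather than read back verbatim, while `np.allclose` tolerates last-bit differences. A small demonstration (the timestamp values are illustrative):

```python
import numpy as np

timestamps = np.array([0.1, 0.2, 0.3])
# Simulate a timestamp recomputed by summation instead of stored verbatim
rebuilt = np.array([0.1, 0.2, 0.1 + 0.2])

print(all(rebuilt == timestamps))        # False: 0.1 + 0.2 != 0.3 exactly
print(np.allclose(rebuilt, timestamps))  # True: within default tolerances
```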
🧹 Nitpick comments (2)
pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (2)
34-36: Consider using `isinstance()` for type checking. Using `type(data) is X` won't match subclasses. If strict type matching is intentional, this is fine. Otherwise, `isinstance` is more Pythonic and flexible.
♻️ Suggested refactor

  data_t = type(data)
- if not (data_t is h5py.Dataset or data_t is VtkFieldDatasetAccessor):
+ if not isinstance(data, (h5py.Dataset, VtkFieldDatasetAccessor)):
      raise RuntimeError("VtkFieldData only handles vtkhdf datasets")
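The difference matters as soon as subclasses appear: `type(x) is Base` rejects instances of subclasses, while `isinstance` accepts them. A minimal illustration (the class names are made up for the example, standing in for `h5py.Dataset` and a derived type):

```python
class Dataset:                   # stand-in for h5py.Dataset
    pass

class ChunkedDataset(Dataset):   # hypothetical subclass
    pass

d = ChunkedDataset()

print(type(d) is Dataset)        # False: exact-type check rejects the subclass
print(isinstance(d, Dataset))    # True: subclasses are accepted
```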
53-57: Redundant `all()` wrapper on a single-element list. `that.dataset.shape == self.dataset.shape` compares tuples and returns a boolean directly. Wrapping it in `all([...])` adds no value.
♻️ Suggested simplification

  that_data = (
      that[:]
-     if all([that.dataset.shape == self.dataset.shape])
+     if that.dataset.shape == self.dataset.shape
      else that[that.box]
  )
def _available_diags(self):
    files = glob.glob(os.path.join(self.path, "*.h5"))
    if files:
        return files
    files = glob.glob(os.path.join(self.path, "*.vtkhdf"))
    if files:
        return files
    raise RuntimeError(f"No HDF5 files found at: {self.path}")
Disambiguate when both .h5 and .vtkhdf exist.
Right now .h5 wins silently if both formats are present, which can read stale data after format switches. Consider raising an explicit error (or adding a format selector) to avoid ambiguity.
Possible tweak
- files = glob.glob(os.path.join(self.path, "*.h5"))
- if files:
- return files
- files = glob.glob(os.path.join(self.path, "*.vtkhdf"))
- if files:
- return files
+ h5_files = glob.glob(os.path.join(self.path, "*.h5"))
+ vtk_files = glob.glob(os.path.join(self.path, "*.vtkhdf"))
+ if h5_files and vtk_files:
+ raise RuntimeError("Both .h5 and .vtkhdf found; please disambiguate the run directory")
+ if h5_files:
+ return h5_files
+ if vtk_files:
+ return vtk_files
raise RuntimeError(f"No HDF5 files found at: {self.path}")🧰 Tools
🪛 Ruff (0.14.13)
201-201: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/run/run.py` around lines 194 - 201, The
_available_diags method currently prefers .h5 silently; change it to detect both
patterns (e.g., collect files_h5 = glob.glob(os.path.join(self.path, "*.h5"))
and files_vtkhdf = glob.glob(os.path.join(self.path, "*.vtkhdf"))) and if both
lists are non-empty raise a clear RuntimeError indicating ambiguous formats and
instructing the caller to specify a format (suggest adding a parameter/attribute
like diag_format or format with allowed values 'h5'|'vtkhdf'), or alternatively
implement using that diag_format to return the chosen list; update
_available_diags and any constructor/usage to honor the new selector so the code
no longer silently picks .h5 when both exist.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharesee/hierarchy/fromh5.py`:
- Around line 114-117: The ValueError message in the duplicate check uses the
wrong variable name: when checking "if pdata_name in patch_datas" the exception
still references dataset_name; update the exception in the block around the "if
pdata_name in patch_datas" check so it uses pdata_name (or otherwise the actual
key used) in the error string to accurately report the conflicting key
(affecting the patch_datas assignment of pdata).
In `@pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py`:
- Around line 198-199: In pop_name, replace the hardcoded Path(".vtkhdf").stem
usage with the provided basename argument so you call
Path(basename).stem.split("_")[2] (and add a simple guard to raise a clear
ValueError if the split has fewer than 3 parts) to correctly extract the third
underscore-separated token from the given basename.
In `@src/diagnostic/detail/vtkh5_type_writer.hpp`:
- Around line 264-282: The NSteps attribute increment and the timestamp dataset
write in VTKFileInitializer (inside the block using h5file, steps_group,
steps_attr and the block using ds and typewriter->h5Writer_.timestamp()) must be
executed only on rank 0 to avoid parallel metadata corruption; wrap the first
block that creates/updates the "NSteps" attribute and the second block that
resizes/writes the Steps/Values dataset in an if (core::mpi::rank() == 0) guard
so only rank 0 performs steps_group.template createAttribute<int>("NSteps",
...), steps_attr.write(...), ds.resize(...), and
ds.select(...).write(timestamp), leaving other ranks to skip these metadata
operations.
♻️ Duplicate comments (9)
src/diagnostic/detail/vtkh5_type_writer.hpp (1)
364-367: Consider 64-bit types for offset datasets. These offset datasets (`AMRBoxOffset`, `NumberOfAMRBox`, `PointDataOffset/data`) are created with `int` type. For large-scale simulations, 64-bit types would prevent overflow. This aligns with the recommendation from a previous review.

src/diagnostic/detail/vtkh5_writer.hpp (2)
127-133: Particle diagnostics mapped to NullTypeWriter. The `"particle"` type is mapped to `NullTypeWriter`, which means particle diagnostics will be silently logged but not written. A previous review flagged that `ParticlesDiagnosticWriter` exists and should be wired here.

142-148: Empty string access not guarded. `fileStr[0]` is accessed without checking whether the string is empty, which could cause undefined behavior. A previous review flagged this.

pyphare/pyphare/pharesee/run/run.py (1)
194-201: Ambiguous format selection when both `.h5` and `.vtkhdf` exist. When both formats are present, `.h5` wins silently, which can read stale data. Consider raising an error or adding a format selector parameter.
🔧 Possible improvement

  def _available_diags(self):
-     files = glob.glob(os.path.join(self.path, "*.h5"))
-     if files:
-         return files
-     files = glob.glob(os.path.join(self.path, "*.vtkhdf"))
-     if files:
-         return files
+     h5_files = glob.glob(os.path.join(self.path, "*.h5"))
+     vtk_files = glob.glob(os.path.join(self.path, "*.vtkhdf"))
+     if h5_files and vtk_files:
+         raise RuntimeError("Both .h5 and .vtkhdf found; specify format explicitly")
+     if h5_files:
+         return h5_files
+     if vtk_files:
+         return vtk_files
      raise RuntimeError(f"No HDF5 files found at: {self.path}")

pyphare/pyphare/pharesee/run/utils.py (1)
372-380: 1D case in `_get_rank` still returns `None` implicitly. The `pass` statement in the 1D branch causes the function to return `None`, breaking downstream code expecting a tuple of dictionaries. Either implement 1D support or raise `NotImplementedError`.
🐛 Proposed fix

  if ndim == 1:
-     pass
+     raise NotImplementedError("_get_rank not implemented for 1D")
  elif ndim == 2:

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (4)
97-100: Missing bounds check on time lookup can raise IndexError. If the requested `time` is not found, `np.where` returns an empty array and `ret[0][0]` will raise `IndexError`. Return `None` or raise a descriptive error.
🐛 Proposed fix

  def time_idx(self, time):
      times = self.times()
      ret = np.where(np.isclose(times, float(time), 1e-10))
+     if len(ret[0]) == 0:
+         return None
      return ret[0][0]

177-183: Mutable default argument `exclude=["tags"]` is a Python gotcha. Using a mutable list as default means all calls share the same instance. Use `None` and initialize within the function.
🐛 Proposed fix

- def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+ def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+     if exclude is None:
+         exclude = ["tags"]
      time = format_timestamp(time)

425-428: Bug: `times` is a method but called as a property. Line 426 accesses `VtkFile(h5_filename).times` without parentheses. Since `times()` is a method (line 134), this returns the method object instead of the times array, causing the loop to iterate over nothing useful.
🐛 Proposed fix

  if load_all_times(time, hier):
-     for t in VtkFile(h5_filename).times:
+     for t in VtkFile(h5_filename).times():
          add_time_from_h5(hier, h5_filename, t, **kwargs)
      return hier

430-430: Replace `assert False` with an explicit exception. `assert` statements are removed with Python's `-O` flag, making this a silent no-op in optimized mode.
🐛 Proposed fix

- assert False
+ raise AssertionError("Invalid combination of time and hier arguments")
🧹 Nitpick comments (6)
src/diagnostic/detail/vtkh5_type_writer.hpp (2)
100-103: Consider using larger integer types for data sizes. `level_rank_data_size`, `n_boxes_per_level`, and `level_data_size` are declared as `int` or `vector<int>`. In large-scale simulations, these values could overflow. Consider using `std::int64_t` for consistency with HDF5 best practices, as noted in previous reviews regarding offset types.
♻️ Suggested change

- std::vector<std::vector<int>> level_rank_data_size;
- std::vector<int> n_boxes_per_level, level_data_size;
+ std::vector<std::vector<std::int64_t>> level_rank_data_size;
+ std::vector<std::int64_t> n_boxes_per_level, level_data_size;

327-331: Add a comment explaining HighFive select() parameters. Per previous feedback, the `select({data_offset, 0}, {size, 1})` call is unclear to readers unfamiliar with HighFive. Consider adding a brief comment.
📝 Suggested comment

  for (std::uint16_t i = 0; i < HierData::X_TIMES; ++i)
  {
+     // select(start_position, count) - write 'size' rows at column 0 starting at data_offset
      ds.select({data_offset, 0}, {size, 1}).write_raw(frimal.data());
      data_offset += size;
  }

pyphare/pyphare/pharesee/hierarchy/patchdata.py (1)
185-206: Consider using a custom exception class for validation errors. The static analysis hints flag the long exception messages at lines 193-195 and 204-206. While functional, extracting these to a dedicated exception class (e.g., `CenteringValidationError`) would improve maintainability and allow consistent error handling across the codebase.

pyphare/pyphare/pharesee/hierarchy/for_vtk/patchdata.py (1)
49-61: Clarify the shape comparison logic in `compare`. The condition `all([that.dataset.shape == self.dataset.shape])` wraps a single boolean in a list, making `all()` redundant. Additionally, the fallback `that[that.box]` assumes `that` has a `box` attribute, which may not always be true for arbitrary FieldData.
🔧 Suggested simplification

  def compare(self, that, atol=1e-16):
      """VTK Diagnostics do not have ghosts values!"""
      try:
          that_data = (
              that[:]
-             if all([that.dataset.shape == self.dataset.shape])
+             if that.dataset.shape == self.dataset.shape
              else that[that.box]
          )
          phut.assert_fp_any_all_close(self.dataset[:], that_data, atol=atol)
          return True
      except AssertionError as e:
          return phut.EqualityCheck(False, str(e))

pyphare/pyphare/pharesee/run/run.py (1)
250-252: Prefer `next(iter(...))` over a single-element slice. The static analysis suggests using `next(iter(...))` for better clarity when extracting the first element.
♻️ Suggested refactor

- nbrGhosts = list(hierarchy.level(0).patches[0].patch_datas.values())[
-     0
- ].ghosts_nbr
+ nbrGhosts = next(
+     iter(hierarchy.level(0).patches[0].patch_datas.values())
+ ).ghosts_nbr

tests/simulator/test_run.py (1)
278-281: Consider adding VTK-HDF-specific assertions. The `test_run_pharevtkhdf` test only runs `_test_any_format` without VTK-specific verification. As VTK-HDF support matures, consider adding format-specific checks (e.g., verifying VTK file structure or primal data layout).
if pdata_name in patch_datas:
    raise ValueError("error - {} already in patchdata".format(dataset_name))

patch_datas[pdata_name] = pdata
Error message references wrong variable name.
The duplicate check now uses pdata_name as the key (line 114), but the error message still references dataset_name. This will confuse debugging.
🐛 Proposed fix
if pdata_name in patch_datas:
- raise ValueError("error - {} already in patchdata".format(dataset_name))
+ raise ValueError("error - {} already in patchdata".format(pdata_name))

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  if pdata_name in patch_datas:
-     raise ValueError("error - {} already in patchdata".format(dataset_name))
+     raise ValueError("error - {} already in patchdata".format(pdata_name))
  patch_datas[pdata_name] = pdata
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/hierarchy/fromh5.py` around lines 114 - 117, The
ValueError message in the duplicate check uses the wrong variable name: when
checking "if pdata_name in patch_datas" the exception still references
dataset_name; update the exception in the block around the "if pdata_name in
patch_datas" check so it uses pdata_name (or otherwise the actual key used) in
the error string to accurately report the conflicting key (affecting the
patch_datas assignment of pdata).
def pop_name(basename):
    return Path(".vtkhdf").stem.split("_")[2]
Bug: pop_name ignores basename argument and returns incorrect value.
The function uses Path(".vtkhdf").stem which always returns "" (empty string), then tries to split and index. This will fail or return wrong results.
🐛 Proposed fix
def pop_name(basename):
- return Path(".vtkhdf").stem.split("_")[2]
+ return Path(basename).stem.split("_")[2]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  def pop_name(basename):
-     return Path(".vtkhdf").stem.split("_")[2]
+     return Path(basename).stem.split("_")[2]
🧰 Tools
🪛 Ruff (0.14.13)
198-198: Unused function argument: basename
(ARG001)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py` around lines 198 - 199, In
pop_name, replace the hardcoded Path(".vtkhdf").stem usage with the provided
basename argument so you call Path(basename).stem.split("_")[2] (and add a
simple guard to raise a clear ValueError if the split has fewer than 3 parts) to
correctly extract the third underscore-separated token from the given basename.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharein/diagnostics.py`:
- Around line 64-67: The timestamp filtering has an off-by-one: when detecting
timestamps < init_time, filter should keep timestamps >= init_time instead of
using >, so change the filtering expression that references timestamps to use >=
init_time; also fix the typo in the warning message (remove the extra closing
parenthesis) and keep the message referencing init_time and timestamps variables
for clarity.
In `@pyphare/pyphare/pharein/simulation.py`:
- Around line 163-172: The code calculates total_time and then divides by
time-step counts without validating total_time; add a guard after computing
total_time (using start_time and kwargs["final_time"]) to raise a clear
ValueError or adjust behavior when total_time <= 0 or total_time <
kwargs.get("time_step", 0): if total_time <= 0 raise an error indicating
restart_time >= final_time, and before computing time_step_nbr in the
final_and_dt branch ensure total_time >= kwargs["time_step"] (or compute
time_step_nbr = max(1, int(total_time / kwargs["time_step"]))) so division by
zero cannot occur; apply the same validation/adjustment for final_and_nsteps
branch to ensure time_step_nbr > 0 and time_step is positive.
In `@src/amr/resources_manager/amr_utils.hpp`:
- Around line 283-313: The onLevels(hierarchy, onLevel, orMissing, minlvl,
maxlvl) overload currently emits missing levels starting at hier_levels which
can be below minlvl; change the start of the missing-level loop to start =
std::max(minlvl, hier_levels) so that the second loop becomes for (std::size_t
ilvl = std::max(minlvl, hier_levels); ilvl <= maxlvl; ++ilvl) and keep the first
loop as for (std::size_t ilvl = minlvl; ilvl < hier_levels; ++ilvl)
onLevel(*hierarchy.getPatchLevel(ilvl)); to ensure missing callbacks respect
minlvl.
In `@src/amr/wrappers/hierarchy.hpp`:
- Around line 217-221: The code only checks max_nbr_levels >= 1 but must also
enforce an upper bound to prevent out-of-bounds access to fixed-size per-level
arrays; update the validation in the block that reads
dict["simulation"]["AMR"]["max_nbr_levels"] (where max_nbr_levels is set and
maxLevel_ is assigned) to also ensure max_nbr_levels <= amr::MAX_LEVEL and throw
a descriptive std::runtime_error if violated, then keep assigning maxLevel_ =
max_nbr_levels - 1.
In `@src/diagnostic/detail/vtk_types/fluid.hpp`:
- Around line 203-207: The MhdFluidWriter::operator()(auto const& layout)
currently unconditionally throws which causes runtime crashes for MHD models;
replace the throw with a graceful behavior: detect that the MHD layout is
unsupported, call the diagnostic logger (or processLogger) to emit a clear
warning including layout/model identifying info, and return early so the writer
skips MHD layouts (or alternatively set a failing configuration flag earlier if
you prefer a hard fail at startup); update
FluidDiagnosticWriter<H5Writer>::MhdFluidWriter::operator() to log the condition
and return without writing to avoid partial file corruption.
♻️ Duplicate comments (10)
src/diagnostic/diagnostic_model_view.hpp (1)
229-231: Consider renaming temporary fields for clarity.

As you noted in a previous comment, these fields are now used as general temporary storage, not just for summation. Consider renaming from `PHARE_sumField`/`PHARE_sumVec`/`PHARE_sumTensor` to `PHARE_tmpField`/`PHARE_tmpVec`/`PHARE_tmpTensor` to better reflect their purpose.

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (4)
198-199: Bug: `pop_name` ignores `basename` argument and will fail.

The function uses `Path(".vtkhdf").stem` which returns an empty string, then tries to split and index at `[2]`. This will raise `IndexError`.

🐛 Proposed fix

```diff
 def pop_name(basename):
-    return Path(".vtkhdf").stem.split("_")[2]
+    return Path(basename).stem.split("_")[2]
```
97-100: Missing bounds check causes `IndexError` and breaks `has_time`.

If the requested `time` doesn't exist, `np.where` returns an empty array and `ret[0][0]` raises `IndexError`. The `has_time` method (line 131-132) expects `time_idx` to return `None` for missing times, but it never will.

🐛 Proposed fix

```diff
 def time_idx(self, time):
     times = self.times()
     ret = np.where(np.isclose(times, float(time), 1e-10))
+    if len(ret[0]) == 0:
+        return None
     return ret[0][0]
```
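The failure mode is easy to reproduce outside the codebase. A minimal standalone sketch (the array values are illustrative, not taken from the file format):

```python
import numpy as np

def time_idx(times, time):
    # np.where returns a tuple of index arrays; it is empty when nothing matches
    ret = np.where(np.isclose(times, float(time), 1e-10))
    if len(ret[0]) == 0:
        return None  # lets callers like has_time() test for absence
    return ret[0][0]

times = np.array([0.0, 0.5, 1.0])
print(time_idx(times, 0.5))   # index of the matching time
print(time_idx(times, 2.0))   # None instead of IndexError
```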
426-431: Bug: `times` is a method, not a property; also replace `assert False`.

Line 427: `VtkFile(h5_filename).times` should be `VtkFile(h5_filename).times()` since `times` is a method.

Line 431: `assert False` is removed when Python runs with `-O` optimization. Use an explicit exception.

🐛 Proposed fix

```diff
 if load_all_times(time, hier):
-    for t in VtkFile(h5_filename).times:
+    for t in VtkFile(h5_filename).times():
         add_time_from_h5(hier, h5_filename, t, **kwargs)
     return hier
-assert False
+raise RuntimeError("Invalid combination of time and hier arguments")
```
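The `-O` behavior behind this comment can be shown without spawning a subprocess, by compiling the same statement at the two optimization levels (a standalone illustration, unrelated to the PHARE code itself):

```python
# compile() accepts the same optimize levels as the -O interpreter flag
plain = compile("assert False, 'boom'", "<demo>", "exec", optimize=0)
optimized = compile("assert False, 'boom'", "<demo>", "exec", optimize=1)

try:
    exec(plain)
except AssertionError as e:
    print("optimize=0 raised:", e)   # boom

exec(optimized)                      # no exception: the assert was compiled away
print("optimize=1 raised nothing")
```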
177-177: Mutable default argument is a Python gotcha.

Using a mutable list as default argument means all calls share the same list instance. Based on learnings from this repository, replace with `None` and initialize inside the function.

🐛 Proposed fix

```diff
-def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+    if exclude is None:
+        exclude = ["tags"]
     time = format_timestamp(time)
```

src/amr/resources_manager/amr_utils.hpp (1)
259-279: Dereference the level when forwarding from hierarchy.

If `getPatchLevel` returns a pointer-like type (as in SAMRAI/PHARE wrappers), forwarding it directly won't satisfy the overload that expects a level object; this mirrors the earlier review note on this file. Please confirm the return type and dereference as needed.

🔧 Proposed fix

```diff
-        return boxesPerRankOn<dim>(hierarchy.getPatchLevel(ilvl));
+        return boxesPerRankOn<dim>(*hierarchy.getPatchLevel(ilvl));
```

Run this to confirm the `getPatchLevel` return type:

```bash
#!/bin/bash
# Inspect getPatchLevel declarations/definitions
rg -n -C2 '\bgetPatchLevel\s*\('
```

tests/simulator/test_vtk_diagnostics.py (1)
212-213: Avoid exact equality (atol=0) to reduce flakiness.

Line 213 forces exact floating-point equality for MPI size <= 2. This can still be flaky due to non-associativity. Consider a tiny nonzero tolerance.

🐛 Proposed fix

```diff
-    atol = 0 if cpp.mpi_size() <= 2 else 1e-17
+    atol = 1e-15 if cpp.mpi_size() <= 2 else 1e-17
```
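The non-associativity this tolerance guards against is a property of IEEE-754 summation order, which MPI reductions do not guarantee. A standalone illustration:

```python
import numpy as np

a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c      # one reduction order
right = a + (b + c)     # another reduction order

print(left == right)        # False: the bit patterns differ
print(abs(left - right))    # ~1.1e-16, about one ulp at this magnitude
print(np.isclose(left, right, rtol=0.0, atol=1e-15))  # True with a tiny atol
```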
src/diagnostic/detail/vtkh5_type_writer.hpp (2)

269-287: Guard NSteps and Steps/Values updates to a single rank.

Lines 269-287 update `NSteps` and append `Steps/Values` on all ranks. In parallel HDF5, this will race and duplicate metadata. Restrict the increment/append to rank 0.

🐛 Proposed fix

```diff
 {
     PHARE_LOG_SCOPE(3, "VTKFileInitializer::VTKFileInitializer::0");
     h5file.create_resizable_1d_data_set<FloatType>(base + "/Steps/Values");
-    auto steps_group = h5file.file().getGroup(base + "/Steps");
-    if (!steps_group.hasAttribute("NSteps"))
-        steps_group.template createAttribute<int>("NSteps", HighFive::DataSpace::From(0))
-            .write(0);
-    auto steps_attr = steps_group.getAttribute("NSteps");
-    steps_attr.write(steps_attr.template read<int>() + 1);
+    if (core::mpi::rank() == 0) {
+        auto steps_group = h5file.file().getGroup(base + "/Steps");
+        if (!steps_group.hasAttribute("NSteps"))
+            steps_group.template createAttribute<int>("NSteps", HighFive::DataSpace::From(0))
+                .write(0);
+        auto steps_attr = steps_group.getAttribute("NSteps");
+        steps_attr.write(steps_attr.template read<int>() + 1);
+    }
 }
 {
     PHARE_LOG_SCOPE(3, "VTKFileInitializer::VTKFileInitializer::1");
-    auto const& timestamp = typewriter->h5Writer_.timestamp();
-    auto ds = h5file.getDataSet(base + "/Steps/Values");
-    auto const old_size = ds.getDimensions()[0];
-    ds.resize({old_size + 1});
-    ds.select({old_size}, {1}).write(timestamp);
+    if (core::mpi::rank() == 0) {
+        auto const& timestamp = typewriter->h5Writer_.timestamp();
+        auto ds = h5file.getDataSet(base + "/Steps/Values");
+        auto const old_size = ds.getDimensions()[0];
+        ds.resize({old_size + 1});
+        ds.select({old_size}, {1}).write(timestamp);
+    }
 }
```
369-372: Use 64-bit offsets for AMR/PointData offsets to avoid overflow.

Offsets are written from `std::size_t` into `int` datasets (Lines 369-372, 404-410, 435-449). Large simulations can overflow 32-bit offsets, corrupting file structure. Use `std::int64_t` datasets and cast writes.

🐛 Proposed fix

```diff
+#include <cstdint>
 ...
-    h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/AMRBoxOffset");
-    h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/NumberOfAMRBox");
-    h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/PointDataOffset/data");
+    h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/AMRBoxOffset");
+    h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/NumberOfAMRBox");
+    h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/PointDataOffset/data");
 ...
-    ds.select({old_size}, {1}).write(data_offset);
+    ds.select({old_size}, {1}).write(static_cast<std::int64_t>(data_offset));
 ...
-    ds.select({old_size}, {1}).write(total_boxes);
+    ds.select({old_size}, {1}).write(static_cast<std::int64_t>(total_boxes));
 ...
-    ds.select({old_size}, {1}).write(box_offset);
+    ds.select({old_size}, {1}).write(static_cast<std::int64_t>(box_offset));
```

Also applies to: 404-410, 435-449
src/diagnostic/detail/vtkh5_writer.hpp (1)
146-151: Guard against empty diagnostic quantity in `fileString`.

Line 148 accesses `fileStr[0]` without checking for an empty string, which is UB if an empty quantity slips through.

🐛 Proposed fix

```diff
 static std::string fileString(std::string fileStr)
 {
-    if (fileStr[0] == '/')
+    if (!fileStr.empty() && fileStr[0] == '/')
         fileStr = fileStr.substr(1);
     std::replace(fileStr.begin(), fileStr.end(), '/', '_');
     return fileStr + ".vtkhdf";
 }
```
🧹 Nitpick comments (3)
src/diagnostic/diagnostic_model_view.hpp (1)
154-164: Remove or enable commented-out code.

This commented-out `localLevelBoxes` function was also flagged by static analysis. Keeping dead code in comments reduces readability. Consider removing it entirely or enabling it if it's needed for the VTK-HDF diagnostics. Based on learnings, you prefer minimal code.

pyphare/pyphare/pharein/simulation.py (1)
731-734: Prefer warnings/logging over print for fallback defaults.

Using `warnings.warn` (or logging) is the idiomatic approach and can respect `strict` if desired.

♻️ Suggested change

```diff
+import warnings
 ...
-    if "max_nbr_levels" not in kwargs:
-        print("WARNING, 'max_nbr_levels' is not set, defaulting to 1")
+    if "max_nbr_levels" not in kwargs:
+        warnings.warn(
+            "'max_nbr_levels' is not set; defaulting to 1",
+            RuntimeWarning,
+            stacklevel=2,
+        )
     kwargs["max_nbr_levels"] = kwargs.get("max_nbr_levels", 1)
```

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (1)
305-310: Consider adding `strict=True` to `zip()` for safety.

This ensures the iterables have equal length and prevents silent data truncation if there's a mismatch.

♻️ Proposed fix

```diff
 pos_upper = [
     orig + shape * dl
-    for orig, shape, dl in zip(origin, patch_box.shape, lvl_cell_width)
+    for orig, shape, dl in zip(origin, patch_box.shape, lvl_cell_width, strict=True)
 ]
```
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharein/simulation.py`:
- Around line 179-185: Validate total_time and incoming step params before
dividing: ensure total_time = final_time - start_time is positive, ensure
kwargs["time_step"] > 0 and kwargs["time_step_nbr"] >= 1; in the final_and_dt
branch compute time_step_nbr = max(1, int(total_time / kwargs["time_step"])) (or
raise ValueError if dt <= 0) and then recompute time_step = total_time /
time_step_nbr to avoid zero, and in the final_and_nsteps branch check
kwargs["time_step_nbr"] >= 1 (raise ValueError if zero/negative) before
assigning time_step = total_time / kwargs["time_step_nbr"]; include clear error
messages referencing total_time, time_step and time_step_nbr when raising
exceptions.
In `@src/diagnostic/detail/vtk_types/fluid.hpp`:
- Line 244: The header guard closing comment is inconsistent: the opening guard
uses PHARE_DIAGNOSTIC_DETAIL_VTK_TYPES_FLUID_HPP but the closing comment ends
with _H; update the comment on the `#endif` to match the opening guard name
(PHARE_DIAGNOSTIC_DETAIL_VTK_TYPES_FLUID_HPP) so the symbol
PHARE_DIAGNOSTIC_DETAIL_VTK_TYPES_FLUID_HPP is used in both places and the
`#endif` comment accurately reflects the guard.
In `@src/diagnostic/detail/vtkh5_type_writer.hpp`:
- Around line 269-287: The constructor VTKFileInitializer currently updates the
shared metadata NSteps and appends to Steps/Values from every MPI rank; wrap
both metadata-update blocks (the block that creates/updates the "NSteps"
attribute on steps_group and the block that resizes/writes to the dataset using
typewriter->h5Writer_.timestamp()) in a rank-0 guard so only core::mpi::rank()
== 0 performs the attribute increment and timestamp append, e.g., check
core::mpi::rank() == 0 before creating/writing
steps_group.getAttribute("NSteps") and before resizing/writing the dataset
obtained via h5file.getDataSet(base + "/Steps/Values").
♻️ Duplicate comments (7)
pyphare/pyphare/pharein/simulation.py (1)
488-488: New diagnostic format added.

The `pharevtkhdf` format is now supported alongside `phareh5`. This follows the existing naming convention where the format name is prefixed with "phare". As noted in past discussions, consider whether `vtkhdf` (without prefix) might be clearer to users familiar with the VTK-HDF standard format.

src/diagnostic/detail/vtkh5_writer.hpp (1)
146-152: Guard against empty string access.

Accessing `fileStr[0]` without checking if the string is empty causes undefined behavior if an empty quantity name is passed.

🐛 Proposed fix

```diff
 static std::string fileString(std::string fileStr)
 {
-    if (fileStr[0] == '/')
+    if (!fileStr.empty() && fileStr[0] == '/')
         fileStr = fileStr.substr(1);
     std::replace(fileStr.begin(), fileStr.end(), '/', '_');
     return fileStr + ".vtkhdf";
 }
```

pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (4)
97-100: Missing bounds check on time lookup result.

If the requested `time` does not exist in the file, `np.where` returns an empty array and `ret[0][0]` will raise an `IndexError`. Return `None` or raise a descriptive exception.

🐛 Proposed fix

```diff
 def time_idx(self, time):
     times = self.times()
     ret = np.where(np.isclose(times, float(time), 1e-10))
+    if len(ret[0]) == 0:
+        return None
     return ret[0][0]
```
177-183: Mutable default argument will cause unexpected behavior.

Using a mutable list as default argument means all calls share the same list instance. Use `None` and initialize inside the function.

🐛 Proposed fix

```diff
-def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+    if exclude is None:
+        exclude = ["tags"]
     time = format_timestamp(time)
```
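Why the shared instance matters, in isolation (a generic sketch, not the actual `get_all_available_quantities_from_h5` signature):

```python
def collect(item, exclude=[]):          # one list object, created at def time
    exclude.append(item)                # every call mutates that same object
    return list(exclude)

print(collect("tags"))    # ['tags']
print(collect("rank"))    # ['tags', 'rank']  <- first call's state leaked in

def collect_fixed(item, exclude=None):  # the idiomatic fix
    if exclude is None:
        exclude = []                    # fresh list per call
    exclude.append(item)
    return list(exclude)

print(collect_fixed("tags"))  # ['tags']
print(collect_fixed("rank"))  # ['rank']
```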
198-199: Bug: `pop_name` ignores `basename` argument and always fails.

The function uses `Path(".vtkhdf").stem` which always returns an empty string, then tries to split and index it. This will fail or return wrong results. The `basename` parameter is never used.

🐛 Proposed fix

```diff
 def pop_name(basename):
-    return Path(".vtkhdf").stem.split("_")[2]
+    return Path(basename).stem.split("_")[2]
```
426-431: Bug: `times` is a method and `assert False` should be replaced.

Line 427: `VtkFile(h5_filename).times` should be `VtkFile(h5_filename).times()` since `times` is a method, not a property.

Line 431: `assert False` is removed when Python runs with `-O` optimization. Use an explicit exception.

🐛 Proposed fix

```diff
 if load_all_times(time, hier):
-    for t in VtkFile(h5_filename).times:
+    for t in VtkFile(h5_filename).times():
         add_time_from_h5(hier, h5_filename, t, **kwargs)
     return hier
-assert False
+raise RuntimeError("Invalid combination of time and hier arguments")
```

src/diagnostic/detail/vtkh5_type_writer.hpp (1)
369-373: Store offsets as 64-bit to avoid overflow in large simulations.

`data_offset` and `box_offset` are `std::size_t` but are written into `int` datasets (and read back as such). For large meshes, this can overflow 32-bit ranges. Use 64-bit datasets and cast when writing.

Suggested fix

```diff
-    h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/AMRBoxOffset");
-    h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/NumberOfAMRBox");
-    h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/PointDataOffset/data");
+    h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/AMRBoxOffset");
+    h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/NumberOfAMRBox");
+    h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/PointDataOffset/data");

-    ds.select({old_size}, {1}).write(data_offset);
+    ds.select({old_size}, {1}).write(static_cast<std::int64_t>(data_offset));

-    ds.select({old_size}, {1}).write(box_offset);
+    ds.select({old_size}, {1}).write(static_cast<std::int64_t>(box_offset));
```

Also applies to: 404-410, 444-450
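The truncation this comment warns about is plain two's-complement wraparound; `ctypes` makes it visible from Python (a standalone illustration of the failure mode, not PHARE code — the 3 GiB offset is made up):

```python
import ctypes

data_offset = 3 * 1024**3            # a point-data offset past 2**31 - 1
as_int32 = ctypes.c_int32(data_offset).value
as_int64 = ctypes.c_int64(data_offset).value

print(as_int32)   # -1073741824: wrapped negative, file offsets now corrupt
print(as_int64)   # 3221225472: survives intact
```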
🧹 Nitpick comments (5)
pyphare/pyphare/pharein/simulation.py (1)
734-736: Use `warnings.warn()` instead of `print()` for warnings.

Using `print()` for warnings is non-standard and prevents users from filtering or capturing warnings programmatically. The `warnings` module provides better control.

Suggested change

```diff
+import warnings  # add at top of file if not present
+
 # in the wrapper function:
 if kwargs["refinement"] == "boxes":
     # ...
 else:
     if "max_nbr_levels" not in kwargs:
-        print("WARNING, 'max_nbr_levels' is not set, defaulting to 1")
+        warnings.warn(
+            "'max_nbr_levels' is not set, defaulting to 1",
+            UserWarning,
+            stacklevel=2,
+        )
     kwargs["max_nbr_levels"] = kwargs.get("max_nbr_levels", 1)
```

src/amr/resources_manager/amr_utils.hpp (1)
283-296: Consider renaming `hier_levels` for clarity.

The variable `hier_levels` at line 290 stores the maximum level index (`getNumberOfLevels() - 1`), not the count of levels. This is inconsistent with the second `onLevels` overload (line 306) where `hier_levels` stores the actual count. This naming discrepancy could lead to confusion or bugs during future maintenance.

♻️ Suggested naming improvement

```diff
-    std::size_t const hier_levels = hierarchy.getNumberOfLevels() - 1; // size vs index
-    std::size_t const max = hier_levels < maxlvl ? hier_levels : maxlvl;
+    std::size_t const max_hier_lvl = hierarchy.getNumberOfLevels() - 1;
+    std::size_t const max = max_hier_lvl < maxlvl ? max_hier_lvl : maxlvl;
```
154-164: Consider removing or documenting the commented-out code.

Static analysis flagged this as commented-out code. If this is a work-in-progress feature, consider adding a TODO comment explaining the intent. If it's no longer needed, remove it to reduce maintenance burden.
tests/simulator/test_vtk_diagnostics.py (1)
38-39: Consider renaming ambiguous variable `l` to `length_scale`.

The variable name `l` (lowercase L) can be confused with `1` (digit one). While this is common in physics code, renaming to `length_scale` or `L_scale` would improve readability.
203-207: MHD placeholder throws at runtime.The
MhdFluidWriter::operator()throwsstd::runtime_error("not implemented"). Consider adding a compile-time check or a more descriptive error message indicating this is intentionally unimplemented for MHD models.💡 Suggestion
template<typename H5Writer> void FluidDiagnosticWriter<H5Writer>::MhdFluidWriter::operator()(auto const& layout) { - throw std::runtime_error("not implemented"); + throw std::runtime_error("MHD fluid diagnostics not yet implemented for VTK-HDF format"); }
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharein/simulation.py`:
- Around line 163-191: The code can divide by zero when total_time <= 0 or when
computed time_step_nbr becomes 0; add explicit guards: compute start_time and
total_time as before, then if total_time <= 0 raise a ValueError indicating
final_time must be > start_time; when in the final_and_dt branch, after
computing time_step_nbr = int(total_time / kwargs["time_step"]) verify
time_step_nbr > 0 (raise ValueError if not) before computing time_step, and
similarly in the final_and_nsteps branch validate kwargs["time_step_nbr"] > 0
before dividing to compute time_step; reference variables/functions:
_final_time, total_time, time_step_nbr, time_step, final_and_dt,
final_and_nsteps, nsteps_and_dt.
In `@pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py`:
- Line 177: The function get_all_available_quantities_from_h5 currently uses a
mutable default exclude=["tags"]; change the signature to use exclude=None and
inside the function set exclude = ["tags"] if exclude is None (or extend/merge
with provided list as appropriate) to avoid shared-state bugs; update any
internal code that assumes exclude is a list to work with the new initialization
and keep behavior identical when callers pass their own list.
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py`:
- Around line 522-537: ParticleData.compare currently returns a NumPy array when
comparing dataset arrays which makes callers (e.g., hierarchy_utils.py expecting
a boolean) fail; update ParticleData.compare (in patchdata.py) to return a
single boolean for array comparisons (use np.array_equal for exact equality) and
also honor the atol parameter for numeric tolerance similar to FieldData.compare
(e.g., use np.allclose when atol is provided); ensure the function still returns
a descriptive non-boolean result only when you need to provide detailed diffs
(but preserve boolean semantics for success/failure so callers using bool(ret)
work).
In `@src/amr/resources_manager/amr_utils.hpp`:
- Around line 300-312: The onLevels function uses std::max in the second loop
but the header <algorithm> is not included; add the include for <algorithm> at
the top of the translation unit so std::max is defined (i.e., include
<algorithm> near other includes in src/amr/resources_manager/amr_utils.hpp) to
remove the transitive-include dependency for the onLevels function.
In `@src/diagnostic/detail/vtk_types/fluid.hpp`:
- Around line 203-207: The MhdFluidWriter currently throws in
FluidDiagnosticWriter<H5Writer>::MhdFluidWriter::operator()(auto const& layout)
while the initializer returns std::nullopt; change the writer to be a safe no-op
to match setup behavior by removing the throw and making operator() return
quietly (do nothing) when an MHD diagnostic is present but not supported; update
any related comments to reflect the no-op behavior so the runtime no longer
crashes if write() is called for MHD.
In `@src/diagnostic/detail/vtkh5_type_writer.hpp`:
- Around line 369-372: The three 1D datasets are being created with int (32-bit)
which will truncate 64-bit offsets; update the calls to
create_resizable_1d_data_set for "AMRBoxOffset", "NumberOfAMRBox", and
"PointDataOffset/data" to use a 64-bit signed type (e.g., int64_t) instead of
int, and ensure any values written (std::size_t) are converted/cast to int64_t;
modify the template parameters in the create_resizable_1d_data_set<int>(...)
invocations accordingly and add any needed <cstdint> include where
create_resizable_1d_data_set is used so the type is available.
♻️ Duplicate comments (9)
pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (4)
97-100: Missing bounds check causes `IndexError` when time not found.

If the requested `time` doesn't exist in the file, `np.where` returns an empty array and `ret[0][0]` will raise an `IndexError`. The `has_time` method (line 131) expects this to return `None` on failure, but it never does.

🐛 Proposed fix

```diff
 def time_idx(self, time):
     times = self.times()
     ret = np.where(np.isclose(times, float(time), 1e-10))
+    if len(ret[0]) == 0:
+        return None
     return ret[0][0]
```
426-428: Bug: `times` is a method, not a property.

Line 427: `VtkFile(h5_filename).times` should be `VtkFile(h5_filename).times()` since `times` is a method. Without the call, the loop operates on the method object instead of the actual times array, causing incorrect behavior.

🐛 Proposed fix

```diff
 if load_all_times(time, hier):
-    for t in VtkFile(h5_filename).times:
+    for t in VtkFile(h5_filename).times():
         add_time_from_h5(hier, h5_filename, t, **kwargs)
     return hier
```
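The method/property mix-up usually surfaces immediately as a `TypeError`, since a bound method object is not iterable. A standalone sketch (the `VtkFile` here is a stand-in, not the real reader class):

```python
class VtkFile:                       # hypothetical stand-in for the real reader
    def times(self):
        return [0.0, 0.5, 1.0]

f = VtkFile()
print(callable(f.times))             # True: f.times is a bound method object
try:
    for t in f.times:                # forgot the call
        pass
except TypeError as e:
    print("caught:", e)              # 'method' object is not iterable
for t in f.times():                  # correct: iterate the returned list
    print(t)
```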
198-199: Bug: `pop_name` ignores `basename` argument and returns incorrect value.

The function uses `Path(".vtkhdf").stem` which always returns `""` (empty string), then tries to split and index. This will fail or return wrong results.

🐛 Proposed fix

```diff
 def pop_name(basename):
-    return Path(".vtkhdf").stem.split("_")[2]
+    return Path(basename).stem.split("_")[2]
```
431-431: Replace `assert False` with `raise AssertionError()`.

`assert` statements are removed when Python runs with the `-O` optimization flag, making this a silent no-op in optimized mode. Use an explicit exception instead.

🐛 Proposed fix

```diff
-assert False
+raise AssertionError("Invalid combination of time and hier arguments")
```

src/amr/resources_manager/amr_utils.hpp (1)
276-280: Dereference PatchLevel before forwarding to `boxesPerRankOn`.

If `hierarchy.getPatchLevel(ilvl)` returns a pointer/shared_ptr (as is common in SAMRAI wrappers), passing it directly to the overload expecting a level object will fail when this template is instantiated. Consider dereferencing before forwarding.

🐛 Proposed fix

```diff
-        return boxesPerRankOn<dim>(hierarchy.getPatchLevel(ilvl));
+        return boxesPerRankOn<dim>(*hierarchy.getPatchLevel(ilvl));
```

To verify the return type quickly:

```bash
#!/bin/bash
# Inspect getPatchLevel signature / return type
rg -n "getPatchLevel" -g '*.hpp' -g '*.h' src/amr -C2
```

tests/simulator/test_vtk_diagnostics.py (1)
213-215: `atol=0` can be flaky even at low MPI sizes.

Exact equality can still fail due to floating-point ordering differences; consider a tiny tolerance.

✅ Suggested tolerance

```diff
-    atol = 0 if cpp.mpi_size() <= 2 else 1e-16
+    atol = 1e-15 if cpp.mpi_size() <= 2 else 1e-16
```

src/diagnostic/detail/vtk_types/electromag.hpp (1)
85-86: Fix log scope label for Electromag writer.

The scope currently reports `FluidDiagnosticWriter`, which is misleading.

📝 Suggested fix

```diff
-    PHARE_LOG_SCOPE(3, "FluidDiagnosticWriter<H5Writer>::write_quantity");
+    PHARE_LOG_SCOPE(3, "ElectromagDiagnosticWriter<H5Writer>::write_quantity");
```

src/diagnostic/detail/vtkh5_writer.hpp (1)
146-151: Guard against empty diagnostic quantity.

`fileStr[0]` is undefined for empty strings. A small guard avoids potential UB.

🐛 Suggested fix

```diff
 static std::string fileString(std::string fileStr)
 {
-    if (fileStr[0] == '/')
+    if (!fileStr.empty() && fileStr[0] == '/')
         fileStr = fileStr.substr(1);
     std::replace(fileStr.begin(), fileStr.end(), '/', '_');
     return fileStr + ".vtkhdf";
 }
```

pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
188-193: Avoid assert for compute output validation.

`assert` is stripped under `-O`, so malformed compute outputs can slip through and fail later. Prefer explicit validation with a clear exception.

✅ Suggested fix

```diff
-        for data in compute(patch, **kwargs):
-            assert len(data.keys()) == 2
-            new_patch_datas[data["name"]] = data["data"]
+        for data in compute(patch, **kwargs):
+            if not isinstance(data, dict) or "name" not in data or "data" not in data:
+                raise ValueError("compute(...) must yield dicts with keys {'name','data'}")
+            new_patch_datas[data["name"]] = data["data"]
```
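The same validate-then-raise pattern, runnable in isolation (the `apply_compute` helper and its inputs are made up for illustration, not PHARE's actual compute interface):

```python
def apply_compute(results):
    out = {}
    for data in results:
        # explicit check survives -O, unlike an assert
        if not isinstance(data, dict) or "name" not in data or "data" not in data:
            raise ValueError("compute(...) must yield dicts with keys {'name','data'}")
        out[data["name"]] = data["data"]
    return out

print(apply_compute([{"name": "rho", "data": [1.0, 2.0]}]))  # {'rho': [1.0, 2.0]}
try:
    apply_compute([{"name": "rho"}])     # missing the 'data' key
except ValueError as e:
    print("caught:", e)
```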
🧹 Nitpick comments (9)
pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (3)
202-202: Unused `patch_idx` parameter.

The `patch_idx` argument is never used in this function. Consider removing it if not needed, or document why it's reserved for future use.
305-310: Consider adding `strict=True` to `zip()` for safety.

Adding `strict=True` helps catch length mismatches between `origin`, `patch_box.shape`, and `lvl_cell_width` early.

♻️ Proposed fix

```diff
 pos_upper = [
     orig + shape * dl
-    for orig, shape, dl in zip(origin, patch_box.shape, lvl_cell_width)
+    for orig, shape, dl in zip(origin, patch_box.shape, lvl_cell_width, strict=True)
 ]
```
406-406: Unused `silent` parameter.

The `silent` parameter is never used. Consider removing it or implementing the intended logging behavior.

pyphare/pyphare/pharein/simulation.py (1)
734-737: Prefer `warnings.warn` over `print` when defaulting `max_nbr_levels`.

Use `warnings.warn` for standard warning handling.

♻️ Suggested refactor

```diff
@@
-import os
+import os
+import warnings
@@
 else:
     if "max_nbr_levels" not in kwargs:
-        print("WARNING, 'max_nbr_levels' is not set, defaulting to 1")
+        warnings.warn(
+            "WARNING, 'max_nbr_levels' is not set, defaulting to 1",
+            RuntimeWarning,
+        )
     kwargs["max_nbr_levels"] = kwargs.get("max_nbr_levels", 1)
```

Please verify this integrates cleanly with any test harness or log-capture you rely on.
tests/simulator/test_vtk_diagnostics.py (2)
30-102: Silence Ruff warnings for unused params and ambiguous `l`.

Several helpers accept unused `x`/`y` and `S(..., l)` uses an ambiguous name. Consider prefixing unused params with `_` and renaming `l` to something clearer.

✏️ Example adjustment

```diff
-    def density(x, y):
+    def density(_x, y):
@@
-    def S(y, y0, l):
-        return 0.5 * (1.0 + np.tanh((y - y0) / l))
+    def S(y, y0, length):
+        return 0.5 * (1.0 + np.tanh((y - y0) / length))
```

Apply the same pattern to other helpers with unused `x`/`y` parameters.
169-182: Unused `kwargs` in `_run`.

`kwargs` isn't used; consider renaming to `**_` to avoid lint noise.

✏️ Proposed tweak

```diff
-    def _run(self, ndim, interp, simInput, diag_dir="", **kwargs):
+    def _run(self, ndim, interp, simInput, diag_dir="", **_):
```

src/diagnostic/detail/vtk_types/fluid.hpp (2)
119-124: MHD initializer is a placeholder.

This is consistent with the PR scope. Consider adding a comment or TODO indicating MHD support is planned for future implementation.
137-139: Consider consolidating map operations.

The current pattern performs up to 3 map lookups. A more efficient approach:

♻️ Suggested simplification

```diff
-    if (mem.count(diagnostic.quantity) == 0)
-        mem.try_emplace(diagnostic.quantity);
-    auto& info = mem[diagnostic.quantity];
+    auto& info = mem.try_emplace(diagnostic.quantity).first->second;
```
67-78: Redundant resize inside loop.

`level_rank_data_size[ilvl].resize(core::mpi::size())` is called on every iteration of the outer loop but only needs to be called once per level.

♻️ Move resize outside loop

```diff
+    data.level_rank_data_size[ilvl].resize(core::mpi::size());
     for (std::size_t i = 0; i < data.level_boxes_per_rank[ilvl].size(); ++i)
     {
-        data.level_rank_data_size[ilvl].resize(core::mpi::size());
-        data.level_rank_data_size[ilvl][i] = 0;
```
```diff
     if not any([final_and_dt, final_and_nsteps, nsteps_and_dt]):
         raise ValueError(
             "Error: Specify either 'final_time' and 'time_step' or 'time_step_nbr' and 'time_step'"
             + " or 'final_time' and 'time_step_nbr'"
         )

     start_time = kwargs.get("restart_options", {}).get("restart_time", 0)

     def _final_time():
         if "final_time" in kwargs:
             return kwargs["final_time"]
         return start_time + kwargs["time_step"] * kwargs["time_step_nbr"]

     final_time = _final_time()
     total_time = final_time - start_time

     if final_and_dt:
-        time_step_nbr = int(kwargs["final_time"] / kwargs["time_step"])
-        time_step = kwargs["final_time"] / time_step_nbr
+        time_step_nbr = int(total_time / kwargs["time_step"])
+        time_step = total_time / time_step_nbr

     elif final_and_nsteps:
-        time_step = kwargs["final_time"] / kwargs["time_step_nbr"]
+        time_step = total_time / kwargs["time_step_nbr"]
         time_step_nbr = kwargs["time_step_nbr"]

     elif nsteps_and_dt:
         time_step = kwargs["time_step"]
         time_step_nbr = kwargs["time_step_nbr"]

-    else:
-        raise ValueError(
-            "Error: Specify either 'final_time' and 'time_step' or 'time_step_nbr' and 'time_step'"
-            + " or 'final_time' and 'time_step_nbr'"
-        )
-
-    return (
-        time_step_nbr,
-        time_step,
-        kwargs.get("final_time", start_time + time_step * time_step_nbr),
-    )
+    return time_step_nbr, time_step, final_time
```
Guard against zero/negative total_time causing division by zero
Line 180 can compute time_step_nbr = 0 when total_time <= 0 or total_time < time_step, which then divides by zero on Line 181; Line 184 also divides by time_step_nbr without validating it’s > 0. Add explicit validation before dividing.
🩹 Suggested fix

```diff
@@
     final_time = _final_time()
     total_time = final_time - start_time
+    if total_time <= 0:
+        raise ValueError("Error: final_time must be greater than restart_time")
+    if "time_step" in kwargs and kwargs["time_step"] <= 0:
+        raise ValueError("Error: time_step must be > 0")
@@
     if final_and_dt:
         time_step_nbr = int(total_time / kwargs["time_step"])
+        if time_step_nbr < 1:
+            raise ValueError(
+                "Error: final_time must be >= restart_time + time_step"
+            )
         time_step = total_time / time_step_nbr
@@
     elif final_and_nsteps:
+        if kwargs["time_step_nbr"] <= 0:
+            raise ValueError("Error: time_step_nbr must be > 0")
         time_step = total_time / kwargs["time_step_nbr"]
         time_step_nbr = kwargs["time_step_nbr"]
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
    if not any([final_and_dt, final_and_nsteps, nsteps_and_dt]):
        raise ValueError(
            "Error: Specify either 'final_time' and 'time_step' or 'time_step_nbr' and 'time_step'"
            " or 'final_time' and 'time_step_nbr'"
        )

    start_time = kwargs.get("restart_options", {}).get("restart_time", 0)

    def _final_time():
        if "final_time" in kwargs:
            return kwargs["final_time"]
        return start_time + kwargs["time_step"] * kwargs["time_step_nbr"]

    final_time = _final_time()
    total_time = final_time - start_time
    if total_time <= 0:
        raise ValueError("Error: final_time must be greater than restart_time")
    if "time_step" in kwargs and kwargs["time_step"] <= 0:
        raise ValueError("Error: time_step must be > 0")

    if final_and_dt:
        time_step_nbr = int(total_time / kwargs["time_step"])
        if time_step_nbr < 1:
            raise ValueError(
                "Error: final_time must be >= restart_time + time_step"
            )
        time_step = total_time / time_step_nbr

    elif final_and_nsteps:
        if kwargs["time_step_nbr"] <= 0:
            raise ValueError("Error: time_step_nbr must be > 0")
        time_step = total_time / kwargs["time_step_nbr"]
        time_step_nbr = kwargs["time_step_nbr"]

    elif nsteps_and_dt:
        time_step = kwargs["time_step"]
        time_step_nbr = kwargs["time_step_nbr"]

    return time_step_nbr, time_step, final_time
```
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharein/simulation.py` around lines 163 - 191, The code can
divide by zero when total_time <= 0 or when computed time_step_nbr becomes 0;
add explicit guards: compute start_time and total_time as before, then if
total_time <= 0 raise a ValueError indicating final_time must be > start_time;
when in the final_and_dt branch, after computing time_step_nbr = int(total_time
/ kwargs["time_step"]) verify time_step_nbr > 0 (raise ValueError if not) before
computing time_step, and similarly in the final_and_nsteps branch validate
kwargs["time_step_nbr"] > 0 before dividing to compute time_step; reference
variables/functions: _final_time, total_time, time_step_nbr, time_step,
final_and_dt, final_and_nsteps, nsteps_and_dt.
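The guarded logic the prompt describes can be sketched as a standalone helper. This is a minimal illustration only; the name, signature, and error messages are hypothetical and do not reflect PHARE's actual `simulation.py` API:

```python
def compute_timestep(final_time=None, time_step=None, time_step_nbr=None, start_time=0.0):
    """Resolve (time_step_nbr, time_step, final_time) from any valid pair of inputs,
    guarding every division against zero or negative operands."""
    total_time = None
    if final_time is not None:
        total_time = final_time - start_time
        if total_time <= 0:
            raise ValueError("final_time must be greater than start_time")
    if final_time is not None and time_step is not None:
        if time_step <= 0:
            raise ValueError("time_step must be > 0")
        nbr = int(total_time / time_step)
        if nbr < 1:  # avoids the divide-by-zero flagged in the review
            raise ValueError("final_time must be >= start_time + time_step")
        return nbr, total_time / nbr, final_time
    if final_time is not None and time_step_nbr is not None:
        if time_step_nbr <= 0:
            raise ValueError("time_step_nbr must be > 0")
        return time_step_nbr, total_time / time_step_nbr, final_time
    if time_step is not None and time_step_nbr is not None:
        return time_step_nbr, time_step, start_time + time_step * time_step_nbr
    raise ValueError("specify two of final_time, time_step, time_step_nbr")
```

With `final_time=1.0, time_step=0.3` this yields 3 steps of `1/3` each; with `final_time < start_time + time_step` it raises instead of dividing by zero.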
```python
    return cpy


def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
```
Mutable default argument will cause unexpected behavior.
Using a mutable list as default argument means all calls share the same list instance. Based on learnings, use None and initialize inside.
🐛 Proposed fix

```diff
-def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None):
+def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None):
+    if exclude is None:
+        exclude = ["tags"]
     time = format_timestamp(time)
```

🧰 Tools
🪛 Ruff (0.14.13)
177-177: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py` at line 177, The function
get_all_available_quantities_from_h5 currently uses a mutable default
exclude=["tags"]; change the signature to use exclude=None and inside the
function set exclude = ["tags"] if exclude is None (or extend/merge with
provided list as appropriate) to avoid shared-state bugs; update any internal
code that assumes exclude is a list to work with the new initialization and keep
behavior identical when callers pass their own list.
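The pitfall behind this suggestion is easy to demonstrate; the helper names below are illustrative, not taken from the codebase:

```python
def broken(item, bucket=[]):
    # The default list is created ONCE at definition time and
    # shared across every call that omits `bucket`.
    bucket.append(item)
    return bucket


def fixed(item, bucket=None):
    # A fresh list per call unless the caller provides one.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket
```

Successive calls to `broken()` keep accumulating into the same list, while `fixed()` starts clean each time — exactly the shared-state bug Ruff B006 warns about.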
```diff
 if patch_ref.box != patch_cmp.box:
     return eqr("patch box mismatch", patch_ref.box, patch_cmp.box)

 if patch_ref.patch_datas.keys() != patch_cmp.patch_datas.keys():
     return eqr("data keys mismatch")

 for patch_data_key in patch_ref.patch_datas.keys():
     patch_data_ref = patch_ref.patch_datas[patch_data_key]
     patch_data_cmp = patch_cmp.patch_datas[patch_data_key]

-    if not patch_data_cmp.compare(patch_data_ref, atol=atol):
+    ret = patch_data_ref.compare(patch_data_cmp, atol=atol)
+    if not bool(ret):
+        msg = f"data mismatch: {type(patch_data_ref).__name__} {patch_data_key}"
+        if type(ret) is not bool:
+            msg += "\n" + str(ret)
+        eqr(msg, patch_data_cmp, patch_data_ref)
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's find the ParticleData class and its compare method
fd -t f "patchdata.py" | head -5
```

Repository: PHAREHUB/PHARE
Length of output: 161

🏁 Script executed:

```shell
# Search for ParticleData class and compare method
rg -n "class ParticleData" --type py
```

Repository: PHAREHUB/PHARE
Length of output: 140

🏁 Script executed:

```shell
# Look at the specific file mentioned in the review
rg -n "def compare" pyphare/pyphare/pharesee/hierarchy/patchdata.py -A 3
```

Repository: PHAREHUB/PHARE
Length of output: 461

🏁 Script executed:

```shell
# Check imports in patchdata.py
head -30 pyphare/pyphare/pharesee/hierarchy/patchdata.py
```

Repository: PHAREHUB/PHARE
Length of output: 677

🏁 Script executed:

```shell
# Verify the hierarchy_utils.py code around lines 522-537
sed -n '520,540p' pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py
```

Repository: PHAREHUB/PHARE
Length of output: 1047
Fix ParticleData.compare to return a boolean for array comparisons.
ParticleData.compare uses self.dataset == that.dataset, which returns a NumPy array instead of a boolean when datasets are arrays. This causes bool(ret) in hierarchy_utils.py line 533 to raise ValueError. Use np.array_equal for proper boolean comparison. Also, ParticleData.compare ignores the atol parameter passed by callers; consider whether it should handle numeric tolerance like FieldData.compare does.
🔧 Suggested fix (in pyphare/pyphare/pharesee/hierarchy/patchdata.py)

```diff
 def compare(self, that, *args, **kwargs):
     """args/kwargs may include atol for consistency with field::compare"""
-    return self.name == that.name and self.dataset == that.dataset
+    return self.name == that.name and np.array_equal(self.dataset, that.dataset)
```

🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py` around lines 522 -
537, ParticleData.compare currently returns a NumPy array when comparing dataset
arrays which makes callers (e.g., hierarchy_utils.py expecting a boolean) fail;
update ParticleData.compare (in patchdata.py) to return a single boolean for
array comparisons (use np.array_equal for exact equality) and also honor the
atol parameter for numeric tolerance similar to FieldData.compare (e.g., use
np.allclose when atol is provided); ensure the function still returns a
descriptive non-boolean result only when you need to provide detailed diffs (but
preserve boolean semantics for success/failure so callers using bool(ret) work).
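The behavioral difference this fix relies on can be shown directly with NumPy; the arrays below are purely illustrative:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0 + 1e-12])

elementwise = a == a                   # an ndarray of bools, NOT a single bool;
                                       # bool(elementwise) raises ValueError
exact = np.array_equal(a, a)           # a single bool, exact comparison
close = np.allclose(a, b, atol=1e-9)   # a single bool, honors a numeric tolerance
```

This is why `bool(ret)` in the caller blows up today: truth-testing a multi-element comparison array is ambiguous, whereas `np.array_equal`/`np.allclose` collapse the comparison to one boolean.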
```cpp
void onLevels(auto& hierarchy, auto&& onLevel, auto&& orMissing, std::size_t const minlvl,
              std::size_t const maxlvl)
{
    if (hierarchy.getNumberOfLevels() < 1)
        throw std::runtime_error("Hierarchy must have a level");

    std::size_t const hier_levels = hierarchy.getNumberOfLevels();

    for (std::size_t ilvl = minlvl; ilvl < hier_levels; ++ilvl)
        onLevel(*hierarchy.getPatchLevel(ilvl));

    for (std::size_t ilvl = std::max(minlvl, hier_levels); ilvl <= maxlvl; ++ilvl)
        orMissing(ilvl);
```
Include <algorithm> for std::max usage.
std::max is used in the missing-level loop; add the header to avoid relying on transitive includes.
🧹 Suggested include

```diff
 #include <SAMRAI/geom/CartesianPatchGeometry.h>
 #include <stdexcept>
+#include <algorithm>
```

📝 Committable suggestion
```cpp
#include <SAMRAI/geom/CartesianPatchGeometry.h>
#include <stdexcept>
#include <algorithm>

void onLevels(auto& hierarchy, auto&& onLevel, auto&& orMissing, std::size_t const minlvl,
              std::size_t const maxlvl)
{
    if (hierarchy.getNumberOfLevels() < 1)
        throw std::runtime_error("Hierarchy must have a level");

    std::size_t const hier_levels = hierarchy.getNumberOfLevels();

    for (std::size_t ilvl = minlvl; ilvl < hier_levels; ++ilvl)
        onLevel(*hierarchy.getPatchLevel(ilvl));

    for (std::size_t ilvl = std::max(minlvl, hier_levels); ilvl <= maxlvl; ++ilvl)
        orMissing(ilvl);
```
🤖 Prompt for AI Agents
In `@src/amr/resources_manager/amr_utils.hpp` around lines 300 - 312, The onLevels
function uses std::max in the second loop but the header <algorithm> is not
included; add the include for <algorithm> at the top of the translation unit so
std::max is defined (i.e., include <algorithm> near other includes in
src/amr/resources_manager/amr_utils.hpp) to remove the transitive-include
dependency for the onLevels function.
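The visitor pattern under review (invoke `onLevel` for levels that exist, `orMissing` for requested levels beyond the hierarchy) can be sketched in Python for clarity; the names and list-based "hierarchy" are illustrative, not PHARE's SAMRAI-backed API:

```python
def on_levels(levels, on_level, or_missing, minlvl, maxlvl):
    # Visit existing levels in [minlvl, len(levels)), then report
    # requested-but-absent levels in [max(minlvl, len(levels)), maxlvl].
    if len(levels) < 1:
        raise RuntimeError("Hierarchy must have a level")
    for ilvl in range(minlvl, len(levels)):
        on_level(levels[ilvl])
    for ilvl in range(max(minlvl, len(levels)), maxlvl + 1):
        or_missing(ilvl)
```

The `max(minlvl, len(levels))` lower bound is the same trick as the C++ `std::max(minlvl, hier_levels)`: it keeps the two loops disjoint so no level index is both visited and reported missing.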
```cpp
h5file.create_resizable_2d_data_set<int, boxValsIn3D>(level_base + lvl + "/AMRBox");
h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/AMRBoxOffset");
h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/NumberOfAMRBox");
h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/PointDataOffset/data");
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's examine the file at the specified lines
head -n 375 src/diagnostic/detail/vtkh5_type_writer.hpp | tail -n +360
```

Repository: PHAREHUB/PHARE
Length of output: 727

🏁 Script executed:

```shell
# Let's also see the broader context around these lines
head -n 420 src/diagnostic/detail/vtkh5_type_writer.hpp | tail -n +350
```

Repository: PHAREHUB/PHARE
Length of output: 2805

🏁 Script executed:

```shell
# Find where data_offset and box_offset are declared/used
rg "data_offset|box_offset" src/diagnostic/detail/vtkh5_type_writer.hpp -B 2 -A 2
```

Repository: PHAREHUB/PHARE
Length of output: 2336

🏁 Script executed:

```shell
# Check the complete signature of create_resizable_1d_data_set and create_resizable_2d_data_set
rg "create_resizable.*_data_set" src/diagnostic/detail/vtkh5_type_writer.hpp -A 5 | head -n 30
```

Repository: PHAREHUB/PHARE
Length of output: 1177

🏁 Script executed:

```shell
# Search for type definitions in the file
rg "std::size_t|int64" src/diagnostic/detail/vtkh5_type_writer.hpp
```

Repository: PHAREHUB/PHARE
Length of output: 981

🏁 Script executed:

```shell
# Search for VTK format related comments in the file
rg "VTK|OverlappingAMR|HDF5" src/diagnostic/detail/vtkh5_type_writer.hpp -B 2 -A 2 | head -n 50
```

Repository: PHAREHUB/PHARE
Length of output: 1537

🏁 Script executed:

```shell
# Check the file header and comments
head -n 100 src/diagnostic/detail/vtkh5_type_writer.hpp
```

Repository: PHAREHUB/PHARE
Length of output: 3476

🏁 Script executed:

```shell
# Search for int64, hid_t, or other 64-bit type usage patterns in the file
rg "int64|h5t|H5T|hid_t" src/diagnostic/detail/vtkh5_type_writer.hpp
```

Repository: PHAREHUB/PHARE
Length of output: 1369
🌐 Web query:
VTK HDF5 OverlappingAMR format offset data type specification
💡 Result:
Short answer: Offsets in VTKHDF OverlappingAMR are HDF5 integer datasets (written by VTK as integer types and mapped on read to the corresponding H5T_NATIVE_* → vtk type). In practice VTK’s examples and tests store Offsets / NumberOf* / AMRBoxOffsets / Point/Cell/FieldDataOffsets as 64-bit signed integers (H5T_STD_I64LE). [1][2]
Sources:
- VTKHDF specification: dataset types are stored in HDF and matched to H5T_NATIVE_* on read (VTKHDF / OverlappingAMR). [1]
- VTKHDF examples (amr_gaussian_pulse.hdf): shown Offsets / NumberOfPoints / NumberOfCells etc. with DATATYPE H5T_STD_I64LE. [2]
Use int64 for offset datasets to comply with VTK VTKHDF specification.
The datasets for AMRBoxOffset, NumberOfAMRBox, and PointDataOffset/data are created as int, but the VTK VTKHDF OverlappingAMR specification requires 64-bit signed integers (H5T_STD_I64LE). Writing std::size_t values (64-bit) to int (32-bit) storage causes silent truncation, resulting in corrupted data and files incompatible with standard VTK readers.
🤖 Prompt for AI Agents
In `@src/diagnostic/detail/vtkh5_type_writer.hpp` around lines 369 - 372, The
three 1D datasets are being created with int (32-bit) which will truncate 64-bit
offsets; update the calls to create_resizable_1d_data_set for "AMRBoxOffset",
"NumberOfAMRBox", and "PointDataOffset/data" to use a 64-bit signed type (e.g.,
int64_t) instead of int, and ensure any values written (std::size_t) are
converted/cast to int64_t; modify the template parameters in the
create_resizable_1d_data_set<int>(...) invocations accordingly and add any
needed <cstdint> include where create_resizable_1d_data_set is used so the type
is available.
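The silent-truncation hazard is easy to illustrate with NumPy (the real fix, of course, is choosing a 64-bit dataset type on the C++ side; this snippet only demonstrates the wraparound behavior, and the offset value is made up):

```python
import numpy as np

# A plausible 64-bit byte/point offset, larger than int32 can represent.
offset = 5_000_000_000

as_i64 = np.int64(offset)          # faithful 64-bit storage (what VTKHDF expects)
as_i32 = as_i64.astype(np.int32)   # what an int32 dataset would keep: wraps silently,
                                   # no error, just a corrupted offset
```

`astype` performs an unsafe cast without raising, which mirrors what happens when a 64-bit offset is written into a 32-bit HDF5 dataset: the file is produced without complaint but standard VTK readers see garbage offsets.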
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharein/diagnostics.py`:
- Around line 64-67: The warning print in the block that filters timestamps (the
np.any(timestamps < init_time) check) has an extra closing parenthesis; update
the print statement (within that timestamps >= init_time filtering branch) to a
correctly formatted f-string without the stray ')' — e.g. "print(f'Warning: some
timestamps below ({init_time}) are filtered')" or a clearer variant like
"print(f'Warning: timestamps below {init_time} were filtered')" so the message
is syntactically correct and readable.
In `@pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py`:
- Around line 190-199: The stubs break particle detection and pop extraction:
reimplement is_particle_file(basename) to mirror the pattern-matching used in
fromh5.py (check the basename/Path(stem) and match particle filename
patterns/extensions rather than returning False) so is_pop_fluid_file works
correctly, and fix pop_name(basename) to operate on the passed basename (e.g.,
use Path(basename).stem and split or a regex to extract the third underscore
field) with a defensive check to avoid IndexError for unexpected names; ensure
both functions use the same basename-handling logic as fromh5.py and keep
is_pop_fluid_file's current call pattern intact.
♻️ Duplicate comments (13)
pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py (3)
177-183: Avoid mutable default arguments.Line 177 uses a shared list default. This can leak state across calls and is a common Python pitfall. Based on learnings, prefer
None+ initialization.🧹 Proposed fix
-def get_all_available_quantities_from_h5(filepath, time=0, exclude=["tags"], hier=None): +def get_all_available_quantities_from_h5(filepath, time=0, exclude=None, hier=None): + if exclude is None: + exclude = ["tags"]Based on learnings, avoid mutable defaults in this module.
97-100: Guard missing time matches intime_idxto preventIndexErrorand matchhas_timeexpectations.The current implementation will raise
IndexErrorat line 100 when the time is not present, becausenp.where()returns an empty array tuple. This breaks thehas_timemethod which expectstime_idx()to returnNonefor missing times (line:return self.time_idx(time) is not None), and will also fail any callers expecting a safe lookup.Proposed fix
def time_idx(self, time): times = self.times() ret = np.where(np.isclose(times, float(time), 1e-10)) + if ret[0].size == 0: + return None return ret[0][0]
426-431: Fix missing parentheses ontimes()method call and replaceassert Falsewith proper exception.Line 427 iterates over a method object instead of calling it, and line 431 uses
assert Falsewhich can be stripped with the-Oflag. Replace with a proper exception.Proposed fix
if load_all_times(time, hier): - for t in VtkFile(h5_filename).times: + for t in VtkFile(h5_filename).times(): add_time_from_h5(hier, h5_filename, t, **kwargs) return hier - assert False + raise RuntimeError("Invalid combination of time and hier arguments")pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
188-192: Replaceassertwith explicit validation.Line 191 uses
assert, which is stripped under-O, allowing malformed compute outputs to pass silently.🧹 Proposed fix
for data in compute(patch, **kwargs): - assert len(data.keys()) == 2 + if len(data) != 2 or "name" not in data or "data" not in data: + raise ValueError("compute(...) must yield dicts with keys {'name','data'}") new_patch_datas[data["name"]] = data["data"]src/diagnostic/detail/vtk_types/electromag.hpp (2)
76-78: Ensuresetup()was called beforewrite().Line 77 uses
mem[diagnostic.quantity], which default‑constructsInfoif setup wasn’t called, yielding invalid offsets.🛡️ Proposed fix
- auto& info = mem[diagnostic.quantity]; + auto& info = mem.at(diagnostic.quantity);
85-87: Fix log scope class name.Line 86 logs
"FluidDiagnosticWriter"insideElectromagDiagnosticWriter, which is misleading in traces.📝 Proposed fix
- PHARE_LOG_SCOPE(3, "FluidDiagnosticWriter<H5Writer>::write_quantity"); + PHARE_LOG_SCOPE(3, "ElectromagDiagnosticWriter<H5Writer>::write_quantity");tests/simulator/test_vtk_diagnostics.py (1)
213-215: Use a small non‑zero atol to avoid flaky compares.Line 214 sets
atol = 0whencpp.mpi_size() <= 2, which still risks floating‑point non‑associativity failures. A tiny epsilon is safer.🧪 Proposed fix
- atol = 0 if cpp.mpi_size() <= 2 else 1e-16 + atol = 1e-15 if cpp.mpi_size() <= 2 else 1e-16src/diagnostic/detail/vtkh5_writer.hpp (2)
15-17: Particle diagnostics are silently dropped.Line 136 maps
"particle"toNullTypeWriter, so particle diagnostics are ignored even when present. Wire the particle writer and add the friend declaration.🐛 Proposed fix
`#include` "diagnostic/detail/vtk_types/fluid.hpp" `#include` "diagnostic/detail/vtk_types/electromag.hpp" +#include "diagnostic/detail/vtk_types/particle.hpp" @@ - {"particle", make_writer<NullTypeWriter>()} // + {"particle", make_writer<ParticlesDiagnosticWriter<This>>()} // }; @@ friend class H5TypeWriter<This>; friend class FluidDiagnosticWriter<This>; friend class ElectromagDiagnosticWriter<This>; + friend class ParticlesDiagnosticWriter<This>;Also applies to: 131-137, 155-160
146-151: Guard against empty diagnostic quantity.Line 148 reads
fileStr[0]without checking emptiness, which is undefined if an empty quantity slips through.🛡️ Proposed fix
- if (fileStr[0] == '/') + if (!fileStr.empty() && fileStr[0] == '/') fileStr = fileStr.substr(1);src/diagnostic/detail/vtkh5_type_writer.hpp (1)
17-19: Offsets must be 64‑bit per VTKHDF.Lines 369–372 create offset datasets as
int, but offsets arestd::size_tand VTKHDF expects 64‑bit signed integers. This can truncate large runs and break reader compatibility.🧩 Proposed fix
`#include` <string> `#include` <unordered_map> +#include <cstdint> @@ - h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/AMRBoxOffset"); - h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/NumberOfAMRBox"); - h5file.create_resizable_1d_data_set<int>(step_level + lvl + "/PointDataOffset/data"); + h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/AMRBoxOffset"); + h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/NumberOfAMRBox"); + h5file.create_resizable_1d_data_set<std::int64_t>(step_level + lvl + "/PointDataOffset/data"); @@ - ds.select({old_size}, {1}).write(data_offset); + ds.select({old_size}, {1}).write(static_cast<std::int64_t>(data_offset)); @@ - ds.select({old_size}, {1}).write(total_boxes); + ds.select({old_size}, {1}).write(static_cast<std::int64_t>(total_boxes)); @@ - ds.select({old_size}, {1}).write(box_offset); + ds.select({old_size}, {1}).write(static_cast<std::int64_t>(box_offset));Also applies to: 369-372, 404-410, 433-449
src/amr/resources_manager/amr_utils.hpp (2)
5-21: Add<algorithm>forstd::max.Line 311 uses
std::max, but<algorithm>isn’t included here, so this relies on transitive headers.🔧 Proposed fix
`#include` <SAMRAI/hier/HierarchyNeighbors.h> `#include` <SAMRAI/geom/CartesianPatchGeometry.h> `#include` <stdexcept> +#include <algorithm>To verify, confirm that
std::maxusage is present and<algorithm>is included:#!/bin/bash # Verify std::max usage and <algorithm> include rg -n "std::max" src/amr/resources_manager/amr_utils.hpp rg -n "#include <algorithm>" src/amr/resources_manager/amr_utils.hpp
276-279: Dereference the patch level before forwarding.Line 279 passes the
getPatchLevelresult directly; elsewhere (Line 294) it’s treated as pointer-like. Pass the dereferenced level to match the overload that expects a level object.🐛 Proposed fix
- return boxesPerRankOn<dim>(hierarchy.getPatchLevel(ilvl)); + return boxesPerRankOn<dim>(*hierarchy.getPatchLevel(ilvl));src/diagnostic/diagnostic_model_view.hpp (1)
155-164: Remove commented-outlocalLevelBoxesblock.Line 155–164 is commented-out code; prefer deleting it (history stays in VCS) or gating it with a compile-time flag if you intend to resurrect it soon.
🧹 Suggested cleanup
- // auto localLevelBoxes(auto const ilvl) - // { - // std::vector<core::Box<int, dimension>> boxes; - // auto const& lvl = *hierarchy_.getPatchLevel(ilvl); - // boxes.reserve(lvl.getLocalNumberOfPatches()); - // visitHierarchy( - // [&](auto& layout, auto const&, auto const) { boxes.emplace_back(layout.AMRBox()); }, - // ilvl, ilvl); - // return boxes; - // }
```python
if np.any(timestamps < init_time):
    raise RuntimeError(
        f"Error: timestamp({sim.time_step_nbr}) cannot be less than simulation.init_time({init_time}))"
    )
timestamps = timestamps[timestamps >= init_time]
print(f"Warning: some timestamps below ({init_time}) are filtered)")
```
Fix the warning message formatting.
Line 66 has an extra closing parenthesis, which reads oddly in user output.
🧹 Proposed fix
```diff
- print(f"Warning: some timestamps below ({init_time}) are filtered)")
+ print(f"Warning: some timestamps below ({init_time}) are filtered")
```

🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharein/diagnostics.py` around lines 64 - 67, The warning
print in the block that filters timestamps (the np.any(timestamps < init_time)
check) has an extra closing parenthesis; update the print statement (within that
timestamps >= init_time filtering branch) to a correctly formatted f-string
without the stray ')' — e.g. "print(f'Warning: some timestamps below
({init_time}) are filtered')" or a clearer variant like "print(f'Warning:
timestamps below {init_time} were filtered')" so the message is syntactically
correct and readable.
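A minimal sketch of the intended filtering-plus-warning behavior (the function name and message wording are illustrative, not the actual `diagnostics.py` code):

```python
import numpy as np


def filter_timestamps(timestamps, init_time):
    # Keep only timestamps at or after init_time; warn if anything was dropped.
    timestamps = np.asarray(timestamps, dtype=float)
    kept = timestamps[timestamps >= init_time]
    if kept.size < timestamps.size:
        print(f"Warning: timestamps below {init_time} were filtered")
    return kept
```

Boolean-mask indexing keeps the operation vectorized, and comparing sizes before and after is a cheap way to decide whether the warning applies at all.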
```python
def is_pop_fluid_file(basename):
    return (is_particle_file(basename) is False) and "pop" in basename


def is_particle_file(filename):
    return False


def pop_name(basename):
    return Path(".vtkhdf").stem.split("_")[2]
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -type f -name "fromvtkhdf5.py" | head -5
```

Repository: PHAREHUB/PHARE
Length of output: 109

🏁 Script executed:

```shell
cat -n pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py | sed -n '180,210p'
```

Repository: PHAREHUB/PHARE
Length of output: 1136

🏁 Script executed:

```shell
rg "is_particle_file|pop_name" -A 2 -B 2 --type py
```

Repository: PHAREHUB/PHARE
Length of output: 16481

🏁 Script executed:

```shell
rg "particle_files_patterns|is_particle" --type py -A 3 -B 1
```

Repository: PHAREHUB/PHARE
Length of output: 2555
is_particle_file() and pop_name() are broken stub implementations that will crash on real inputs.
- `is_particle_file()` always returns `False`, so particle file detection is non-functional (though particle support appears intentionally disabled, the stub should at least match the pattern from `fromh5.py`).
- `pop_name()` uses `Path(".vtkhdf")` instead of the `basename` parameter, yielding an empty stem that causes `IndexError` when indexed at position [2] for any pop-prefixed file processed by line 215.
Use the pattern-matching approach and proper basename handling from fromh5.py as a reference.
🧰 Tools
🪛 Ruff (0.14.13)
194-194: Unused function argument: filename
(ARG001)
198-198: Unused function argument: basename
(ARG001)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/hierarchy/fromvtkhdf5.py` around lines 190 - 199,
The stubs break particle detection and pop extraction: reimplement
is_particle_file(basename) to mirror the pattern-matching used in fromh5.py
(check the basename/Path(stem) and match particle filename patterns/extensions
rather than returning False) so is_pop_fluid_file works correctly, and fix
pop_name(basename) to operate on the passed basename (e.g., use
Path(basename).stem and split or a regex to extract the third underscore field)
with a defensive check to avoid IndexError for unexpected names; ensure both
functions use the same basename-handling logic as fromh5.py and keep
is_pop_fluid_file's current call pattern intact.
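A defensive version along the lines the prompt asks for might look like this. The filename layout (population name as the third underscore-separated field of the stem) is an assumption inferred from the review, not verified against `fromh5.py`:

```python
from pathlib import Path


def pop_name(basename):
    # e.g. "ions_pop_protons_density.vtkhdf" -> "protons" (assumed layout)
    parts = Path(basename).stem.split("_")
    if len(parts) < 3:
        raise ValueError(f"unexpected diagnostic filename: {basename}")
    return parts[2]
```

Operating on `Path(basename).stem` (rather than a hard-coded `Path(".vtkhdf")`) fixes the empty-stem bug, and the length check turns a confusing `IndexError` into a descriptive error for malformed names.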
Write diagnostics in the VTKHDF file format.
Tested via manual inspection in ParaView.
We should probably use the vtk module in a test and generate some artifacts.