Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the repository settings.
📝 Walkthrough

Adds a VTK-HDF5 diagnostic writer and VTK-based Python plotting, extends AMR/GridLayout APIs and ndarray/tensor utilities, tightens time/diagnostic validation and simulator restart time handling, removes legacy phare_init scripts, and updates tests and CI whitespace.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    Test->>H5W: dump(diagnostics, timestamp)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~55 minutes
🚥 Pre-merge checks: ❌ 3 failed checks (1 warning, 2 inconclusive)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharein/diagnostics.py`:
- Around line 64-67: The warning string printed when filtering timestamps is
malformed; update the print statement that uses variables timestamps and
init_time in pyphare/pyphare/pharein/diagnostics.py so the message reads
correctly (e.g., "Warning: some timestamps below {init_time} are filtered."),
removing the extra closing parenthesis and fixing punctuation; optionally
consider using warnings.warn instead of print for the same message to follow
warning conventions.
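As a hedged illustration of the `warnings.warn` suggestion above (the function name and filtering logic are assumptions, not the actual `diagnostics.py` code):

```python
import warnings

def filter_timestamps(timestamps, init_time):
    """Hypothetical sketch: drop timestamps below init_time, warning once."""
    kept = [t for t in timestamps if t >= init_time]
    if len(kept) < len(timestamps):
        # warnings.warn respects -W filters and can be escalated to errors
        # in strict test runs, unlike a bare print()
        warnings.warn(f"some timestamps below {init_time} are filtered.")
    return kept
```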
In `@pyphare/pyphare/pharesee/phare_vtk/base.py`:
- Around line 16-27: The two functions surface_filter and
composite_data_geometry_filter are incorrectly decorated with `@staticmethod` at
module scope, producing descriptor objects instead of callables; remove the
`@staticmethod` decorators so both surface_filter(...) and
composite_data_geometry_filter(...) are plain functions that create vtk filters,
call SetInputConnection(output.GetOutputPort()), and return
PhaseOutput(surface=...) or PhaseOutput(geom=...) respectively; ensure any
callers that expect to call these (e.g., the pipeline that invokes
surface_filter and composite_data_geometry_filter) now receive regular
callables.
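A minimal standalone demonstration of the descriptor problem (plain Python, independent of the `base.py` code in question):

```python
# @staticmethod is meant for class bodies; at module scope it wraps the
# function in a staticmethod descriptor object instead of leaving a plain
# function (on Python < 3.10 the wrapper is not even callable).

@staticmethod
def decorated_filter():
    return "filtered"

def plain_filter():
    return "filtered"

assert isinstance(decorated_filter, staticmethod)  # descriptor, not a function
assert not isinstance(plain_filter, staticmethod)  # plain, directly callable
assert plain_filter() == "filtered"
```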
In `@pyphare/pyphare/pharesee/phare_vtk/plot.py`:
- Around line 17-19: The actor is constructed using the vtk.vtkActor(mapper=...)
syntax which only works in VTK >= 9.4; replace that with creating the actor via
vtk.vtkActor() and then call actor.SetMapper(vtk_file.mapper) before adding it
to renderer (refer to renderer, vtk.vtkActor, actor.SetMapper, and
vtk_file.mapper) so the code is compatible with older VTK versions.
In `@src/diagnostic/detail/vtk_types/electromag.hpp`:
- Around line 85-86: The log scope string inside the write_quantity lambda
mistakenly references "FluidDiagnosticWriter..."—update the PHARE_LOG_SCOPE call
in the lambda (the auto const write_quantity = [&](auto& layout, auto const&,
auto const) { ... } block) to use the correct class name, e.g.,
"ElectromagDiagnosticWriter<H5Writer>::write_quantity", so the scope matches the
ElectromagDiagnosticWriter class.
In `@src/diagnostic/detail/vtkh5_writer.hpp`:
- Around line 146-151: The function fileString is reading fileStr[0] without
validating emptiness; update fileString to first check if fileStr.empty() and
handle that case (e.g., return a sensible default filename like ".vtkhdf" or
"unnamed.vtkhdf"), then only test the first character with if (!fileStr.empty()
&& fileStr[0] == '/'); preserve the subsequent std::replace and return fileStr +
".vtkhdf".
🧹 Nitpick comments (15)
.github/workflows/cmake_macos.yml (1)
77-81: Consider matching Debug optimization level with project convention (`-O3`).

Debug builds here are typically paired with `-O3` to keep asserts while retaining performance; using `-O2` diverges from that convention and may affect parity with other CI targets. If the intent is consistency, switch to `-O3` (or scope the flag to Debug via `CMAKE_CXX_FLAGS_DEBUG`). Based on learnings.

🔧 Suggested tweak

```diff
-      -DCMAKE_CXX_FLAGS="-O2 -DPHARE_DIAG_DOUBLES=1"
+      -DCMAKE_CXX_FLAGS="-O3 -DPHARE_DIAG_DOUBLES=1"
```

pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
343-352: Consider using `setUpClass` for better test isolation (optional).

The current approach works and avoids multiple simulation runs. However, an alternative would be to use `setUpClass`/`tearDownClass` instead of `setUp`, which would:

- Keep tests as `test_*` for individual discoverability
- Maintain per-test failure isolation and granular reporting
- Still run the simulation only once

The hardcoded `18` is a reasonable safeguard but requires manual updates when tests are added/removed.

♻️ Alternative approach using setUpClass

```diff
 @ddt
 class PatchHierarchyTest(unittest.TestCase):
+    @classmethod
+    def setUpClass(cls):
+        import pyphare.pharein as ph
+        ph.global_vars.sim = None
+        # ... config function and Simulator run here ...
+        cls._diag_dir = diag_outputs + cls.__name__
+        Simulator(config()).run()
+
     def diag_dir(self):
-        return diag_outputs + self._testMethodName
+        return self._diag_dir

-    def setUp(self):
-        import pyphare.pharein as ph
-        # ... current setUp code ...
-        Simulator(config()).run()
+    # Keep tests as test_* instead of _test_*
+    def test_data_is_a_hierarchy(self):
+        # ...
```

This would allow keeping the original `test_*` naming while running the simulation only once per test class.

src/diagnostic/detail/vtk_types/electromag.hpp (1)
76-77: Consider defensive check before accessing `mem`.

In `write()`, `mem[diagnostic.quantity]` is accessed without verifying the key exists. If `write()` is called without a prior `setup()`, this will silently insert a default `Info` with zero offsets, which may cause incorrect behavior.

Proposed defensive check

```diff
     auto& modelView = this->h5Writer_.modelView();
-    auto& info = mem[diagnostic.quantity];
+    auto it = mem.find(diagnostic.quantity);
+    if (it == mem.end())
+        throw std::runtime_error("write() called before setup() for: " + diagnostic.quantity);
+    auto& info = it->second;
```

src/diagnostic/detail/vtk_types/fluid.hpp (3)
33-33: Redundant semicolon after function body.

The semicolon after `{}` in `void compute(DiagnosticProperties&) override {};` is unnecessary.

Proposed fix

```diff
-    void compute(DiagnosticProperties&) override {};
+    void compute(DiagnosticProperties&) override {}
```

217-218: Consider defensive check before accessing `mem`.

Similar to the pattern in `electromag.hpp`, `mem[diagnostic.quantity]` is accessed without verifying the key exists. This could silently use default zero offsets if `write()` is called without prior `setup()`.

Proposed defensive check

```diff
     auto& modelView = this->h5Writer_.modelView();
-    auto const& info = mem[diagnostic.quantity];
+    auto it = mem.find(diagnostic.quantity);
+    if (it == mem.end())
+        throw std::runtime_error("write() called before setup() for: " + diagnostic.quantity);
+    auto const& info = it->second;
```

186-198: Consider early return after writing population data.

The loop over `ions` populations continues iterating even after a match is found and data is written. Adding an early return would improve clarity and efficiency.

Proposed fix with early returns

```diff
     else
     {
         for (auto& pop : ions)
         {
             auto const pop_tree = tree + "pop/" + pop.name() + "/";
             if (isActiveDiag(diagnostic, pop_tree, "density"))
+            {
                 file_writer.writeField(pop.particleDensity(), layout);
+                return;
+            }
             else if (isActiveDiag(diagnostic, pop_tree, "charge_density"))
+            {
                 file_writer.writeField(pop.chargeDensity(), layout);
+                return;
+            }
             else if (isActiveDiag(diagnostic, pop_tree, "flux"))
+            {
                 file_writer.template writeTensorField<1>(pop.flux(), layout);
+                return;
+            }
         }
     }
```

src/core/data/ndarray/ndarray_vector.hpp (1)
19-30: Potential integer type mismatch in `idx` with array-based indexing.

The `idx` overload accepting `Indexes<Index, dim>` extracts elements from the array and passes them to the scalar `idx` overloads which expect `std::uint32_t`. If `Index` is a larger type (e.g., `std::size_t` on 64-bit), this could cause implicit narrowing conversions. Consider either:

- Templating the scalar `idx` overloads on the index type, or
- Adding explicit casts with assertions that values fit in `std::uint32_t`.
5-8: Include order: standard library headers typically precede project headers.

The reordering places `<cstddef>` after project-specific headers. Common C++ conventions place standard library includes before project headers to ensure standard definitions are available first. However, if this ordering is intentional for this project's build dependencies, it's acceptable.

pyphare/pyphare/pharesee/phare_vtk/plot.py (1)
26-27: Remove the unused render-window interactor.

Lines 26–27 create an interactor that is never used; it can be dropped to keep the code minimal and avoid extra VTK objects. Based on learnings, a leaner implementation is preferred.

♻️ Suggested cleanup

```diff
-    iren = vtk.vtkRenderWindowInteractor()
-    iren.SetRenderWindow(ren_win)
```

tests/simulator/test_vtk_diagnostics.py (1)
169-170: Drop the unused `**kwargs` parameter.

Line 169 defines `**kwargs` but never uses it, which is flagged by Ruff and adds noise to the API.

♻️ Suggested cleanup

```diff
-    def _run(self, ndim, interp, simInput, diag_dir="", **kwargs):
+    def _run(self, ndim, interp, simInput, diag_dir=""):
```

pyphare/pyphare/pharesee/phare_vtk/base.py (2)
76-78: Avoid function call in default argument.

Calling `_phases()` in the default argument is evaluated once at function definition time, not at each call. While `_phases()` returns a new list, this pattern is still discouraged (Ruff B008) as it can lead to unexpected behavior if the returned value were ever mutable or cached.

♻️ Proposed fix

```diff
-    def __init__(self, filename, time=None, array_name="data", phases=_phases()):
-        if len(phases) == 0:
+    def __init__(self, filename, time=None, array_name="data", phases=None):
+        if phases is None:
+            phases = self._phases()
+        if len(phases) == 0:
             raise RuntimeError("Error: Zero phases!")
```
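The pitfall behind this fix can be shown in isolation (a generic sketch, not the `VtkFile` code itself; `make_phases` is a hypothetical stand-in for `_phases`):

```python
# Default arguments are evaluated once, when `def` executes, so a call
# like phases=_phases() in the signature never re-runs per call.
calls = []

def make_phases():
    calls.append(1)
    return ["surface", "geom"]

def bad(phases=make_phases()):   # evaluated exactly once, at definition
    return phases

def good(phases=None):           # recommended: resolve inside the body
    if phases is None:
        phases = make_phases()
    return phases

bad(); bad(); bad()
assert len(calls) == 1           # default never re-evaluated

good(); good()
assert len(calls) == 3           # re-evaluated on each call
```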
103-110: Consider removing empty subclasses or adding differentiation.

`VtkFieldFile` and `VtkTensorFieldFile` are currently identical to `VtkFile`. If they're placeholders for future differentiation, consider adding a TODO comment. Otherwise, they could be type aliases.

src/diagnostic/detail/vtkh5_writer.hpp (1)
1-2: Align header guard with full-path naming convention.

The file name is `vtkh5_writer.hpp`, so the guard should match `PHARE_DIAGNOSTIC_DETAIL_VTKH5_WRITER_HPP` to follow the repo's path-based pattern and avoid drift. Based on learnings, this matches the preferred guard scheme.

♻️ Suggested header guard update

```diff
-#ifndef PHARE_DIAGNOSTIC_DETAIL_VTK_H5_WRITER_HPP
-#define PHARE_DIAGNOSTIC_DETAIL_VTK_H5_WRITER_HPP
+#ifndef PHARE_DIAGNOSTIC_DETAIL_VTKH5_WRITER_HPP
+#define PHARE_DIAGNOSTIC_DETAIL_VTKH5_WRITER_HPP
@@
-#endif /* PHARE_DIAGNOSTIC_DETAIL_VTK_H5_WRITER_HPP */
+#endif /* PHARE_DIAGNOSTIC_DETAIL_VTKH5_WRITER_HPP */
```

Also applies to: 215-215
src/diagnostic/detail/vtkh5_type_writer.hpp (2)
1-2: Align header guard with full-path naming convention.

For `vtkh5_type_writer.hpp`, the guard should be `PHARE_DIAGNOSTIC_DETAIL_VTKH5_TYPE_WRITER_HPP` to match the repository's full-path guard style. Based on learnings, this is the preferred pattern.

♻️ Suggested header guard update

```diff
-#ifndef PHARE_DIAGNOSTIC_DETAIL_VTK_H5_TYPE_WRITER_HPP
-#define PHARE_DIAGNOSTIC_DETAIL_VTK_H5_TYPE_WRITER_HPP
+#ifndef PHARE_DIAGNOSTIC_DETAIL_VTKH5_TYPE_WRITER_HPP
+#define PHARE_DIAGNOSTIC_DETAIL_VTKH5_TYPE_WRITER_HPP
@@
-#endif // PHARE_DIAGNOSTIC_DETAIL_VTK_H5_TYPE_WRITER_HPP
+#endif // PHARE_DIAGNOSTIC_DETAIL_VTKH5_TYPE_WRITER_HPP
```

Also applies to: 466-466

67-78: Avoid resizing `level_rank_data_size` inside the rank loop.

This can be moved once before the loop to cut redundant work. Based on learnings, this aligns with the preference for minimal/efficient code.

♻️ Small efficiency tweak

```diff
-    for (std::size_t i = 0; i < data.level_boxes_per_rank[ilvl].size(); ++i)
-    {
-        data.level_rank_data_size[ilvl].resize(core::mpi::size());
+    data.level_rank_data_size[ilvl].resize(core::mpi::size());
+    for (std::size_t i = 0; i < data.level_boxes_per_rank[ilvl].size(); ++i)
+    {
```
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharein/simulation.py`:
- Around line 163-191: Guard against non-positive total_time and zero/negative
step sizes by validating values before computing divisions: after computing
final_time and total_time in _final_time(), raise a ValueError if total_time <=
0; validate that any provided time_step (kwargs["time_step"]) and time_step_nbr
(kwargs["time_step_nbr"]) are > 0 before using them; in the final_and_dt branch
check computed time_step_nbr = int(total_time / kwargs["time_step"]) and if this
is < 1 raise a ValueError explaining the time_step is too large for the
total_time (or instruct the caller to provide valid values) rather than allowing
a division by zero; apply similar positive checks in final_and_nsteps and the
nsteps_and_dt branch to ensure no zero/negative divisors are used (references:
_final_time, final_time, total_time, final_and_dt, final_and_nsteps,
nsteps_and_dt).
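A hedged sketch of the guard logic described above (the function name and exact structure are assumptions following the prompt's vocabulary, not PHARE's actual `_final_time` implementation):

```python
def resolve_time_step_nbr(kwargs, final_time, init_time=0.0):
    """Hypothetical sketch: validate time inputs before any division."""
    total_time = final_time - init_time
    if total_time <= 0:
        raise ValueError(f"total_time must be positive, got {total_time}")
    if "time_step" in kwargs and kwargs["time_step"] <= 0:
        raise ValueError("time_step must be > 0")
    if "time_step_nbr" in kwargs and kwargs["time_step_nbr"] <= 0:
        raise ValueError("time_step_nbr must be > 0")
    if "time_step" in kwargs:
        # final_and_dt-style branch: derive the step count, never divide by zero
        nbr = int(total_time / kwargs["time_step"])
        if nbr < 1:
            raise ValueError("time_step is too large for the total_time")
        return nbr
    return kwargs["time_step_nbr"]
```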
In `@src/amr/resources_manager/amr_utils.hpp`:
- Around line 309-339: The file uses std::max inside the second onLevels
overload (the one taking onLevel and orMissing) but does not include
<algorithm>; add a direct include of <algorithm> at the top of
src/amr/resources_manager/amr_utils.hpp so std::max is available portably and to
avoid relying on transitive includes.
In `@src/core/data/ndarray/ndarray_vector.hpp`:
- Around line 16-66: The idx and at helpers in NdArrayViewer currently use
std::uint32_t and can overflow for large shapes; change all idx overloads and
their intermediate index computations to use std::size_t (return type and local
temporaries) and update the at(...) functions to compute i as std::size_t and
use product(nCells, std::size_t{1}) in the assert and bounds check so
comparisons are done in size_t; ensure data indexing uses that size_t index when
returning data[i]. Reference the template struct NdArrayViewer and the idx(...)
overloads (1/2/3-arg versions) and both at(...) templates when making these
edits.
In `@src/core/utilities/algorithm.hpp`:
- Around line 103-115: The assert in convert_to_fortran_primal that verifies dst
is primal is compiled out in release; replace it with a runtime guard that
checks layout.centering(dst) all-equals QtyCentering::primal and throws a
descriptive exception (e.g., std::invalid_argument or std::logic_error
mentioning convert_to_fortran_primal and dst centering) if the check fails;
likewise enforce the src precondition by validating all(layout.centering(src))
== QtyCentering::primal (using the existing all_primal) and throw if false so
both preconditions are enforced in release builds.
In `@src/diagnostic/detail/vtkh5_type_writer.hpp`:
- Around line 340-359: Rename the misspelled variable frimal to primal in
H5TypeWriter<Writer>::VTKFileWriter::writeTensorField: update the declaration
line that assigns core::convert_to_fortran_primal(...) to use the name primal
and then update all uses (e.g., the ds.select(...).write_raw(frimal[c].data())
call and any other references) to primal[c].data() so the identifier matches the
intended semantic name.
- Line 327: There is a typo: the local variable named "frimal" (assigned from
core::convert_to_fortran_primal(modelView.tmpField(), field, layout)) should be
renamed to "primal" (same typo also present where convert_to_fortran_primal is
used on line ~347); update all references to this variable within
vtkh5_type_writer.hpp (e.g., the assignment and subsequent uses) to use "primal"
so the identifier matches intent and the convert_to_fortran_primal call.
🧹 Nitpick comments (14)
src/core/utilities/box/box.hpp (2)
229-237: Iterator operators deviate from standard semantics.

`operator++()` returns `void` instead of `iterator&`. Standard pre-increment should return a reference to `*this` to allow chaining (e.g., `++it; *it` vs `*++it`).

`operator*()` returns a new tuple by value on every dereference. This is potentially inefficient and differs from typical iterator semantics where `operator*` returns a reference.

If full STL iterator compliance isn't needed and the current usage only involves simple range-for loops, these deviations are acceptable but worth documenting.

♻️ Suggested fix for standard iterator semantics

```diff
-    void operator++()
+    iterator& operator++()
     {
         for_N<N>([&](auto i) { ++std::get<i>(its); });
+        return *this;
     }

-    auto operator*()
+    auto operator*() const
     {
         return for_N<N>([&](auto i) { return *std::get<i>(its); });
     }
```

239-245: Consider adding `operator==` for iterator completeness.

Some STL algorithms and C++20 ranges may expect both equality and inequality operators. Adding `operator==` would improve compatibility.

♻️ Proposed addition

```cpp
auto operator==(iterator const& that) const { return !(*this != that); }
```

pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
343-352: Magic number 18 is brittle for maintenance.

The hardcoded test count requires manual updates whenever tests are added or removed. Consider computing the expected count dynamically or defining it as a constant near the test definitions.

Additionally, if any `_test_*` method fails, subsequent tests won't run. If test isolation is needed for debugging individual failures, consider wrapping each call in a try-except to collect all failures.

♻️ Suggested approach

```diff
     def test_all(self):
         """
         DO NOT RUN MULTIPLE SIMULATIONS!
         """
+        test_methods = [method for method in dir(self) if method.startswith("_test_")]
+        expected_count = 18  # Update when adding/removing tests
-        checks = 0
-        for test in [method for method in dir(self) if method.startswith("_test_")]:
+        for test in test_methods:
             getattr(self, test)()
-            checks += 1
-        self.assertEqual(checks, 18)  # update if you add new tests
+
+        self.assertEqual(
+            len(test_methods),
+            expected_count,
+            f"Expected {expected_count} tests but found {len(test_methods)}. "
+            "Update expected_count if tests were intentionally added/removed.",
+        )
```

src/hdf5/detail/h5/h5_file.hpp (2)
116-125: Pass `chunk` by const reference to avoid copying the vector.

The `chunk` parameter is passed by value (`auto const chunk`), which copies the `std::vector<hsize_t>`. Since it's only read, pass by const reference for efficiency.

♻️ Suggested fix

```diff
     template<typename Type>
-    auto create_chunked_data_set(auto const& path, auto const chunk, auto const& dataspace)
+    auto create_chunked_data_set(auto const& path, auto const& chunk, auto const& dataspace)
     {
```

277-282: Consider making `getDataSet` const for consistency with `exist()`.

Since `h5file_.getDataSet()` is a const method (evidenced by its use in const member functions like `read_data_set` at line 83), this wrapper should also be marked `const`. The method performs only a const delegation without modifying object state.

♻️ Const-correctness improvement

```diff
-    auto getDataSet(std::string const& s)
+    auto getDataSet(std::string const& s) const
     {
         if (!exist(s))
             throw std::runtime_error("Dataset does not exist: " + s);
         return h5file_.getDataSet(s);
     }
```

src/amr/amr_constants.hpp (1)
1-16: Align header guard with full-path convention.

Please include the full path in the guard name (e.g., `PHARE_AMR_AMR_CONSTANTS_HPP`) for consistency with the established convention.

♻️ Proposed change

```diff
-#ifndef PHARE_AMR_CONSTANTS_HPP
-#define PHARE_AMR_CONSTANTS_HPP
+#ifndef PHARE_AMR_AMR_CONSTANTS_HPP
+#define PHARE_AMR_AMR_CONSTANTS_HPP
@@
-#endif // PHARE_AMR_CONSTANTS_HPP
+#endif // PHARE_AMR_AMR_CONSTANTS_HPP
```

Based on learnings, prefer full-path header guard names.

tests/simulator/test_vtk_diagnostics.py (1)
tests/simulator/test_vtk_diagnostics.py (1)
169-182: Consider removing unused**kwargsparameter.The
_runmethod accepts**kwargsbut never uses it. If this is intended for future extensibility, consider documenting that intent; otherwise, remove it to avoid confusion.♻️ Proposed fix
- def _run(self, ndim, interp, simInput, diag_dir="", **kwargs): + def _run(self, ndim, interp, simInput, diag_dir=""):pyphare/pyphare/pharesee/phare_vtk/base.py (1)
76-78: Avoid function call in default argument.

`phases=_phases()` is evaluated once at function definition time. Use `None` as the default and resolve inside the function body.

♻️ Proposed fix

```diff
-    def __init__(self, filename, time=None, array_name="data", phases=_phases()):
-        if len(phases) == 0:
+    def __init__(self, filename, time=None, array_name="data", phases=None):
+        if phases is None:
+            phases = self._phases()
+        if len(phases) == 0:
```

src/diagnostic/detail/vtkh5_writer.hpp (3)
92-96: Potential `std::out_of_range` exception if type is not in registry.

`typeWriters_.at(type)` will throw if the requested type is not registered. While the internal `typeWriters_` map has entries for known types including a `NullTypeWriter` fallback, a caller using `getDiagnosticWriterForType` with an unknown type will get an exception.

Consider using `find()` with a fallback or documenting that only registered types are valid:

Suggested defensive approach

```diff
     template<typename String>
     auto getDiagnosticWriterForType(String& type)
     {
-        return typeWriters_.at(type);
+        auto it = typeWriters_.find(type);
+        if (it == typeWriters_.end())
+            return typeWriters_.at("info"); // fallback to NullTypeWriter
+        return it->second;
     }
```

119-120: Public mutable state breaks encapsulation.

`minLevel` and `maxLevel` are public and mutable, allowing external code to modify internal state. This is used by `dump_level` to temporarily adjust the range.

Consider making these private with accessor methods, or using a scoped guard pattern in `dump_level`:

RAII guard approach for dump_level

```diff
+private:
+    std::size_t minLevel_ = 0, maxLevel_ = modelView_.maxLevel();
+
+public:
+    auto minLevel() const { return minLevel_; }
+    auto maxLevel() const { return maxLevel_; }
```

Then in `dump_level`, use a simple scope guard to restore values.

194-209: State restoration is not exception-safe.

If `this->dump(diagnostics, timestamp)` throws, `minLevel` and `maxLevel` will not be restored to their original values. Consider using an RAII scope guard for exception safety.

Exception-safe approach

```diff
 void H5Writer<ModelView>::dump_level(std::size_t level,
                                      std::vector<DiagnosticProperties*> const& diagnostics,
                                      double timestamp)
 {
-    std::size_t _minLevel = this->minLevel;
-    std::size_t _maxLevel = this->maxLevel;
-
-    this->minLevel = level;
-    this->maxLevel = level;
-
-    this->dump(diagnostics, timestamp);
-
-    this->minLevel = _minLevel;
-    this->maxLevel = _maxLevel;
+    auto restore = core::scope_guard([&, oldMin = minLevel, oldMax = maxLevel] {
+        minLevel = oldMin;
+        maxLevel = oldMax;
+    });
+    minLevel = level;
+    maxLevel = level;
+    this->dump(diagnostics, timestamp);
 }
```

src/diagnostic/detail/vtkh5_type_writer.hpp (2)
67-78: Redundant resize inside loop.

`level_rank_data_size[ilvl].resize(core::mpi::size())` is called inside the loop for every `i`, but it always resizes to the same size. Move this outside the loop for clarity and minor efficiency.

Suggested fix

```diff
+    data.level_rank_data_size[ilvl].resize(core::mpi::size());
     for (std::size_t i = 0; i < data.level_boxes_per_rank[ilvl].size(); ++i)
     {
-        data.level_rank_data_size[ilvl].resize(core::mpi::size());
-        data.level_rank_data_size[ilvl][i] = 0;
```

34-37: Consider named constants or formula for dimension multiplier.

The magic array `{4, 2, 1}` represents `2^(3-dim)` for padding lower dimensions to 3D. While the comment explains the intent, a formula might be clearer:

```cpp
constexpr static auto X_TIMES = 1 << (3 - dim); // 2^(3-dim)
```

This makes the mathematical relationship explicit.

31-43: Add a comment documenting that `HierarchyData` assumes single-threaded execution per MPI rank.

The Meyer's singleton pattern correctly ensures thread-safe initialization, but the mutable state accessed via `reset()` is unsynchronized. While no multi-threaded constructs are currently present in the codebase, adding a brief comment clarifying the single-threaded assumption would help prevent accidental introduction of data races if threading is added in the future.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/core/utilities/box/box.hpp`:
- Around line 229-232: The pre-increment operator operator++() currently returns
void; change its signature to return iterator& (or the actual iterator class
name& if different) and after calling for_N<N>(...) that increments each
element, return *this so it follows standard iterator pre-increment semantics;
locate operator++() in the iterator implementation in box.hpp and update the
signature and add the return statement.
In `@src/diagnostic/detail/vtk_types/fluid.hpp`:
- Around line 173-199: The HybridFluidWriter::operator() currently returns
silently when no isActiveDiag match is found; update it to emit a clear fallback
(e.g., log a warning or assert) so unmatched diagnostics are visible. After the
existing ion-level checks and after the population loop (inside the else branch
of FluidDiagnosticWriter<H5Writer>::HybridFluidWriter::operator()), detect that
no writeField/writeTensorField was invoked for the requested diagnostic and call
the writer's logger (or assert in debug) with the diagnostic identifier (use
diagnostic, tree, and pop.name() to build the path) so something like
"/ions/…/quantity" is reported; keep the existing behavior otherwise. Ensure you
reference HybridFluidWriter::operator(), isActiveDiag, file_writer, ions/pop,
and diagnostic when implementing the fallback.
In `@src/diagnostic/detail/vtkh5_type_writer.hpp`:
- Around line 269-287: The constructor code in VTKFileInitializer performs
unsynchronized per-rank metadata RMW on the NSteps attribute and resizes the
Steps/Values dataset (via h5file, steps_group, NSteps, and ds) which breaks
parallel HDF5; change to a single-writer pattern: have only rank 0 perform
creation of the dataset and the NSteps attribute and its increment (use
steps_group.createAttribute / steps_attr.write and ds.resize()/ds.select().write
only on rank 0), then call core::mpi::barrier() so all ranks see the updates
before any rank reads timestamp or writes per-rank step values (the per-rank
write of timestamp via typewriter->h5Writer_.timestamp() and
ds.select(...).write(...) can remain non-metadata writes on each rank after the
barrier), or alternatively implement a collective update where all ranks agree
on the new NSteps and call the HDF5 metadata operations collectively.
🧹 Nitpick comments (13)
pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
343-352: Consider using `subTest` for better test isolation and reporting.

The current implementation stops on the first failure, preventing subsequent tests from running. Using `self.subTest()` would allow all tests to execute and report failures individually.

♻️ Optional: Use subTest for better isolation

```diff
     def test_all(self):
         """
         DO NOT RUN MULTIPLE SIMULATIONS!
         """
         checks = 0
         for test in [method for method in dir(self) if method.startswith("_test_")]:
-            getattr(self, test)()
-            checks += 1
+            with self.subTest(test=test):
+                getattr(self, test)()
+                checks += 1
         self.assertEqual(checks, 18)  # update if you add new tests
```

src/core/utilities/box/box.hpp (1)
209-259: Consider documenting or validating that all boxes must have the same size.

`boxes_iterator` performs parallel iteration over multiple boxes, but there's no compile-time or runtime check that all boxes have equal dimensions. If boxes have different sizes, the iterators will reach their ends at different times, leading to undefined behavior in `operator!=` and `operator*`.

If this is intentional for performance-critical code, a brief comment documenting the precondition would help maintainability.

pyphare/pyphare/pharein/simulation.py (1)
pyphare/pyphare/pharein/simulation.py (1)
734-737: Consider using `warnings.warn` instead of `print`.

Using Python's `warnings` module allows users to control warning behavior (filter, raise as errors for strict mode, etc.) and integrates better with logging frameworks.

💡 Suggested change

```diff
-    if "max_nbr_levels" not in kwargs:
-        print("WARNING, 'max_nbr_levels' is not set, defaulting to 1")
-    kwargs["max_nbr_levels"] = kwargs.get("max_nbr_levels", 1)
+    if "max_nbr_levels" not in kwargs:
+        import warnings
+        warnings.warn("'max_nbr_levels' is not set, defaulting to 1", UserWarning)
+        kwargs["max_nbr_levels"] = 1
+    else:
+        kwargs["max_nbr_levels"] = kwargs["max_nbr_levels"]
```

pyphare/pyphare/pharesee/phare_vtk/base.py (1)
74-76: Avoid function call in default argument.

`phases=_phases()` evaluates at function definition time, not call time. If `_phases()` returned a mutable object, this would be a classic mutable default argument bug. Use `None` and resolve inside the function.

💡 Proposed fix

```diff
-    def __init__(self, filename, time=None, array_name="data", phases=_phases()):
-        if len(phases) == 0:
+    def __init__(self, filename, time=None, array_name="data", phases=None):
+        if phases is None:
+            phases = self._phases()
+        if len(phases) == 0:
             raise RuntimeError("Error: Zero phases!")
```

src/amr/amr_constants.hpp (1)
1-2: Header guard should include full path segments.

Preferred pattern is `PHARE_[PATH_WITH_UNDERSCORES]_HPP`; for this file that would be `PHARE_AMR_AMR_CONSTANTS_HPP`. Based on learnings, please keep header guards aligned with the full path convention.

♻️ Proposed guard rename

```diff
-#ifndef PHARE_AMR_CONSTANTS_HPP
-#define PHARE_AMR_CONSTANTS_HPP
+#ifndef PHARE_AMR_AMR_CONSTANTS_HPP
+#define PHARE_AMR_AMR_CONSTANTS_HPP
 ...
-#endif // PHARE_AMR_CONSTANTS_HPP
+#endif // PHARE_AMR_AMR_CONSTANTS_HPP
```

Also applies to: 16-16
src/diagnostic/detail/vtk_types/electromag.hpp (1)
46-48: Simplify map initialization.

The pattern `mem.count() == 0` followed by `try_emplace` and then immediate `mem[key]` access is slightly redundant. Consider using `try_emplace` directly and using the returned iterator.

Proposed simplification

```diff
-    if (mem.count(diagnostic.quantity) == 0)
-        mem.try_emplace(diagnostic.quantity);
-    auto& info = mem[diagnostic.quantity];
+    auto& info = mem.try_emplace(diagnostic.quantity).first->second;
```

tests/simulator/test_vtk_diagnostics.py (3)
38-39: Rename ambiguous variable `l` to improve readability.

The variable `l` can be easily confused with `1` or `I`. Consider renaming to something more descriptive like `length_scale` or `layer_width`.

Proposed fix

```diff
-    def S(y, y0, l):
-        return 0.5 * (1.0 + np.tanh((y - y0) / l))
+    def S(y, y0, layer_width):
+        return 0.5 * (1.0 + np.tanh((y - y0) / layer_width))
```
79-83: Consider using a conditional check instead of assert for temperature validation.

The `assert` statement may be stripped when Python runs with optimizations (`-O` flag), which could allow invalid temperature values to pass silently. For physics validation, consider using an explicit check with a descriptive error.

Proposed fix

```diff
     def T(x, y):
         K = 0.7
         temp = 1.0 / density(x, y) * (K - b2(x, y) * 0.5)
-        assert np.all(temp > 0)
+        if not np.all(temp > 0):
+            raise ValueError("Temperature must be positive everywhere")
         return temp
```
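The `-O` behavior referenced above can be verified directly (a standalone sketch, unrelated to the test file itself):

```python
import subprocess
import sys

# Under `python -O`, assert statements are compiled out entirely, so an
# assert-based physics check silently disappears; an explicit raise does not.
snippet = "assert 1 == 2, 'invalid temperature'"

normal = subprocess.run([sys.executable, "-c", snippet])
optimized = subprocess.run([sys.executable, "-O", "-c", snippet])

assert normal.returncode != 0     # AssertionError -> non-zero exit
assert optimized.returncode == 0  # assert stripped -> runs to completion
```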
169-169: Remove unused `kwargs` parameter.

The `kwargs` parameter is captured but never used. Consider removing it or document its intended purpose if it's for future use.

Proposed fix

```diff
-    def _run(self, ndim, interp, simInput, diag_dir="", **kwargs):
+    def _run(self, ndim, interp, simInput, diag_dir=""):
```

src/diagnostic/detail/vtk_types/fluid.hpp (3)
33-33: Extra semicolon after empty function body.

The semicolon after the closing brace is redundant (though harmless).

Proposed fix

```diff
-    void compute(DiagnosticProperties&) override {};
+    void compute(DiagnosticProperties&) override {}
```

137-139: Simplify map initialization (same pattern as electromag.hpp).

Proposed simplification

```diff
-    if (mem.count(diagnostic.quantity) == 0)
-        mem.try_emplace(diagnostic.quantity);
-    auto& info = mem[diagnostic.quantity];
+    auto& info = mem.try_emplace(diagnostic.quantity).first->second;
```

119-124: MHD support is stubbed out.

`MhdFluidInitializer` returns `nullopt` and `MhdFluidWriter` throws "not implemented". This is acceptable for initial implementation, but consider adding a TODO comment or logging a warning if MHD diagnostics are requested to help future developers.

Proposed improvement for MhdFluidWriter

```diff
 template<typename H5Writer>
 void FluidDiagnosticWriter<H5Writer>::MhdFluidWriter::operator()(auto const& layout)
 {
-    throw std::runtime_error("not implemented");
+    throw std::runtime_error("MHD fluid diagnostics not yet implemented");
 }
```

Also applies to: 203-207
src/diagnostic/detail/vtkh5_type_writer.hpp (1)
1-2: Align header guard naming with the full-path pattern. Given the file path `src/diagnostic/detail/vtkh5_type_writer.hpp`, the guard can match the full path more closely (e.g., `PHARE_DIAGNOSTIC_DETAIL_VTKH5_TYPE_WRITER_HPP`).

♻️ Suggested rename

-#ifndef PHARE_DIAGNOSTIC_DETAIL_VTK_H5_TYPE_WRITER_HPP
-#define PHARE_DIAGNOSTIC_DETAIL_VTK_H5_TYPE_WRITER_HPP
+#ifndef PHARE_DIAGNOSTIC_DETAIL_VTKH5_TYPE_WRITER_HPP
+#define PHARE_DIAGNOSTIC_DETAIL_VTKH5_TYPE_WRITER_HPP
 ...
-#endif // PHARE_DIAGNOSTIC_DETAIL_VTK_H5_TYPE_WRITER_HPP
+#endif // PHARE_DIAGNOSTIC_DETAIL_VTKH5_TYPE_WRITER_HPP

Based on learnings, PhilipDeegan prefers header guards that map directly to the full path with underscores.
Also applies to: 466-466
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharesee/phare_vtk/base.py`:
- Line 98: The code assigns self.spacing from
self.reader.GetOutput().GetDataSet(0, 0).GetSpacing() without checking for None;
update the logic in the constructor or initializer that contains this line
(referencing self.reader.GetOutput() and GetDataSet) to first retrieve ds =
self.reader.GetOutput().GetDataSet(0, 0), check if ds is None, and handle that
case (either raise a clear exception with context or set a safe default for
self.spacing) before calling ds.GetSpacing(); ensure the replacement preserves
existing behavior when ds is non-None and includes an informative error message
or fallback so AttributeError cannot occur.
In `@src/amr/wrappers/hierarchy.hpp`:
- Around line 218-225: The validation for max_nbr_levels is off-by-one: after
computing maxLevel_ = max_nbr_levels - 1 the code uses "if (maxLevel_ >=
MAX_LEVEL)" which rejects a valid maxLevel_ equal to MAX_LEVEL; change the check
in the hierarchy constructor/initializer to "if (maxLevel_ > MAX_LEVEL)" so
allowed range matches the error text and intended maximum, keeping the same
error message thrown (std::runtime_error) and referencing max_nbr_levels,
maxLevel_, and MAX_LEVEL to locate the fix.
In `@src/diagnostic/detail/h5writer.hpp`:
- Around line 165-170: The class stores maxLevel initialized from
modelView_.maxLevel(), which can go stale if the hierarchy changes; update
H5Writer to either stop caching maxLevel (remove the member and use
modelView_.maxLevel() wherever needed) or explicitly refresh maxLevel at the
start of dump() (or any public write method) by assigning maxLevel =
modelView_.maxLevel(); locate usages via the ModelView member modelView_, the
minLevel/maxLevel members, the constructor(s) that set maxLevel, and the dump()
method to apply the change and ensure the writer sees newly added levels.
In `@src/diagnostic/detail/vtk_types/fluid.hpp`:
- Around line 203-207: The MHD writer currently throws in
FluidDiagnosticWriter<H5Writer>::MhdFluidWriter::operator(), which can occur
after partial initialization; instead, fail fast by moving the runtime error
into the MhdFluidWriter::setup() (or prevent registering/constructing
MhdFluidWriter) so the exception is raised during setup/registration; update the
code path that constructs/registers MhdFluidWriter so it either throws in
setup() with the same "not implemented" message or skips registration when MHD
is unsupported, ensuring operator() is never reached for an unimplemented
writer.
In `@src/diagnostic/detail/vtkh5_writer.hpp`:
- Around line 195-208: The dump_level method mutates
this->minLevel/this->maxLevel and currently restores them only after calling
this->dump, leaving state inconsistent if dump() throws; make it exception-safe
by saving the original values and restoring them via an RAII scope guard (or
try/finally pattern) so restoration always runs even on exceptions—update
H5Writer<ModelView>::dump_level to capture _minLevel/_maxLevel, install a guard
that resets this->minLevel/this->maxLevel on scope exit, then call
this->dump(diagnostics, timestamp).
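The save-and-restore discipline asked for above can be sketched in Python with try/finally, which mirrors what an RAII scope guard provides in C++ (the `Writer` class here is a hypothetical stand-in for `H5Writer`):

```python
class Writer:
    """Hypothetical stand-in for H5Writer: dump_level must restore
    minLevel/maxLevel even when dump() raises."""

    def __init__(self):
        self.minLevel, self.maxLevel = 0, 10

    def dump(self, fail=False):
        if fail:
            raise RuntimeError("dump failed")

    def dump_level(self, level, fail=False):
        saved = (self.minLevel, self.maxLevel)
        self.minLevel = self.maxLevel = level
        try:
            self.dump(fail=fail)
        finally:
            # always runs, like a C++ scope guard on scope exit
            self.minLevel, self.maxLevel = saved
```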
🧹 Nitpick comments (12)
pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
343-352: Use subTest to keep failures isolated while still running once.
This keeps the “single simulation” intent while reporting all check failures instead of stopping at the first one.

♻️ Proposed refactor

-        checks = 0
-        for test in [method for method in dir(self) if method.startswith("_test_")]:
-            getattr(self, test)()
-            checks += 1
-        self.assertEqual(checks, 18)  # update if you add new tests
+        tests = [m for m in dir(self) if m.startswith("_test_")]
+        for name in tests:
+            with self.subTest(name=name):
+                getattr(self, name)()
+        self.assertEqual(len(tests), 18)  # update if you add new tests

pyphare/pyphare/pharein/simulation.py (1)
734-737: LGTM! Replacing the assertion with a warning and default value improves usability. Users are notified via `print()` while still getting sensible default behavior (single level, no refinement). Consider using `warnings.warn()` instead of `print()` for proper warning semantics that can be filtered/captured by test frameworks.

pyphare/pyphare/pharesee/phare_vtk/base.py (1)
74-74: Mutable default argument evaluated at definition time. The static analysis flags `phases=_phases()` being evaluated at function definition. While `_phases()` returns a new list each time (avoiding the mutable default pitfall), the default is still computed once. Consider:

♻️ Suggested fix

-    def __init__(self, filename, time=None, array_name="data", phases=_phases()):
-        if len(phases) == 0:
+    def __init__(self, filename, time=None, array_name="data", phases=None):
+        if phases is None:
+            phases = self._phases()
+        if len(phases) == 0:
             raise RuntimeError("Error: Zero phases!")
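The pitfall the suggested `phases=None` default avoids can be demonstrated in isolation: a default expression is evaluated once, at definition time, and then shared across calls (`_phases`, `bad`, and `good` below are illustrative names):

```python
def _phases():
    return ["phase0"]

# Default computed once, at definition time, and shared by every call.
def bad(phases=_phases()):
    phases.append("extra")
    return phases

# Default of None, resolved per call: each caller gets a fresh list.
def good(phases=None):
    if phases is None:
        phases = _phases()
    phases.append("extra")
    return phases
```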
26-27: Unused interactor object. The `iren` interactor is created but never used. For offscreen rendering workflows, an interactor is not required. Consider removing these lines to avoid confusion.

Proposed fix

-    iren = vtk.vtkRenderWindowInteractor()
-    iren.SetRenderWindow(ren_win)
-    ren_win.Render()
10-38: Consider adding basic error handling and a return value. The function silently proceeds even if the input file is invalid or rendering fails. Returning the output path (or a success indicator) would improve usability, and validating that `vtk_file.geom.GetOutput()` contains data before accessing `GetPointData()` would prevent cryptic VTK errors.

src/core/data/ndarray/ndarray_vector.hpp (1)
19-30: Potential silent truncation when `Index` is a signed or larger type. The `idx` function accepts `Indexes<Index, dim>` where `Index` can be any type, but the values are passed to overloads expecting `std::uint32_t`. If `Index` is signed (e.g., `int`) and a caller passes a negative value, it will wrap to a large unsigned value, potentially causing out-of-bounds access that the subsequent `assert` may not catch reliably (since the wrapped value could still be less than `product(nCells)`).

Consider adding a `static_assert` or explicit check that `Index` is unsigned, or document that negative indices are undefined behavior.
src/hdf5/detail/h5/h5_file.hpp (1)
127-134: Consider documenting the fixed column count for resizable 2D datasets. The template parameter `cols` fixes the second dimension while allowing the first dimension to grow. The default `y_chunk = 1` means each chunk spans one column, which may impact write performance for row-oriented access. This is likely intentional for time-series data but could benefit from a brief inline comment.

src/diagnostic/detail/vtk_types/electromag.hpp (1)
46-49: Size `offset_per_level` to the active max level.
`offset_per_level` is fixed-size, while `onLevels` indexes by level number. Resizing to `modelView.maxLevel()+1` keeps capacity aligned with the hierarchy and avoids relying on `amr::MAX_LEVEL` semantics.

♻️ Suggested adjustment

         if (mem.count(diagnostic.quantity) == 0)
             mem.try_emplace(diagnostic.quantity);
         auto& info = mem[diagnostic.quantity];
+        info.offset_per_level.resize(modelView.maxLevel() + 1);

tests/simulator/test_vtk_diagnostics.py (1)
27-134: Consider lowering the default `ppc` to keep the test lightweight. The diagnostic output doesn’t need a large particle count; dropping the default keeps CI runtime/memory lower without reducing coverage.

♻️ Suggested tweak

-def setup_model(sim, ppc=100):
+def setup_model(sim, ppc=20):

src/diagnostic/detail/vtk_types/fluid.hpp (1)
36-39: Resize `offset_per_level` to match the active max level. Using a fixed `amr::MAX_LEVEL` assumes a particular interpretation of that constant and can under/over-allocate. Resizing from `modelView.maxLevel()+1` avoids that assumption and keeps the vector aligned with actual levels.

♻️ Suggested adjustment

         if (mem.count(diagnostic.quantity) == 0)
             mem.try_emplace(diagnostic.quantity);
         auto& info = mem[diagnostic.quantity];
+        info.offset_per_level.resize(modelView.maxLevel() + 1);

src/diagnostic/detail/vtkh5_type_writer.hpp (1)
67-78: Avoid repeated resizing inside the per-rank loop.
`level_rank_data_size[ilvl].resize(...)` is executed once per rank, which can trigger repeated reallocations. Pre-size once to keep the reset path tight.

♻️ Suggested refactor

-    for (std::size_t i = 0; i < data.level_boxes_per_rank[ilvl].size(); ++i)
-    {
-        data.level_rank_data_size[ilvl].resize(core::mpi::size());
-
-        data.level_rank_data_size[ilvl][i] = 0;
+    auto const ranks = data.level_boxes_per_rank[ilvl].size();
+    data.level_rank_data_size[ilvl].assign(ranks, 0);
+    for (std::size_t i = 0; i < ranks; ++i)
+    {
         for (auto box : data.level_boxes_per_rank[ilvl][i])
         {

Based on learnings, keeping this loop minimal/efficient aligns with prior feedback.
src/diagnostic/detail/vtkh5_writer.hpp (1)
99-103: Avoid potential key collisions in `file_flags`. Concatenating `type + quantity` can collide (e.g., `"ab"+"c"` vs `"a"+"bc"`). Consider a delimiter or a small helper to build a stable composite key.

♻️ Suggested refactor

-    return std::make_unique<HighFiveFile>(filePath_ + "/" + fileString(diagnostic.quantity),
-                                          file_flags[diagnostic.type + diagnostic.quantity]);
+    auto const key = diagnostic.type + ":" + diagnostic.quantity;
+    return std::make_unique<HighFiveFile>(filePath_ + "/" + fileString(diagnostic.quantity),
+                                          file_flags[key]);
 ...
-    if (!file_flags.count(diagnostic->type + diagnostic->quantity))
-        file_flags[diagnostic->type + diagnostic->quantity] = this->flags;
+    auto const key = diagnostic->type + ":" + diagnostic->quantity;
+    if (!file_flags.count(key))
+        file_flags[key] = this->flags;
 ...
-    file_flags[diagnostic->type + diagnostic->quantity] = READ_WRITE;
+    file_flags[key] = READ_WRITE;

Also applies to: 176-191
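The collision mentioned above is easy to reproduce with plain string keys; a tiny sketch (function names are illustrative):

```python
def unkeyed(type_, quantity):
    # Plain concatenation: distinct (type, quantity) pairs can collide.
    return type_ + quantity

def keyed(type_, quantity):
    # A delimiter keeps the composite key unambiguous.
    return type_ + ":" + quantity
```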
struct AMRLocalIndexer //

moved to box.hpp for reuse
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharein/simulation.py`:
- Around line 171-182: The code computing time_step_nbr and time_step in the
final_and_dt branch can produce time_step_nbr == 0 when total_time <
kwargs["time_step"], causing a ZeroDivisionError; update the logic in the block
that sets time_step_nbr/time_step (symbols: final_and_dt, total_time,
kwargs["time_step"], time_step_nbr, time_step) to guard against zero by forcing
a minimum of 1 (e.g., time_step_nbr = max(1, int(total_time /
kwargs["time_step"]))) or by detecting zero and setting time_step_nbr = 1 before
recomputing time_step; make the same change in the analogous section mentioned
(lines 183-186) so time_step is never recomputed with a zero divisor.
In `@pyphare/pyphare/pharesee/phare_vtk/base.py`:
- Around line 84-86: The timestep selection treats 0 as falsy; change the
conditional in the constructor where self.times = all_times_in(self.reader) and
self.reader.UpdateTimeStep(time if time else self.times[-1]) is called to
explicitly check for None (e.g., use "if time is not None") so that a requested
time of 0 is respected, otherwise fall back to self.times[-1]; update the
expression around self.reader.UpdateTimeStep to use that explicit None check.
- Around line 28-36: The poly_data_mapper function calls GetRange() on
GetArray(array_name) without checking for None; update poly_data_mapper to guard
the scalar array retrieval from output.GetOutput().GetPointData() by first
checking HasArray(array_name) or assigning the result of GetArray(array_name) to
a variable and verifying it is not None before calling GetRange(); if the array
is missing, avoid calling SetScalarRange (or provide a sensible default range)
so mapper.SelectColorArray and SetScalarModeToUsePointData remain safe.
- Around line 69-76: The class crashes because the default argument
phases=_phases() attempts to call the `@staticmethod` object at class-definition
time (broken on Python 3.8/3.9) and also uses a mutable list as a shared
default; change __init__ to use phases=None and inside __init__ set phases =
list(self._phases()) (or list(VtkFile._phases()) ) when phases is None to call
the static method at instance time and make a fresh list per instance; update
the __init__ signature (def __init__(self, filename, time=None,
array_name="data", phases=None)) and keep the existing VtkFile._phases()
implementation.
In `@src/amr/resources_manager/amr_utils.hpp`:
- Around line 311-341: The overload onLevels(auto& hierarchy, auto&& onLevel,
auto&& orMissing, std::size_t const minlvl, std::size_t const maxlvl) ignores
maxlvl when iterating existing levels; restrict the first loop to not exceed
maxlvl and compute an inclusive max index: derive hier_levels_index =
hierarchy.getNumberOfLevels() ? hierarchy.getNumberOfLevels() - 1 : 0 (or handle
empty case), then set max = std::min(hier_levels_index, maxlvl) and loop ilvl
from minlvl to max calling onLevel(*hierarchy.getPatchLevel(ilvl)); afterwards
call orMissing for ilvl from std::max(minlvl, max + 1) up to maxlvl. Ensure you
guard/getPatchLevel only for valid indices.
- Around line 287-308: The second overload of boxesPerRankOn is forwarding
hierarchy.getPatchLevel(ilvl) (a pointer-like handle) to boxesPerRankOn<dim>
which expects a reference and calls .getProcessorMapping()/.getBoxes();
dereference the handle before forwarding (i.e., pass the object referred to by
getPatchLevel) so boxesPerRankOn<dim> receives a concrete level reference and
compilation succeeds.
In `@src/diagnostic/detail/vtk_types/fluid.hpp`:
- Around line 119-124: The MHD initializer currently returns std::nullopt
causing writes to throw later; update
FluidDiagnosticWriter<H5Writer>::MhdFluidInitializer::operator() to fail fast by
throwing a clear std::runtime_error (e.g. "MHD diagnostics not implemented") so
setup is deterministic, and make the same change for the other MHD initializer
overloads/variants referenced in this file (the other Mhd* initializer(s) around
the same section) to ensure all MHD paths error early rather than returning
std::nullopt.
- Around line 1-14: The header is missing the <vector> include even though the
Info type uses std::vector, which makes the header non‑self‑contained; add
`#include` <vector> near the other standard headers (alongside <string>,
<optional>, <unordered_map>) so the Info declaration compiles without relying on
transitive includes and update any forward declarations if necessary.
In `@src/diagnostic/detail/vtkh5_type_writer.hpp`:
- Around line 269-278: The NSteps attribute update in
VTKFileInitializer::VTKFileInitializer uses a non-atomic read-modify-write on
steps_group.getAttribute("NSteps"), which can cause lost updates under MPI;
change the logic so only one safe collective update occurs (e.g., have rank 0
perform the read-modify-write and then broadcast the new value to all ranks, or
use an HDF5 collective attribute update if available) — locate the block that
creates/reads/writes NSteps on steps_group (the createAttribute<int>("NSteps",
...) and steps_attr.read<int>() + 1 sequence) and modify it so the increment
happens atomically by restricting the write to a single rank or using a
collective HDF5 operation, ensuring all ranks see the consistent value
afterward.
In `@tests/simulator/test_vtk_diagnostics.py`:
- Around line 3-8: The test writes PNG outputs from all MPI ranks causing
clashes; modify the test to only perform plotting/writing on rank 0 by checking
MPI.COMM_WORLD.Get_rank() (or the test's existing MPI communicator) before any
calls that produce B{ndim}d.vtk.png / E{ndim}d.vtk.png, and ensure the files are
written into the test's local_out directory rather than the shared path; apply
the same guard around the other plotting/writing region referenced (lines
~153-158) so only rank 0 writes to local_out.
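Two of the guards requested in the fix list above (the `max(1, ...)` floor on `time_step_nbr`, and the explicit `is not None` check so a requested time of 0 is respected) can be sketched together; the function names here are illustrative stand-ins for the simulation.py and base.py logic:

```python
def derive_time_steps(total_time, time_step):
    # Floor at 1 so total_time < time_step cannot yield
    # time_step_nbr == 0 and a ZeroDivisionError on recompute.
    time_step_nbr = max(1, int(total_time / time_step))
    return time_step_nbr, total_time / time_step_nbr

def select_time(time, times):
    # `time if time else ...` would treat a requested time of 0
    # as "not given"; the explicit None check respects it.
    return time if time is not None else times[-1]
```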
🧹 Nitpick comments (5)
src/core/utilities/box/box.hpp (1)
208-259: `boxes_iterator` stores copies of boxes rather than references. The constructor uses `std::forward_as_tuple(boxes...)`, which creates a tuple of references, but it's assigned to `std::tuple<Boxes...> boxes` (line 258), which stores copies. If `Boxes` types are `const&` qualified from the deduction, this works correctly. However, if used with large boxes, or if the intent is to iterate over existing boxes without copying, consider using `std::tuple<Boxes const&...>` explicitly for the member type.

This may be intentional for safety (avoiding dangling references), but worth verifying the design intent.
pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
343-352: Hardcoded test count is fragile. The assertion `self.assertEqual(checks, 18)` will silently pass even if a new `_test_*` method is added but fails to run due to naming issues. Consider computing the expected count dynamically or removing this assertion:

♻️ Suggested improvement

     def test_all(self):
         """
         DO NOT RUN MULTIPLE SIMULATIONS!
         """
-
-        checks = 0
+        tests = [method for method in dir(self) if method.startswith("_test_")]
+        self.assertGreater(len(tests), 0, "No _test_ methods found")
-        for test in [method for method in dir(self) if method.startswith("_test_")]:
+        for test in tests:
             getattr(self, test)()
-            checks += 1
-        self.assertEqual(checks, 18)  # update if you add new tests

Alternatively, if the count is intentional as a safeguard against accidental test removal, document this more explicitly.
src/core/utilities/algorithm.hpp (1)
102-135: Remove the line 117 dereference concern; consider adding ghost box validation, but it's optional. The `*lcl_box.shape()` dereference is correct and intentional. `shape()` returns a `Point`, which provides `operator*()` to conveniently access its underlying `std::array`, exactly what `make_array_view` expects. This is a clean design pattern used elsewhere in the codebase.

The ghost box limitation is valid: the function explicitly notes "DOES NOT WORK ON GHOST BOX!" and uses `AMRBoxFor()` (physical domain) rather than `AMRGhostBoxFor()`. Adding a runtime assertion to detect and fail early if called with ghost box data would be helpful, though it requires comparing boxes at runtime. Alternatively, strengthen the documentation to make the physical-domain-only requirement explicit.

src/hdf5/detail/h5/h5_file.hpp (1)
116-125: Consider verifying dataset properties when returning existing datasets. The pattern of returning an existing dataset without verifying its properties (chunk size, dataspace dimensions) could lead to subtle bugs if a dataset was created with different settings in a previous run.
For diagnostic/debug builds, consider adding assertions to verify the returned dataset matches expected properties.
src/diagnostic/detail/vtkh5_type_writer.hpp (1)
67-78: Move resize outside the inner loop.
`level_rank_data_size[ilvl].resize(core::mpi::size())` is called inside the loop over `level_boxes_per_rank[ilvl]`, but it always resizes to the same value. This should be moved before the loop to avoid redundant resize calls.

♻️ Proposed fix

+    data.level_rank_data_size[ilvl].resize(core::mpi::size());
     for (std::size_t i = 0; i < data.level_boxes_per_rank[ilvl].size(); ++i)
     {
-        data.level_rank_data_size[ilvl].resize(core::mpi::size());
-
         data.level_rank_data_size[ilvl][i] = 0;
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/simulator/test_vtk_diagnostics.py`:
- Around line 155-156: The PNG outputs are being written to the current working
directory because plot_vtk is called with only a filename; update the calls to
plot_vtk in test_vtk_diagnostics.py (the two calls to plot_vtk for
"B{ndim}d.vtk.png" and "E{ndim}d.vtk.png") to pass full target paths under
local_out (e.g., join local_out with the filename) so the files are created in
the test-specific output directory rather than cwd; keep the input VTK paths
unchanged.
🧹 Nitpick comments (8)
pyphare/pyphare/pharein/simulation.py (1)
738-740: Prefer `warnings.warn` over `print`.
This keeps warnings filterable/testable and avoids polluting stdout.

🔧 Proposed change

-import os
+import os
+import warnings
 @@
-    if "max_nbr_levels" not in kwargs:
-        print("WARNING, 'max_nbr_levels' is not set, defaulting to 1")
+    if "max_nbr_levels" not in kwargs:
+        warnings.warn(
+            "'max_nbr_levels' is not set; defaulting to 1",
+            RuntimeWarning,
+        )
         kwargs["max_nbr_levels"] = kwargs.get("max_nbr_levels", 1)

pyphare/pyphare/pharesee/phare_vtk/plot.py (2)
26-27: Unused interactor can be removed. The `vtkRenderWindowInteractor` is created but never used since rendering is entirely offscreen. This can be safely removed to eliminate unnecessary object creation.

Proposed cleanup

     ren_win.OffScreenRenderingOn()  # do not flash image on screen
-    iren = vtk.vtkRenderWindowInteractor()
-    iren.SetRenderWindow(ren_win)
-    ren_win.Render()
13-13: Consider parameterizing the scalar name. The scalar name `"data"` is hardcoded. If different VTK files use different scalar names, this function won't work without modification. Consider adding an optional parameter.

Proposed enhancement

-def plot(vtk_file, out_file="vtk.png"):
+def plot(vtk_file, out_file="vtk.png", scalar_name="data"):
     if isinstance(vtk_file, str):
         vtk_file = VtkTensorFieldFile(vtk_file)
-    vtk_file.geom.GetOutput().GetPointData().SetActiveScalars("data")
+    vtk_file.geom.GetOutput().GetPointData().SetActiveScalars(scalar_name)

tests/simulator/test_vtk_diagnostics.py (1)
35-36: Consider renaming ambiguous variable `l` to `length` or `scale`. Ruff flags `l` as ambiguous (E741) because it can be confused with `1` or `I`. In physics code, `L` is common for length scales, but lowercase `l` reduces readability.

Suggested fix

-    def S(x, x0, l):
-        return 0.5 * (1 + np.tanh((x - x0) / l))
+    def S(x, x0, length):
+        return 0.5 * (1 + np.tanh((x - x0) / length))

src/diagnostic/detail/vtk_types/fluid.hpp (2)
34-34: Extraneous semicolon after empty function body. The trailing semicolon after `{}` is syntactically valid but unconventional and could be a typo.

Suggested fix

-    void compute(DiagnosticProperties&) override {};
+    void compute(DiagnosticProperties&) override {}
188-200: Population loop continues after finding a match. Once a matching population diagnostic is found and written, the loop continues iterating through the remaining populations unnecessarily. Consider adding a `break` or `return` after writing.

Proposed fix

     for (auto& pop : ions)
     {
         auto const pop_tree = tree + "pop/" + pop.name() + "/";
         if (isActiveDiag(diagnostic, pop_tree, "density"))
+        {
             file_writer.writeField(pop.particleDensity(), layout);
+            return;
+        }
         else if (isActiveDiag(diagnostic, pop_tree, "charge_density"))
+        {
             file_writer.writeField(pop.chargeDensity(), layout);
+            return;
+        }
         else if (isActiveDiag(diagnostic, pop_tree, "flux"))
+        {
             file_writer.template writeTensorField<1>(pop.flux(), layout);
+            return;
+        }
     }

src/diagnostic/detail/vtkh5_writer.hpp (2)
101-105: Consider validating `diagnostic.quantity` for path safety. The `makeFile` method constructs a file path using `diagnostic.quantity`, which may contain path separators. While `fileString` handles leading slashes and replaces `/` with `_`, consider adding validation for other potentially problematic characters.

121-123: Public mutable state for level range.
`minLevel` and `maxLevel` are public and mutable, which is used by `dump_level` to temporarily narrow the dump scope. While functional, this design exposes internal state. Consider whether a cleaner approach (like passing the level range to dump) would be preferable.
573945e to 8ca1181 (compare)
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharesee/phare_vtk/base.py`:
- Around line 28-37: In poly_data_mapper, guard against
output.GetOutput().GetPointData().GetArray(array_name) returning None by
retrieving the array into a local variable, checking it for None, and either
raising a clear exception (e.g., ValueError with array_name and available
arrays) or using a safe fallback before calling GetRange; update the function
that constructs the vtkPolyDataMapper (poly_data_mapper) to perform this check
and only call array.GetRange(0) when array is not None, then return
PhaseOutput(mapper=mapper) as before.
In `@src/core/data/grid/gridlayout.hpp`:
- Around line 1219-1226: The boxes_iterator is capturing temporaries as dangling
references because boxes{std::make_tuple(boxes...)} in box.hpp stores reference
types when Boxes are const&; update boxes_iterator's constructor (and/or the
boxes wrapper) to store decayed value types instead of references so temporaries
from AMRToLocal(box) don't dangle: change the tuple construction to use
value-decayed copies (e.g. std::make_tuple(std::decay_t<Boxes>(boxes)...) or
otherwise ensure the member tuple type is std::tuple<std::decay_t<Boxes>...>)
and adjust the boxes_iterator constructor signature to accept and move those
decayed values; this will fix amr_lcl_idx(box) returning boxes_iterator{box,
AMRToLocal(box)} safely.
In `@src/diagnostic/detail/vtkh5_type_writer.hpp`:
- Line 82: The lambda currently declared as "[](int const ilvl)" should accept
the same integral type used by amr::onLevels to avoid narrowing; change the
parameter type to std::size_t (or the exact typedef used by amr::onLevels) in
the lambda (the orMissing callback in vtkh5_type_writer.hpp) and update any
local uses/comparisons inside the lambda that assumed signed int to work
correctly with std::size_t.
In `@tests/simulator/test_vtk_diagnostics.py`:
- Around line 39-40: The function S(y, y0, l) uses an ambiguous parameter name
`l`; rename it to a descriptive name like `scale_length` by updating the
function signature to S(y, y0, scale_length) and replacing uses of `l` inside
the function with `scale_length`; also update any callers in the test file that
pass that argument so they use the new parameter name or positional argument
remains correct (ensure references to `S` within tests match the renamed
parameter).
🧹 Nitpick comments (21)
pyphare/pyphare_tests/test_pharesee/test_hierarchy.py (1)
343-352: Hardcoded test count is fragile — consider deriving it dynamically. The magic number 18 must be manually bumped every time a `_test_*` method is added or removed. You could compute the expected count from the list itself and just assert it's non-zero, or keep the hardcoded count but add a comment listing what's included so the next person knows to update it.

♻️ Suggested alternative

         checks = 0
-        for test in [method for method in dir(self) if method.startswith("_test_")]:
+        tests = [method for method in dir(self) if method.startswith("_test_")]
+        for test in tests:
             getattr(self, test)()
             checks += 1
-        self.assertEqual(checks, 18)  # update if you add new tests
+        self.assertGreater(checks, 0, "no _test_ methods found")
+        self.assertEqual(checks, len(tests))

src/hdf5/detail/h5/h5_file.hpp (3)
106-125: Silent short-circuit on existing datasets skips type/shape validation. Both `create_data_set` and `create_chunked_data_set` return the existing dataset when `exist(path)` is true, without verifying that its type or shape matches the requested parameters. If two code paths accidentally target the same HDF5 path with different schemas, this will silently succeed and produce corrupt output.

If this is intentional for the MPI collective-create pattern (where every rank must call create), a brief comment documenting that assumption would help future readers.

277-282: `getDataSet` can likely be `const`.
`exist()` is already `const`, and `HighFive::File::getDataSet` is a const operation. Marking this method `const` would allow it to be called on const references, improving API consistency with `exist()`.

Proposed diff

-    auto getDataSet(std::string const& s)
+    auto getDataSet(std::string const& s) const

284-287: Deleted move operations restrict usage patterns. Deleting both copy and move means `HighFiveFile` cannot be stored in `std::optional`, returned from factory functions, or placed in containers. If the intent is to force heap allocation (e.g., via `std::unique_ptr`), this is fine, but if move semantics would be safe (`HighFive::File` is movable), consider enabling them.
738-741:print()used for warning — considerwarnings.warn().Using
738-741: `print()` used for warning — consider `warnings.warn()`. Using `print()` for warnings is easy to miss in logs and cannot be filtered by warning-level controls. `warnings.warn(...)` integrates with Python's warning framework and can be promoted to an error via `kwargs["strict"]` if desired.

This is a minor suggestion and can be deferred.

src/core/utilities/box/box.hpp (1)
208-259: New `boxes_iterator` for coordinated multi-box iteration — clean design. A few observations:

`begin()` and `end()` are non-const, so a `const boxes_iterator` cannot be iterated. If this is only used as a temporary in range-for loops, that's fine. If const usage is ever needed, consider adding `const` overloads.

`operator*()` is non-const — same consideration applies.

`operator!=` uses `for_N_any` (true if any pair differs). This is correct for equal-length boxes but could produce unexpected results if boxes have mismatched sizes. A `static_assert` or runtime check on matching box sizes could be a safety net.

These are all minor given the current usage context.
src/core/data/ndarray/ndarray_vector.hpp (1)
32-49: Implicit narrowing from `Index` to `std::uint32_t` in `idx` dispatch. The template `idx` on lines 19-30 forwards `indexes[i]` (of type `Index`, which may be `int` or `std::size_t`) to the scalar `idx` overloads that take `std::uint32_t`. If `Index` is signed (e.g., `int`), this implicit narrowing may trigger `-Wconversion` warnings. If `Index` is `std::size_t` (64-bit), it's a narrowing from 64 to 32 bits.

This is unlikely to cause runtime issues in practice, but could produce compiler warnings depending on flags.
pyphare/pyphare/pharesee/phare_vtk/plot.py (2)
26-27: Unused `vtkRenderWindowInteractor`. The interactor is created and wired to the render window but is never started or otherwise used. For offscreen rendering it serves no purpose — consider removing these lines to keep the function minimal.

Proposed fix

-    iren = vtk.vtkRenderWindowInteractor()
-    iren.SetRenderWindow(ren_win)
-    ren_win.Render()

13-13: Hardcoded scalar name `"data"` may be fragile. If the VTK-HDF writer ever changes the array name, this will silently render nothing. Consider accepting the scalar name as a parameter with `"data"` as the default.

src/amr/resources_manager/amr_utils.hpp (1)
306-338: Inconsistent null-safety between the two `onLevels` overloads. The single-action overload (line 317) guards `getPatchLevel` with `if (auto lvl = ...)` before dereferencing, but the two-callback overload (line 334) dereferences unconditionally with `*hierarchy.getPatchLevel(ilvl)`. If `getPatchLevel` can return a null `shared_ptr` for a level within `[0, getNumberOfLevels())`, the second overload would crash.

If SAMRAI guarantees non-null for existing levels, remove the null check in the first overload for consistency. Otherwise, add it to the second.
src/diagnostic/detail/vtk_types/fluid.hpp (3)
90-117: Unused `ions` fetch for scalar-quantity paths.
`modelView.getIons()` (line 95) is called unconditionally, but only the per-population loop (line 105) uses `ions`. The scalar checks (`charge_density`, `mass_density`, `bulkVelocity`) don't reference it. This is harmless but worth noting for clarity — `ions` could be fetched only inside the `else` fall-through if performance of `getIons()` is nontrivial.
205-209: MHD path throws at runtime; add a compile-time guard or TODO.

`MhdFluidWriter::operator()` unconditionally throws `std::runtime_error("not implemented")`. If an MHD model is ever instantiated, this will crash at runtime with no compile-time indication. Consider a `static_assert` or at minimum a prominent TODO.
213-239: `write()` accesses `mem[diagnostic.quantity]` without checking existence.

On line 220, `mem[diagnostic.quantity]` will silently default-construct an `Info` entry (with all-zero offsets) if `setup()` was never called for this quantity. An `assert(mem.count(diagnostic.quantity))` or `.at()` would make this invariant explicit and catch misuse earlier in debug builds.

Proposed defensive check

```diff
-        auto const& info = mem[diagnostic.quantity];
+        assert(mem.count(diagnostic.quantity) && "setup() must be called before write()");
+        auto const& info = mem.at(diagnostic.quantity);
```

src/diagnostic/detail/vtk_types/electromag.hpp (1)
87-93: Inner loop in `write_quantity` doesn't break after finding the matching field.

The loop on line 90 checks each electromag field and writes when matched, but continues iterating the remaining fields even though `diagnostic.quantity` can match at most one. Consider adding a `break` (or `return`) after the write on line 92 to skip the redundant comparisons.

Proposed fix

```diff
     for (auto* vecField : this->h5Writer_.modelView().getElectromagFields())
         if (diagnostic.quantity == "/" + vecField->name())
+        {
             writer.template writeTensorField<1>(*vecField, layout);
+            break;
+        }
```

tests/simulator/test_vtk_diagnostics.py (1)
152-163: `permute()` mutates its input dict in place.

`dic.update(simArgs.copy())` on line 155 mutates the caller's `dic`. Currently only `{}` is passed, so it's harmless, but this is fragile if reused. Consider operating on a copy instead.

Proposed fix

```diff
 def permute(dic):
     ndims = [2]
     interp_orders = [1]
-    dic.update(simArgs.copy())
     return [
         dict(
             ndim=ndim,
             interp=interp_order,
-            simInput=deepcopy(dic),
+            simInput={**deepcopy(dic), **simArgs},
         )
         for ndim, interp_order in itertools.product(ndims, interp_orders)
     ]
```

pyphare/pyphare/pharesee/phare_vtk/base.py (1)
89-96: Pipeline wiring assumes each phase produces exactly one output.

The loop sets `_in` to the last `val` from each phase's `PhaseOutput`. If a phase ever returns multiple key-value pairs, `_in` will silently be set to whatever happens to come out last from dict iteration. This works today because all phases return a single entry, but it's fragile. Consider documenting this contract or asserting `len(ret.kwargs) == 1`.

src/core/utilities/algorithm.hpp (1)
7-13: `algorithm.hpp` is accumulating heavy domain-specific includes.

This utility header now pulls in `field.hpp`, `box.hpp`, `gridlayoutdefs.hpp`, `ndarray_vector.hpp`, and `tensorfield.hpp`. Any file that needs only `average()` or `notIn()` now transitively includes the full field/tensor/grid stack, which can impact compile times.

Consider splitting the new `convert_to_*` functions into a dedicated header (e.g., `core/utilities/field_conversion.hpp`) to keep `algorithm.hpp` lightweight.

src/diagnostic/detail/vtkh5_type_writer.hpp (4)
39-43: Global mutable singleton for `HierarchyData`.

`INSTANCE()` returns a process-wide static singleton. This works for the single-hierarchy-per-process model but will silently break if the code ever needs to support multiple concurrent hierarchies (e.g., in tests). This is acceptable for now but worth noting.
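The trade-off can be shown with a minimal Python analogue of such a singleton (illustrative only; the field name is made up):

```python
class HierarchyData:
    """Process-wide bookkeeping store, mirroring the INSTANCE() pattern."""
    _instance = None

    def __init__(self):
        self.level_boxes_per_rank = {}

    @classmethod
    def INSTANCE(cls):
        # One shared object per process: convenient for a single
        # hierarchy, but every caller sees everyone else's mutations,
        # which is exactly what breaks with concurrent hierarchies.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
```

Any test that mutates the instance leaks state into the next test unless it resets `_instance`, which is the concern the review raises.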
67-78: `level_rank_data_size` resize is redundant inside the per-rank loop.

Line 69 (`data.level_rank_data_size[ilvl].resize(core::mpi::size())`) is called on every iteration of the outer `i` loop, but only needs to happen once per level. Move it before the loop.

Proposed fix

```diff
+    data.level_rank_data_size[ilvl].resize(core::mpi::size());
     for (std::size_t i = 0; i < data.level_boxes_per_rank[ilvl].size(); ++i)
     {
-        data.level_rank_data_size[ilvl].resize(core::mpi::size());
-        data.level_rank_data_size[ilvl][i] = 0;
```
296-302: NSteps attribute: all ranks read, only rank 0 writes.

Line 302 evaluates `steps_attr.template read<int>()` on every rank (it's an argument to `write_on_rank_0`), but only rank 0 performs the write. This is wasteful but functional if the file is opened with parallel HDF5 access. However, if any rank's read returns a stale value due to caching or buffering, the increment could be wrong on rank 0 (though unlikely with attribute reads).

Consider restructuring so only rank 0 performs both the read and write:
Suggested approach
```diff
-        typewriter->write_on_rank_0(steps_attr, steps_attr.template read<int>() + 1);
+        if (core::mpi::rank() == 0)
+            steps_attr.write(steps_attr.template read<int>() + 1);
```
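The rank-gated read-then-write is straightforward to sketch in Python, with `my_rank` standing in for `core::mpi::rank()` and plain callables for the attribute accessors:

```python
def bump_nsteps(read_attr, write_attr, my_rank):
    """Increment the NSteps attribute on rank 0 only.

    Both the read and the write are gated on rank 0, so other ranks
    never touch the attribute. read_attr/write_attr are illustrative
    stand-ins for the HDF5 attribute accessors.
    """
    if my_rank == 0:
        write_attr(read_attr() + 1)
```

Gating the read as well as the write removes any question of other ranks observing stale attribute values.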
347-362: `writeField`: the variable name `frimal` looks like a typo for `primal`.

The name `frimal` (Fortran + primal?) is used at lines 352 and 372. If intentional (a portmanteau), a brief comment would help; otherwise consider renaming to something clearer like `primal` or `fortran_primal`.
```python
        """
        checks = 0
        for test in [method for method in dir(self) if method.startswith("_test_")]:
```
I do not understand the goal of doing this manually, can you explain?
each test runs the same simulation, which is somewhat redundant; we just need one set of diagnostics to test with
I was testing the new vtk python hierarchies and found it somewhat inconvenient!
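The manual-discovery pattern under discussion, in a runnable toy form (class and method names are illustrative, not the test file's):

```python
class DiagChecks:
    """Toy stand-in: every _test_ method reuses one simulation's output."""

    def _test_a(self):
        return "a"

    def _test_b(self):
        return "b"

    def run_all(self):
        # Discover and run every _test_* method manually, so a single
        # (expensive) simulation run serves all checks instead of one
        # simulation per unittest method.
        checks = 0
        for name in [m for m in dir(self) if m.startswith("_test_")]:
            getattr(self, name)()
            checks += 1
        return checks
```

The returned `checks` count lets the caller assert that no check was silently skipped.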
```cpp
void onLevels(auto& hierarchy, auto&& onLevel, auto&& orMissing, std::size_t const minlvl,
```
it's not clear to me what the `orMissing` arg does; how can we make it clearer what this function does differently from the one above? Maybe a documentation comment.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/pharein/diagnostics.py`:
- Around line 64-67: The filter that replaces timestamps with
timestamps[timestamps >= init_time] can produce an empty array if all entries
are below init_time; after the assignment to timestamps (the timestamps =
timestamps[timestamps >= init_time] expression) add a guard that checks if
timestamps.size == 0 (or len(timestamps) == 0) and either raise a ValueError
(e.g., "no timestamps >= init_time") or log/print a clear error and abort
creation of the diagnostic so you don't create a diagnostic with no write
timestamps; reference and update the handling around the timestamps and
init_time variables where the filtering occurs.
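A sketch of the guard described above, assuming numpy timestamps as in `pharein` (the function name is illustrative, not the diagnostics API):

```python
import numpy as np

def filter_timestamps(timestamps, init_time):
    """Drop timestamps before init_time; refuse to return an empty set."""
    timestamps = np.asarray(timestamps)
    kept = timestamps[timestamps >= init_time]
    if kept.size == 0:
        # the guard the fix asks for: abort instead of creating a
        # diagnostic with no write timestamps
        raise ValueError(f"no timestamps >= {init_time}")
    if kept.size < timestamps.size:
        print(f"Warning: some timestamps below {init_time} are filtered.")
    return kept
```

The partial-filter case still warns, but an all-filtered input now fails loudly at diagnostic creation rather than producing a silent no-op diagnostic.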
In `@pyphare/pyphare/pharesee/phare_vtk/base.py`:
- Around line 28-37: In poly_data_mapper, guard against
output.GetOutput().GetPointData().GetArray(array_name) returning None: fetch the
array into a local variable, check for None, and if missing either use a
sensible fallback scalar range (e.g., default range or skip
mapper.SetScalarRange) and/or log/warn, before calling GetRange; update the
mapper setup in poly_data_mapper (and return PhaseOutput(mapper=mapper) as
before) so the function no longer dereferences a None array.
In `@src/diagnostic/detail/vtkh5_writer.hpp`:
- Around line 148-154: The fileString function dereferences fileStr[0] which is
UB for an empty string; add a guard at the start of fileString (the static
std::string fileString(std::string fileStr) function) to handle empty input
(e.g. check if fileStr.empty() and return an appropriate filename like ".vtkhdf"
or "_ .vtkhdf" per project convention) before accessing fileStr[0], then proceed
with the existing leading-slash strip and replace logic.
- Line 29: The template parameter name "_ModelView" is reserved (leading
underscore + uppercase) — rename the template parameter to a non-reserved
identifier (e.g., ModelViewT) and update all usages: replace "template<typename
_ModelView>" with "template<typename ModelViewT>" and change every occurrence of
_ModelView (including the internal alias/using/typedef that referenced it) to
ModelViewT so the code compiles without using a reserved identifier.
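For the `fileString` empty-input guard requested above, a Python sketch; the strip-then-replace details and the fallback name are assumptions about the C++ helper, not confirmed behavior:

```python
def file_string(file_str):
    """Turn a quantity path like '/EM_B/x' into a flat .vtkhdf filename.

    Assumed logic: strip a leading slash, replace remaining slashes with
    underscores, append the extension. The empty-input guard replaces
    the unconditional file_str[0] access flagged in the review.
    """
    if not file_str:                 # guard: never index into ""
        return "unnamed.vtkhdf"      # placeholder fallback name
    if file_str[0] == "/":
        file_str = file_str[1:]
    return file_str.replace("/", "_") + ".vtkhdf"
```

The guard costs one branch and turns undefined behavior into a well-defined (if arbitrary) filename.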
🧹 Nitpick comments (13)
pyphare/pyphare/pharein/simulation.py (1)
742-744: Replacing assertion with warning + default for `max_nbr_levels`.

This is a more user-friendly approach. Consider using `warnings.warn()` instead of `print()` so the message can be controlled via standard Python warning filters (e.g., suppressed in tests or promoted to an error with strict mode).

src/amr/resources_manager/amr_utils.hpp (1)
306-341: Inconsistent null-check on `getPatchLevel` between the two `onLevels` overloads.

The first overload (line 317) defensively null-checks `getPatchLevel(ilvl)`, but the second overload (line 337) dereferences it unconditionally. Within the computed range both should always return a valid pointer, but for consistency (and safety if SAMRAI ever returns null for a level in range), consider guarding the second overload as well, or dropping the check from both if it's guaranteed non-null.

src/core/data/ndarray/ndarray_vector.hpp (1)
16-67: Clean refactor: removing `DataType` from `NdArrayViewer` and centralizing index computation.

The `idx` overloads correctly handle 1D/2D/3D in both C and Fortran ordering, and the `at` methods provide bounds-checking via `assert`. Good simplification.

One minor nit: lines 56 and 63 use `auto const& i = idx(...)` to bind a temporary `std::uint32_t` by const reference. While valid (lifetime extension), a plain `auto const i` is more idiomatic and clearer in intent.

✏️ Suggested tweak

```diff
-        auto const& i = idx(nCells, indexes);
+        auto const i = idx(nCells, indexes);
         assert(i < product(nCells, std::uint32_t{1}));
         return data[i];
     }
-        auto const& i = idx(nCells, indexes...);
+        auto const i = idx(nCells, indexes...);
         assert(i < product(nCells, std::uint32_t{1}));
         return data[i];
```

src/core/utilities/algorithm.hpp (1)
76-101: `convert_to_primal` only handles B and E quantities; J and other dual quantities will throw at runtime.

This function supports `Bx`/`By`/`Bz` and `Ex`/`Ey`/`Ez` but will throw for any other non-primal quantity (e.g., `Jx`/`Jy`/`Jz`). Since `convert_to_fortran_primal` skips this path for all-primal sources, the throw is only hit for mixed-centering quantities not in B/E. If J fields are never passed to `convert_to_fortran_primal`, this is fine, but the generic `Field` signature doesn't enforce that.

Consider either documenting this limitation or adding J support for completeness (the `JxToMoments`/`JyToMoments`/`JzToMoments` projections already exist).

♻️ Proposed addition for J support

```diff
     else if (qty == PQ::Ez)
         return layout.project(src, lix, layout.EzToMoments());
+    else if (qty == PQ::Jx)
+        return layout.project(src, lix, layout.JxToMoments());
+    else if (qty == PQ::Jy)
+        return layout.project(src, lix, layout.JyToMoments());
+    else if (qty == PQ::Jz)
+        return layout.project(src, lix, layout.JzToMoments());
+    throw std::runtime_error("Quantity not supported for conversion to primal.");
```

src/diagnostic/detail/h5writer.hpp (1)
165-169: Consider splitting the comma-separated declaration for readability.

Declaring `minLevel` and `maxLevel` on the same line with a comma obscures that they have very different initialization strategies (literal `0` vs. a member function call). Splitting them makes review and maintenance easier.

♻️ Suggested change

```diff
-    std::size_t minLevel = 0, maxLevel = modelView_.maxLevel();
+    std::size_t minLevel = 0;
+    std::size_t maxLevel = modelView_.maxLevel();
```

pyphare/pyphare/pharesee/phare_vtk/plot.py (2)
13-13: Hardcoded scalar name `"data"`; should use `vtk_file.array_name`.

`VtkFile.__init__` already stores the `array_name` parameter. Using `vtk_file.array_name` here would keep the function consistent with whatever name was configured during file construction.

♻️ Suggested fix

```diff
-    vtk_file.geom.GetOutput().GetPointData().SetActiveScalars("data")
+    vtk_file.geom.GetOutput().GetPointData().SetActiveScalars(vtk_file.array_name)
```
26-27: Unused `vtkRenderWindowInteractor`; remove dead code.

`iren` is created and assigned a render window but is never started or used for interaction. Since this is an offscreen rendering path, the interactor serves no purpose.

♻️ Suggested fix

```diff
-    iren = vtk.vtkRenderWindowInteractor()
-    iren.SetRenderWindow(ren_win)
     ren_win.Render()
```

tests/simulator/test_vtk_diagnostics.py (2)
39-40: Rename ambiguous variable `l` to something clearer.

Static analysis flags `l` as ambiguous (E741); it's easily confused with `1` or `I`. A descriptive name like `scale_length` improves readability at no cost.

♻️ Suggested fix

```diff
-    def S(y, y0, l):
-        return 0.5 * (1.0 + np.tanh((y - y0) / l))
+    def S(y, y0, scale_length):
+        return 0.5 * (1.0 + np.tanh((y - y0) / scale_length))
```
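With the rename applied, the profile is a shifted tanh step; at `y == y0` it evaluates to exactly 0.5 whatever the scale length:

```python
import numpy as np

def S(y, y0, scale_length):
    """Smooth step from 0 to 1 centered at y0, over width ~scale_length."""
    return 0.5 * (1.0 + np.tanh((y - y0) / scale_length))
```

Far below `y0` the profile approaches 0, far above it approaches 1, which is what makes it useful for shear-layer initial conditions.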
152-163: `permute` mutates its input dict via `dic.update(simArgs.copy())`.

Currently the only call site passes `{}` (line 188), so this is safe. But the mutation of `dic` is a latent footgun if the function is reused. Consider making a local copy first.

♻️ Suggested fix

```diff
 def permute(dic):
     ndims = [2]
     interp_orders = [1]
-    dic.update(simArgs.copy())
+    dic = {**dic, **simArgs}
     return [
```

pyphare/pyphare/pharesee/phare_vtk/base.py (1)
89-98: Pipeline wiring is fragile; it relies on each phase returning exactly one key-value pair.

The loop on lines 92-96 iterates all key-value pairs from each `PhaseOutput`, and `_in` is overwritten on each iteration. If a phase ever returns multiple pairs, only the last one becomes the input for the next phase. This implicit contract is not enforced or documented.

Additionally, line 98 (`self.mapper = _in`) is redundant when using default phases, since `setattr(self, "mapper", val)` on line 95 already sets `self.mapper`.

Consider adding a brief comment or assertion to make the single-output-per-phase contract explicit:

```python
for phase in phases:
    ret = phase(_in, array_name=array_name)
    for key, val in ret:
        if hasattr(val, "Update"):
            val.Update()
        setattr(self, key, val)
        _in = val
```

src/diagnostic/detail/vtkh5_type_writer.hpp (1)
67-78: Redundant `resize()` call inside the inner loop.

`level_rank_data_size[ilvl].resize(core::mpi::size())` is executed on every iteration of the `i` loop, but the target size is constant. Move it before the loop.

Proposed fix

```diff
+    data.level_rank_data_size[ilvl].resize(core::mpi::size());
     for (std::size_t i = 0; i < data.level_boxes_per_rank[ilvl].size(); ++i)
     {
-        data.level_rank_data_size[ilvl].resize(core::mpi::size());
-        data.level_rank_data_size[ilvl][i] = 0;
```
200-215: `dump_level` is not exception-safe; `minLevel`/`maxLevel` won't be restored if `dump()` throws.

Consider a scope guard or a simple RAII helper to restore the original values on any exit path.
Example using a lambda scope guard
```diff
 void H5Writer<ModelView>::dump_level(std::size_t level,
                                      std::vector<DiagnosticProperties*> const& diagnostics,
                                      double timestamp)
 {
     std::size_t _minLevel = this->minLevel;
     std::size_t _maxLevel = this->maxLevel;
+    auto restore = [&] { this->minLevel = _minLevel; this->maxLevel = _maxLevel; };
     this->minLevel = level;
     this->maxLevel = level;
-    this->dump(diagnostics, timestamp);
-
-    this->minLevel = _minLevel;
-    this->maxLevel = _maxLevel;
+    try {
+        this->dump(diagnostics, timestamp);
+    } catch (...) {
+        restore();
+        throw;
+    }
+    restore();
 }
```
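Python gets the same any-exit-path restoration from `try/finally`; a context-manager analogue of the proposed scope guard (the attribute names mirror, but do not claim to be, the C++ members):

```python
from contextlib import contextmanager

@contextmanager
def level_window(writer, level):
    """Temporarily pin writer.min_level/max_level to a single level.

    The finally block restores the previous values even if the body
    raises -- the Python equivalent of the scope guard suggested for
    dump_level.
    """
    saved = (writer.min_level, writer.max_level)
    writer.min_level = writer.max_level = level
    try:
        yield writer
    finally:
        writer.min_level, writer.max_level = saved
```

Usage would be `with level_window(w, ilvl): w.dump(...)`, and the window is restored whether or not the dump throws.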
121-123: Consider restricting `minLevel`, `maxLevel`, and `flags` to `private` with accessor methods.

These are mutated internally by `dump_level` and read by friend classes, but being `public` allows uncontrolled external modification that could break the dump invariants.
is there anything you have locally for mhd/vtk that you would rather see merged sooner @UCaromel?
initial changes without changing python hierarchies (to come next)