📝 Walkthrough

Routes DataWrangler per level between Hybrid and MHD patch levels, adds sync, refactors PatchLevel into model-parameterized AnyPatchLevel/PatchLevel specializations, updates Python/C++ bindings, adjusts hierarchy construction/patch layout handling, and adds/updates related utilities, tests, and MPI/Span helpers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller as Calling code
    participant DW as DataWrangler
    participant Sim as Simulator
    participant PL as PatchLevel<Model>
    participant Model as Model instance
    Caller->>DW: getPatchLevel(lvl)
    DW->>DW: read modelsPerLevel[lvl]
    alt modelsPerLevel[lvl] == "Hybrid"
        DW->>DW: getHybridPatchLevel(lvl)
        DW->>Sim: getHybridModel()
        Sim-->>DW: HybridModel&
    else modelsPerLevel[lvl] == "MHD"
        DW->>DW: getMHDPatchLevel(lvl)
        DW->>Sim: getMHDModel()
        Sim-->>DW: MHDModel&
    end
    DW->>PL: construct PatchLevel<Model>(hierarchy, model, lvl)
    DW-->>Caller: return PatchLevel<Model>
    Caller->>PL: getB("x")
    PL->>PL: getField(B component)
    PL->>Model: access field via resourcesManager
    PL->>PL: build/layout/view (GridLayout/dl)
    PL-->>Caller: return component data (memory view)
```
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 11
🧹 Nitpick comments (4)
tests/simulator/test_data_wrangler.py (2)
4-4: Unused import: `numpy`.

The `numpy` module is imported as `np` but never used in the test file.

♻️ Remove unused import

```diff
 import unittest
-import numpy as np
 from pyphare import cpp
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_data_wrangler.py` at line 4, Remove the unused import statement "import numpy as np" from the module; locate the top-level import of numpy (the symbol np) in the test file (tests/simulator/test_data_wrangler.py) and delete it, and run the tests to confirm nothing else references np—if any test actually needs numpy, replace the unused import with the correct, minimal import at the usage site.
43-48: Address or track the commented-out density plotting.

The comment `# fails?` suggests this is a known issue. Consider either:
- Adding a TODO with a tracking issue reference
- Removing if not intended to be implemented
- Investigating and fixing the root cause
Would you like me to open an issue to track this, or help investigate why density plotting fails?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_data_wrangler.py` around lines 43 - 48, The commented-out hier.plot call in tests/simulator/test_data_wrangler.py (the block calling hier.plot with filename="data_wrangler.Ni.png", qty="density", plot_patches=True, levels=(0,)) is a known/marked failure; either remove the dead code or replace it with a clear TODO that references a tracking issue (or add an issue number) so it’s explicit this test is pending; if you choose to investigate, run hier.plot in an isolated test to capture the exception and fix the root cause in the plotting logic, but at minimum add a TODO above the hier.plot block mentioning the issue ID and why it’s disabled.

pyphare/pyphare/pharesee/hierarchy/patch.py (1)
43-48: Consider using logging instead of print for debug output.

The debug print statement could produce noisy output in tests or production. Additionally, the static analysis correctly notes that `raise e` should be `raise` for cleaner exception re-raising.

♻️ Proposed refactor

```diff
 def __getitem__(self, key):
     try:
         return self.patch_datas[key]
     except KeyError as e:
-        print(key, "not in", self.patch_datas.keys())
-        raise e
+        import logging
+        logging.debug(f"{key} not in {list(self.patch_datas.keys())}")
+        raise
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharesee/hierarchy/patch.py` around lines 43 - 48, In __getitem__ (the method accessing self.patch_datas) replace the print-based debug with the project logger: import logging, create/get a logger (e.g. logger = logging.getLogger(__name__)) and call logger.debug or logger.warning to report the missing key and available keys; also re-raise the caught KeyError using plain raise (not raise e) to preserve the original traceback. Ensure the logging call references the same key and self.patch_datas.keys() for context.

src/python3/patch_level.hpp (1)
44-51: Consider performance implications of capturing lambdas.

The `getField` and `getVecFieldComponent` methods use lambdas that capture variables by reference. While this is acceptable for Python binding operations (not hot-path particle loops), ensure this pattern isn't extended to performance-critical particle processing code.

Based on learnings: "for runtime particle-based operations, avoid lambdas with captures due to performance issues."
Also applies to: 53-61
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/python3/patch_level.hpp` around lines 44 - 51, getField and getVecFieldComponent create lambdas that capture local variables by reference (used by amr::visitLevel and calling setPyPatchDataFromField), which can hurt performance if reused in hot particle loops; replace the capturing lambdas with non-capturing alternatives: either a small function object/struct (with operator()) that holds needed data explicitly, or a static/non-capturing lambda and pass PatchData container and field through the visit API parameters so the visitor does not capture locals. Update usages of setPyPatchDataFromField, PatchData, and amr::visitLevel accordingly and add a short comment in getField/getVecFieldComponent noting this pattern is for Python binding code only and should not be used in runtime particle code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyphare/pyphare/data/wrangler.py`:
- Around line 15-16: modelsPerLevel is being initialized to "Hybrid" for every
level so getPatchLevel() can never dispatch to getMHDPatchLevel(); change the
initialization of self.modelsPerLevel in the constructor (and the analogous
initializations around lines 30-35) to use the correct default marker for MHD
patches (e.g., "MHD" or the value your dispatch expects) or derive the per-level
model type from the existing metadata (self.cpp or config) so getPatchLevel()
can select getMHDPatchLevel() when appropriate; update all occurrences that
currently set "Hybrid" so they match the dispatch strings used by
getPatchLevel()/getMHDPatchLevel().
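The level-wise probe described above can be sketched in plain Python. Note that `level_model_name` is a hypothetical accessor standing in for whatever per-level metadata the C++ wrangler (`self.cpp`) or config actually exposes:

```python
def models_per_level(cpp_wrangler, nlevels):
    """Derive the per-level model name instead of hardcoding "Hybrid".

    `level_model_name` is a hypothetical accessor; substitute the real
    metadata source available in wrangler.py.
    """
    models = {}
    for lvl in range(nlevels):
        name = cpp_wrangler.level_model_name(lvl)  # hypothetical accessor
        # Use the exact strings getPatchLevel() dispatches on.
        models[lvl] = "MHD" if name == "MHD" else "Hybrid"
    return models
```

With this shape, the `"MHD"` branch in `getPatchLevel()` becomes reachable as soon as the metadata reports an MHD level.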
In `@pyphare/pyphare/pharesee/hierarchy/fromsim.py`:
- Around line 62-71: The loop over getters[qty](pop) uses an undefined variable
and the wrong data wrapper: replace the reference to patch_data with the loop
variable patch when calling patch_gridlayout(patch, lvl_cell_width,
simulator.interp_order()), and wrap particle payloads using ParticleData instead
of FieldData when constructing the Patch entry (i.e., create Patch({qty:
ParticleData(layout, "tags", patch.data)}) so downstream code expecting particle
behavior and attributes like pop_name works correctly); keep existing symbols
getters, patch, patch_gridlayout, lvl_cell_width, simulator.interp_order,
patches[ilvl], Patch, qty, and patch.data when making these changes.
- Around line 78-84: During the hierarchy merge in fromsim.py, ensure the patch
counts match before combining patch data: check that for each level (iterating
hier.levels(hier.times()[0]).items()) the number of patches in level.patches
equals the number in patch_levels[lvl_nbr] (new_level) and raise a clear error
or assert if they differ; if counts match, proceed to merge as currently done
(patch.patch_datas = {**patch.patch_datas, **new_level[ip].patch_datas}) so you
avoid IndexError and make mismatches explicit.
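The guarded merge above can be sketched as follows, using stand-in patch objects (the real code iterates `hier.levels(hier.times()[0]).items()` and `patch_levels[lvl_nbr]`):

```python
def merge_patch_datas(level_patches, new_level_patches, lvl_nbr):
    # Fail loudly on mismatched patch counts instead of a later IndexError.
    if len(level_patches) != len(new_level_patches):
        raise ValueError(
            f"level {lvl_nbr}: patch count mismatch "
            f"({len(level_patches)} != {len(new_level_patches)})"
        )
    for patch, new_patch in zip(level_patches, new_level_patches):
        # Same merge as before, now known to be index-safe.
        patch.patch_datas = {**patch.patch_datas, **new_patch.patch_datas}
```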
In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py`:
- Around line 25-27: The cell_width property currently assumes self.patches[0]
exists and will raise IndexError when patches is empty; modify the property
(cell_width) to guard against an empty self.patches by checking if self.patches
is truthy and either raising a clear ValueError (or returning a sensible
default/None) with a descriptive message or by selecting the first available
patch safely (e.g., using next/iter). Update the property to reference
self.patches and include the guard and clear error text so callers get an
explicit failure instead of an IndexError.
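A minimal sketch of the guarded property, with a stub in place of pyphare's real patch/layout objects:

```python
class PatchLevel:
    """Stripped-down stand-in for pyphare's PatchLevel, showing the guard."""

    def __init__(self, patches):
        self.patches = patches

    @property
    def cell_width(self):
        # Explicit precondition error instead of a bare IndexError.
        if not self.patches:
            raise ValueError("Level has no patches, cannot determine cell width")
        return self.patches[0].layout.dl
```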
In `@src/core/utilities/span.hpp`:
- Around line 29-33: The Span converting constructor currently has default
arguments and allows implicit conversion from T*; fix this by adding an explicit
default constructor (e.g. Span() noexcept = default;) and changing the
two-argument constructor signature Span(T* ptr, SIZE s) noexcept (remove the
default values) and mark the two-arg constructor explicit if you want to
disallow implicit conversions (explicit Span(T* ptr, SIZE s) noexcept :
ptr{ptr}, s{s} {...}). Update the Span(T* ptr_, SIZE s_) declaration/definition
in span.hpp accordingly.
In `@src/python3/cpp_etc.cpp`:
- Around line 40-43: The __getitem__ binding currently uses std::size_t and
operator[] which allows UB on out-of-range access and forbids negative indices;
change the lambda for ParticleArray.__getitem__ to accept py::ssize_t idx, if
idx is negative add idx += static_cast<py::ssize_t>(self.size()), then check 0
<= idx < static_cast<py::ssize_t>(self.size()) and if not throw
pybind11::index_error("ParticleArray index out of range"); finally return
self[static_cast<std::size_t>(idx)] (or use self.at(...) if available). Update
the .def("__getitem__", ...) binding accordingly to enforce Python-style
indexing and raise IndexError on invalid indices.
In `@src/python3/data_wrangler.hpp`:
- Around line 118-126: The collect lambda calls
shape<dimension>(patch_data.data) and makeSpan(patch_data.data) unconditionally,
which crashes for sentinel default-constructed PatchData (empty.data.ndim==0 or
null). Fix by first detecting an empty patch (e.g., check
patch_data.data.ndim==0 or data.ptr==nullptr) and, for that case, push a
default/zero shape/origin/lower/upper/ghosts and an empty span/vector for datas
instead of calling shape<dimension>() or makeSpan(); leave the existing behavior
for non-empty patches. Update the collect lambda (and any helper used by sync())
to branch on this emptiness test so rank-padding no longer invokes
shape<dimension> or makeSpan on invalid py_array_t.
In `@src/python3/patch_data.hpp`:
- Around line 33-38: The PatchData struct's member nGhosts is currently left
uninitialized in the default PatchData() case, causing garbage ghost widths when
data_wrangler uses a default-constructed PatchData as a padding sentinel; update
the default initialization so nGhosts is set to 0 (e.g., default-initialize
nGhosts in the member declaration or ensure PatchData() explicitly sets nGhosts
= 0) and verify that the class/struct PatchData and any constructor handling
(PatchData()) refer to the same member name nGhosts when applying the fix.
In `@src/python3/patch_level.hpp`:
- Around line 149-163: The MHD PatchLevel specialization is missing the
Python-accessor methods present in the Hybrid specialization; add methods getB,
getE, getNi, getVi, getFlux, and getParticles to the class PatchLevel<Model,
std::enable_if_t<solver::is_mhd_model_v<Model>>> matching the pattern used in
the Hybrid specialization (i.e., define each accessor with the same signatures
and bodies as Hybrid’s implementations, delegating to Super or wrapping the
underlying data arrays/flux/particles the same way). Ensure you use the same
return types and naming (getB, getE, getNi, getVi, getFlux, getParticles) so the
DataWrangler/Python bindings can find them.
In `@tests/simulator/test_data_wrangler.py`:
- Around line 26-28: The three calls to hierarchy_from_sim
(hierarchy_from_sim(self.simulator, qty=f"EM_B_x", ...),
hierarchy_from_sim(self.simulator, qty=f"EM_B_y", ...),
hierarchy_from_sim(self.simulator, qty=f"density", ...)) use f-strings with no
placeholders; remove the unnecessary f prefix and pass plain string literals for
the qty arguments (e.g., qty="EM_B_x", qty="EM_B_y", qty="density") to clean up
the code.
- Around line 20-21: The test function name test_1d is misleading because it
sets ndim = 2; rename the test function to test_2d (update the def test_1d(...)
to def test_2d(...)) and update any references or fixtures that call test_1d to
the new name; ensure the test name and the variables (ndim, interp) are
consistent (ndim=2 corresponds to test_2d) so test discovery and readability
match the intent.
---
Nitpick comments:
In `@pyphare/pyphare/pharesee/hierarchy/patch.py`:
- Around line 43-48: In __getitem__ (the method accessing self.patch_datas)
replace the print-based debug with the project logger: import logging,
create/get a logger (e.g. logger = logging.getLogger(__name__)) and call
logger.debug or logger.warning to report the missing key and available keys;
also re-raise the caught KeyError using plain raise (not raise e) to preserve
the original traceback. Ensure the logging call references the same key and
self.patch_datas.keys() for context.
In `@src/python3/patch_level.hpp`:
- Around line 44-51: getField and getVecFieldComponent create lambdas that
capture local variables by reference (used by amr::visitLevel and calling
setPyPatchDataFromField), which can hurt performance if reused in hot particle
loops; replace the capturing lambdas with non-capturing alternatives: either a
small function object/struct (with operator()) that holds needed data
explicitly, or a static/non-capturing lambda and pass PatchData container and
field through the visit API parameters so the visitor does not capture locals.
Update usages of setPyPatchDataFromField, PatchData, and amr::visitLevel
accordingly and add a short comment in getField/getVecFieldComponent noting this
pattern is for Python binding code only and should not be used in runtime
particle code.
In `@tests/simulator/test_data_wrangler.py`:
- Line 4: Remove the unused import statement "import numpy as np" from the
module; locate the top-level import of numpy (the symbol np) in the test file
(tests/simulator/test_data_wrangler.py) and delete it, and run the tests to
confirm nothing else references np—if any test actually needs numpy, replace the
unused import with the correct, minimal import at the usage site.
- Around line 43-48: The commented-out hier.plot call in
tests/simulator/test_data_wrangler.py (the block calling hier.plot with
filename="data_wrangler.Ni.png", qty="density", plot_patches=True, levels=(0,))
is a known/marked failure; either remove the dead code or replace it with a
clear TODO that references a tracking issue (or add an issue number) so it’s
explicit this test is pending; if you choose to investigate, run hier.plot in an
isolated test to capture the exception and fix the root cause in the plotting
logic, but at minimum add a TODO above the hier.plot block mentioning the issue
ID and why it’s disabled.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0f26b015-d85c-4624-ae4a-d8dfe8c2b373
📒 Files selected for processing (17)
- pyphare/pyphare/data/wrangler.py
- pyphare/pyphare/pharesee/hierarchy/fromsim.py
- pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py
- pyphare/pyphare/pharesee/hierarchy/patch.py
- pyphare/pyphare/pharesee/hierarchy/patchlevel.py
- src/amr/physical_models/mhd_model.hpp
- src/core/utilities/mpi_utils.cpp
- src/core/utilities/mpi_utils.hpp
- src/core/utilities/span.hpp
- src/python3/cpp_etc.cpp
- src/python3/cpp_simulator.hpp
- src/python3/data_wrangler.hpp
- src/python3/patch_data.hpp
- src/python3/patch_level.hpp
- src/python3/pybind_def.hpp
- tests/simulator/CMakeLists.txt
- tests/simulator/test_data_wrangler.py
Actionable comments posted: 1
♻️ Duplicate comments (8)
pyphare/pyphare/pharesee/hierarchy/fromsim.py (1)
62-70: ⚠️ Potential issue | 🔴 Critical

Particles branch still uses the wrong symbol and wrapper.

`patch_data` is undefined at Line 66, so the first particle patch raises `NameError`. Even after fixing that, wrapping `patch.data` in `FieldData` at Line 70 drops the particle-specific behavior that downstream code expects from `ParticleData`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharesee/hierarchy/fromsim.py` around lines 62 - 70, The particle branch is using an undefined variable and the wrong wrapper: replace the undefined patch_data passed to patch_gridlayout with the actual patch (use patch) and wrap particle payloads with ParticleData instead of FieldData so downstream code retains particle-specific behavior; update the Patch construction in the loop (the call that currently does Patch({qty: FieldData(layout, "tags", patch.data)}) ) to use Patch({qty: ParticleData(layout, "tags", patch.data)}) and ensure the ParticleData usage still produces the intended SoA COPY semantics.

pyphare/pyphare/pharesee/hierarchy/patchlevel.py (1)
25-27: ⚠️ Potential issue | 🟡 Minor

`cell_width` still assumes at least one patch.

When `self.patches` is empty this still raises `IndexError`. Please guard the empty-level case or fail with a clearer exception.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py` around lines 25 - 27, The cell_width property currently indexes self.patches[0] and will raise IndexError when self.patches is empty; update the cell_width property to first check if self.patches is empty and either raise a clear ValueError (e.g. "Level has no patches, cannot determine cell width") or return a sensible sentinel (e.g. None) depending on expected callers, and keep the existing behavior of using patches[0].layout.dl when patches exist; reference the cell_width property, self.patches, and patches[0].layout.dl when making the change.

pyphare/pyphare/data/wrangler.py (1)
15-16: ⚠️ Potential issue | 🟠 Major

`getPatchLevel()` still can't select MHD levels.

`modelsPerLevel` is initialized to `"Hybrid"` for every level and nothing in this file updates it, so the `"MHD"` branch is unreachable. Any MHD level will be routed through the wrong accessor.

Also applies to: 30-35
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/data/wrangler.py` around lines 15 - 16, Replace the hardcoded per-level initialization that sets every entry of modelsPerLevel to "Hybrid" with a level-wise probe so MHD levels become reachable: iterate over range(self.cpp.getNumberOfLevels()) and for each level call the appropriate selector (prefer self.getPatchLevel(level) or self.cpp.getPatchLevel(level) if that accessor exists) and set self.modelsPerLevel[level] to "MHD" when that call indicates MHD, otherwise default to "Hybrid"; update the constructor/initializer in wrangler.py where modelsPerLevel is created so downstream code (e.g., getPatchLevel()) sees the correct model type per level.

src/python3/data_wrangler.hpp (1)
118-126: ⚠️ Potential issue | 🔴 Critical

The padding path still crashes before rank 0 can skip missing patches.

`empty` reaches `shape<dimension>(patch_data.data)` and `makeSpan(patch_data.data)` before the `datas[i].size() == 0` guard. For the default-constructed `py_array_t`, that means `ndim == 0` / null storage, so heterogeneous patch counts still break `sync()`.

Also applies to: 143-146
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/python3/data_wrangler.hpp` around lines 118 - 126, The collect lambda calls shape<dimension>(patch_data.data) and makeSpan(patch_data.data) for every PatchData even when patch_data.data is default/empty, causing crashes; change the collect logic in the collect lambda (and the analogous block around lines 143-146) to first test whether patch_data.data is empty (e.g. patch_data.data.size()==0 or equivalent) and, when empty, provide a safe empty shape/span/array placeholder to core::mpi::collect/collectArrays rather than calling shape<dimension> or makeSpan on a null/default py_array_t; update the calls that build shapes, origins, lower/upper, ghosts, and datas to use conditional expressions or small helper functions so empty patches produce well-formed zero-length metadata that preserves rank ordering.

src/python3/patch_data.hpp (1)
33-38: ⚠️ Potential issue | 🟡 Minor

Initialize `nGhosts` in the default state.

`PatchData()` still leaves `nGhosts` indeterminate, so any sentinel/default instance can leak garbage ghost widths.

🩹 Proposed fix

```diff
-    std::size_t nGhosts;
+    std::size_t nGhosts{0};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/python3/patch_data.hpp` around lines 33 - 38, The default-initialized member nGhosts is left indeterminate; update PatchData's initialization so nGhosts is initialized to a safe default (e.g., 0). Either add nGhosts{0} to the member declaration or ensure the PatchData() constructor's member initializer list explicitly sets nGhosts = 0; reference the PatchData() constructor and the nGhosts member to locate where to apply the change.

src/python3/patch_level.hpp (1)
149-163: ⚠️ Potential issue | 🟠 Major

Don’t expose an empty MHD `PatchLevel` API.

`DataWrangler.getMHDPatchLevel()` is now public, but this specialization still has no accessors, so the MHD path is unusable from Python. Either add the same public surface you expect to bind, or stop exporting it until that surface exists.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/python3/patch_level.hpp` around lines 149 - 163, The MHD specialization PatchLevel<Model, std::enable_if_t<solver::is_mhd_model_v<Model>>> currently exposes no public API while DataWrangler.getMHDPatchLevel() is public; either implement the same public accessors you bind for non-MHD patch levels (copy the public surface from the other PatchLevel specialization into this class: constructors, accessors, e.g., grid()/cells()/data() or whatever named methods your bindings expect) or remove/privatize the Python-exported entry point by making DataWrangler.getMHDPatchLevel() non-public until those methods exist; locate PatchLevel (the MHD specialization) and update it to match the other PatchLevel public interface or adjust DataWrangler accordingly.

src/python3/cpp_etc.cpp (1)
40-43: ⚠️ Potential issue | 🟠 Major

Make `ParticleArray.__getitem__` behave like Python indexing.

This still forwards an unchecked `operator[]` on an unsigned index, so out-of-range access is UB, negative indices are impossible, and the returned element is not tied to `self`'s lifetime.

🧭 Proposed fix

```diff
 py::class_<ParticleArray, std::shared_ptr<ParticleArray>>(m, name.c_str())
     .def("size", &ParticleArray::size)
-    .def("__getitem__",
-         [](ParticleArray& self, std::size_t const idx) -> auto& { return self[idx]; });
+    .def("__getitem__", [](ParticleArray& self, py::ssize_t idx) -> auto& {
+        auto const size = static_cast<py::ssize_t>(self.size());
+        if (idx < 0)
+            idx += size;
+        if (idx < 0 || idx >= size)
+            throw py::index_error("ParticleArray index out of range");
+        return self[static_cast<std::size_t>(idx)];
+    }, py::return_value_policy::reference_internal);
```

```bash
#!/bin/bash
set -euo pipefail
echo "Binding site:"
sed -n '38,44p' src/python3/cpp_etc.cpp
echo
echo "ParticleArray indexing operators:"
fd particle_array.hpp | while read -r f; do
  echo "--- $f ---"
  rg -n '\b(operator\[\]|at)\s*\(' "$f" -C2 || true
done
```

Expected result: the binding should no longer accept `std::size_t` directly or call unchecked `operator[]` without a Python `IndexError` path and `reference_internal`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/python3/cpp_etc.cpp` around lines 40 - 43, Change the ParticleArray __getitem__ binding to accept a signed Python index (py::ssize_t), support negative indices by adding size() when index<0, check bounds and throw py::index_error on out-of-range access, and return the element with py::return_value_policy::reference_internal so the Python object is tied to self; update the lambda used in .def("__getitem__", ...) to use py::ssize_t, perform the bounds calculation/validation against ParticleArray::size(), raise py::index_error for invalid indices, and return the element with reference_internal.

src/core/utilities/span.hpp (1)
29-33: ⚠️ Potential issue | 🟠 Major

Keep the default constructor separate from the pointer/size constructor.

Line 29 still makes `Span` implicitly constructible from `T*`, so `Span s = ptr;` silently produces a zero-length view. That is a subtle footgun for a span-like type.

Proposed fix

```diff
+    Span() = default;
-    Span(T* ptr_ = nullptr, SIZE s_ = 0)
+    Span(T* ptr_, SIZE s_)
         : ptr{ptr_}
         , s{s_}
     {
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/utilities/span.hpp` around lines 29 - 33, The constructor currently Span(T* ptr_ = nullptr, SIZE s_ = 0) is implicitly constructible from a T* and thus allows dangerous one-argument conversions; fix this by adding a separate default constructor (e.g. Span() noexcept = default;) and making the pointer/size constructor explicit and require both arguments (change to explicit Span(T* ptr_, SIZE s_) noexcept — remove the default for s_); update any uses that relied on the implicit conversion to pass both ptr and size or use an explicit cast.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py`:
- Line 218: The new mapping entry uses the key "flux" (pl.getFlux) but the
module and helper tables still expect component names "flux_x", "flux_y",
"flux_z" (see field_qties and isFieldQty), so restore compatibility by either
adding per-component aliases for pl.getFlux (e.g., "flux_x", "flux_y", "flux_z"
pointing to the appropriate pl.getFlux component accessors) or update the helper
tables field_qties and isFieldQty to accept the aggregate "flux" name; modify
the mapping around pl.getFlux and the helper tables consistently so lookups for
flux components and the new "flux" name both resolve correctly.
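One way to restore the lookup compatibility described above is to derive per-component aliases from the aggregate getter. Here `get_flux` is a hypothetical callable standing in for `pl.getFlux`; adapt the signature to the real accessor:

```python
def flux_getters(get_flux):
    """Expose both the aggregate "flux" key and "flux_x/y/z" aliases.

    `get_flux` is a hypothetical stand-in for pl.getFlux taking a
    component name, so both old and new lookup keys resolve.
    """
    getters = {"flux": get_flux}
    for comp in ("x", "y", "z"):
        # Bind `comp` via a default argument so each alias keeps its own
        # component instead of closing over the loop variable.
        getters[f"flux_{comp}"] = lambda c=comp: get_flux(c)
    return getters
```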
---
Duplicate comments:
In `@pyphare/pyphare/data/wrangler.py`:
- Around line 15-16: Replace the hardcoded per-level initialization that sets
every entry of modelsPerLevel to "Hybrid" with a level-wise probe so MHD levels
become reachable: iterate over range(self.cpp.getNumberOfLevels()) and for each
level call the appropriate selector (prefer self.getPatchLevel(level) or
self.cpp.getPatchLevel(level) if that accessor exists) and set
self.modelsPerLevel[level] to "MHD" when that call indicates MHD, otherwise
default to "Hybrid"; update the constructor/initializer in wrangler.py where
modelsPerLevel is created so downstream code (e.g., getPatchLevel()) sees the
correct model type per level.
In `@pyphare/pyphare/pharesee/hierarchy/fromsim.py`:
- Around line 62-70: The particle branch is using an undefined variable and the
wrong wrapper: replace the undefined patch_data passed to patch_gridlayout with
the actual patch (use patch) and wrap particle payloads with ParticleData
instead of FieldData so downstream code retains particle-specific behavior;
update the Patch construction in the loop (the call that currently does
Patch({qty: FieldData(layout, "tags", patch.data)}) ) to use Patch({qty:
ParticleData(layout, "tags", patch.data)}) and ensure the ParticleData usage
still produces the intended SoA COPY semantics.
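The intended wrapper dispatch can be sketched with stub classes; the real `FieldData`/`ParticleData` live in pyphare and their constructor signatures here are assumptions for illustration only:

```python
class FieldData:
    """Hypothetical stand-in for pyphare's FieldData."""
    def __init__(self, layout, name, data):
        self.layout, self.name, self.data = layout, name, data

class ParticleData:
    """Hypothetical stand-in; keeps particle attributes like pop_name."""
    def __init__(self, layout, pop_name, data):
        self.layout, self.pop_name, self.data = layout, pop_name, data

def wrap_patch_data(qty, layout, data):
    # Particle payloads must keep ParticleData semantics; fields stay FieldData,
    # so downstream code relying on pop_name keeps working.
    if qty == "particles":
        return ParticleData(layout, "tags", data)
    return FieldData(layout, qty, data)
```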
In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py`:
- Around line 25-27: The cell_width property currently indexes self.patches[0]
and will raise IndexError when self.patches is empty; update the cell_width
property to first check if self.patches is empty and either raise a clear
ValueError (e.g. "Level has no patches, cannot determine cell width") or return
a sensible sentinel (e.g. None) depending on expected callers, and keep the
existing behavior of using patches[0].layout.dl when patches exist; reference
the cell_width property, self.patches, and patches[0].layout.dl when making the
change.
In `@src/core/utilities/span.hpp`:
- Around line 29-33: The constructor currently Span(T* ptr_ = nullptr, SIZE s_ =
0) is implicitly constructible from a T* and thus allows dangerous one-argument
conversions; fix this by adding a separate default constructor (e.g. Span()
noexcept = default;) and making the pointer/size constructor explicit and
require both arguments (change to explicit Span(T* ptr_, SIZE s_) noexcept —
remove the default for s_); update any uses that relied on the implicit
conversion to pass both ptr and size or use an explicit cast.
In `@src/python3/cpp_etc.cpp`:
- Around line 40-43: Change the ParticleArray __getitem__ binding to accept a
signed Python index (py::ssize_t), support negative indices by adding size()
when index<0, check bounds and throw py::index_error on out-of-range access, and
return the element with py::return_value_policy::reference_internal so the
Python object is tied to self; update the lambda used in .def("__getitem__",
...) to use py::ssize_t, perform the bounds calculation/validation against
ParticleArray::size(), raise py::index_error for invalid indices, and return the
element with reference_internal.
In `@src/python3/data_wrangler.hpp`:
- Around line 118-126: The collect lambda calls
shape<dimension>(patch_data.data) and makeSpan(patch_data.data) for every
PatchData even when patch_data.data is default/empty, causing crashes; change
the collect logic in the collect lambda (and the analogous block around lines
143-146) to first test whether patch_data.data is empty (e.g.
patch_data.data.size()==0 or equivalent) and, when empty, provide a safe empty
shape/span/array placeholder to core::mpi::collect/collectArrays rather than
calling shape<dimension> or makeSpan on a null/default py_array_t; update the
calls that build shapes, origins, lower/upper, ghosts, and datas to use
conditional expressions or small helper functions so empty patches produce
well-formed zero-length metadata that preserves rank ordering.
In `@src/python3/patch_data.hpp`:
- Around line 33-38: The default-initialized member nGhosts is left
indeterminate; update PatchData's initialization so nGhosts is initialized to a
safe default (e.g., 0). Either add nGhosts{0} to the member declaration or
ensure the PatchData() constructor's member initializer list explicitly sets
nGhosts = 0; reference the PatchData() constructor and the nGhosts member to
locate where to apply the change.
In `@src/python3/patch_level.hpp`:
- Around line 149-163: The MHD specialization PatchLevel<Model,
std::enable_if_t<solver::is_mhd_model_v<Model>>> currently exposes no public API
while DataWrangler.getMHDPatchLevel() is public; either implement the same
public accessors you bind for non-MHD patch levels (copy the public surface from
the other PatchLevel specialization into this class: constructors, accessors,
e.g., grid()/cells()/data() or whatever named methods your bindings expect) or
remove/privatize the Python-exported entry point by making
DataWrangler.getMHDPatchLevel() non-public until those methods exist; locate
PatchLevel (the MHD specialization) and update it to match the other PatchLevel
public interface or adjust DataWrangler accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0d6dcd5c-36ac-4a8e-8928-4b21d87ae174
📒 Files selected for processing (18)
- pyphare/pyphare/data/wrangler.py
- pyphare/pyphare/pharesee/hierarchy/fromsim.py
- pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py
- pyphare/pyphare/pharesee/hierarchy/patch.py
- pyphare/pyphare/pharesee/hierarchy/patchlevel.py
- src/amr/physical_models/mhd_model.hpp
- src/core/utilities/mpi_utils.cpp
- src/core/utilities/mpi_utils.hpp
- src/core/utilities/span.hpp
- src/python3/cpp_etc.cpp
- src/python3/cpp_simulator.hpp
- src/python3/data_wrangler.hpp
- src/python3/patch_data.hpp
- src/python3/patch_level.hpp
- src/python3/pybind_def.hpp
- tests/simulator/CMakeLists.txt
- tests/simulator/refined_particle_nbr.py
- tests/simulator/test_data_wrangler.py
💤 Files with no reviewable changes (1)
- tests/simulator/refined_particle_nbr.py
🚧 Files skipped from review as they are similar to previous changes (2)
- src/core/utilities/mpi_utils.cpp
- src/amr/physical_models/mhd_model.hpp
Actionable comments posted: 6
♻️ Duplicate comments (6)
src/core/utilities/span.hpp (1)
29-33: ⚠️ Potential issue | 🟠 Major

Avoid the implicit `T* -> Span` conversion.

This constructor is callable with a single pointer because `s_` has a default, so `Span s = ptr;` silently builds a zero-length view. Split out the default constructor and require the size in the pointer constructor.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/utilities/span.hpp` around lines 29 - 33, The Span constructor currently allows implicit conversion from T* because Span(T* ptr_ = nullptr, SIZE const s_ = 0) lets callers pass only a pointer; change this by splitting into two constructors: a default no-arg Span() that initializes ptr and s to null/0, and an explicit pointer+size constructor Span(T* ptr_, SIZE const s_) (do not provide a default for s_); ensure the pointer constructor is not implicit (mark explicit if your style requires) so code like Span s = ptr; no longer compiles; update any call sites that relied on the old implicit conversion to pass a size.

pyphare/pyphare/pharesee/hierarchy/patchlevel.py (1)
25-27: ⚠️ Potential issue | 🟡 Minor

Guard `cell_width` against empty levels.

This still dereferences `self.patches[0]` unconditionally, so empty patch levels fail with `IndexError` instead of a clear precondition error.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py` around lines 25 - 27, The cell_width property currently dereferences self.patches[0] and will raise IndexError for empty levels; update the cell_width property (in patchlevel.py) to first check whether self.patches is non-empty and raise a clear, descriptive exception (e.g., ValueError("cell_width called on empty patch level")) or otherwise handle the empty-case explicitly before accessing self.patches[0]. Ensure the check references the cell_width property and self.patches symbols so readers can locate the guard and error message.

pyphare/pyphare/pharesee/hierarchy/fromsim.py (2)
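The `cell_width` guard suggested above can be sketched in a few runnable lines. This is a simplified stand-in, not the real `PatchLevel` from `pyphare/pyphare/pharesee/hierarchy/patchlevel.py`; the `dl` attribute on patches is assumed for illustration:

```python
from types import SimpleNamespace


class PatchLevel:
    """Simplified stand-in; only the empty-level guard is the point."""

    def __init__(self, patches):
        self.patches = patches

    @property
    def cell_width(self):
        # fail with a clear precondition error instead of a raw IndexError
        if not self.patches:
            raise ValueError("cell_width called on empty patch level")
        return self.patches[0].dl


level = PatchLevel([SimpleNamespace(dl=0.1)])
print(level.cell_width)  # prints 0.1
```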
76-80: ⚠️ Potential issue | 🟠 Major

Merging patch levels by index is brittle here.

`new_level[ip]` assumes the fresh extraction has the same patch count and ordering as the existing hierarchy. If it is shorter you raise on index access, if it is longer the extra patches are ignored, and if the order changes you merge the wrong patch data. Key this merge by patch identity, or at least validate the lengths before indexing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharesee/hierarchy/fromsim.py` around lines 76 - 80, The current merge loop assumes patch_levels[lvl_nbr] has the same ordering and length as hier.levels(...)[lvl_nbr].patches which can cause IndexError or incorrect merges; instead, for each level (use hier.levels, hier.times(), level.patches and patch_levels[lvl_nbr]) build a lookup keyed by a stable patch identifier (e.g., patch.id or patch.key) from new_level and then for each existing patch find the matching new patch by that id and merge their patch_datas; if no stable id exists, first validate lengths (len(level.patches) == len(new_level)) and raise a clear error before proceeding so you don’t index out of range or silently ignore/mismerge patches.
62-69: ⚠️ Potential issue | 🟠 Major

Particle extraction is still wrapped as field data.

The `qty == "particles"` branch builds `FieldData(layout, "tags", patch_data.data)` instead of a `ParticleData` wrapper. That drops particle-specific semantics and population metadata, so downstream code cannot treat this path as particle data anymore.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharesee/hierarchy/fromsim.py` around lines 62 - 69, The loop currently always wraps particle extraction as FieldData which loses particle semantics; update the branch that handles qty == "particles" inside the loop over getters[qty](pop) to construct a ParticleData instance (not FieldData) and pass the same layout, tag name, payload (patch_data.data) and any population/metadata from patch_data into ParticleData so downstream code retains particle-specific behavior; specifically replace Patch({qty: FieldData(layout, "tags", patch_data.data)}) with a conditional that uses Patch({qty: ParticleData(layout, "tags", patch_data.data, ...metadata...)}) when qty == "particles", preserving whatever metadata/attributes exist on patch_data (e.g. population or metadata fields) and leaving other qtys unchanged.

src/python3/cpp_etc.cpp (1)
40-43: ⚠️ Potential issue | 🟠 Major

`ParticleArray.__getitem__` should raise `IndexError`, not hit `operator[]`.

The current binding takes `std::size_t` and forwards straight to `self[idx]`, so negative indices are impossible and out-of-range access is undefined behavior instead of a Python exception.

🩹 Proposed fix

 py::class_<ParticleArray, std::shared_ptr<ParticleArray>>(m, name.c_str())
     .def("size", &ParticleArray::size)
-    .def("__getitem__",
-         [](ParticleArray& self, std::size_t const idx) -> auto& { return self[idx]; });
+    .def("__getitem__", [](ParticleArray& self, py::ssize_t idx) -> auto& {
+        auto const size = static_cast<py::ssize_t>(self.size());
+        if (idx < 0)
+            idx += size;
+        if (idx < 0 || idx >= size)
+            throw py::index_error("ParticleArray index out of range");
+        return self[static_cast<std::size_t>(idx)];
+    }, py::return_value_policy::reference_internal);
Verify each finding against the current code and only fix it if needed. In `@src/python3/cpp_etc.cpp` around lines 40 - 43, The __getitem__ binding for ParticleArray currently takes std::size_t and calls operator[], which prevents negative indexing and can UB on out-of-range access; change the lambda bound in py::class_<ParticleArray> to take py::ssize_t (or ssize_t), check bounds: if index < 0 add index += self.size(), then if index < 0 or index >= (py::ssize_t)self.size() throw py::index_error with a clear message, otherwise return the element (preserving appropriate return policy, e.g., reference_internal) using the valid non-throwing access (operator[] after bounds check).src/python3/patch_data.hpp (1)
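The bounds logic in the proposed C++ fix follows standard Python sequence semantics, which can be modeled and checked in a few lines (the function name here is ours, not part of the codebase):

```python
def normalize_index(idx, size):
    """Mirror Python sequence semantics: accept negative indices, raise on out-of-range."""
    if idx < 0:
        idx += size
    if idx < 0 or idx >= size:
        raise IndexError("ParticleArray index out of range")
    return idx


print(normalize_index(-1, 4))  # prints 3: -1 maps to the last element
```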
33-38: ⚠️ Potential issue | 🟠 Major

`nGhosts` is indeterminate in the sync padding path.

`src/python3/data_wrangler.hpp::sync()` pads shorter ranks with an empty `PatchData` and still collects `patch_data.nGhosts` before it knows the payload is empty. With the current member declaration, that read is indeterminate.

🩹 Proposed fix

-    std::size_t nGhosts;
+    std::size_t nGhosts{0};
Verify each finding against the current code and only fix it if needed. In `@src/python3/patch_data.hpp` around lines 33 - 38, The nGhosts member in patch_data.hpp can be indeterminate and is read by src/python3/data_wrangler.hpp::sync() when padding with an empty PatchData; initialize nGhosts to a known value (e.g., zero) by default to avoid undefined reads—update the member declaration (or ensure all PatchData constructors explicitly set nGhosts) so that patch_data.nGhosts is always well-defined when sync() inspects it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyphare/pyphare/data/wrangler.py`:
- Around line 15-16: modelsPerLevel is being filled with the placeholder
"Hybrid" for every level so getPatchLevel() ends up always returning Hybrid;
instead populate modelsPerLevel from the simulator's real per-level selection by
querying the existing API (e.g., call self.cpp.getPatchLevel(level) or
self.getPatchLevel(level) for each level returned by
self.cpp.getNumberOfLevels()) when initializing modelsPerLevel, and apply the
same change to the other initializations referenced around lines 30-35 so that
getPatchLevel() and getHybridPatchLevel() route correctly to the actual model
per level rather than the hard-coded "Hybrid".
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py`:
- Around line 218-220: The new flux getters break the zero-argument contract
expected by fromsim.py; change the three lambdas ("flux_x", "flux_y", "flux_z")
to accept an optional pop parameter (e.g., lambda pop=None: pl.getFlux("x",
pop)) so callers that call getters[qty]() still work while preserving the
ability to pass pop when available; update the getters definition where
"flux_x"/"flux_y"/"flux_z" are set and ensure they call pl.getFlux("x"/"y"/"z",
pop).
In `@pyphare/pyphare/pharesee/hierarchy/patch.py`:
- Around line 44-48: In the __getitem__ method, replace the current re-raise
that uses the exception variable (raising `e`) with a bare raise so the original
traceback is preserved; locate the except KeyError block that logs the missing
key and change the re-raise to a bare raise while keeping the print/log line
that references self.patch_datas and the key.
In `@src/python3/data_wrangler.hpp`:
- Around line 101-108: The two helper functions getMHDPatchLevel and
getHybridPatchLevel incorrectly dereference simulator_.getMHDModel() and
simulator_.getHybridModel() (which already return references), causing
temporaries and binding errors for the PatchLevel constructor; fix by passing
the referenced objects directly to PatchLevel (i.e., remove the * dereference)
so that PatchLevel<MHDModel>{*hierarchy_, simulator_.getMHDModel(), lvl} and
PatchLevel<HybridModel>{*hierarchy_, simulator_.getHybridModel(), lvl} receive
the proper lvalue references.
In `@src/python3/patch_data.hpp`:
- Around line 60-69: setPyPatchDataFromField currently assigns a py::memoryview
(from field_as_memory_view()) into PatchData::data which is instantiated as
PatchData<py_array_t<double>, dimension>, breaking the py_array_t<T> contract
used by downstream sync() in data_wrangler.hpp (which calls shape<dimension>()
and makeSpan() and expects .request()). Fix by converting the memoryview into a
py_array_t<double> before assigning to pdata.data (or otherwise construct a
py_array_t<double> view over the same buffer), e.g. replace the
field_as_memory_view() assignment in setPyPatchDataFromField with code that
produces a py_array_t<double> compatible view; update references to PatchData,
setPyPatchDataFromField, and any construction sites in patch_level.hpp to ensure
pdata.data is a py_array_t<double> not a py::memoryview so shape<dimension>()
and makeSpan() calls in sync() work correctly.
In `@src/python3/patch_level.hpp`:
- Around line 149-163: The MHD PatchLevel specialization currently defined as
PatchLevel<Model, std::enable_if_t<solver::is_mhd_model_v<Model>>> is empty and
thus the object bound in src/python3/cpp_simulator.hpp exposes no accessors to
Python; update the C++ so the MHD specialization either inherits or reimplements
the same public getters used by other PatchLevel specializations (the methods
exposed through AnyPatchLevel) and then add the corresponding Python bindings in
cpp_simulator.hpp (the binding code used by getPatchLevel()/getMHDPatchLevel())
so that objects returned by getMHDPatchLevel() expose the same callable methods
from Python as non-MHD PatchLevel instances.
---
Duplicate comments:
In `@pyphare/pyphare/pharesee/hierarchy/fromsim.py`:
- Around line 76-80: The current merge loop assumes patch_levels[lvl_nbr] has
the same ordering and length as hier.levels(...)[lvl_nbr].patches which can
cause IndexError or incorrect merges; instead, for each level (use hier.levels,
hier.times(), level.patches and patch_levels[lvl_nbr]) build a lookup keyed by a
stable patch identifier (e.g., patch.id or patch.key) from new_level and then
for each existing patch find the matching new patch by that id and merge their
patch_datas; if no stable id exists, first validate lengths (len(level.patches)
== len(new_level)) and raise a clear error before proceeding so you don’t index
out of range or silently ignore/mismerge patches.
- Around line 62-69: The loop currently always wraps particle extraction as
FieldData which loses particle semantics; update the branch that handles qty ==
"particles" inside the loop over getters[qty](pop) to construct a ParticleData
instance (not FieldData) and pass the same layout, tag name, payload
(patch_data.data) and any population/metadata from patch_data into ParticleData
so downstream code retains particle-specific behavior; specifically replace
Patch({qty: FieldData(layout, "tags", patch_data.data)}) with a conditional that
uses Patch({qty: ParticleData(layout, "tags", patch_data.data, ...metadata...)})
when qty == "particles", preserving whatever metadata/attributes exist on
patch_data (e.g. population or metadata fields) and leaving other qtys
unchanged.
In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py`:
- Around line 25-27: The cell_width property currently dereferences
self.patches[0] and will raise IndexError for empty levels; update the
cell_width property (in patchlevel.py) to first check whether self.patches is
non-empty and raise a clear, descriptive exception (e.g., ValueError("cell_width
called on empty patch level")) or otherwise handle the empty-case explicitly
before accessing self.patches[0]. Ensure the check references the cell_width
property and self.patches symbols so readers can locate the guard and error
message.
In `@src/core/utilities/span.hpp`:
- Around line 29-33: The Span constructor currently allows implicit conversion
from T* because Span(T* ptr_ = nullptr, SIZE const s_ = 0) lets callers pass
only a pointer; change this by splitting into two constructors: a default no-arg
Span() that initializes ptr and s to null/0, and an explicit pointer+size
constructor Span(T* ptr_, SIZE const s_) (do not provide a default for s_);
ensure the pointer constructor is not implicit (mark explicit if your style
requires) so code like Span s = ptr; no longer compiles; update any call sites
that relied on the old implicit conversion to pass a size.
In `@src/python3/cpp_etc.cpp`:
- Around line 40-43: The __getitem__ binding for ParticleArray currently takes
std::size_t and calls operator[], which prevents negative indexing and can UB on
out-of-range access; change the lambda bound in py::class_<ParticleArray> to
take py::ssize_t (or ssize_t), check bounds: if index < 0 add index +=
self.size(), then if index < 0 or index >= (py::ssize_t)self.size() throw
py::index_error with a clear message, otherwise return the element (preserving
appropriate return policy, e.g., reference_internal) using the valid
non-throwing access (operator[] after bounds check).
In `@src/python3/patch_data.hpp`:
- Around line 33-38: The nGhosts member in patch_data.hpp can be indeterminate
and is read by src/python3/data_wrangler.hpp::sync() when padding with an empty
PatchData; initialize nGhosts to a known value (e.g., zero) by default to avoid
undefined reads—update the member declaration (or ensure all PatchData
constructors explicitly set nGhosts) so that patch_data.nGhosts is always
well-defined when sync() inspects it.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9bda2e46-f217-4018-a63e-540cfdadc2b4
📒 Files selected for processing (18)
- pyphare/pyphare/data/wrangler.py
- pyphare/pyphare/pharesee/hierarchy/fromsim.py
- pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py
- pyphare/pyphare/pharesee/hierarchy/patch.py
- pyphare/pyphare/pharesee/hierarchy/patchlevel.py
- src/amr/physical_models/mhd_model.hpp
- src/core/utilities/mpi_utils.cpp
- src/core/utilities/mpi_utils.hpp
- src/core/utilities/span.hpp
- src/python3/cpp_etc.cpp
- src/python3/cpp_simulator.hpp
- src/python3/data_wrangler.hpp
- src/python3/patch_data.hpp
- src/python3/patch_level.hpp
- src/python3/pybind_def.hpp
- tests/simulator/CMakeLists.txt
- tests/simulator/refined_particle_nbr.py
- tests/simulator/test_data_wrangler.py
💤 Files with no reviewable changes (1)
- tests/simulator/refined_particle_nbr.py
🚧 Files skipped from review as they are similar to previous changes (1)
- src/core/utilities/mpi_utils.cpp
"flux_x": lambda pop: pl.getFlux("x", pop),
"flux_y": lambda pop: pl.getFlux("y", pop),
"flux_z": lambda pop: pl.getFlux("z", pop),
Keep the flux getters compatible with the field-quantity call path.
fromsim.py still treats flux_x/y/z as field quantities and calls getters[qty]() with no arguments. These new lambdas require pop, so any flux plot now fails with TypeError before the sync/plotting path runs. Either keep the zero-arg contract here or update the caller in the same change to pass pop for flux quantities.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py` around lines 218 -
220, The new flux getters break the zero-argument contract expected by
fromsim.py; change the three lambdas ("flux_x", "flux_y", "flux_z") to accept an
optional pop parameter (e.g., lambda pop=None: pl.getFlux("x", pop)) so callers
that call getters[qty]() still work while preserving the ability to pass pop
when available; update the getters definition where "flux_x"/"flux_y"/"flux_z"
are set and ensure they call pl.getFlux("x"/"y"/"z", pop).
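The default-argument fix suggested in the prompt keeps both call paths working. A runnable sketch, with a dummy object standing in for the real patch level (the `getFlux` signature matches the diff above; the class and its return values are illustrative only):

```python
class _FakePatchLevel:
    """Stand-in for the real patch-level object bound from C++."""

    def getFlux(self, component, pop=None):
        return f"flux_{component}[{pop or 'total'}]"


pl = _FakePatchLevel()
getters = {
    "flux_x": lambda pop=None: pl.getFlux("x", pop),
    "flux_y": lambda pop=None: pl.getFlux("y", pop),
    "flux_z": lambda pop=None: pl.getFlux("z", pop),
}

print(getters["flux_x"]())           # zero-arg field-quantity path still works
print(getters["flux_y"]("protons"))  # population can be passed when known
```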
try:
    return self.patch_datas[key]
except KeyError as e:
    print(key, "not in", self.patch_datas.keys())
    raise e
🧩 Analysis chain
🏁 Script executed:
find . -name "patch.py" -path "*/hierarchy/*" | head -5Repository: PHAREHUB/PHARE
Length of output: 103
🏁 Script executed:
sed -n '40,55p' ./pyphare/pyphare/pharesee/hierarchy/patch.pyRepository: PHAREHUB/PHARE
Length of output: 484
Use bare raise to preserve the original exception traceback.
In this __getitem__ method, raise e resets the traceback to line 48, which obscures where the original KeyError occurred. Use bare raise after logging to maintain the original traceback:
except KeyError as e:
    print(key, "not in", self.patch_datas.keys())
    raise

🧰 Tools
🪛 Ruff (0.15.4)
[warning] 48-48: Use raise without specifying exception name
Remove exception name
(TRY201)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pyphare/pyphare/pharesee/hierarchy/patch.py` around lines 44 - 48, In the
__getitem__ method, replace the current re-raise that uses the exception
variable (raising `e`) with a bare raise so the original traceback is preserved;
locate the except KeyError block that logs the missing key and change the
re-raise to a bare raise while keeping the print/log line that references
self.patch_datas and the key.
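A runnable version of the suggested `__getitem__`, using a minimal dict-backed stand-in for the real `Patch` class:

```python
class Patch:
    """Minimal stand-in: only __getitem__ and the re-raise behavior matter here."""

    def __init__(self, patch_datas):
        self.patch_datas = patch_datas

    def __getitem__(self, key):
        try:
            return self.patch_datas[key]
        except KeyError:
            print(key, "not in", list(self.patch_datas.keys()))
            raise  # bare raise: the traceback still points at the failing lookup


patch = Patch({"rho": 1.0})
print(patch["rho"])  # prints 1.0
```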
 auto getMHDPatchLevel(size_t lvl)
 {
-    return PatchLevel<opts>{*hierarchy_, *simulator_.getHybridModel(), lvl};
+    return PatchLevel<MHDModel>{*hierarchy_, *simulator_.getMHDModel(), lvl};
 }

-auto sort_merge_1d(std::vector<PatchData<std::vector<double>, dimension>> const&& input,
-                   bool shared_patch_border = false)
+auto getHybridPatchLevel(size_t lvl)
 {
-    std::vector<std::pair<double, PatchData<std::vector<double>, dimension> const*>> sorted;
-    for (auto const& data : input)
-        sorted.emplace_back(core::Point<double, 1>::fromString(data.origin)[0], &data);
-    std::sort(sorted.begin(), sorted.end(), [](auto& a, auto& b) { return a.first < b.first; });
-    std::vector<double> ret;
-    for (size_t i = 0; i < sorted.size(); i++)
-    { // skip empty patches in case of unequal patches across MPI domains
-        if (!sorted[i].second->data.size())
-            continue;
-        auto& data = sorted[i].second->data;
-        auto& ghosts = sorted[i].second->nGhosts;
-        auto end = ghosts;
-        // primal nodes share a cell wall when patches touch so drop duplicate value if so
-        if (shared_patch_border)
-            end = i == sorted.size() - 1 ? end : end + 1;
-        ret.insert(std::end(ret), std::begin(data) + ghosts, std::end(data) - end);
-    }
-    return ret;
+    return PatchLevel<HybridModel>{*hierarchy_, *simulator_.getHybridModel(), lvl};
🧩 Analysis chain
🏁 Script executed:
fd simulator.hpp --type fRepository: PHAREHUB/PHARE
Length of output: 115
🏁 Script executed:
cat -n src/simulator/simulator.hpp | head -150Repository: PHAREHUB/PHARE
Length of output: 6128
🏁 Script executed:
rg "getMHDModel|getHybridModel" --type hpp -A 3Repository: PHAREHUB/PHARE
Length of output: 85
🏁 Script executed:
cat -n src/python3/data_wrangler.hpp | sed -n '95,115p'Repository: PHAREHUB/PHARE
Length of output: 702
🏁 Script executed:
rg "class PatchLevel" --type hpp -A 10Repository: PHAREHUB/PHARE
Length of output: 85
🏁 Script executed:
rg "class PatchLevel" -A 15Repository: PHAREHUB/PHARE
Length of output: 2137
🏁 Script executed:
rg "template.*PatchLevel" -A 20Repository: PHAREHUB/PHARE
Length of output: 40
Remove unnecessary dereferencing of getMHDModel() and getHybridModel().
Lines 103 and 108 dereference the return values of getMHDModel() and getHybridModel(), which already return references (as defined in src/simulator/simulator.hpp). This creates temporaries that cannot be bound to the non-const lvalue reference parameters expected by the PatchLevel constructor, making these constructions ill-formed.
🩹 Proposed fix
auto getMHDPatchLevel(size_t lvl)
{
- return PatchLevel<MHDModel>{*hierarchy_, *simulator_.getMHDModel(), lvl};
+ return PatchLevel<MHDModel>{*hierarchy_, simulator_.getMHDModel(), lvl};
}
auto getHybridPatchLevel(size_t lvl)
{
- return PatchLevel<HybridModel>{*hierarchy_, *simulator_.getHybridModel(), lvl};
+ return PatchLevel<HybridModel>{*hierarchy_, simulator_.getHybridModel(), lvl};
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/python3/data_wrangler.hpp` around lines 101 - 108, The two helper
functions getMHDPatchLevel and getHybridPatchLevel incorrectly dereference
simulator_.getMHDModel() and simulator_.getHybridModel() (which already return
references), causing temporaries and binding errors for the PatchLevel
constructor; fix by passing the referenced objects directly to PatchLevel (i.e.,
remove the * dereference) so that PatchLevel<MHDModel>{*hierarchy_,
simulator_.getMHDModel(), lvl} and PatchLevel<HybridModel>{*hierarchy_,
simulator_.getHybridModel(), lvl} receive the proper lvalue references.
 void setPyPatchDataFromField(PatchData& pdata, Field const& field, GridLayout& grid,
                              std::string patchID)
 {
     auto constexpr dimension = PatchData::dimension;
     static_assert(dimension >= 1 and dimension <= 3);

     setPatchDataFromGrid(pdata, grid, patchID);
     pdata.nGhosts = static_cast<std::size_t>(
         GridLayout::nbrGhosts(GridLayout::centering(field.physicalQuantity())[0]));
-    pdata.data.assign(field.data(), field.data() + field.size());
+    pdata.data = field_as_memory_view(field);
🧩 Analysis chain
🏁 Script executed:
find . -name "patch_data.hpp" -o -name "data_wrangler.hpp" | head -20Repository: PHAREHUB/PHARE
Length of output: 118
🏁 Script executed:
cat -n src/python3/patch_data.hpp | head -80Repository: PHAREHUB/PHARE
Length of output: 2700
🏁 Script executed:
# Check PatchData structure definition
rg "struct PatchData|class PatchData" -A 20 src/python3/Repository: PHAREHUB/PHARE
Length of output: 40
🏁 Script executed:
# Find field_as_memory_view function
rg "field_as_memory_view" -B 3 -A 10 src/Repository: PHAREHUB/PHARE
Length of output: 1346
🏁 Script executed:
# Check data_wrangler.hpp to see how pdata.data is used
cat -n src/python3/data_wrangler.hpp | head -150Repository: PHAREHUB/PHARE
Length of output: 5948
🏁 Script executed:
# Search for PatchData instantiations
rg "PatchData<" -B 2 -A 2 src/Repository: PHAREHUB/PHARE
Length of output: 4267
🏁 Script executed:
# Search for sync() function definition
rg "void sync|def.*sync" -B 3 -A 15 src/python3/data_wrangler.hppRepository: PHAREHUB/PHARE
Length of output: 40
🏁 Script executed:
# Check the shape<> and makeSpan() function signatures to confirm they expect py_array_t<double>
rg "template.*shape|void.*shape|auto.*shape" -B 2 -A 8 src/python3/Repository: PHAREHUB/PHARE
Length of output: 4112
🏁 Script executed:
# Search for makeSpan implementation
rg "template.*makeSpan|makeSpan\(" -B 2 -A 6 src/Repository: PHAREHUB/PHARE
Length of output: 2397
setPyPatchDataFromField() breaks the py_array_t<double> contract by assigning a py::memoryview.
Line 69 assigns the result of field_as_memory_view() (which returns py::memoryview::from_buffer()) to pdata.data, but pdata is instantiated as PatchData<py_array_t<double>, dimension> in patch_level.hpp. The downstream sync() function in data_wrangler.hpp (lines 121, 126) calls shape<dimension>() and makeSpan() on patch_data.data, both of which are explicitly templated for py_array_t<T> and require the .request() method, which py::memoryview does not provide. Either return a py_array_t<double> view from line 69 or refactor the downstream API to accept memoryviews end-to-end.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/python3/patch_data.hpp` around lines 60 - 69, setPyPatchDataFromField
currently assigns a py::memoryview (from field_as_memory_view()) into
PatchData::data which is instantiated as PatchData<py_array_t<double>,
dimension>, breaking the py_array_t<T> contract used by downstream sync() in
data_wrangler.hpp (which calls shape<dimension>() and makeSpan() and expects
.request()). Fix by converting the memoryview into a py_array_t<double> before
assigning to pdata.data (or otherwise construct a py_array_t<double> view over
the same buffer), e.g. replace the field_as_memory_view() assignment in
setPyPatchDataFromField with code that produces a py_array_t<double> compatible
view; update references to PatchData, setPyPatchDataFromField, and any
construction sites in patch_level.hpp to ensure pdata.data is a
py_array_t<double> not a py::memoryview so shape<dimension>() and makeSpan()
calls in sync() work correctly.
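The Python side of the mismatch is easy to demonstrate: a raw `memoryview` (what `py::memoryview::from_buffer` produces) lacks the ndarray API that shape- and span-based helpers rely on, while wrapping it in an array shares the same buffer without copying. This illustrates the contract only, not the pybind11-side fix itself:

```python
import numpy as np

buf = bytearray(8 * 4)          # room for four doubles
mv = memoryview(buf).cast("d")  # what a py::memoryview over the field looks like

arr = np.asarray(mv)            # zero-copy ndarray view over the same buffer
arr[:] = [1.0, 2.0, 3.0, 4.0]
arr[0] = 7.0

print(mv[0])      # prints 7.0: both views alias one buffer, no copy was made
print(arr.shape)  # prints (4,): the shape API the sync path relies on
```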
simpler mechanisms to