
ufuncing #1192

Open

PhilipDeegan wants to merge 8 commits into PHAREHUB:master from PhilipDeegan:ufuncing

Conversation


@PhilipDeegan PhilipDeegan commented Apr 9, 2026

There are still some open questions around data selection.

We could consider a class called ValueHierarchy, similar to the PatchHierarchy but without patches.

time / level / list of dicts of quantities; this mirrors the PatchHierarchy structure.

The gaussian of Bx from the 2D Harris init looks like:
[image]

Some outer data is dropped during interpolation, which I have not modified in this PR. On the lazy loading PR less data is lost, but for the gaussian filter some outer data will invariably be lost.


coderabbitai bot commented Apr 9, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds NumPy protocol hooks across Patch/Level/Hierarchy/FieldData, introduces a tensor-field base with FROM-based constructors for Scalar/Vector fields, adds Gaussian-filter compute helpers and gaussian() methods, updates operator result wrapping to use FROM(), and adjusts selection/centering and tests.

Changes

Cohort / File(s) Summary
Core operators
pyphare/pyphare/core/operators.py
Operator result construction changed to use ScalarField.FROM(...) / VectorField.FROM(...) for dot/cross/sqrt/grad and arithmetic result wrapping.
Grid / Box utilities
pyphare/pyphare/core/box.py, pyphare/pyphare/core/gridlayout.py
Minor variable rename in select; extended yee_centering to treat "value","x","y","z" as "primal".
Hierarchy: top-level
pyphare/pyphare/pharesee/hierarchy/hierarchy.py
Adds __array_ufunc__/__array_function__ that dispatch NumPy ops per time/level, and ensures data_files initialized unconditionally.
Hierarchy: compute helpers
pyphare/pyphare/pharesee/hierarchy/hierarchy_compute.py
New Gaussian-filter helpers: _compute_gaussian_filter_on_scalarfield and _compute_gaussian_filter_on_vectorfield using scipy.ndimage.gaussian_filter, copying only interior region (excluding ghosts).
Patch & PatchLevel NumPy dispatch
pyphare/pyphare/pharesee/hierarchy/patch.py, .../patchlevel.py
Adds __array_ufunc__/__array_function__ and module helpers to dispatch ufunc/array-function per patch and per patch-index; PatchLevel gains integer __getitem__ and cell_width property.
PatchData / FieldData changes
pyphare/pyphare/pharesee/hierarchy/patchdata.py
Centralized centering/ghost resolution; FieldData.select accepts slices/boxes and __getitem__ delegates to it; adds NumPy dispatch that unwraps .dataset, applies ops, and wraps ndarray outputs back into FieldData.
Tensor-field abstraction
pyphare/pyphare/pharesee/hierarchy/tensorfield.py
New AnyTensorField with FROM() constructor, selection semantics, GetDomainSize/GetDl, and get_interpolated_selection_from interpolation helper.
Scalar / Vector fields
pyphare/pyphare/pharesee/hierarchy/scalarfield.py, .../vectorfield.py
Now subclass AnyTensorField, add FROM() classmethods, switch arithmetic/result creation to FROM(), and add gaussian() methods using new compute helpers.
Run & utils changes
pyphare/pyphare/pharesee/run/run.py, pyphare/pyphare/pharesee/run/utils.py
Call sites updated to use FROM() constructors; make_interpolator requests Yee coords including ghosts; small guards/formatting tweaks and improved file-basename guard in all_times.
Tests & simulator
pyphare/pyphare_tests/test_pharesee/test_hierarchy.py, tests/simulator/test_run.py
Hierarchy test times reduced to two timestamps; simulator test parameterized by diag format, reduced resolution/time, refactored plotting to output gaussian/exp/dot of B_finest and other diagnostics.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant NumPy
    participant Hier as AnyTensorField/PatchHierarchy
    participant Hufunc as hierarchy_array_ufunc
    participant Level as PatchLevel
    participant Lufunc as patch_level_array_ufunc
    participant P as Patch
    participant Pufunc as patch_array_ufunc

    User->>NumPy: np.add(h1, h2)
    NumPy->>Hier: __array_ufunc__(ufunc, method, ...)
    Hier->>Hufunc: dispatch per time/level
    Hufunc->>Hufunc: deepcopy & validate
    loop per time
      Hufunc->>Level: apply ufunc to level
      Level->>Lufunc: __array_ufunc__(...)
      Lufunc->>Lufunc: iterate patches
      loop per patch
        Lufunc->>P: apply ufunc to patch
        P->>Pufunc: __array_ufunc__(...)
        Pufunc->>Pufunc: apply ufunc per patch_datas key
        Pufunc->>P: return new Patch
      end
      Lufunc->>Level: return new PatchLevel
    end
    Hufunc->>Hier: reconstruct time_hier and return new hierarchy
    Hier->>NumPy: result
sequenceDiagram
    actor User
    participant NumPy
    participant FD as FieldData
    participant FDup as field_data_array_ufunc

    User->>NumPy: np.sqrt(field_data)
    NumPy->>FD: __array_ufunc__(ufunc, method, ...)
    FD->>FDup: dispatch ufunc
    FDup->>FDup: extract ndarray from .dataset
    FDup->>NumPy: apply ndarray ufunc
    NumPy->>FDup: ndarray result
    FDup->>FDup: wrap ndarray result into new FieldData (preserve centering/ghosts)
    FDup->>FD: return FieldData
    FD->>NumPy: result

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

feature, python, pharesee, test

Suggested reviewers

  • UCaromel
  • nicolasaunai
🚥 Pre-merge checks | ❌ 3

❌ Failed checks (1 warning, 2 inconclusive)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 6.86%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Title check (❓ Inconclusive): the title 'ufuncing' is vague and generic and does not convey meaningful information about the substantial changes made across multiple files. Resolution: replace it with a more descriptive title, such as 'Add NumPy ufunc/array_function protocol support to hierarchy and patch classes'.
  • Description check (❓ Inconclusive): no description was provided by the author, making it impossible to assess relatedness to the changeset. Resolution: add a pull request description explaining the purpose of the changes, such as implementing NumPy protocol support and refactoring field construction patterns.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🧹 Nitpick comments (8)
tests/simulator/test_run.py (1)

12-12: Unused import hierarchy_utils.

The hootils alias is imported but never used in this file.

♻️ Proposed fix
-from pyphare.pharesee.hierarchy import hierarchy_utils as hootils
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/simulator/test_run.py` at line 12, Remove the unused import alias
`hootils` from the statement "from pyphare.pharesee.hierarchy import
hierarchy_utils as hootils" in the test module; delete that import (or replace
it with a used symbol if intended) so `hierarchy_utils`/`hootils` is not
imported but unused.
pyphare/pyphare/pharesee/hierarchy/vectorfield.py (1)

60-61: Dead code: reassigning self and other has no effect.

These local variable reassignments don't modify the original objects and are never used afterward. The "needed ?" comment suggests uncertainty — consider removing them.

♻️ Proposed fix
-        self = rename(h_self, names_self_kept)  # needed ?
-        other = rename(h_other, names_other_kept)
-
         return VectorField.FROM(h)

Apply similarly in __sub__.

Also applies to: 83-84

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/vectorfield.py` around lines 60 - 61, The
lines reassigning self and other with rename(h_self, names_self_kept) and
rename(h_other, names_other_kept) are dead because they only bind local names
and those locals are never used; remove these redundant assignments in
vectorfield.py (and mirror the same cleanup in __sub__) and keep any necessary
calls that actually affect returned values (e.g., ensure rename is applied to
the objects used later such as h_self/h_other or the result of operations, not
by rebinding self/other locals that are ignored); specifically remove or replace
the two occurrences where rename is assigned to self and other (and the
analogous two lines at the other location) so the code no longer contains no-op
reassignments.
pyphare/pyphare/pharesee/hierarchy/tensorfield.py (3)

49-54: TensorField.__mul__ bypasses FROM pattern used elsewhere.

Other tensor field types use *.FROM(h) for result construction. Direct instantiation here may skip rename logic and cause inconsistency.

♻️ Proposed fix
     def __mul__(self, other):
         if type(other) is TensorField:
             raise ValueError(
                 "TensorField * TensorField is ambiguous, use pyphare.core.operators.dot or .prod"
             )
-        return TensorField(hootils.compute_hier_from(hc.compute_mul, self, other=other))
+        return TensorField.FROM(hootils.compute_hier_from(hc.compute_mul, self, other=other))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` around lines 49 - 54,
TensorField.__mul__ currently returns a direct TensorField(...) which bypasses
the rename/normalization logic provided by TensorField.FROM; change the return
to construct the result via TensorField.FROM(...) instead of direct
instantiation, i.e. call
TensorField.FROM(hootils.compute_hier_from(hc.compute_mul, self, other=other))
so the output goes through the same FROM path as other tensor operations (keep
the hc.compute_mul and other=other arguments intact).

31-37: Avoid shadowing the input builtin.

Using input as a parameter name shadows Python's built-in function, which can cause subtle bugs and confusion.

♻️ Proposed fix
-    def __getitem__(self, input):
-        cls = type(input)
+    def __getitem__(self, selection):
+        cls = type(selection)
         if cls is Box or cls is slice:
-            return get_interpolated_selection_from(self, input)
-        if input in self.__dict__:
-            return self.__dict__[input]
-        raise ValueError("AnyTensorField.__getitem__ cannot handle input", input)
+            return get_interpolated_selection_from(self, selection)
+        if selection in self.__dict__:
+            return self.__dict__[selection]
+        raise ValueError("AnyTensorField.__getitem__ cannot handle input", selection)

Apply similarly to get_interpolated_selection_from.

Also applies to: 70-78

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` around lines 31 - 37, The
__getitem__ method currently uses the parameter name `input`, shadowing the
built-in; rename that parameter to a non-built-in name (e.g., `key` or
`selector`) in AnyTensorField.__getitem__ and update its use sites inside the
method (type checks against Box/slice and the dict lookup), and also rename the
parameter in get_interpolated_selection_from (and its other occurrence around
the second __getitem__ implementation) so signatures match; ensure all internal
references and any callers within the module use the new parameter name to avoid
shadowing the built-in `input`.

76-78: Typo: "interpoloation" → "interpolation".

📝 Proposed fix
         raise ValueError(
-            "AnyTensorField interpoloation does not support multiple times"
+            "AnyTensorField interpolation does not support multiple times"
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` around lines 76 - 78, Fix
the typo in the ValueError message raised in
pyphare/pyphare/pharesee/hierarchy/tensorfield.py: change "interpoloation" to
"interpolation" in the raise statement inside the AnyTensorField-related code
(the ValueError raised where multiple times are not supported) so the message
reads "...AnyTensorField interpolation does not support multiple times".
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (2)

657-658: time variable may reference unexpected value after loop.

After the loop, time holds the last iteration's value. If time_hier is empty, time would be undefined. Consider explicitly selecting a time key.

♻️ Proposed fix
+    first_time = next(iter(time_hier.keys()))
-    if type(time_hier[time][0][0]) is dict:
+    if type(time_hier[first_time][0][0]) is dict:
         return time_hier  # return a dictionary of times[]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py` around lines 657 - 658, The
code checks the type of time_hier[time][0][0] using a loop variable time that
may be undefined if time_hier is empty or may refer to the last loop iteration;
change this to explicitly pick a time key and guard against empty time_hier:
first verify time_hier is non-empty, then obtain a representative key with
something like next(iter(time_hier)) (or list(time_hier.keys())[0]) and use that
key instead of the loop variable; update the check that currently uses
time_hier[time][0][0] so it uses the explicit key and handle the empty case by
returning or raising as appropriate.

638-638: Unused loop variable i.

The loop index i is never used. Replace with _ to indicate intentional discard.

♻️ Proposed fix
-    for i, time in enumerate(hier.time_hier):
+    for time in hier.time_hier:

Or if index is needed later:

-    for i, time in enumerate(hier.time_hier):
+    for _i, time in enumerate(hier.time_hier):

Also applies to: 651-651

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py` at line 638, The for-loop
uses an unused index variable: replace "for i, time in
enumerate(hier.time_hier):" with "for _, time in enumerate(hier.time_hier):" (or
drop enumerate and use "for time in hier.time_hier:") to indicate the index is
intentionally unused; apply the same change for the other occurrence around the
same loop (the second instance noted at 651) so the unused variable `i` is
removed in both places.
pyphare/pyphare/pharesee/hierarchy/patchlevel.py (1)

49-49: Add strict=True to zip() for safety.

After validating that all PatchLevel inputs have the same number of patches via is_compatible, the lengths are guaranteed to match. Adding strict=True provides a runtime assertion that catches bugs if the invariant is ever violated.

♻️ Proposed fix
-    out = [getattr(ufunc, method)(*p, **kwargs) for p in zip(*ps)]
+    out = [getattr(ufunc, method)(*p, **kwargs) for p in zip(*ps, strict=True)]
-    out = [func(*p, **kwargs) for p in zip(*ps)]
+    out = [func(*p, **kwargs) for p in zip(*ps, strict=True)]

Also applies to: 59-59

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py` at line 49, Summary: Add
strict=True to zip calls to enforce the precondition that all PatchLevel inputs
have the same length. Fix: In the list comprehensions that build 'out' (the
expressions using zip(*ps) in patchlevel.py), change the zip invocation to
zip(*ps, strict=True) so a ValueError is raised if lengths differ; apply the
same change to the other occurrence noted (also at the second zip usage). Ensure
this is done alongside the existing is_compatible check and no other logic is
altered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy_compute.py`:
- Around line 11-20: The code incorrectly collapses per-axis ghost widths by
using nb_ghosts = patch_datas["value"].ghosts_nbr[0] and then builds select =
tuple([slice(nb_ghosts, -nb_ghosts) for _ in range(ndim)]), which breaks
asymmetric or zero-ghost configurations; fix by reading the full ghosts vector
(e.g. ghosts = patch_datas["value"].ghosts_nbr or similar) and construct select
using each axis' ghost count (slice(ghosts[i], -ghosts[i]) or slice(ghosts[i],
None) when ghost is 0) so ds_[select] receives the interior from
gaussian_filter(ds, sigma=sigma) correctly; update all occurrences (the earlier
nb_ghosts usage and the later identical block) referring to
patch_datas["value"].ghosts_nbr, select, ds_, and gaussian_filter.
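A sketch of the per-axis selection this prompt describes. The helper name `interior_select` and the `ghosts` tuple are hypothetical; PHARE would read the per-axis widths from `ghosts_nbr`:

```python
import numpy as np

def interior_select(ghosts):
    # one slice per axis; slice(None) keeps the whole axis when the
    # ghost width is zero (slice(0, -0) would select nothing)
    return tuple(slice(g, -g) if g > 0 else slice(None) for g in ghosts)

ds = np.arange(36.0).reshape(6, 6)
sel = interior_select((2, 1))   # asymmetric per-axis ghost widths
interior = ds[sel]              # shape (2, 4): (6 - 2*2) by (6 - 2*1)
```

The output of `scipy.ndimage.gaussian_filter(ds, sigma=sigma)` would then be copied into `ds_[sel]`, rather than applying a single scalar ghost count to every axis.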

In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py`:
- Around line 632-633: The compatibility check in PatchHierarchy (and similarly
in hierarchy_array_function) calls hier.is_compatible on every item in inputs
which raises AttributeError for non-hierarchy scalars; update the checks to
first filter inputs to only hierarchy-like objects (e.g., instances expected by
PatchHierarchy / PatchLevel) and then call is_compatible on that filtered list
before raising TypeError. Modify the calls around the compatibility logic in
PatchHierarchy (the block using is_compatible at lines near the current check)
and in hierarchy_array_function to ignore non-hierarchy inputs when evaluating
compatibility so only hierarchy instances are passed to is_compatible.

In `@pyphare/pyphare/pharesee/hierarchy/patch.py`:
- Around line 77-105: is_compatible currently uses self.patch_datas ==
that.patch_datas which enforces object-equality; change is_compatible to check
structural compatibility by comparing keys and geometry (e.g., same
set(patch.patch_datas.keys()) and per-key geometry/shape or layout) rather than
dict equality so different PatchData instances with the same structure pass;
then update patch_array_ufunc and patch_array_function to build outputs by key
(iterate patch.patch_datas.keys() and call ufunc/func using the corresponding
inputs for each key) and construct the final mapping by key before returning
type(patch)(final, patch_id=patch.id, layout=patch.layout, attrs=patch.attrs)
(also in patch_array_function, return the reconstructed Patch object instead of
a raw dict when types match).

In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 205-220: The function field_data_array_ufunc currently raises
NotImplementedError for unsupported ufunc methods and for non-ndarray results;
change both places to return NotImplemented so NumPy can fall back to other
implementations: in field_data_array_ufunc check the method and if it's not
"__call__" return NotImplemented (instead of raising), and after computing out_
if the result is not an ndarray return NotImplemented rather than raising; keep
the existing construction of the return value when out_ is an ndarray (using
patch_data.layout, patch_data.field_name, out_, patch_data.centerings).

In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py`:
- Around line 45-46: The compatibility check currently calls
patch_level.is_compatible(o) for all inputs which will raise AttributeError for
non-PatchLevel objects; update the logic in the function containing the lines
with "if not all([patch_level.is_compatible(o) for o in inputs])" to first
filter inputs to only PatchLevel instances (e.g., using isinstance(o,
PatchLevel) or similar) and then run is_compatible only across that filtered
list (treat no PatchLevel peers as trivially compatible); apply the same fix to
patch_level_array_function and the other occurrence around lines 55-56 so only
PatchLevel objects are passed to is_compatible and TypeError is raised only when
filtered PatchLevel instances are mutually incompatible.

In `@pyphare/pyphare/pharesee/hierarchy/scalarfield.py`:
- Around line 218-224: The gaussian method currently uses
self.patch_datas.keys()[0] which will raise TypeError because dict_keys is not
subscriptable; change it to obtain the first key via an iterator (e.g.
next(iter(self.patch_datas))) and wrap that result with ScalarField.FROM(...) to
match other methods; update the call to
hootils.compute_hier_from(hc._compute_gaussian_filter_on_scalarfield, self,
sigma=sigma, qty=ScalarField.FROM(first_key)) so it uses the iterator-first key
and ScalarField.FROM for consistency (refer to gaussian,
hootils.compute_hier_from, hc._compute_gaussian_filter_on_scalarfield,
self.patch_datas, and ScalarField.FROM).

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py`:
- Around line 21-29: The FROM method is defined with a first parameter named cls
but decorated as `@staticmethod`; change its decorator to `@classmethod` so the
class is passed automatically, i.e. replace `@staticmethod` above def FROM(cls,
hier): with `@classmethod` and keep the body that constructs cls(...), ensuring
FROM receives the class correctly when called.
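The distinction matters for subclasses; a minimal sketch with hypothetical class names:

```python
class Base:
    def __init__(self, hier):
        self.hier = hier

    @classmethod
    def FROM(cls, hier):
        # cls is whichever class FROM was called on, so subclasses
        # get instances of themselves without overriding FROM
        return cls(hier)

class Scalar(Base):
    pass

s = Scalar.FROM({"t": 0})  # a Scalar, not a Base
```

With `@staticmethod`, `Scalar.FROM(h)` would bind `h` to the `cls` parameter and fail with a missing-argument error.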

In `@pyphare/pyphare/pharesee/hierarchy/vectorfield.py`:
- Around line 105-108: The gaussian method currently returns a raw hierarchy
from hootils.compute_hier_from using hc._compute_gaussian_filter_on_vectorfield
which breaks the VectorField contract; change gaussian(self, sigma=2) so that it
wraps the returned hierarchy with VectorField.FROM(h) (i.e., call
VectorField.FROM on the result of hootils.compute_hier_from) to match other
operators like __mul__ and __add__ and ensure the method returns a VectorField
instance.

In `@pyphare/pyphare/pharesee/run/run.py`:
- Around line 114-130: The wrapped accessors (GetMassDensity, GetNi, GetN,
GetVi, GetFlux, GetDivB, GetRanks) currently call
ScalarField.FROM/VectorField.FROM unconditionally on self._get(...), but when
merged=True _get(...) returns the merged/interpolator payload and must be
returned directly; update each accessor (e.g., GetNi, GetN, GetVi, GetFlux) to
check the merged flag first and return the _get(...) result as-is for
merged=True, otherwise call ScalarField.FROM or VectorField.FROM on the
non-merged hierarchy result; use the same guard/pattern already used in
GetB/GetE and reference the _get and _get_hierarchy helpers when applying the
change.

In `@tests/simulator/test_run.py`:
- Around line 189-190: The two debug print statements using B_finest are
leftover and the np.clip usage is incorrect (np.clip requires a_min and a_max),
so either remove both print lines entirely, or if clipping was intended replace
the np.clip call with a proper call like np.clip(B_finest, a_min, a_max) and log
a single informative message instead of raw prints; locate the occurrences
referencing B_finest and np.clip in the test and apply the chosen fix.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c1b96d9e-c7ad-4821-86c4-dd640bf23ff6

📥 Commits

Reviewing files that changed from the base of the PR and between c37379f and 65d8592.

📒 Files selected for processing (15)
  • pyphare/pyphare/core/box.py
  • pyphare/pyphare/core/gridlayout.py
  • pyphare/pyphare/core/operators.py
  • pyphare/pyphare/pharesee/hierarchy/hierarchy.py
  • pyphare/pyphare/pharesee/hierarchy/hierarchy_compute.py
  • pyphare/pyphare/pharesee/hierarchy/patch.py
  • pyphare/pyphare/pharesee/hierarchy/patchdata.py
  • pyphare/pyphare/pharesee/hierarchy/patchlevel.py
  • pyphare/pyphare/pharesee/hierarchy/scalarfield.py
  • pyphare/pyphare/pharesee/hierarchy/tensorfield.py
  • pyphare/pyphare/pharesee/hierarchy/vectorfield.py
  • pyphare/pyphare/pharesee/run/run.py
  • pyphare/pyphare/pharesee/run/utils.py
  • pyphare/pyphare_tests/test_pharesee/test_hierarchy.py
  • tests/simulator/test_run.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (8)
pyphare/pyphare/pharesee/hierarchy/patchdata.py (1)

205-220: ⚠️ Potential issue | 🟠 Major

Return NotImplemented for unsupported ufunc paths.

For __array_ufunc__, raising here prevents NumPy from falling back to other operand implementations. The unsupported-method branch and the non-ndarray result branch should both return NotImplemented instead.

Suggested fix
 def field_data_array_ufunc(patch_data, ufunc, method, *inputs, **kwargs):
     if method != "__call__":
-        raise NotImplementedError
+        return NotImplemented
 
     in_ = [i.dataset if isinstance(i, FieldData) else i for i in inputs]
     out_ = getattr(ufunc, method)(*in_, **kwargs)
 
     if isinstance(out_, np.ndarray):
         return type(patch_data)(
             layout=patch_data.layout,
             field_name=patch_data.field_name,
             data=out_,
             centering=patch_data.centerings,
         )
 
-    raise NotImplementedError
+    return NotImplemented
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py` around lines 205 - 220, The
function field_data_array_ufunc currently raises NotImplementedError which
blocks NumPy from trying other operand implementations; change both error
branches to return NotImplemented instead: in field_data_array_ufunc replace the
"if method != '__call__': raise NotImplementedError" with "return
NotImplemented" and replace the final "raise NotImplementedError" with "return
NotImplemented" so unsupported ufunc methods and non-ndarray outputs yield
NotImplemented as required by __array_ufunc__.
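The difference is observable from the caller's side. A minimal sketch (the `Strict` class is illustrative) showing that returning NotImplemented lets NumPy raise its standard TypeError for unsupported ufunc methods instead of an opaque NotImplementedError:

```python
import numpy as np

class Strict:
    # Illustrative stand-in for a FieldData-like wrapper
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != "__call__":
            return NotImplemented  # NumPy raises TypeError if no operand handles it
        vals = [i.data if isinstance(i, Strict) else i for i in inputs]
        return Strict(getattr(ufunc, method)(*vals, **kwargs))

s = Strict([1.0, 2.0, 3.0])
fell_back = False
try:
    np.add.reduce(s)  # dispatches with method == "reduce", unhandled above
except TypeError:
    fell_back = True
```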
pyphare/pyphare/pharesee/hierarchy/scalarfield.py (1)

218-224: ⚠️ Potential issue | 🔴 Critical

Fix gaussian() key lookup and return type.

self.patch_datas.keys()[0] will raise TypeError in Python 3, and the method currently returns a raw hierarchy instead of a ScalarField.

Suggested fix
 def gaussian(self, sigma=2):
-    return hootils.compute_hier_from(
-        hc._compute_gaussian_filter_on_scalarfield,
-        self,
-        sigma=sigma,
-        qty=self.patch_datas.keys()[0],
-    )
+    return ScalarField.FROM(
+        hootils.compute_hier_from(
+            hc._compute_gaussian_filter_on_scalarfield,
+            self,
+            sigma=sigma,
+            qty=next(iter(self.patch_datas)),
+        )
+    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/scalarfield.py` around lines 218 - 224,
The gaussian method incorrectly uses self.patch_datas.keys()[0] (raises
TypeError) and returns a raw hierarchy instead of a ScalarField; change the key
lookup to use a safe iterator (e.g., next(iter(self.patch_datas)) or
list(self.patch_datas)[0]) to get the first quantity, call
hootils.compute_hier_from(hc._compute_gaussian_filter_on_scalarfield, ...) as
before, and wrap the returned hierarchy in a ScalarField (the ScalarField
constructor used elsewhere in this module) before returning so gaussian returns
a ScalarField instance.
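For context on the `TypeError`: `dict.keys()` returns a view object, not a sequence, so it cannot be indexed. A quick illustration with made-up keys:

```python
patch_datas = {"Bx": 1, "By": 2}  # stand-in dict; real values are PatchData objects

try:
    patch_datas.keys()[0]  # dict_keys does not support indexing
except TypeError as exc:
    print("indexing failed:", exc)

first = next(iter(patch_datas))  # safe: first key in insertion order
print(first)  # -> Bx
```

Since Python 3.7, dicts preserve insertion order, so `next(iter(...))` is a deterministic way to pick the first quantity.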
pyphare/pyphare/pharesee/hierarchy/vectorfield.py (1)

105-108: ⚠️ Potential issue | 🟠 Major

Wrap gaussian() with VectorField.FROM(...).

This currently returns the raw hierarchy, so callers lose the VectorField API right after filtering.

Suggested fix
 def gaussian(self, sigma=2):
-    return hootils.compute_hier_from(
-        hc._compute_gaussian_filter_on_vectorfield, self, sigma=sigma
-    )
+    return VectorField.FROM(
+        hootils.compute_hier_from(
+            hc._compute_gaussian_filter_on_vectorfield, self, sigma=sigma
+        )
+    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/vectorfield.py` around lines 105 - 108,
The gaussian method currently returns a raw hierarchy; change it to wrap the
result with VectorField.FROM so callers keep the VectorField API: compute the
hierarchy via
hootils.compute_hier_from(hc._compute_gaussian_filter_on_vectorfield, self,
sigma=sigma), then return VectorField.FROM(hier, self) (or the appropriate FROM
signature) instead of returning the raw hierarchy directly.
pyphare/pyphare/pharesee/run/run.py (1)

112-130: ⚠️ Potential issue | 🟠 Major

Keep the merged fast-path before calling FROM(...).

self._get(..., merged=True) returns the merged interpolator payload, not a hierarchy. Wrapping that with ScalarField.FROM(...) / VectorField.FROM(...) regresses merged mode for these accessors.

Suggested pattern
 def GetNi(self, time, merged=False, interp="nearest", **kwargs):
     hier = self._get_hierarchy(time, "ions_charge_density.h5", **kwargs)
-    return ScalarField.FROM(self._get(hier, time, merged, interp))
+    result = self._get(hier, time, merged, interp)
+    return result if merged else ScalarField.FROM(result)

Apply the same guard to GetMassDensity, GetN, GetVi, GetFlux, GetDivB, and GetRanks.

Also applies to: 174-188

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/run/run.py` around lines 112 - 130, The accessors
(GetMassDensity, GetNi, GetN, GetVi, GetFlux — and likewise GetDivB and
GetRanks) must short-circuit the merged fast-path before calling _get_hierarchy
because self._get(..., merged=True) returns the merged payload (not a
hierarchy); move a merged check to the top of each accessor and if merged is
True immediately call ScalarField.FROM(...) or VectorField.FROM(...) on the
result of self._get(..., merged=True, interp=...), otherwise proceed to call
self._get_hierarchy(...) and then call FROM on self._get(...) as before;
reference the methods GetMassDensity, GetN, GetVi, GetFlux, GetDivB, and
GetRanks when applying the same pattern.
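The guard pattern can be sketched independently of the real `Run` class; everything below (`fetch`, `Scalar`, `GetNi`) is hypothetical scaffolding showing only the shape of the fix:

```python
def fetch(time, merged):
    # Stand-in for Run._get: merged mode yields an interpolator payload,
    # non-merged mode yields a hierarchy-like object.
    return ("interpolator", time) if merged else {"hier": time}

class Scalar:
    """Stand-in for ScalarField; FROM-style wrapping only fits hierarchies."""
    def __init__(self, hier):
        self.hier = hier

def GetNi(time, merged=False):
    result = fetch(time, merged)
    # Only wrap in the non-merged case; merged payloads are not hierarchies.
    return result if merged else Scalar(result)

print(type(GetNi(0.0)).__name__)   # -> Scalar
print(GetNi(0.0, merged=True)[0])  # -> interpolator
```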
pyphare/pyphare/pharesee/hierarchy/patch.py (1)

77-107: ⚠️ Potential issue | 🟠 Major

Don't use patch_datas equality as the compatibility test.

self.patch_datas == that.patch_datas compares the actual PatchData values, so two same-geometry patches with different field contents become "incompatible". The helpers then rely on dict value order via zip(...), which can still pair the wrong quantities if insertion order differs.

Suggested direction
 def is_compatible(self, that):
-    return type(self) is type(that) and self.patch_datas == that.patch_datas
+    return (
+        type(self) is type(that)
+        and self.id == that.id
+        and self.box == that.box
+        and tuple(self.patch_datas) == tuple(that.patch_datas)
+    )
@@
-    pds = [x.patch_datas.values() for x in inputs]
-    out = [getattr(ufunc, method)(*pd, **kwargs) for pd in zip(*pds)]
-    final = {k: pd for k, pd in zip(patch.patch_datas, out)}
+    final = {
+        name: getattr(ufunc, method)(
+            *(obj.patch_datas[name] for obj in inputs), **kwargs
+        )
+        for name in patch.patch_datas
+    }

Apply the same key-based reconstruction in patch_array_function(...).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/patch.py` around lines 77 - 107, The
compatibility check and value zipping rely on dict value order and wrongly
compare PatchData contents; change is_compatible(self, that) to compare keys
(e.g. set(self.patch_datas.keys()) == set(that.patch_datas.keys())), and in
patch_array_ufunc and patch_array_function build an explicit ordered keys list
(keys = list(patch.patch_datas.keys())), then construct pds by indexing each
input/arg with those keys (e.g. pds = [[o.patch_datas[k] for k in keys] for o in
inputs/args]), compute out by zipping those pds, and build final using those
keys (final = {k: v for k, v in zip(keys, out)}); keep the existing
type-check/return behavior but ensure the key-based reconstruction is used in
both functions.
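Why key-based indexing is safer than zipping values can be shown with plain dicts (the field names are illustrative):

```python
import numpy as np

a = {"Bx": np.array([1.0]), "By": np.array([2.0])}
b = {"By": np.array([20.0]), "Bx": np.array([10.0])}  # same keys, other order

# Zipping values pairs Bx with By here -- the silent-mismatch failure mode.
wrong = {k: x + y for k, (x, y) in zip(a, zip(a.values(), b.values()))}

# Indexing both operands by the same key cannot mispair quantities.
right = {k: a[k] + b[k] for k in a}

print(wrong["Bx"])  # -> [21.]  (Bx was paired with By)
print(right["Bx"])  # -> [11.]
```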
pyphare/pyphare/pharesee/hierarchy/tensorfield.py (1)

21-29: ⚠️ Potential issue | 🟠 Major

Make FROM() a @classmethod.

This base factory still takes cls but is declared as @staticmethod, so the base API is mis-bound and only works if callers manually thread the class through.

🐛 Proposed fix
-    @staticmethod
+    @classmethod
     def FROM(cls, hier):
         return cls(
             hier.patch_levels,
             hier.domain_box,
             hier.refinement_ratio,
             hier.times(),
             hier.data_files,
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` around lines 21 - 29, The
method FROM is defined with a first parameter cls but annotated with
`@staticmethod`, so change it to a `@classmethod` to bind the class automatically;
update the decorator on FROM and keep the signature def FROM(cls, hier): so
calls like cls(...) work without threading the class through, and review any
callers of FROM (if any) to remove manual class arg passing.
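The binding difference is easy to demonstrate outside the codebase; the classes below are illustrative, not PHARE types:

```python
class Base:
    @classmethod
    def FROM(cls, payload):
        # With @classmethod, cls is the class the call went through,
        # so subclasses inherit a correctly bound factory.
        return cls(payload)

    def __init__(self, payload):
        self.payload = payload

class Child(Base):
    pass

obj = Child.FROM("hier")
print(type(obj).__name__, obj.payload)  # -> Child hier
```

With `@staticmethod`, `Child.FROM("hier")` would bind `"hier"` to `cls` and leave `payload` missing, which is exactly the breakage described above.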
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (1)

632-633: ⚠️ Potential issue | 🟠 Major

Filter non-hierarchy operands before calling is_compatible().

Scalar arguments still flow into hier.is_compatible(...) here, so mixed operations like np.add(hier, 2) or np.clip(hier, 0, 1) fail with AttributeError instead of dispatching cleanly.

Also applies to: 647-648

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py` around lines 632 - 633,
Filter out non-hierarchy operands before calling compatibility checks: when
verifying inputs against hier.is_compatible, first select only operands that are
instances of the hierarchy type (e.g., PatchHierarchy or by using isinstance(x,
type(hier)) or an equivalent check) so scalars or numpy arrays are ignored;
replace the current all([hier.is_compatible(o) for o in inputs]) with a check
over the filtered list of hierarchy instances, and apply the same fix to the
second occurrence around the hier.is_compatible call at the later lines
(647-648).
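A sketch of the filtering the prompt describes, using a hypothetical `FakeHier` in place of `PatchHierarchy`:

```python
class FakeHier:
    """Hypothetical stand-in for PatchHierarchy."""
    def __init__(self, times):
        self.times = times

    def is_compatible(self, that):
        return type(self) is type(that) and self.times == that.times

def operands_compatible(ref, inputs):
    # Scalars and plain arrays are skipped; only hierarchy operands are checked.
    hiers = [o for o in inputs if isinstance(o, type(ref))]
    return all(ref.is_compatible(h) for h in hiers)

h = FakeHier([0.0, 0.1])
print(operands_compatible(h, (h, 2.0)))              # -> True
print(operands_compatible(h, (h, FakeHier([0.0]))))  # -> False
```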
tests/simulator/test_run.py (1)

189-190: ⚠️ Potential issue | 🔴 Critical

Remove or fix the invalid np.clip() debug call.

np.clip(B_finest) is missing a_min/a_max, so plot() will raise before the file checks run on rank 0.

🐛 Proposed fix
-            print("np.mean(B_finest)", np.mean(B_finest))
-            print("np.clip(B_finest)", np.clip(B_finest))
+            # print("np.mean(B_finest)", np.mean(B_finest))
+            # print("np.clip(B_finest)", np.clip(B_finest, -1.0, 1.0))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/simulator/test_run.py` around lines 189 - 190, The debug call
print("np.clip(B_finest)", np.clip(B_finest)) is invalid because numpy.clip
requires a_min and/or a_max; fix by either removing that debug line or replacing
it with a valid clip call such as np.clip(B_finest, a_min=<lower>,
a_max=<upper>) (choose appropriate bounds for B_finest) so plot() won't error,
and keep the other debug print of np.mean(B_finest) intact; look for the prints
around B_finest in tests/simulator/test_run.py to apply the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py`:
- Around line 622-625: The current is_compatible method only checks type(self)
and time_hier.keys(), which can allow mismatched level sets/counts and cause
positional mispairing in the dispatch loop; update is_compatible (and the
similar compatibility checks at the other locations noted) to also verify that
level_hier keys are identical and that the number of levels matches (or require
the same ordered level keys), returning False if they differ; alternatively,
change the dispatch pairing logic to pair by matching level keys from
self.level_hier and that.level_hier instead of by position so operations always
align by explicit keys (reference symbols: is_compatible, time_hier, level_hier,
and the dispatch loop that pairs levels).

In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 213-218: When rebuilding FieldData via type(patch_data)(...) in
the NumPy-wrapping helpers, preserve the original ghost width by forwarding
ghosts_nbr=patch_data.ghosts_nbr (or the exact attribute name used on FieldData)
into the constructor; update both occurrences around the return at the block
that currently constructs with layout/field_name/data/centering (lines shown)
and the similar construction at the second location (around 230-235) so the new
FieldData retains the same ghosts_nbr as the input.

In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py`:
- Around line 37-63: is_compatible currently only compares patch counts; change
it to validate per-patch compatibility (e.g., ensure type(self) is type(that)
and for every pair of patches self.patches[i] and that.patches[i] are compatible
via their own compatibility check or equality) so mismatched geometry/order is
rejected. In patch_level_array_ufunc and patch_level_array_function replace the
coarse all(...is_compatible(o)...) check with a strict per-index pairing check
(use the updated is_compatible semantics or an explicit loop that verifies every
corresponding patch across all inputs are mutually compatible) before zipping
and applying the ufunc/func; ensure you raise TypeError on any mismatch and only
construct the result PatchLevel (type(patch_level)) when all paired patches
validated.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py`:
- Around line 49-54: The TensorField.__mul__ currently passes the raw result of
hootils.compute_hier_from(...) into the TensorField constructor which bypasses
the metadata/shape wrapper; change the return to wrap the compute_hier_from(...)
result with FROM(...) before constructing the TensorField (i.e. use
TensorField(FROM(hootils.compute_hier_from(...)))) so the hierarchy-like result
is converted via FROM() and preserves the expected constructor metadata; update
the call site in __mul__ accordingly.

---

Duplicate comments:
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py`:
- Around line 632-633: Filter out non-hierarchy operands before calling
compatibility checks: when verifying inputs against hier.is_compatible, first
select only operands that are instances of the hierarchy type (e.g.,
PatchHierarchy or by using isinstance(x, type(hier)) or an equivalent check) so
scalars or numpy arrays are ignored; replace the current
all([hier.is_compatible(o) for o in inputs]) with a check over the filtered list
of hierarchy instances, and apply the same fix to the second occurrence around
the hier.is_compatible call at the later lines (647-648).

In `@pyphare/pyphare/pharesee/hierarchy/patch.py`:
- Around line 77-107: The compatibility check and value zipping rely on dict
value order and wrongly compare PatchData contents; change is_compatible(self,
that) to compare keys (e.g. set(self.patch_datas.keys()) ==
set(that.patch_datas.keys())), and in patch_array_ufunc and patch_array_function
build an explicit ordered keys list (keys = list(patch.patch_datas.keys())),
then construct pds by indexing each input/arg with those keys (e.g. pds =
[[o.patch_datas[k] for k in keys] for o in inputs/args]), compute out by zipping
those pds, and build final using those keys (final = {k: v for k, v in zip(keys,
out)}); keep the existing type-check/return behavior but ensure the key-based
reconstruction is used in both functions.

In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 205-220: The function field_data_array_ufunc currently raises
NotImplementedError which blocks NumPy from trying other operand
implementations; change both error branches to return NotImplemented instead: in
field_data_array_ufunc replace the "if method != '__call__': raise
NotImplementedError" with "return NotImplemented" and replace the final "raise
NotImplementedError" with "return NotImplemented" so unsupported ufunc methods
and non-ndarray outputs yield NotImplemented as required by __array_ufunc__.

In `@pyphare/pyphare/pharesee/hierarchy/scalarfield.py`:
- Around line 218-224: The gaussian method incorrectly uses
self.patch_datas.keys()[0] (raises TypeError) and returns a raw hierarchy
instead of a ScalarField; change the key lookup to use a safe iterator (e.g.,
next(iter(self.patch_datas)) or list(self.patch_datas)[0]) to get the first
quantity, call
hootils.compute_hier_from(hc._compute_gaussian_filter_on_scalarfield, ...) as
before, and wrap the returned hierarchy in a ScalarField (the ScalarField
constructor used elsewhere in this module) before returning so gaussian returns
a ScalarField instance.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py`:
- Around line 21-29: The method FROM is defined with a first parameter cls but
annotated with `@staticmethod`, so change it to a `@classmethod` to bind the class
automatically; update the decorator on FROM and keep the signature def FROM(cls,
hier): so calls like cls(...) work without threading the class through, and
review any callers of FROM (if any) to remove manual class arg passing.

In `@pyphare/pyphare/pharesee/hierarchy/vectorfield.py`:
- Around line 105-108: The gaussian method currently returns a raw hierarchy;
change it to wrap the result with VectorField.FROM so callers keep the
VectorField API: compute the hierarchy via
hootils.compute_hier_from(hc._compute_gaussian_filter_on_vectorfield, self,
sigma=sigma), then return VectorField.FROM(hier, self) (or the appropriate FROM
signature) instead of returning the raw hierarchy directly.

In `@pyphare/pyphare/pharesee/run/run.py`:
- Around line 112-130: The accessors (GetMassDensity, GetNi, GetN, GetVi,
GetFlux — and likewise GetDivB and GetRanks) must short-circuit the merged
fast-path before calling _get_hierarchy because self._get(..., merged=True)
returns the merged payload (not a hierarchy); move a merged check to the top of
each accessor and if merged is True immediately call ScalarField.FROM(...) or
VectorField.FROM(...) on the result of self._get(..., merged=True, interp=...),
otherwise proceed to call self._get_hierarchy(...) and then call FROM on
self._get(...) as before; reference the methods GetMassDensity, GetN, GetVi,
GetFlux, GetDivB, and GetRanks when applying the same pattern.

In `@tests/simulator/test_run.py`:
- Around line 189-190: The debug call print("np.clip(B_finest)",
np.clip(B_finest)) is invalid because numpy.clip requires a_min and/or a_max;
fix by either removing that debug line or replacing it with a valid clip call
such as np.clip(B_finest, a_min=<lower>, a_max=<upper>) (choose appropriate
bounds for B_finest) so plot() won't error, and keep the other debug print of
np.mean(B_finest) intact; look for the prints around B_finest in
tests/simulator/test_run.py to apply the change.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: edcae2d4-7b90-4de5-8e7c-ec2271112079

📥 Commits

Reviewing files that changed from the base of the PR and between 65d8592 and a9a07ec.

📒 Files selected for processing (15)
  • pyphare/pyphare/core/box.py
  • pyphare/pyphare/core/gridlayout.py
  • pyphare/pyphare/core/operators.py
  • pyphare/pyphare/pharesee/hierarchy/hierarchy.py
  • pyphare/pyphare/pharesee/hierarchy/hierarchy_compute.py
  • pyphare/pyphare/pharesee/hierarchy/patch.py
  • pyphare/pyphare/pharesee/hierarchy/patchdata.py
  • pyphare/pyphare/pharesee/hierarchy/patchlevel.py
  • pyphare/pyphare/pharesee/hierarchy/scalarfield.py
  • pyphare/pyphare/pharesee/hierarchy/tensorfield.py
  • pyphare/pyphare/pharesee/hierarchy/vectorfield.py
  • pyphare/pyphare/pharesee/run/run.py
  • pyphare/pyphare/pharesee/run/utils.py
  • pyphare/pyphare_tests/test_pharesee/test_hierarchy.py
  • tests/simulator/test_run.py
✅ Files skipped from review due to trivial changes (2)
  • pyphare/pyphare_tests/test_pharesee/test_hierarchy.py
  • pyphare/pyphare/pharesee/run/utils.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • pyphare/pyphare/core/gridlayout.py
  • pyphare/pyphare/pharesee/hierarchy/hierarchy_compute.py

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
pyphare/pyphare/pharesee/hierarchy/tensorfield.py (1)

21-29: ⚠️ Potential issue | 🔴 Critical

FROM still needs to be a @classmethod.

VectorField.FROM(h) / ScalarField.FROM(h) now call this through the class, but with @staticmethod Python binds h to cls and leaves hier missing. That breaks the new FROM(...) construction path immediately.

🐛 Proposed fix
-    @staticmethod
-    def FROM(cls, hier):
+    @classmethod
+    def FROM(cls, hier):
         return cls(
             hier.patch_levels,
             hier.domain_box,
             hier.refinement_ratio,
             hier.times(),
             hier.data_files,
         )

Run this to confirm the current decorator/call pattern and the Python binding behavior:

#!/bin/bash
set -euo pipefail

rg -n '^\s*@staticmethod|^\s*def FROM|\.FROM\(' \
  pyphare/pyphare/pharesee/hierarchy/tensorfield.py \
  pyphare/pyphare/pharesee/hierarchy \
  pyphare/pyphare/pharesee/run/run.py

python - <<'PY'
class Base:
    @staticmethod
    def FROM(cls, hier):
        return cls(hier)

class Child(Base):
    pass

try:
    Child.FROM("hier")
except TypeError as exc:
    print(type(exc).__name__, exc)
PY

Expected result: the grep shows @staticmethod on FROM plus class-style call sites, and the Python snippet prints a TypeError because the class is not passed automatically.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` around lines 21 - 29, The
FROM method is declared with `@staticmethod` but used as a classmethod (e.g.,
VectorField.FROM(h), ScalarField.FROM(h)), causing Python to bind arguments
incorrectly; replace the `@staticmethod` decorator on FROM in tensorfield.py with
`@classmethod` and keep the signature def FROM(cls, hier): so the class is passed
automatically when called through subclasses like VectorField or ScalarField.
🧹 Nitpick comments (1)
pyphare/pyphare/pharesee/run/run.py (1)

269-270: Use continue instead of ... in the KeyError path.

... is a no-op here. continue makes the intended skip explicit and clears the static-analysis warning.

♻️ Proposed fix
             except KeyError:
-                ...  # time may not be available for given quantity
+                continue  # time may not be available for given quantity
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/run/run.py` around lines 269 - 270, The except
KeyError handler currently uses an ellipsis (...) which is a no-op and triggers
static-analysis warnings; replace the ellipsis with a continue statement inside
the except KeyError block (i.e., change the handler in the except KeyError:
clause in run.py to use continue) so the loop explicitly skips the current
iteration when time is unavailable for a given quantity.
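Beyond the lint warning, `...` lets execution fall through to the rest of the loop body, where stale state can leak in; `continue` avoids that. A toy loop with illustrative names:

```python
def lookup(time):
    if time == "t1":
        raise KeyError(time)  # time missing for this quantity
    return time.upper()

results = []
for time in ["t0", "t1", "t2"]:
    try:
        value = lookup(time)
    except KeyError:
        continue  # with `...` here, the stale `value` from t0 would be re-appended
    results.append(value)

print(results)  # -> ['T0', 'T2']
```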
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyphare/pyphare/pharesee/hierarchy/patch.py`:
- Line 85: Remove the stray debug prints in patch_array_ufunc (the unconditional
print("patch_array_ufunc", ufunc, method, kwargs)) and the similar print at the
later occurrence (around lines referencing the same helper); either delete them
or replace them with a proper logger.debug call so they don't unconditionally
spam stdout during normal NumPy dispatch across patches/components.
- Around line 103-105: PatchHierarchy._quantities currently assumes
patch.patch_datas and final contain at least one element and uses
next(iter(...)) which can raise StopIteration for empty/ghost patches; change
the logic that assigns any_patch_data and any_data to safely find a
representative non-empty element (or None) by using a guarded iterator such as
next((v for v in patch.patch_datas.values() if v), None) and next((v for v in
final.values() if v is not None), None), and handle the None case appropriately
(skip reductions or return empty result) so final, any_patch_data, and any_data
are only accessed when non-empty.

---

Duplicate comments:
In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py`:
- Around line 21-29: The FROM method is declared with `@staticmethod` but used as
a classmethod (e.g., VectorField.FROM(h), ScalarField.FROM(h)), causing Python
to bind arguments incorrectly; replace the `@staticmethod` decorator on FROM in
tensorfield.py with `@classmethod` and keep the signature def FROM(cls, hier): so
the class is passed automatically when called through subclasses like
VectorField or ScalarField.

---

Nitpick comments:
In `@pyphare/pyphare/pharesee/run/run.py`:
- Around line 269-270: The except KeyError handler currently uses an ellipsis
(...) which is a no-op and triggers static-analysis warnings; replace the
ellipsis with a continue statement inside the except KeyError block (i.e.,
change the handler in the except KeyError: clause in run.py to use continue) so
the loop explicitly skips the current iteration when time is unavailable for a
given quantity.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 14c99941-f392-4a48-bf8c-3beacd8b6318

📥 Commits

Reviewing files that changed from the base of the PR and between a9a07ec and b392d10.

📒 Files selected for processing (9)
  • pyphare/pyphare/pharesee/hierarchy/hierarchy.py
  • pyphare/pyphare/pharesee/hierarchy/patch.py
  • pyphare/pyphare/pharesee/hierarchy/patchdata.py
  • pyphare/pyphare/pharesee/hierarchy/patchlevel.py
  • pyphare/pyphare/pharesee/hierarchy/scalarfield.py
  • pyphare/pyphare/pharesee/hierarchy/tensorfield.py
  • pyphare/pyphare/pharesee/hierarchy/vectorfield.py
  • pyphare/pyphare/pharesee/run/run.py
  • tests/simulator/test_run.py
✅ Files skipped from review due to trivial changes (1)
  • pyphare/pyphare/pharesee/hierarchy/patchdata.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • pyphare/pyphare/pharesee/hierarchy/scalarfield.py
  • pyphare/pyphare/pharesee/hierarchy/vectorfield.py

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (3)
pyphare/pyphare/pharesee/hierarchy/patchdata.py (1)

205-221: ⚠️ Potential issue | 🟠 Major

Return NotImplemented from the ufunc hook.

__array_ufunc__ should signal unsupported cases with NotImplemented, not by raising NotImplementedError or returning the NotImplementedError class. As written, non-__call__ methods and non-ndarray results bypass NumPy's fallback path.

Suggested fix
 def field_data_array_ufunc(patch_data, ufunc, method, *inputs, **kwargs):
     if method != "__call__":
-        raise NotImplementedError
+        return NotImplemented
@@
-    return NotImplementedError
+    return NotImplemented
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py` around lines 205 - 221, In
field_data_array_ufunc: replace the raise NotImplementedError in the
method-check and the final return of NotImplementedError with the NotImplemented
singleton so unsupported ufunc methods or non-ndarray results return
NotImplemented (use "return NotImplemented" for both the method != "__call__"
case and when out_ is not an np.ndarray) to allow NumPy's fallback handling;
locate this logic in the field_data_array_ufunc function and update it
accordingly.
pyphare/pyphare/pharesee/hierarchy/patchlevel.py (1)

38-59: ⚠️ Potential issue | 🟠 Major

Validate PatchLevel compatibility before pairing patches by index.

Both helpers assume every PatchLevel input has the same patch count and ordering, then combine patches by pidx. With reordered or geometrically different patch lists you'll silently operate on unrelated patches, and with fewer patches you'll fail partway through dispatch with IndexError instead of a clean TypeError.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py` around lines 38 - 59,
Before combining per-index patches in patch_level_array_ufunc and
patch_level_array_function, validate that every PatchLevel-like input has the
same number of patches and matching ordering/identity; gather all inputs where
type(x) is type(patch_level), check they all have equal len(x.patches) and (if
available) compare a stable identifier per patch (e.g. patch.id or
patch.geometry) across the same indices to ensure ordering matches, and if any
mismatch raise a TypeError with a clear message instead of proceeding; add this
validation at the start of both patch_level_array_ufunc and
patch_level_array_function.
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (1)

623-658: ⚠️ Potential issue | 🟠 Major

Reject incompatible hierarchies before dispatching per time/level.

These helpers iterate the left-hand hierarchy's time/ilvl keys and then blindly fetch x.level(ilvl, time) from every same-typed operand. If another hierarchy is missing a time or level key, the NumPy call dies with KeyError; if its nested patch structure differs, the mismatch is deferred to lower layers. Validate compatibility first and raise TypeError consistently.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 224-237: The current field_data_array_function always rewraps any
non-scalar result as type(patch_data) (FieldData), which breaks
shape/axis-changing ops (e.g. np.transpose, sum, stack). Change it to only
re-wrap when the resulting array shape and dimensionality still match the
original field geometry: after computing out_, if phut.is_scalar(out_) return
it; otherwise compare out_.shape (and ndim) to the original dataset shape (use
patch_data.dataset or the FieldData args' .dataset) and only construct
type(patch_data)(layout=patch_data.layout, field_name=..., data=out_,
centering=patch_data.centerings, ghosts_nbr=patch_data.ghosts_nbr) when shapes
match; if they don’t match, return out_ (the raw ndarray) so callers can handle
shape/axis changes instead of silently preserving old layout/centering.

In `@pyphare/pyphare/pharesee/hierarchy/scalarfield.py`:
- Around line 218-225: The gaussian method on ScalarField incorrectly accesses
self.patch_datas (which doesn't exist on the hierarchy wrapper) causing an
AttributeError; update gaussian to pass the hierarchy quantity instead of
self.patch_datas by computing qty from the hierarchy (e.g., use the existing
pattern that derives qty like next(iter(self.hierarchy.patch_datas.keys())) or a
provided hierarchy quantity) when calling hootils.compute_hier_from with
hc._compute_gaussian_filter_on_scalarfield, keeping the ScalarField.FROM(...)
return and the sigma parameter intact; adjust references so gaussian uses the
hierarchy's quantity lookup rather than self.patch_datas.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py`:
- Around line 50-97: The function get_interpolated_selection_from ignores the
input selection parameter; update it so the requested selection (input) is
applied when extracting and interpolating data. Locate
get_interpolated_selection_from and ensure you pass the selection into the
data/coordinate extraction (e.g., hootils.flat_finest_field or equivalent) or
apply the selection to finest_coords and data before calling
rutils.make_interpolator, and build the mesh only for the selected coordinate
ranges so interpolator(*mesh) returns values for the requested box/slices; also
update the created patch layout/box (where patch0.layout.box is set and
new_patch0.layout is used) to reflect the input selection rather than always
using the full domain. Ensure any downstream uses (levels, cpy.time_hier)
preserve the reduced domain/indices.

---

Duplicate comments:
In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 205-221: In field_data_array_ufunc: replace the raise
NotImplementedError in the method-check and the final return of
NotImplementedError with the NotImplemented singleton so unsupported ufunc
methods or non-ndarray results return NotImplemented (use "return
NotImplemented" for both the method != "__call__" case and when out_ is not an
np.ndarray) to allow NumPy's fallback handling; locate this logic in the
field_data_array_ufunc function and update it accordingly.

In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py`:
- Around line 38-59: Before combining per-index patches in
patch_level_array_ufunc and patch_level_array_function, validate that every
PatchLevel-like input has the same number of patches and matching
ordering/identity; gather all inputs where type(x) is type(patch_level), check
they all have equal len(x.patches) and (if available) compare a stable
identifier per patch (e.g. patch.id or patch.geometry) across the same indices
to ensure ordering matches, and if any mismatch raise a TypeError with a clear
message instead of proceeding; add this validation at the start of both
patch_level_array_ufunc and patch_level_array_function.
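The validation described above can be sketched as follows. Note this is an illustrative sketch, not the project's code: `PatchLevel`, `.patches`, and a per-patch `id` attribute are assumed shapes here.

```python
def validate_matching_patches(*levels):
    """Raise TypeError unless all PatchLevel-like inputs carry the same
    number of patches, with matching patch ids at each index."""
    counts = {len(lvl.patches) for lvl in levels}
    if len(counts) > 1:
        raise TypeError(f"mismatched patch counts across inputs: {sorted(counts)}")
    # compare a stable per-patch identifier across the same indices
    for idx, patches in enumerate(zip(*(lvl.patches for lvl in levels))):
        ids = {getattr(p, "id", None) for p in patches}
        if len(ids) > 1:
            raise TypeError(f"patch ordering differs at index {idx}: {ids}")
```

Calling this at the top of both `patch_level_array_ufunc` and `patch_level_array_function` would turn a silent misalignment into a clear error.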

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f6671c6a-5d33-4a00-8d3a-69231ad92831

📥 Commits

Reviewing files that changed from the base of the PR and between b392d10 and 89ddf9c.

📒 Files selected for processing (9)
  • pyphare/pyphare/pharesee/hierarchy/hierarchy.py
  • pyphare/pyphare/pharesee/hierarchy/patch.py
  • pyphare/pyphare/pharesee/hierarchy/patchdata.py
  • pyphare/pyphare/pharesee/hierarchy/patchlevel.py
  • pyphare/pyphare/pharesee/hierarchy/scalarfield.py
  • pyphare/pyphare/pharesee/hierarchy/tensorfield.py
  • pyphare/pyphare/pharesee/hierarchy/vectorfield.py
  • pyphare/pyphare/pharesee/run/run.py
  • tests/simulator/test_run.py
✅ Files skipped from review due to trivial changes (1)
  • pyphare/pyphare/pharesee/hierarchy/vectorfield.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • pyphare/pyphare/pharesee/run/run.py
  • pyphare/pyphare/pharesee/hierarchy/patch.py

Comment thread pyphare/pyphare/pharesee/hierarchy/patchdata.py Outdated
Comment thread pyphare/pyphare/pharesee/hierarchy/scalarfield.py
Comment on lines +50 to +97
def get_interpolated_selection_from(hier: AnyTensorField, input, interp="nearest"):
if type(hier) is PatchHierarchy:
raise ValueError("PatchHierarchy not supported, must be AnyTensorField")

times = hier.times()
if len(hier.times()) > 1:
raise ValueError(
"AnyTensorField interpoloation does not support multiple times"
)

from pyphare.pharesee.run import utils as rutils

time = times[0]

dl = GetDl(hier, time)
domain = GetDomainSize(hier)

cpy = deepcopy(hier)
levels = cpy.levels(time)
level0 = levels[0]
patch0 = level0.patches[0]

patch0.layout.box = hier.level_domain_box(len(levels) - 1) # hax todo etc

new_patch0 = Patch({}, patch0.id, patch0.layout)

nbrGhosts = list(hier.level(0).patches[0].patch_datas.values())[0].ghosts_nbr
for qty in hier.quantities():
data, coords = hootils.flat_finest_field(hier, qty, time=time)
interpolator, finest_coords = rutils.make_interpolator(
data, coords, interp, domain, dl, qty, nbrGhosts
)

mesh = np.meshgrid(*finest_coords, indexing="ij")

pdata = type(patch0[qty])(
layout=patch0.layout,
field_name=patch0[qty].name,
data=interpolator(*mesh),
ghosts_nbr=nbrGhosts,
centering=patch0[qty].centerings,
)
new_patch0.patch_datas[qty] = pdata

level0.patches = [new_patch0]
cpy.time_hier[time] = {0: level0}

return cpy

@coderabbitai coderabbitai bot Apr 9, 2026


⚠️ Potential issue | 🔴 Critical

The requested selection is ignored here.

get_interpolated_selection_from() never uses input, so tensor_field[box] and tensor_field[1:4] both rebuild the full interpolated hierarchy. Run.GetB(time)[:] works by coincidence, but any real sub-selection will return the wrong domain.

🧰 Tools
🪛 Ruff (0.15.9)

[error] 50-50: Function argument input is shadowing a Python builtin

(A002)


[warning] 76-76: Prefer next(...) over single element slice

Replace with next(...)

(RUF015)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` around lines 50 - 97, The
function get_interpolated_selection_from ignores the input selection parameter;
update it so the requested selection (input) is applied when extracting and
interpolating data. Locate get_interpolated_selection_from and ensure you pass
the selection into the data/coordinate extraction (e.g.,
hootils.flat_finest_field or equivalent) or apply the selection to finest_coords
and data before calling rutils.make_interpolator, and build the mesh only for
the selected coordinate ranges so interpolator(*mesh) returns values for the
requested box/slices; also update the created patch layout/box (where
patch0.layout.box is set and new_patch0.layout is used) to reflect the input
selection rather than always using the full domain. Ensure any downstream uses
(levels, cpy.time_hier) preserve the reduced domain/indices.

Member Author


the selection is a TODO; likely future work


@PhilipDeegan, understood — noted that the input selection parameter in get_interpolated_selection_from() is a placeholder for future sub-selection support and intentionally unused for now.


✏️ Learnings added
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1192
File: pyphare/pyphare/pharesee/hierarchy/tensorfield.py:50-97
Timestamp: 2026-04-09T12:39:59.360Z
Learning: In `pyphare/pyphare/pharesee/hierarchy/tensorfield.py`, the `input` parameter of `get_interpolated_selection_from(hier, input, interp="nearest")` is intentionally unused. Applying the requested box/slice selection is a known TODO for future work. Do not flag it as a bug.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1192
File: pyphare/pyphare/pharesee/hierarchy/tensorfield.py:20-28
Timestamp: 2026-04-09T12:33:39.398Z
Learning: In `pyphare/pyphare/pharesee/hierarchy/tensorfield.py`, `AnyTensorField.FROM` is intentionally a `staticmethod` (not `classmethod`) that accepts an explicit `cls` parameter. It is called from `classmethod FROM(cls, hier)` definitions in subclasses (e.g., `ScalarField`, `VectorField`) which pass their own `cls` explicitly via `super().FROM(cls, hier)`. This is a deliberate design pattern so `cls` always refers to the concrete subclass type.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 910
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:7-7
Timestamp: 2024-10-18T13:23:32.074Z
Learning: In the `pyphare.pharesee.hierarchy` module, importing `PatchHierarchy` and `format_timestamp` from `hierarchy.py` into `hierarchy_utils.py` is acceptable as long as `hierarchy.py` does not import `hierarchy_utils.py`, thereby avoiding a cyclic import.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 784
File: tests/simulator/test_restarts.py:333-339
Timestamp: 2024-10-09T08:32:15.667Z
Learning: PhilipDeegan has indicated a preference for minimal and efficient code, even after making changes to get a function working. There may be opportunities to remove or optimize parts of the code such as sleep/wait/advances.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 784
File: tests/simulator/test_restarts.py:333-339
Timestamp: 2024-07-26T22:04:34.160Z
Learning: PhilipDeegan has indicated a preference for minimal and efficient code, even after making changes to get a function working. There may be opportunities to remove or optimize parts of the code such as sleep/wait/advances.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1068
File: src/amr/data/field/coarsening/electric_field_coarsener.hpp:1-2
Timestamp: 2025-09-17T13:35:11.533Z
Learning: PhilipDeegan prefers header guard names that include the full directory path structure, following the pattern PHARE_[PATH_WITH_UNDERSCORES]_HPP. For example, a file at src/amr/data/field/coarsening/electric_field_coarsener.hpp should use PHARE_AMR_DATA_FIELD_COARSENING_ELECTRIC_FIELD_COARSENER_HPP as the header guard.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: src/hdf5/detail/h5/h5_file.hpp:78-79
Timestamp: 2024-09-05T17:02:58.784Z
Learning: Avoid commenting on draft PRs for the user PhilipDeegan.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: src/hdf5/detail/h5/h5_file.hpp:78-79
Timestamp: 2024-10-09T08:32:15.667Z
Learning: Avoid commenting on draft PRs for the user PhilipDeegan.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:566-568
Timestamp: 2024-09-09T13:57:02.285Z
Learning: Avoid commenting on draft PRs for PhilipDeegan unless explicitly requested.

Comment thread pyphare/pyphare/pharesee/hierarchy/patchdata.py Fixed

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
pyphare/pyphare/pharesee/hierarchy/patchdata.py (1)

205-221: ⚠️ Potential issue | 🟡 Minor

Use return NotImplemented instead of raise NotImplemented.

Line 207 has raise NotImplemented which is incorrect for two reasons:

  1. NotImplemented is a singleton value, not an exception class (Ruff F901 flags this)
  2. Per NumPy's __array_ufunc__ protocol, returning NotImplemented allows NumPy to try other implementations, whereas raising prevents fallback behavior
Proposed fix
 def field_data_array_ufunc(patch_data, ufunc, method, *inputs, **kwargs):
     if method != "__call__":
-        raise NotImplemented
+        return NotImplemented
 
     in_ = [i.dataset if isinstance(i, FieldData) else i for i in inputs]
     out_ = getattr(ufunc, method)(*in_, **kwargs)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py` around lines 205 - 221, In
field_data_array_ufunc, when the incoming ufunc method is unsupported (the
branch where currently "raise NotImplemented" is used), change that to "return
NotImplemented" so the function returns the NotImplemented singleton instead of
raising an exception; update the branch at the start of field_data_array_ufunc
(the check for method != "__call__") to return NotImplemented to allow NumPy's
__array_ufunc__ fallback behavior.
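The difference between raising and returning can be seen with a toy wrapper (the `Wrapped` class below is illustrative, not the project's FieldData): when `__array_ufunc__` returns the `NotImplemented` singleton, NumPy either tries the other operands or raises a clean `TypeError` itself, instead of the caller seeing a bogus `raise NotImplemented`.

```python
import numpy as np

class Wrapped:
    """Toy ndarray container demonstrating the __array_ufunc__ protocol."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != "__call__":
            # reduce/accumulate/etc. unsupported: return the singleton,
            # do NOT raise, so NumPy can handle the fallback
            return NotImplemented
        arrays = [i.data if isinstance(i, Wrapped) else i for i in inputs]
        out = getattr(ufunc, method)(*arrays, **kwargs)
        if not isinstance(out, np.ndarray):
            return NotImplemented
        return Wrapped(out)
```

With this, `np.add(Wrapped([1, 2]), 2)` rewraps the result, while `np.add.reduce(Wrapped([1, 2]))` ends in NumPy raising `TypeError` once every operand has declined.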
🧹 Nitpick comments (5)
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (1)

633-638: Rename unused loop variable i to _.

The loop variable i is not used in the loop body.

Proposed fix
-    for i, time in enumerate(hier.time_hier):
+    for time in hier.time_hier:
         for ilvl in hier.levels(time):
             ret.time_hier[time][ilvl] = getattr(ufunc, method)(
                 *extract(time, ilvl), **kwargs
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py` around lines 633 - 638, The
for-loop uses an unused index variable i when iterating over hier.time_hier;
update the loop header in the block that iterates "for i, time in
enumerate(hier.time_hier):" to use a throwaway name "_" instead (e.g. "for _,
time in enumerate(hier.time_hier):") so the unused variable is clearly marked;
keep the body unchanged where ret.time_hier[time][ilvl] = getattr(ufunc,
method)(*extract(time, ilvl), **kwargs) and the nested loop over
hier.levels(time).
pyphare/pyphare/pharesee/hierarchy/tensorfield.py (3)

76-76: Prefer next(iter(...)) over list(...)[0].

Per Ruff RUF015, using next(iter(...)) is more efficient for getting the first element.

Proposed fix
-    nbrGhosts = list(hier.level(0).patches[0].patch_datas.values())[0].ghosts_nbr
+    nbrGhosts = next(iter(hier.level(0).patches[0].patch_datas.values())).ghosts_nbr
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` at line 76, Replace the
inefficient list(...) indexing when grabbing the first patch_data with an
iterator; specifically, in the expression that sets nbrGhosts (currently using
list(hier.level(0).patches[0].patch_datas.values())[0].ghosts_nbr) use
next(iter(...)) on hier.level(0).patches[0].patch_datas.values() to obtain the
first patch_data and then access .ghosts_nbr (keep the same variables hier,
level, patches, and patch_datas).

54-58: Avoid redundant hier.times() call.

Line 55 calls hier.times() again when times is already stored on line 54.

Proposed fix
     times = hier.times()
-    if len(hier.times()) > 1:
+    if len(times) > 1:
         raise ValueError(
             "AnyTensorField interpoloation does not support multiple times"
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` around lines 54 - 58, The
code calls hier.times() twice; use the previously stored variable times instead
of calling hier.times() again: replace the len(hier.times()) check with
len(times) in the AnyTensorField/time-check block (the lines around the variable
times and the ValueError raise) so the stored result is reused and avoids the
redundant call.

30-36: Consider renaming input to avoid shadowing the builtin.

The parameter input shadows Python's built-in input() function. While functional, this can cause confusion.

Proposed fix
-    def __getitem__(self, input):
-        cls = type(input)
+    def __getitem__(self, selection):
+        cls = type(selection)
         if cls is Box or cls is slice:
-            return get_interpolated_selection_from(self, input)
-        if input in self.__dict__:
-            return self.__dict__[input]
-        raise IndexError("AnyTensorField.__getitem__ cannot handle input", input)
+            return get_interpolated_selection_from(self, selection)
+        if selection in self.__dict__:
+            return self.__dict__[selection]
+        raise IndexError("AnyTensorField.__getitem__ cannot handle input", selection)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py` around lines 30 - 36, The
__getitem__ method currently uses the parameter name "input", which shadows the
built-in input() function; rename this parameter to a non-conflicting name like
"key" or "idx" in the method signature of __getitem__ and update all internal
references (e.g., change cls = type(input) to cls = type(key),
get_interpolated_selection_from(self, input) to
get_interpolated_selection_from(self, key), the dict membership test input in
self.__dict__ to key in self.__dict__, and the IndexError raise to include key)
so behavior is unchanged but the builtin is no longer shadowed.
pyphare/pyphare/pharesee/hierarchy/vectorfield.py (1)

60-61: Dead code: reassignment of self and other has no effect.

Lines 60-61 reassign self and other but these local variables are never used afterward. The same pattern exists in __sub__ at lines 83-84.

Proposed fix
         h = compute_hier_from(
             _compute_add,
             (h_self, h_other),
         )

-        self = rename(h_self, names_self_kept)  # needed ?
-        other = rename(h_other, names_other_kept)

         return VectorField.FROM(h)

Same for __sub__:

         h = compute_hier_from(
             _compute_sub,
             (h_self, h_other),
         )

-        self = rename(h_self, names_self_kept)
-        other = rename(h_other, names_other_kept)

         return VectorField.FROM(h)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyphare/pyphare/pharesee/hierarchy/vectorfield.py` around lines 60 - 61, In
__add__ and __sub__ in VectorField, the lines reassigning self and other with
rename(h_self, names_self_kept) / rename(h_other, names_other_kept) are dead
because those local vars are never used; either remove those reassignments or
actually use the renamed handles when computing the result (e.g., assign to new
locals like h_self_renamed and h_other_renamed and pass those into whatever
combines fields), or apply rename in-place if rename mutates; update the code in
the methods (referencing the rename function and the __add__/__sub__ methods) so
the renamed handles are the ones used or drop the unused assignments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py`:
- Around line 641-658: The fragile type check in hierarchy_array_function that
does type(time_hier[time][0][0]) is dict can raise KeyError/IndexError; change
it to safely find the first existing time->level->item (e.g., iterate
time_hier.items() and then levels dict values, use next(...) with defaults) and
use isinstance(..., dict) for the check; also remove the unused enumerate index
by iterating directly over hier.time_hier in the loop that builds time_hier;
ensure you reference hierarchy_array_function, extract, hier.time_hier,
hier.levels and time_hier when making these changes.
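A safe probe along the lines the comment asks for could look like this; the helper names (`first_leaf`, `holds_dicts`) and the exact nesting (`time -> level -> list of items`) are assumptions for illustration, not the project's API.

```python
def first_leaf(time_hier):
    """Return the first time -> level -> item entry, or None if the
    nested mapping is empty at any depth (no KeyError/IndexError)."""
    levels = next(iter(time_hier.values()), None)
    if not levels:
        return None
    items = next(iter(levels.values()), None)
    if not items:
        return None
    return items[0] if isinstance(items, (list, tuple)) else items

def holds_dicts(time_hier):
    # isinstance (rather than `type(...) is dict`) also accepts dict subclasses
    return isinstance(first_leaf(time_hier), dict)
```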

In `@pyphare/pyphare/pharesee/hierarchy/patchlevel.py`:
- Around line 22-25: The IndexError raised in PatchLevel.__getitem__ passes
multiple arguments so the type information is ignored; change the raise to pass
a single formatted message string (e.g., using an f-string) that includes the
unexpected type of idx so the error message contains the type information (refer
to the __getitem__ method and ensure the IndexError message explicitly includes
type(idx) in one string).

---

Duplicate comments:
In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 205-221: In field_data_array_ufunc, when the incoming ufunc method
is unsupported (the branch where currently "raise NotImplemented" is used),
change that to "return NotImplemented" so the function returns the
NotImplemented singleton instead of raising an exception; update the branch at
the start of field_data_array_ufunc (the check for method != "__call__") to
return NotImplemented to allow NumPy's __array_ufunc__ fallback behavior.

---

Nitpick comments:
In `@pyphare/pyphare/pharesee/hierarchy/hierarchy.py`:
- Around line 633-638: The for-loop uses an unused index variable i when
iterating over hier.time_hier; update the loop header in the block that iterates
"for i, time in enumerate(hier.time_hier):" to use a throwaway name "_" instead
(e.g. "for _, time in enumerate(hier.time_hier):") so the unused variable is
clearly marked; keep the body unchanged where ret.time_hier[time][ilvl] =
getattr(ufunc, method)(*extract(time, ilvl), **kwargs) and the nested loop over
hier.levels(time).

In `@pyphare/pyphare/pharesee/hierarchy/tensorfield.py`:
- Line 76: Replace the inefficient list(...) indexing when grabbing the first
patch_data with an iterator; specifically, in the expression that sets nbrGhosts
(currently using
list(hier.level(0).patches[0].patch_datas.values())[0].ghosts_nbr) use
next(iter(...)) on hier.level(0).patches[0].patch_datas.values() to obtain the
first patch_data and then access .ghosts_nbr (keep the same variables hier,
level, patches, and patch_datas).
- Around line 54-58: The code calls hier.times() twice; use the previously
stored variable times instead of calling hier.times() again: replace the
len(hier.times()) check with len(times) in the AnyTensorField/time-check block
(the lines around the variable times and the ValueError raise) so the stored
result is reused and avoids the redundant call.
- Around line 30-36: The __getitem__ method currently uses the parameter name
"input", which shadows the built-in input() function; rename this parameter to a
non-conflicting name like "key" or "idx" in the method signature of __getitem__
and update all internal references (e.g., change cls = type(input) to cls =
type(key), get_interpolated_selection_from(self, input) to
get_interpolated_selection_from(self, key), the dict membership test input in
self.__dict__ to key in self.__dict__, and the IndexError raise to include key)
so behavior is unchanged but the builtin is no longer shadowed.

In `@pyphare/pyphare/pharesee/hierarchy/vectorfield.py`:
- Around line 60-61: In __add__ and __sub__ in VectorField, the lines
reassigning self and other with rename(h_self, names_self_kept) /
rename(h_other, names_other_kept) are dead because those local vars are never
used; either remove those reassignments or actually use the renamed handles when
computing the result (e.g., assign to new locals like h_self_renamed and
h_other_renamed and pass those into whatever combines fields), or apply rename
in-place if rename mutates; update the code in the methods (referencing the
rename function and the __add__/__sub__ methods) so the renamed handles are the
ones used or drop the unused assignments.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 96f9b2d9-47c9-4d09-97fc-d0aff9a9cd63

📥 Commits

Reviewing files that changed from the base of the PR and between 89ddf9c and 9fea76f.

📒 Files selected for processing (9)
  • pyphare/pyphare/pharesee/hierarchy/hierarchy.py
  • pyphare/pyphare/pharesee/hierarchy/patch.py
  • pyphare/pyphare/pharesee/hierarchy/patchdata.py
  • pyphare/pyphare/pharesee/hierarchy/patchlevel.py
  • pyphare/pyphare/pharesee/hierarchy/scalarfield.py
  • pyphare/pyphare/pharesee/hierarchy/tensorfield.py
  • pyphare/pyphare/pharesee/hierarchy/vectorfield.py
  • pyphare/pyphare/pharesee/run/run.py
  • tests/simulator/test_run.py
✅ Files skipped from review due to trivial changes (1)
  • pyphare/pyphare/pharesee/run/run.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • pyphare/pyphare/pharesee/hierarchy/patch.py
  • tests/simulator/test_run.py

Comment thread pyphare/pyphare/pharesee/hierarchy/hierarchy.py
Comment thread pyphare/pyphare/pharesee/hierarchy/patchlevel.py Outdated

def select(data, box):
return data[tuple([slice(l, u + 1) for l, u in zip(box.lower, box.upper)])]
return data[tuple([slice(lo, up + 1) for lo, up in zip(box.lower, box.upper)])]
Member Author


l is ambiguous
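The renamed `select` can be exercised with a minimal stand-in for the project's Box type (the dataclass below is illustrative only; the real Box lives elsewhere in pyphare):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Box:
    lower: tuple
    upper: tuple  # inclusive bounds, hence the +1 in the slice

def select(data, box):
    # one slice per dimension, converting inclusive upper bounds to
    # Python's exclusive slice stop
    return data[tuple(slice(lo, up + 1) for lo, up in zip(box.lower, box.upper))]

data = np.arange(16).reshape(4, 4)
sub = select(data, Box(lower=(1, 1), upper=(2, 2)))  # 2x2 interior block
```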

qty=f"B{c}",
plot_patches=True,
)
B_finest = run.GetB(time)[:]
Member Author

@PhilipDeegan PhilipDeegan Apr 10, 2026


this could also be something like

B_finest = run.GetB(time).as_finest(box=None)

where None just gives you everything, similar to [:]

filename=plot_file_for_qty(plot_dir, f"b{c}_exp", time), qty=f"{c}"
)
print("np.add(hier, 2)", np.add(2, B_finest))
print("np.mean(B_finest)", np.mean(B_finest))
Member Author


for the case of scalar return values, the result is a dictionary keyed by time,
e.g. {np.str_('0.0000000000'): {0: [{'x': np.float64(...), 'y': np.float64(...), 'z': np.float64(...)}]}}

like here

Comment thread pyphare/pyphare/pharesee/hierarchy/tensorfield.py Fixed
