add a gaussian_filter and find_peaks for scalarfield and vectorfield #1120

rochSmets wants to merge 7 commits into PHAREHUB:master
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds Gaussian-filter and peak-detection utilities; expands the Yee-centering mappings to include the "value", "x", "y", and "z" primal-centering keys.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant PatchHierarchy
    participant FinestField
    participant build_interpolator
    Caller->>PatchHierarchy: interpol(time, interp)
    PatchHierarchy->>FinestField: flat_finest_field(qty, time)
    FinestField-->>PatchHierarchy: data, coords, box, dl
    PatchHierarchy->>build_interpolator: build_interpolator(data, coords, interp, box, dl, qty, nbrGhosts)
    build_interpolator-->>PatchHierarchy: interpolator, finest_coords
    PatchHierarchy-->>Caller: {qty: interpolator, ...}
```

```mermaid
sequenceDiagram
    participant Caller
    participant gF_gFilt
    participant compute_hier_from
    participant Patch
    participant scipy_ndimage
    Caller->>gF_gFilt: gF/gFilt(hier, sigma, ...)
    gF_gFilt->>compute_hier_from: dispatch per-patch
    compute_hier_from->>Patch: provide patch_datas (with ghosts)
    Patch->>scipy_ndimage: gaussian_filter(per-patch array)
    scipy_ndimage-->>Patch: filtered array
    Patch-->>gF_gFilt: updated patch_datas
    gF_gFilt-->>Caller: filtered hierarchy
```

```mermaid
sequenceDiagram
    participant Caller
    participant peakIds
    participant compute_hier_from
    participant Patch
    participant scipy_signal
    Caller->>peakIds: peakIds(hier, qty, peak_heights, ...)
    peakIds->>compute_hier_from: dispatch per-patch
    compute_hier_from->>Patch: provide 1D slices / arrays
    Patch->>scipy_signal: find_peaks(slice)
    scipy_signal-->>Patch: peak indices & heights
    Patch-->>peakIds: physical positions & heights
    peakIds-->>Caller: aggregated peaks, heights
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In @pyphare/pyphare/core/ufuncs.py:
- Around line 136-138: Replace the incorrect return of the sentinel
NotImplemented with raising a proper exception: in the function that assigns
time = times[0] and currently does "return NotImplemented", change that to raise
NotImplementedError with a short explanatory message (similar to the gFilt fix)
so callers get an exception instead of a bogus return value.
- Around line 81-89: The branch that currently returns NotImplemented should
instead raise NotImplementedError and the function must explicitly handle the
case hier.ndim != 1; update the block in ufuncs.py where hier is tested for ndim
== 1 (and where ScalarField/VectorField are handled via compute_hier_from and
_compute_gaussian_filter_on_scalarfield/_compute_gaussian_filter_on_vectorfield
with nb_ghosts and sigma) so that if hier is neither ScalarField nor VectorField
you raise NotImplementedError("Gaussian filter not implemented for type ..."),
and if hier.ndim != 1 raise NotImplementedError(f"Gaussian filter only
implemented for 1D hier, got ndim={hier.ndim}").
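The two fixes above can be sketched together. This is a minimal, self-contained sketch of the requested error handling; `ScalarField`/`VectorField` are stand-in stubs for the real PHARE types, and the `"filtered"` return is a placeholder for the actual `compute_hier_from` dispatch:

```python
class ScalarField:
    """Hypothetical stand-in for PHARE's ScalarField; only ndim matters here."""

    def __init__(self, ndim):
        self.ndim = ndim


class VectorField(ScalarField):
    pass


def gaussian_filter_dispatch(hier, sigma=1):
    # Raise real exceptions instead of returning the NotImplemented sentinel,
    # so callers get a traceback rather than a bogus return value.
    if not isinstance(hier, (ScalarField, VectorField)):
        raise NotImplementedError(
            f"Gaussian filter not implemented for type {type(hier).__name__}"
        )
    if hier.ndim != 1:
        raise NotImplementedError(
            f"Gaussian filter only implemented for 1D hier, got ndim={hier.ndim}"
        )
    return "filtered"  # placeholder for the real compute_hier_from call


print(gaussian_filter_dispatch(ScalarField(ndim=1)))
```

Note that `NotImplemented` is a comparison-protocol sentinel; returning it from ordinary functions silently propagates a truthy non-result, which is why raising `NotImplementedError` is the right fix here.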
- Around line 173-175: The loop assumes pks[1]['peak_heights'] exists and uses
zip(pks[0], pks[1]['peak_heights']); update the code in the block that builds
pks_ and hgs_ to (1) guard the access to the 'peak_heights' key on pks[1] (e.g.,
check if 'peak_heights' in pks[1] or fetch via .get and handle a missing value
by skipping, defaulting, or raising a clear ValueError) and (2) call zip(...,
strict=True) so the zip between pks[0] and the heights enforces equal lengths;
keep references to pks, pks_[the accumulator], hgs_, patch.dl and patch.origin
when implementing the fix.
- Around line 115-123: In the ndim dispatch in pyphare/core/ufuncs.py replace
the duplicated "elif ndim == 2" branch with "elif ndim == 3" so the 3D call uses
func(r_[0], r_[1], r_[2]); also change the final "print('unvalid dimension')" to
raise a ValueError with a clear message (e.g., "invalid dimension") so invalid
ndim values are handled as exceptions; references: variables ndim, func, r_ in
the existing branch logic.
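A minimal sketch of the corrected dispatch (function and variable names mirror the review's `func`, `r_`, `ndim`; the surrounding ufuncs context is assumed):

```python
import numpy as np


def sample(func, r_, ndim):
    # Corrected dispatch: the duplicated "elif ndim == 2" branch becomes
    # "elif ndim == 3", and invalid dimensions raise instead of printing,
    # so `data` is always defined before any later use.
    if ndim == 1:
        return func(r_[0])
    elif ndim == 2:
        return func(r_[0], r_[1])
    elif ndim == 3:
        return func(r_[0], r_[1], r_[2])
    raise ValueError(f"invalid dimension: {ndim}")


r_ = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
data = sample(lambda x, y, z: x + y + z, r_, ndim=3)
print(data)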
In @pyphare/pyphare/pharesee/hierarchy/scalarfield.py:
- Line 3: Remove the unused import flat_finest_field from
pyphare.pharesee.hierarchy.scalarfield by deleting the line that imports
flat_finest_field; search for the symbol flat_finest_field in this file to
confirm it's unused and ensure no other references exist before removing the
import statement from the top of the module.
In @pyphare/pyphare/pharesee/run/utils.py:
- Line 508: The nCells computation is off-by-one: replace the manual formula
nCells = box.upper - box.lower + 2 with the correct count using either nCells =
box.upper - box.lower + 1 or, better, use the Box.shape property (e.g., nCells =
box.shape) so the interpolation uses the correct grid dimensions; update the
occurrence of nCells in this module (the variable computed from box.upper and
box.lower) to use box.shape to avoid the extra cell.
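The off-by-one can be demonstrated with a tiny stand-in for PHARE's `Box` (inclusive integer index bounds, so the cell count is `upper - lower + 1`; this stub is illustrative, not the real class):

```python
import numpy as np


class Box:
    """Tiny stand-in for PHARE's Box: inclusive integer index bounds."""

    def __init__(self, lower, upper):
        self.lower, self.upper = np.asarray(lower), np.asarray(upper)

    @property
    def shape(self):
        return self.upper - self.lower + 1  # inclusive bounds -> +1


box = Box([5], [10])
wrong = box.upper - box.lower + 2  # formula flagged at line 508
print(wrong, box.shape)  # one extra cell vs. the correct count
```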
🧹 Nitpick comments (8)
pyphare/pyphare/core/operators.py (1)
22-29: LGTM! Consider removing the TODO comment. The refactoring to use `patch_datas["value"].centerings` directly is clearer and removes unnecessary indirection. Since the TODO has been addressed by this change, consider removing the comment.

♻️ Optional: Remove the TODO comment
```diff
 def _compute_sqrt(patch_datas, **kwargs):
-    # ref_name = next(iter(patch_datas.keys()))  TODO this is always "value"
-
     dset = np.sqrt(patch_datas["value"][:])
     return (
```

pyphare/pyphare/pharesee/run/utils.py (1)
493-537: Significant code duplication with `make_interpolator`. The `build_interpolator` function (lines 493-537) duplicates substantial logic from `make_interpolator` (lines 444-490). Both functions:
- Create scipy interpolators (interp1d, LinearNDInterpolator, NearestNDInterpolator)
- Generate finest coordinates using yeeCoordsFor
- Handle 1D and 2D cases with similar error handling
Consider refactoring to extract common logic and reduce duplication.
💡 Refactoring suggestion
Consider creating a shared helper function for the interpolator creation logic, with parameters to handle the differences in coordinate generation (`domain`- vs `box`-based).

pyphare/pyphare/pharesee/hierarchy/hierarchy.py (1)
619-631: Consider defensive checks and use `next()` for cleaner iteration. The `interpol` method has several assumptions that could cause runtime errors:

- Line 623 uses `list(...)[0]` when `next(iter(...))` would be more idiomatic and efficient.
- The method assumes level 0 exists and has at least one patch with patch_datas.
- It assumes `nbrGhosts` is consistent across all quantities.
- Lines 627-628 assume each quantity exists in the first patch's patch_datas.
Consider adding validation checks or documenting these assumptions.
♻️ Proposed improvements
```diff
 def interpol(self, time, interp="nearest"):
     from pyphare.pharesee.hierarchy.hierarchy_utils import flat_finest_field
     from pyphare.pharesee.run.utils import build_interpolator

-    nbrGhosts = list(self.level(0).patches[0].patch_datas.values())[0].ghosts_nbr
+    # Get nbrGhosts from the first available patch data
+    first_patch = self.level(0).patches[0]
+    nbrGhosts = next(iter(first_patch.patch_datas.values())).ghosts_nbr
     interp_ = {}
     for qty in self.quantities():
         box = self.level(0).patches[0].patch_datas[qty].box
         dl = self.level(0).patches[0].patch_datas[qty].dl
         data, coords = flat_finest_field(self, qty, time=time)
         interp_[qty] = build_interpolator(data, coords, interp, box, dl, qty, nbrGhosts)
     return interp_
```

pyphare/pyphare/core/ufuncs.py (5)
5-8: Remove unused imports. These imports are not used anywhere in the module and should be removed.
Suggested fix
```diff
-from pyphare.pharesee.hierarchy.hierarchy_utils import are_compatible_hierarchies
-from pyphare.core.phare_utilities import listify
-from pyphare.pharesee.hierarchy.patch import Patch
-from pyphare.pharesee.hierarchy.patchlevel import PatchLevel
```
14-16: Remove redundant import. `gaussian_filter` is already imported at module level (line 10). The local import is unnecessary.

Suggested fix

```diff
 def _compute_gaussian_filter_on_scalarfield(patch_datas, **kwargs):
-    from scipy.ndimage import gaussian_filter
-
     ndim = patch_datas["value"].box.ndim
```
33-35: Remove redundant import. Same issue as above — `gaussian_filter` is already available from the module-level import.

Suggested fix

```diff
 def _compute_gaussian_filter_on_vectorfield(patch_datas, **kwargs):
-    from scipy.ndimage import gaussian_filter
-
     ref_name = next(iter(patch_datas.keys()))
```
69-76: Clean up commented-out code and use the `next()` idiom. The commented-out block appears to be dead code. Line 76 should use `next(iter(...))` instead of `list(...)[0]` for efficiency.

Suggested fix

```diff
-    # time0 = list(hier.times())[0]
-    # level0 = 0
-    # p0 = 0
-    # pd0 = hier.levels(time0)[level0].patches[p0].patch_datas
-    # key0 = list(pd0.keys())[0]
-    # nb_ghosts = np.max(pd0[key0].ghosts_nbr)
-
-    nb_ghosts = np.max(list(hier.level(0).patches[0].patch_datas.values())[0].ghosts_nbr)
+    nb_ghosts = np.max(next(iter(hier.level(0).patches[0].patch_datas.values())).ghosts_nbr)
```
151-179: Remove commented-out debug code. The extensive commented-out print statements are debug artifacts and clutter the code.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- pyphare/pyphare/core/gridlayout.py
- pyphare/pyphare/core/operators.py
- pyphare/pyphare/core/ufuncs.py
- pyphare/pyphare/pharesee/hierarchy/hierarchy.py
- pyphare/pyphare/pharesee/hierarchy/scalarfield.py
- pyphare/pyphare/pharesee/run/utils.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-10-18T13:23:32.074Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 910
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:7-7
Timestamp: 2024-10-18T13:23:32.074Z
Learning: In the `pyphare.pharesee.hierarchy` module, importing `PatchHierarchy` and `format_timestamp` from `hierarchy.py` into `hierarchy_utils.py` is acceptable as long as `hierarchy.py` does not import `hierarchy_utils.py`, thereby avoiding a cyclic import.
Applied to files:
- pyphare/pyphare/core/ufuncs.py
- pyphare/pyphare/pharesee/hierarchy/scalarfield.py
- pyphare/pyphare/pharesee/hierarchy/hierarchy.py
🧬 Code graph analysis (4)
pyphare/pyphare/core/ufuncs.py (3)
pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (2)
- compute_hier_from (137-160)
- are_compatible_hierarchies (73-84)

pyphare/pyphare/core/phare_utilities.py (1)
- listify (46-51)

pyphare/pyphare/pharesee/hierarchy/patch.py (1)
- Patch (4-69)
pyphare/pyphare/pharesee/hierarchy/scalarfield.py (1)
pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
- flat_finest_field (330-350)
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (3)
pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
- flat_finest_field (330-350)

pyphare/pyphare/pharesee/run/utils.py (1)
- build_interpolator (493-537)

pyphare/pyphare/core/gridlayout.py (1)
- nbrGhosts (247-251)
pyphare/pyphare/pharesee/run/utils.py (1)
pyphare/pyphare/core/gridlayout.py (3)
- nbrGhosts (247-251)
- yeeCoordsFor (150-177)
- yeeCoordsFor (387-416)
🪛 GitHub Check: CodeQL
pyphare/pyphare/core/ufuncs.py
[warning] 119-119: Redundant comparison
Test is always false, because of this condition.
[warning] 120-120: Unreachable code
This statement is unreachable.
[notice] 5-5: Unused import
Import of 'are_compatible_hierarchies' is not used.
[notice] 6-6: Unused import
Import of 'listify' is not used.
[notice] 7-7: Unused import
Import of 'Patch' is not used.
[notice] 8-8: Unused import
Import of 'PatchLevel' is not used.
[notice] 66-66: Explicit returns mixed with implicit (fall through) returns
Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.
[notice] 176-179: Commented-out code
This comment appears to contain commented-out code.
pyphare/pyphare/pharesee/hierarchy/scalarfield.py
[notice] 3-3: Unused import
Import of 'flat_finest_field' is not used.
pyphare/pyphare/pharesee/hierarchy/hierarchy.py
[notice] 619-619: Cyclic import
Import of module pyphare.pharesee.hierarchy.hierarchy_utils begins an import cycle.
🪛 Ruff (0.14.10)
pyphare/pyphare/core/ufuncs.py
76-76: Prefer next(...) over single element slice
Replace with next(...)
(RUF015)
173-173: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
pyphare/pyphare/pharesee/hierarchy/hierarchy.py
623-623: Prefer next(...) over single element slice
Replace with next(...)
(RUF015)
pyphare/pyphare/pharesee/run/utils.py
532-532: Avoid specifying long messages outside the exception class
(TRY003)
535-535: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Analyze (python)
- GitHub Check: Analyze (cpp)
- GitHub Check: build (ubuntu-latest, clang)
- GitHub Check: build (ubuntu-latest, gcc)
- GitHub Check: build (macos-14)
- GitHub Check: build (macos-latest)
🔇 Additional comments (4)
pyphare/pyphare/core/gridlayout.py (1)
47-50: LGTM! The expansion of Yee centering mappings to include "value", "x", "y", and "z" keys with "primal" centering is consistent across all three spatial directions and aligns with the PR's goal of supporting Gaussian filtering and peak finding on scalar and vector fields.
Also applies to: 86-89, 125-128
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (2)
442-444: LGTM! The addition of the `lw` parameter retrieval improves the plot1d method's flexibility.
619-619: Cyclic import is acceptable in this context. The cyclic import warning can be safely ignored. Based on learnings, importing from `hierarchy_utils` into `hierarchy.py` is acceptable as long as `hierarchy.py` does not create a circular dependency, which is the established pattern in this module.

pyphare/pyphare/core/ufuncs.py (1)
125-126: Verify slice bounds when `nb_ghosts[i] >= sigma`. If `nb_ghosts[i] == sigma`, the slice becomes `slice(0, 0)`, which is empty. If `nb_ghosts[i] > sigma`, the slice indices invert (e.g., `slice(-2, 2)` when sigma=3 and nb_ghosts=5), which selects no data at all. Ensure this is the intended behavior.
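A minimal NumPy sketch of the flagged slice pattern, showing the three regimes (normal trim, empty, inverted):

```python
import numpy as np

arr = np.arange(10)


def trim(a, sigma, nb_ghosts):
    # Mirrors the slice pattern flagged in the review:
    # slice(sigma - nb_ghosts, -sigma + nb_ghosts)
    return a[slice(sigma - nb_ghosts, -sigma + nb_ghosts)]


print(trim(arr, sigma=5, nb_ghosts=3))  # slice(2, -2): normal interior trim
print(trim(arr, sigma=3, nb_ghosts=3))  # slice(0, 0): empty
print(trim(arr, sigma=3, nb_ghosts=5))  # slice(-2, 2): inverted, also empty
```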
Actionable comments posted: 9
🤖 Fix all issues with AI agents
In @pyphare/pyphare/core/ufuncs.py:
- Around line 173-175: The loop assumes pks[1]['peak_heights'] exists causing a
KeyError when find_peaks was called without height; to fix, before the loop
obtain heights = pks[1].get('peak_heights') and if None compute heights from the
underlying signal at the peak indices (e.g., heights = signal_array[pks[0]]),
then iterate for pk, hg in zip(pks[0], heights) and append to pks_ and hgs_;
alternatively, if computing heights is impossible here, raise a clear error
indicating that the height argument is required.
- Around line 81-89: The branch only handles hier.ndim == 1 and returns the
sentinel NotImplemented for other cases; replace that with explicit error
handling and proper exception raising: instead of returning NotImplemented,
raise NotImplementedError with a message including hier.ndim and the field type,
and extend the conditional to explicitly handle expected dims (e.g., ndim == 2
and ndim == 3) by calling the appropriate compute_hier_from variant or, if those
variants are not available, raise NotImplementedError mentioning
_compute_gaussian_filter_on_scalarfield/_compute_gaussian_filter_on_vectorfield
and the unsupported dimension; ensure ScalarField/VectorField handling remains
intact and that compute_hier_from is invoked with nb_ghosts and sigma as before.
- Around line 115-123: The branch contains a copy-paste bug: change the second
`elif ndim == 2` to `elif ndim == 3` and call `func(r_[0], r_[1], r_[2])` to
produce `data` for 3D; also stop silently printing on invalid dimensions—raise a
ValueError (or return/raise) instead of `print('unvalid dimension')` so `data`
is always defined before subsequent use (e.g., the later `gaussian_filter`
call).
- Around line 143-145: The code mutates the caller's kwargs by calling
kwargs.pop("names"); instead, read names with kwargs.get("names") into names_
(or make a shallow copy like local_kwargs = dict(kwargs) and pop from that) and
then pass a filtered/local_kwargs to find_peaks so the original kwargs is not
modified; update the block that sets names_ and the subsequent call to
find_peaks to use the non-mutating approach (symbols to change: names_, kwargs,
find_peaks).
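The non-mutating approach can be sketched as follows; `peak_ids` and the `"names"` option are illustrative stand-ins for the PR's `peakIds`/`names_` symbols, not the actual implementation:

```python
import numpy as np
from scipy.signal import find_peaks


def peak_ids(signal, options):
    # Work on a shallow copy so the caller's options dict is untouched.
    local = dict(options)
    names_ = local.pop("names", None)  # "names" is a hypothetical option
    idx, props = find_peaks(signal, **local)
    return names_, idx, props


sig = np.array([0.0, 1.0, 0.0, 2.0, 0.0])
opts = {"names": ["Bx"], "height": 0.5}
names, idx, props = peak_ids(sig, opts)
print(opts)  # still contains "names": the caller's dict is not mutated
```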
- Around line 5-10: Remove the unused imports at the top of ufuncs.py
(are_compatible_hierarchies, listify, Patch, PatchLevel) since they are not
referenced, and keep a single top-level "from scipy.ndimage import
gaussian_filter" import; also delete the redundant inner imports of
gaussian_filter inside the functions _compute_gaussian_filter_on_scalarfield and
_compute_gaussian_filter_on_vectorfield so both functions use the single
module-level gaussian_filter.
In @pyphare/pyphare/pharesee/hierarchy/hierarchy.py:
- Around line 618-630: The interpol method uses the provided time only for
flat_finest_field but calls self.level(0) without the time argument for
nbrGhosts, box and dl, and uses list(...)[0] to get the first patch; fix by
calling self.level(0, time=time) (or the correct API to request the same
timestamp) whenever accessing patch data for nbrGhosts, box and dl so those
values match the data passed to flat_finest_field, and replace
list(self.level(0).patches)[0] / list(...)[0] uses with next(iter(...)) to
satisfy the RUF015 suggestion; update references in interpol, specifically
nbrGhosts, box, dl and the self.level calls, and keep build_interpolator and
flat_finest_field usage unchanged.
In @pyphare/pyphare/pharesee/hierarchy/scalarfield.py:
- Line 3: Remove the unused import flat_finest_field from the top of the file
(the import statement "from pyphare.pharesee.hierarchy.hierarchy_utils import
flat_finest_field" in scalarfield.py); ensure no other references to
flat_finest_field remain in this module and run linters/tests to confirm the
unused-import warning is resolved.
In @pyphare/pyphare/pharesee/run/utils.py:
- Line 535: Update the ValueError message to reference the correct function
name: replace the string "make_interpolator is not yet 3d" with
"build_interpolator is not yet 3d" in the raise statement (the one that
currently raises ValueError near the build_interpolator logic) so the error
message matches the actual function name.
- Line 508: The nCells calculation is off by one: change the expression that
sets nCells from using "box.upper - box.lower + 2" to use "box.upper - box.lower
+ 1" (or simply use "box.shape") so it matches Box semantics and aligns with
yeeCoordsFor's internal +1 for primal centering; update the assignment where
nCells is computed in utils.py (the variable nCells and the surrounding
calculation) accordingly.
🧹 Nitpick comments (5)
pyphare/pyphare/core/operators.py (1)
22-29: LGTM with minor cleanup suggestion. The simplification to directly use `patch_datas["value"]` is correct since `_compute_sqrt` operates on `ScalarField`, which always has a `"value"` key. Consider removing the commented-out line entirely rather than leaving it as a TODO comment.

🧹 Suggested cleanup

```diff
 def _compute_sqrt(patch_datas, **kwargs):
-    # ref_name = next(iter(patch_datas.keys()))  TODO this is always "value"
-
     dset = np.sqrt(patch_datas["value"][:])
     return (
         {"name": "value", "data": dset, "centering": patch_datas["value"].centerings},
     )
```

pyphare/pyphare/core/ufuncs.py (2)
76-76: Use `next()` instead of a single-element slice. Per Ruff RUF015, prefer `next(iter(...))` over `list(...)[0]` for accessing the first element.

♻️ Proposed fix

```diff
-    nb_ghosts = np.max(list(hier.level(0).patches[0].patch_datas.values())[0].ghosts_nbr)
+    nb_ghosts = np.max(next(iter(hier.level(0).patches[0].patch_datas.values())).ghosts_nbr)
```
151-179: Remove commented-out debug code before merging. Lines 151-163 and 167-179 contain commented-out debug prints. These should be removed to keep the codebase clean.
pyphare/pyphare/pharesee/run/utils.py (1)
493-537: Consider refactoring to reduce duplication with `make_interpolator`. `build_interpolator` shares significant logic with `make_interpolator` (lines 444-490). The main differences are:

- Coordinate generation: `build_interpolator` uses `box` while `make_interpolator` uses `domain`
- The `nCells` calculation differs

Consider extracting the common interpolator construction logic into a shared helper to reduce duplication.
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (1)
618-620: Consider adding a docstring for the new public method. The `interpol` method would benefit from documentation explaining its purpose, parameters, and return type. Based on `build_interpolator`'s signature, this returns a dict mapping quantity names to `(interpolator, finest_coords)` tuples, which isn't immediately obvious.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- pyphare/pyphare/core/gridlayout.py
- pyphare/pyphare/core/operators.py
- pyphare/pyphare/core/ufuncs.py
- pyphare/pyphare/pharesee/hierarchy/hierarchy.py
- pyphare/pyphare/pharesee/hierarchy/scalarfield.py
- pyphare/pyphare/pharesee/run/utils.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2024-10-18T13:23:32.074Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 910
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:7-7
Timestamp: 2024-10-18T13:23:32.074Z
Learning: In the `pyphare.pharesee.hierarchy` module, importing `PatchHierarchy` and `format_timestamp` from `hierarchy.py` into `hierarchy_utils.py` is acceptable as long as `hierarchy.py` does not import `hierarchy_utils.py`, thereby avoiding a cyclic import.
Applied to files:
- pyphare/pyphare/core/ufuncs.py
- pyphare/pyphare/pharesee/hierarchy/hierarchy.py
- pyphare/pyphare/pharesee/hierarchy/scalarfield.py
📚 Learning: 2024-09-09T13:57:02.285Z
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:566-568
Timestamp: 2024-09-09T13:57:02.285Z
Learning: Avoid commenting on draft PRs for PhilipDeegan unless explicitly requested.
Applied to files:
pyphare/pyphare/core/ufuncs.py
🧬 Code graph analysis (4)
pyphare/pyphare/core/ufuncs.py (3)
pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (2)
- compute_hier_from (137-160)
- are_compatible_hierarchies (73-84)

pyphare/pyphare/core/phare_utilities.py (1)
- listify (46-51)

pyphare/pyphare/pharesee/hierarchy/patch.py (1)
- Patch (4-69)
pyphare/pyphare/pharesee/run/utils.py (1)
pyphare/pyphare/core/gridlayout.py (3)
- nbrGhosts (247-251)
- yeeCoordsFor (150-177)
- yeeCoordsFor (374-403)
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (2)
pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
- flat_finest_field (330-350)

pyphare/pyphare/pharesee/run/utils.py (1)
- build_interpolator (493-537)
pyphare/pyphare/pharesee/hierarchy/scalarfield.py (1)
pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py (1)
- flat_finest_field (330-350)
🪛 GitHub Check: CodeQL
pyphare/pyphare/core/ufuncs.py
[warning] 119-119: Redundant comparison
Test is always false, because of this condition.
[warning] 120-120: Unreachable code
This statement is unreachable.
[notice] 5-5: Unused import
Import of 'are_compatible_hierarchies' is not used.
[notice] 6-6: Unused import
Import of 'listify' is not used.
[notice] 7-7: Unused import
Import of 'Patch' is not used.
[notice] 8-8: Unused import
Import of 'PatchLevel' is not used.
[notice] 66-66: Explicit returns mixed with implicit (fall through) returns
Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.
[notice] 176-179: Commented-out code
This comment appears to contain commented-out code.
pyphare/pyphare/pharesee/hierarchy/hierarchy.py
[notice] 619-619: Cyclic import
Import of module pyphare.pharesee.hierarchy.hierarchy_utils begins an import cycle.
pyphare/pyphare/pharesee/hierarchy/scalarfield.py
[notice] 3-3: Unused import
Import of 'flat_finest_field' is not used.
🪛 Ruff (0.14.10)
pyphare/pyphare/core/ufuncs.py
76-76: Prefer next(...) over single element slice
Replace with next(...)
(RUF015)
173-173: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
pyphare/pyphare/pharesee/run/utils.py
532-532: Avoid specifying long messages outside the exception class
(TRY003)
535-535: Avoid specifying long messages outside the exception class
(TRY003)
pyphare/pyphare/pharesee/hierarchy/hierarchy.py
622-622: Prefer next(...) over single element slice
Replace with next(...)
(RUF015)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: build (macos-14)
- GitHub Check: build (ubuntu-latest, clang)
- GitHub Check: build (ubuntu-latest, gcc)
- GitHub Check: build (macos-latest)
- GitHub Check: Analyze (python)
- GitHub Check: Analyze (cpp)
🔇 Additional comments (3)
pyphare/pyphare/core/gridlayout.py (1)
47-50: LGTM! The additions of the "value", "x", "y", and "z" keys with "primal" centering extend the Yee centering mappings to support `ScalarField` and `VectorField` quantities. This is consistent with how moment-like quantities are centered and enables proper centering lookups in operators and ufuncs.

Also applies to: 86-89, 125-128
pyphare/pyphare/core/ufuncs.py (1)
125-126: Verify slice calculation logic. The slice `slice(sigma-nb_ghosts[i], -sigma+nb_ghosts[i])` differs from the pattern used in `_compute_gaussian_filter_on_scalarfield`, which uses `slice(nb_ghosts, -nb_ghosts)`. When `sigma < nb_ghosts`, this produces `slice(negative, positive)`, which may not behave as expected (e.g., `sigma=2, nb_ghosts=4` yields `slice(-2, 2)`).

pyphare/pyphare/pharesee/hierarchy/hierarchy.py (1)
441-443: LGTM! The addition of the `lw` (line width) parameter with a sensible default of 1 is a clean enhancement that maintains backward compatibility while providing additional customization.
Diff context under review:

```python
dim = coords.ndim
# ...
nCells = box.upper-box.lower+2
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check how nCells is calculated elsewhere in the codebase
rg -n "nCells\s*=" --type py -C 3Repository: PHAREHUB/PHARE
Length of output: 1088
🏁 Script executed:
# Also inspect the specific file and context around line 508
sed -n '500,520p' pyphare/pyphare/pharesee/run/utils.py | cat -nRepository: PHAREHUB/PHARE
Length of output: 723
🏁 Script executed:
# Search for box.upper and box.lower usage to understand the context
rg -n "box\.(upper|lower)" --type py -B2 -A2 | head -100Repository: PHAREHUB/PHARE
Length of output: 7568
🏁 Script executed:
# Get broader context around line 508
sed -n '490,530p' pyphare/pyphare/pharesee/run/utils.py | cat -nRepository: PHAREHUB/PHARE
Length of output: 1600
🏁 Script executed:
# Find the yeeCoordsFor function definition
rg -n "def yeeCoordsFor" --type py -A10Repository: PHAREHUB/PHARE
Length of output: 1768
🏁 Script executed:
# Search for documentation or comments about nCells in relation to primal/dual centering
rg -n "primal|dual|centering" --type py -i -C2 | head -60Repository: PHAREHUB/PHARE
Length of output: 4222
🏁 Script executed:
# Get full yeeCoordsFor function implementation
sed -n '150,200p' pyphare/pyphare/core/gridlayout.py | cat -nRepository: PHAREHUB/PHARE
Length of output: 1952
🏁 Script executed:
# Look for how yeeCoordsFor is used elsewhere with nbrCells parameter
rg -n "yeeCoordsFor" --type py -B2 -A2 | grep -E "(yeeCoordsFor|nCells|nbrCells)" | head -40Repository: PHAREHUB/PHARE
Length of output: 1457
🏁 Script executed:
# Check if there are any tests or examples for build_interpolator
rg -n "build_interpolator" --type py -B3 -A3Repository: PHAREHUB/PHARE
Length of output: 1909
🏁 Script executed:
# Search for Box class definition and nCells method
rg -n "class Box|def nCells" --type py -B1 -A5 | head -80Repository: PHAREHUB/PHARE
Length of output: 2257
🏁 Script executed:
# Look for any comments or documentation around the build_interpolator function
sed -n '493,515p' pyphare/pyphare/pharesee/run/utils.py | cat -nRepository: PHAREHUB/PHARE
Length of output: 944
🏁 Script executed:
# Check what the typical pattern is for cell counts in the codebase
rg -n "upper\s*-\s*lower\s*\+" --type py | head -20Repository: PHAREHUB/PHARE
Length of output: 40
🏁 Script executed:
# Get the full Box class implementation
sed -n '5,78p' pyphare/pyphare/core/box.py | cat -nRepository: PHAREHUB/PHARE
Length of output: 2710
🏁 Script executed:
# Look for tests or usage of build_interpolator
rg -n "build_interpolator" --type py -B5 -A10Repository: PHAREHUB/PHARE
Length of output: 3487
🏁 Script executed:
# Check how shape is computed in Box
rg -n "def shape|@property" pyphare/pyphare/core/box.py -A3Repository: PHAREHUB/PHARE
Length of output: 200
🏁 Script executed:
# Check how yeeCoordsFor is called with nCells in other contexts
rg -n "yeeCoordsFor" pyphare/pyphare/pharesee/run/utils.py -B3 -A3Repository: PHAREHUB/PHARE
Length of output: 1269
🏁 Script executed:
# Look for any tests that use build_interpolator to understand the expected behavior
fd -e py -path "*/test*" | xargs rg -l "build_interpolator" | head -5Repository: PHAREHUB/PHARE
Length of output: 329
🏁 Script executed:
# Check if nCells should match Box.shape or if there's a specific reason for +2
rg -n "upper.*lower.*1|shape.*prod" pyphare/pyphare/core/box.py -B2 -A2Repository: PHAREHUB/PHARE
Length of output: 858
🏁 Script executed:
# Let's check how box.shape relates to the calculation and verify nCells usage
python3 << 'EOF'
# If box.upper=10, box.lower=5:
upper, lower = 10, 5
# Current formula at line 508:
ncells_current = upper - lower + 2
print(f"Current formula (upper - lower + 2): {ncells_current}")
# Box.shape formula:
box_shape = upper - lower + 1
print(f"Box.shape formula (upper - lower + 1): {box_shape}")
# Difference:
print(f"Difference: {ncells_current - box_shape}")
# This means yeeCoordsFor will receive nCells that's 1 larger than box.shape
# Since yeeCoordsFor adds 1 for primal centering, the final size would be:
print(f"\nWith primal centering in yeeCoordsFor:")
print(f"Final size = nCells + 1 = {ncells_current + 1}")
print(f"vs box.shape + 1 = {box_shape + 1}")
EOFRepository: PHAREHUB/PHARE
Length of output: 144
🏁 Script executed:
# Look for tests of build_interpolator or similar interpolation code
rg -n "interpol" pyphare/pyphare/pharesee/hierarchy/hierarchy.py -B5 -A15 | head -50Repository: PHAREHUB/PHARE
Length of output: 1319
🏁 Script executed:
# Search for any comments or issues related to nCells or interpolation
rg -n "nCells|interpolat" pyphare/pyphare/pharesee/run/utils.py -B2 -A2Repository: PHAREHUB/PHARE
Length of output: 2870
🏁 Script executed:
# Let's verify by checking if there are any related bug reports or comments
rg -n "TODO|FIXME|BUG|HACK" pyphare/pyphare/pharesee/run/utils.py -B2 -A2Repository: PHAREHUB/PHARE
Length of output: 40
🏁 Script executed:
# Check the git history for context on this calculation
git log -p --all -S "box.upper-box.lower+2" -- pyphare/pyphare/pharesee/run/utils.py | head -100Repository: PHAREHUB/PHARE
Length of output: 40
🏁 Script executed:
# Check if nCells is truly a scalar or array in build_interpolator context
sed -n '493,540p' pyphare/pyphare/pharesee/run/utils.py | cat -nRepository: PHAREHUB/PHARE
Length of output: 1787
🏁 Script executed:
# Verify Box properties for multi-dimensional cases
rg -n "class.*Box|def __init__" pyphare/pyphare/core/box.py -A8 | head -50
🏁 Script executed:
# Check test cases to see expected box behavior
rg -n "Box\(" pyphare/pyphare_tests/test_core/test_box.py -B1 -A1 | head -40
🏁 Script executed:
# Check if nCells is being treated as array and compare with Box.shape behavior
python3 << 'EOF'
import numpy as np
# Simulate Box behavior
lower = np.array([5, 10])
upper = np.array([10, 20])
# Current formula in line 508
nCells_current = upper - lower + 2
print(f"Current formula (upper - lower + 2): {nCells_current}")
# Box.shape formula
box_shape = upper - lower + 1
print(f"Box.shape formula (upper - lower + 1): {box_shape}")
# What yeeCoordsFor receives
print(f"\nEach dimension:")
for i, (curr, shape) in enumerate(zip(nCells_current, box_shape)):
print(f" Dim {i}: current={curr}, shape={shape}, difference={curr - shape}")
EOF
🏁 Script executed:
# Search for any documentation about why +2 is used specifically
rg -n "nCells|box\.upper|box\.lower" pyphare/pyphare/pharesee/run/utils.py -B5 -A1 | grep -A5 -B5 "508"
🏁 Script executed:
# Check if there's any reference to the difference between make_interpolator and build_interpolator
git log --oneline --all -- pyphare/pyphare/pharesee/run/utils.py | head -20
🏁 Script executed:
# Let's check if there are any tests for build_interpolator or the hierarchy.interpol method
find . -name "*.py" -type f -path "*/test*" -exec rg -l "interpol|build_interpolator" {} \;
🏁 Script executed:
# Compare the two similar functions side by side
diff <(sed -n '444,490p' pyphare/pyphare/pharesee/run/utils.py) <(sed -n '493,537p' pyphare/pyphare/pharesee/run/utils.py)
🏁 Script executed:
# Compare the nCells calculations between make_interpolator and build_interpolator
rg -n "nCells\s*=" pyphare/pyphare/pharesee/run/utils.py -B2 -A5
🏁 Script executed:
# Check the interpolator tests to see if they validate the behavior
cat tests/core/numerics/interpolator/interpolator_test.py | head -100
🏁 Script executed:
# Look for any documentation in comments about the difference
sed -n '444,450p' pyphare/pyphare/pharesee/run/utils.py && echo "---" && sed -n '493,502p' pyphare/pyphare/pharesee/run/utils.py
🏁 Script executed:
# Let's trace through the logic: check the actual usage in hierarchy.py to understand what box is
sed -n '618,630p' pyphare/pyphare/pharesee/hierarchy/hierarchy.py
🏁 Script executed:
# Check if PatchData box includes or excludes ghosts
rg -n "class PatchData|def box|\.box\s*=" pyphare/pyphare/pharesee/hierarchy/patchdata.py -B1 -A3 | head -60
🏁 Script executed:
# Final check: see if the +2 is consistent with including boundaries on both sides
python3 << 'EOF'
# If box represents inclusive cell indices from lower to upper
# For a 1D example: box.lower=5, box.upper=10
# Number of cells = 10 - 5 + 1 = 6 cells (indices 5,6,7,8,9,10)
# But for interpolation coordinates, we might want:
# - Points at cell boundaries: need 7 points (before cell 5, between each, after cell 10)
# That would be: upper - lower + 2
# This makes sense! The +2 gives the number of coordinate points needed
# if cells are interpreted as intervals from one point to the next
print("If cells are indexed intervals:")
print(" Box [5, 10] = 6 cells")
print(" Need 7 coordinate points for boundaries")
print(" Formula: 10 - 5 + 2 = 7 ✓")
print()
print("If cells are point locations (primal/dual on Yee grid):")
print(" Box [5, 10] = 6 cell centers")
print(" yeeCoordsFor adds +1 for primal -> 7 total")
print(" Formula should be: 10 - 5 + 1 = 6 (input), +1 inside yeeCoordsFor = 7")
EOF
🏁 Script executed:
# Check the actual yeeCoordsFor logic more carefully with the nCells parameter
sed -n '150,175p' pyphare/pyphare/core/gridlayout.py | cat -n
🏁 Script executed:
# Look for tests that might validate build_interpolator behavior
rg -n "build_interpolator\|hierarchy.*interpol" tests/ -l
🏁 Script executed:
# Check if there are integration tests using hierarchy.interpol
rg -n "\.interpol\(" tests/ -B3 -A3 | head -40
Fix nCells calculation to match Box semantics.
The formula `box.upper - box.lower + 2` adds an extra coordinate point beyond what the Box indices define. It should use `box.upper - box.lower + 1` (equivalent to `box.shape`), since `yeeCoordsFor` adds 1 internally for primal centering. The current formula results in one extra coordinate extending beyond the box bounds.
🤖 Prompt for AI Agents
In @pyphare/pyphare/pharesee/run/utils.py at line 508, The nCells calculation is
off by one: change the expression that sets nCells from using "box.upper -
box.lower + 2" to use "box.upper - box.lower + 1" (or simply use "box.shape") so
it matches Box semantics and aligns with yeeCoordsFor's internal +1 for primal
centering; update the assignment where nCells is computed in utils.py (the
variable nCells and the surrounding calculation) accordingly.
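A one-line numeric check of the arithmetic above (inclusive `[lower, upper]` bounds assumed, as in pyphare's `Box`; values are illustrative):

```python
# Hypothetical check of the off-by-one, assuming inclusive Box bounds [lower, upper].
lower, upper = 5, 10

ncells_current = upper - lower + 2   # formula the review flags
ncells_shape = upper - lower + 1     # Box.shape semantics (6 cells: 5..10)

# yeeCoordsFor adds one point for primal centering, so:
print(ncells_current + 1)  # 8 coordinates -> one beyond the box
print(ncells_shape + 1)    # 7 coordinates -> the 7 primal nodes of a 6-cell box
```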
@rochSmets, understood. I've noted your feedback on this comment.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/core/ufuncs.py`:
- Around line 136-166: In peakIds, require and validate the input 'height'
parameter (not the output-only 'peak_heights'), remove any 'peak_heights' key
from kwargs before calling scipy.signal.find_peaks, and pass the filtered kwargs
to find_peaks; specifically, update the validation that currently checks
kwargs.get('peak_heights') to check for 'height' instead, pop/delete
'peak_heights' from kwargs if present, then call find_peaks(ds,
**filtered_kwargs) and continue using pks[1]['peak_heights'] from the result for
peak magnitudes.
- Around line 90-134: The gF function currently assumes sigma is a scalar and
computes n_pad = 2*sigma and slices with that single value, which breaks when
sigma is a sequence and also only checks sigma == 1; modify gF to accept either
a scalar or an iterable: if scalar, promote it to a tuple of length ndim; if
iterable, validate its length equals hier.ndim and that every sigma_i >= 2
(reject values < 2); compute per-dimension n_pad_i = int(2 * sigma_i) (ensuring
integers) and use n_pad_i when building the extended coordinate arrays (s_) and
when building the select tuple for slicing (replace single n_pad with
per-dimension n_pad_i in the slice computation); keep calls to
scipy.ndimage.gaussian_filter with the original per-dimension sigma sequence so
filtering remains correct and assign pdata.dataset from gf_ using the
per-dimension select slices.
♻️ Duplicate comments (2)
pyphare/pyphare/pharesee/hierarchy/hierarchy.py (1)
622-622: Avoid materializing a list just to fetch the first PatchData.
Using `next(iter(...))` is lighter and matches RUF015.

♻️ Proposed change

```diff
-        nbrGhosts = list(self.level(0, time).patches[0].patch_datas.values())[0].ghosts_nbr
+        nbrGhosts = next(iter(self.level(0, time).patches[0].patch_datas.values())).ghosts_nbr
```

pyphare/pyphare/core/ufuncs.py (1)
62-88: Return proper exceptions for unsupported hierarchies.
`NotImplemented` is a sentinel for binary ops and can leak silently; raise `NotImplementedError` instead.

🐛 Proposed fix

```diff
 if hier.ndim == 1:
     if isinstance(hier, ScalarField):
         h = compute_hier_from(_compute_gaussian_filter_on_scalarfield, hier,
                               nb_ghosts=nb_ghosts, sigma=sigma)
         return ScalarField(h)
     elif isinstance(hier, VectorField):
         h = compute_hier_from(_compute_gaussian_filter_on_vectorfield, hier,
                               nb_ghosts=nb_ghosts, sigma=sigma)
         return VectorField(h)
     else:
-        return NotImplemented
+        raise NotImplementedError("gFilt supports only ScalarField/VectorField in 1D")
 else:
-    return NotImplemented
+    raise NotImplementedError(f"gFilt not implemented for ndim={hier.ndim}")
```
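To see why the sentinel leaks, a tiny standalone sketch (the function name is hypothetical, mimicking the reviewed code path):

```python
def gfilt_like(ndim):
    # Mimics the reviewed code: a plain function returning the sentinel.
    if ndim == 1:
        return "filtered"
    return NotImplemented  # meant for binary-operator protocols, not plain returns

result = gfilt_like(2)
print(result is NotImplemented)  # True: the caller silently receives a sentinel, no error
```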
```python
def gF(hier, **kwargs):
    sigma = kwargs.get("sigma", 4)
    if sigma == 1:
        raise ValueError("sigma value has to be at least 2")
    h_ = hier.__deepcopy__(memo={})
    ndim = hier.ndim
    n_pad = 2*sigma
    # The gaussian filter is calculated on the box extended by
    # n_pad. Hence, the number of points is large enough so that the value
    # at the last point of the real box is as equal as possible to the one
    # at the first point of the next box...

    for time in h_.times():
        interp_ = hier.interpol(time)
        for lvl in h_.levels(time).values():
            for patch in lvl.patches:
                names = list(patch.patch_datas.keys())
                box = patch.box

                for name in names:
                    pdata = patch.patch_datas[name]
                    nb_ghosts = pdata.ghosts_nbr

                    r_ = []
                    for i in range(ndim):
                        s_ = np.arange(box.lower[i]-n_pad, box.upper[i]+2+n_pad)*pdata.dl[i]
                        r_.append(s_)

                    func, _ = interp_[name]

                    if ndim == 1:
                        data = func(r_[0])
                    elif ndim == 2:
                        data = func(r_[0], r_[1])
                    elif ndim == 3:
                        data = func(r_[0], r_[1], r_[2])
                    else:
                        raise ValueError('unvalid dimension')

                    gf_ = gaussian_filter(data, sigma=sigma)
                    select = tuple([slice(n_pad-nb_ghosts[i], -n_pad+nb_ghosts[i]) for i in range(ndim)])
                    pdata.dataset = np.asarray(gf_[select])

    return h_
```
🧩 Analysis chain
🌐 Web query:
scipy.ndimage.gaussian_filter sigma parameter scalar sequence documentation
💡 Result:
In scipy.ndimage.gaussian_filter(input, sigma, ...), the sigma parameter is the standard deviation of the Gaussian kernel and can be either:
- a single scalar (same sigma used for all axes), or
- a sequence of scalars (one sigma per axis, in axis order).
This is exactly how SciPy’s docs define it: “scalar or sequence of scalars … as a sequence [per axis], or as a single number, in which case it is equal for all axes.” [1]
Example: for a 2D array, sigma=2.0 applies (2.0, 2.0), while sigma=(1.0, 3.0) applies 1.0 on axis 0 and 3.0 on axis 1. [1]
Source: SciPy API docs for scipy.ndimage.gaussian_filter [1]
[1] https://docs.scipy.org/doc/scipy-1.11.4/reference/generated/scipy.ndimage.gaussian_filter.html
🏁 Script executed:
# Search for sigma validation patterns in the codebase
rg "sigma" pyphare/pyphare/core/ufuncs.py -B 2 -A 2
# Look for tests of the gF function
find . -name "*.py" -type f | xargs grep -l "gF" | grep -i test
🏁 Script executed:
# Check for any existing sigma type validation elsewhere in the codebase
rg "np.isscalar.*sigma" --type py
rg "isinstance.*sigma" --type py
🏁 Script executed:
# Understand how numpy handles float indices in slicing
python3 << 'EOF'
import numpy as np
# Test 1: Does numpy allow float indices in slicing?
try:
arr = np.array([1, 2, 3, 4, 5])
result = arr[2.5:]
print("Float index in slice: works")
except TypeError as e:
print(f"Float index in slice: {e}")
# Test 2: What about float arithmetic in slice construction?
n_pad = 2 * 4.5 # Simulating float sigma
sigma = 4.5
try:
print(f"n_pad (float): {n_pad}, type: {type(n_pad)}")
ndim = 1
select = tuple([slice(n_pad, -n_pad)])
print(f"Slice construction with float: {select}")
arr = np.zeros((10,))
result = arr[select]
print(f"Slicing with float: works, result shape: {result.shape}")
except (TypeError, ValueError) as e:
print(f"Slicing with float: {e}")
# Test 3: Check scipy.ndimage.gaussian_filter sigma behavior
from scipy.ndimage import gaussian_filter
test_data = np.random.rand(10, 10)
try:
result = gaussian_filter(test_data, sigma=4.5)
print(f"gaussian_filter with float sigma: works")
except Exception as e:
print(f"gaussian_filter with float sigma: {e}")
try:
result = gaussian_filter(test_data, sigma=[2.0, 3.0])
print(f"gaussian_filter with sequence sigma: works")
except Exception as e:
print(f"gaussian_filter with sequence sigma: {e}")
EOF
🏁 Script executed:
# Verify what happens when float values are used in slice objects with numpy arrays
python3 << 'EOF'
import numpy as np
# Test float indices in slice with numpy array
arr = np.zeros((10,))
n_pad = 4.5 # float value
try:
select = tuple([slice(n_pad, -n_pad)])
result = arr[select]
print(f"Float in slice works. Result shape: {result.shape}")
print(f"Slice object: {select}")
except TypeError as e:
print(f"Float in slice fails: {e}")
# Test with actual usage from code
ndim = 1
nb_ghosts = np.array([1])
n_pad_float = 4.5
n_pad_int = int(np.ceil(4.5))
select_float = tuple([slice(n_pad_float-nb_ghosts[i], -n_pad_float+nb_ghosts[i]) for i in range(ndim)])
select_int = tuple([slice(n_pad_int-nb_ghosts[i], -n_pad_int+nb_ghosts[i]) for i in range(ndim)])
arr = np.zeros((15,))
try:
result_float = arr[select_float]
result_int = arr[select_int]
print(f"Float slice result: {result_float.shape}")
print(f"Int slice result: {result_int.shape}")
print(f"Are results identical? {np.array_equal(result_float, result_int)}")
except Exception as e:
print(f"Error: {e}")
# Test with sequence sigma
from scipy.ndimage import gaussian_filter
data = np.random.rand(10, 10)
try:
# This should work according to scipy docs
result = gaussian_filter(data, sigma=(2.0, 3.0))
print(f"gaussian_filter with sequence sigma works")
# But what happens if we treat it like the gF code does?
sigma = (2.0, 3.0)
n_pad = 2 * sigma # This will create a tuple
print(f"n_pad with sequence sigma: {n_pad}, type: {type(n_pad)}")
# Try to use it in slicing
ndim = 2
nb_ghosts = np.array([1, 1])
select = tuple([slice(n_pad[i]-nb_ghosts[i], -n_pad[i]+nb_ghosts[i]) for i in range(ndim)])
result = data[select]
print(f"Slicing with sequence-derived n_pad works: {result.shape}")
except Exception as e:
print(f"Error with sequence sigma: {type(e).__name__}: {e}")
EOF
🏁 Script executed:
# Check if there are any tests for the gF function to see expected behavior
find . -name "*.py" -type f -exec grep -l "def test.*gF\|gF.*test" {} \;
🏁 Script executed:
# Search for actual usage of gF function in the codebase
rg "gF\(" --type py -B 2 -A 2
🏁 Script executed:
# Check if gF is ever called with sequence sigma in tests or examples
rg "gF\s*\(" --type py -A 5 | head -50
# Look for test files that might call gF
fd "test.*\.py" --type f | head -20
🏁 Script executed:
# Simulate what happens with sequence sigma in the actual code path
python3 << 'EOF'
import numpy as np
# Simulate the actual code behavior with sequence sigma
sigma_sequence = (2.0, 3.0)
n_pad = 2 * sigma_sequence
print(f"n_pad with sequence sigma: {n_pad}")
print(f"Type of n_pad: {type(n_pad)}")
# Try the slice arithmetic
ndim = 2
nb_ghosts = np.array([1, 1])
try:
select = tuple([slice(n_pad[i]-nb_ghosts[i], -n_pad[i]+nb_ghosts[i]) for i in range(ndim)])
print(f"Slice construction works: {select}")
except (TypeError, IndexError) as e:
print(f"Slice construction fails: {type(e).__name__}: {e}")
# Now test with float scalar sigma
sigma_float = 2.5
n_pad_float = 2 * sigma_float
print(f"\nn_pad with float scalar sigma: {n_pad_float}")
print(f"Type of n_pad: {type(n_pad_float)}")
try:
select = tuple([slice(n_pad_float-nb_ghosts[i], -n_pad_float+nb_ghosts[i]) for i in range(ndim)])
print(f"Slice construction works with float: {select}")
# Test actual slicing
data = np.zeros((10, 10))
result = data[select]
print(f"Actual slicing works with float indices: {result.shape}")
except (TypeError, IndexError) as e:
print(f"Error: {type(e).__name__}: {e}")
EOF
🏁 Script executed:
# Simple test without numpy of the sequence sigma issue
python3 << 'EOF'
# Test 1: What happens when we multiply a tuple by 2?
sigma_sequence = (2.0, 3.0)
n_pad = 2 * sigma_sequence
print(f"n_pad with sequence sigma: {n_pad}")
print(f"Type: {type(n_pad)}")
# Test 2: Can we use tuple elements in arithmetic?
try:
result = n_pad[0] - 1
print(f"Can index tuple n_pad: {result}")
except TypeError as e:
print(f"Error indexing: {e}")
# Test 3: What about scalar float multiplication?
sigma_scalar = 2.5
n_pad_scalar = 2 * sigma_scalar
print(f"\nn_pad with float scalar sigma: {n_pad_scalar}")
print(f"Type: {type(n_pad_scalar)}")
# Test 4: Using float in range operations (simulating slice indices)
try:
n_pad_float = 4.5
idx = int(n_pad_float)
print(f"Can convert float to int: {idx}")
# But what about direct float use?
print(f"Float value {n_pad_float} - int {1} = {n_pad_float - 1}, type: {type(n_pad_float - 1)}")
except Exception as e:
print(f"Error: {e}")
EOF
🏁 Script executed:
# Check the actual scipy.ndimage.gaussian_filter source or behavior with different sigma types
# by looking at how it's used in other parts of the codebase
rg "gaussian_filter" --type py -B 2 -A 2 | grep -E "(sigma|gaussian_filter)" | head -30
🏁 Script executed:
# Look at how sigma is validated in the gFilt function mentioned earlier
rg "def gFilt" --type py -A 20
Add validation for scalar sigma and fix padding calculation.
The `sigma` parameter accepts scalar or sequence values (per `scipy.ndimage.gaussian_filter`), but the function assumes a scalar. Passing a sequence causes `2 * sigma` to perform tuple repetition rather than arithmetic, breaking slice indexing. Also, the validation `sigma == 1` misses fractional values below 2.
🐛 Proposed fix
def gF(hier, **kwargs):
- sigma = kwargs.get("sigma", 4)
- if sigma == 1:
- raise ValueError("sigma value has to be at least 2")
+ sigma = kwargs.get("sigma", 4)
+ if not np.isscalar(sigma):
+ raise ValueError("sigma must be a scalar")
+ if sigma < 2:
+ raise ValueError("sigma value has to be at least 2")
h_ = hier.__deepcopy__(memo={})
ndim = hier.ndim
- n_pad = 2*sigma
+ n_pad = int(np.ceil(2 * sigma))

🧰 Tools
🪛 Ruff (0.14.13)
93-93: Avoid specifying long messages outside the exception class
(TRY003)
127-127: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/core/ufuncs.py` around lines 90 - 134, The gF function
currently assumes sigma is a scalar and computes n_pad = 2*sigma and slices with
that single value, which breaks when sigma is a sequence and also only checks
sigma == 1; modify gF to accept either a scalar or an iterable: if scalar,
promote it to a tuple of length ndim; if iterable, validate its length equals
hier.ndim and that every sigma_i >= 2 (reject values < 2); compute per-dimension
n_pad_i = int(2 * sigma_i) (ensuring integers) and use n_pad_i when building the
extended coordinate arrays (s_) and when building the select tuple for slicing
(replace single n_pad with per-dimension n_pad_i in the slice computation); keep
calls to scipy.ndimage.gaussian_filter with the original per-dimension sigma
sequence so filtering remains correct and assign pdata.dataset from gf_ using
the per-dimension select slices.
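The sequence-sigma failure mode described above is easy to reproduce outside PHARE (values are illustrative):

```python
import numpy as np

sigma_seq = (2.0, 3.0)          # one sigma per axis, as scipy allows
n_pad = 2 * sigma_seq           # tuple repetition, not elementwise arithmetic
print(n_pad)                    # (2.0, 3.0, 2.0, 3.0)

# Elementwise, integer per-axis pad widths usable in slice():
n_pad_ok = (2 * np.asarray(sigma_seq)).astype(int)
print(n_pad_ok)                 # [4 6]
```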
```python
def peakIds(hier, **kwargs):
    from scipy.signal import find_peaks

    times = list(hier.times())
    if len(times) == 1:
        time = times[0]
    else:
        raise ValueError('multiple time is not yet implemented')

    pks_ = np.array([])
    hgs_ = np.array([])

    names_ = kwargs.pop("names", None)
    if names_ is None:
        names_ = list(hier.levels(time)[0].patches[0].patch_datas.keys())

    ph_ = kwargs.get('peak_heights', None)
    if ph_ is None:
        raise ValueError("the kwarg 'peak_heights' is mandatory for now...")

    for lvl in hier.levels(time).values():
        for patch in lvl.patches:
            for name in names_:
                pdata = patch.patch_datas[name]
                ds = np.asarray(pdata.dataset)
                pks = find_peaks(ds, **kwargs)
                for pk, hg in zip(pks[0], pks[1]['peak_heights']):
                    pks_ = np.append(pks_, np.add(np.multiply(pk, patch.dl), patch.origin))
                    hgs_ = np.append(hgs_, hg)

    return pks_, hgs_
```
🧩 Analysis chain
🌐 Web query:
scipy.signal.find_peaks parameters height peak_heights return value
💡 Result:
scipy.signal.find_peaks(x, height=...) returns:
- `peaks`: a 1D `ndarray` of indices of samples in `x` that are peaks and satisfy all conditions. [1]
- `properties`: a `dict` of intermediate/returned peak properties. If you supplied `height`, it includes `properties["peak_heights"]`: the height (amplitude) of each returned peak, i.e., the value of `x` at those peak indices. [1]

About the `height` parameter:
- `height` can be a single number (minimum height), an array (per-sample constraint), or a 2-element sequence `(min_height, max_height)` (either end can be omitted). [1]
- If you want `peak_heights` computed/returned without filtering out peaks by height, pass the open interval `height=(None, None)`. [1]
Sources: [1]
🏁 Script executed:
cat -n pyphare/pyphare/core/ufuncs.py | sed -n '136,166p'
Pass the `height` parameter to `find_peaks()`, not `peak_heights`. The current code validates for `peak_heights` (line 152) and passes it via `**kwargs` to `find_peaks()` (line 161), which will raise `TypeError` since `peak_heights` is output-only. Filter kwargs to exclude `peak_heights` and validate for `height` instead.
🐛 Proposed fix
names_ = kwargs.pop("names", None)
if names_ is None:
names_ = list(hier.levels(time)[0].patches[0].patch_datas.keys())
- ph_ = kwargs.get('peak_heights', None)
- if ph_ is None:
- raise ValueError("the kwarg 'peak_heights' is mandatory for now...")
+ find_peaks_kwargs = {k: v for k, v in kwargs.items() if k != "peak_heights"}
+ if "height" not in find_peaks_kwargs:
+ raise ValueError("the kwarg 'height' is mandatory for now...")
for lvl in hier.levels(time).values():
for patch in lvl.patches:
for name in names_:
pdata = patch.patch_datas[name]
ds = np.asarray(pdata.dataset)
- pks = find_peaks(ds, **kwargs)
+ pks = find_peaks(ds, **find_peaks_kwargs)
for pk, hg in zip(pks[0], pks[1]['peak_heights']):
pks_ = np.append(pks_, np.add(np.multiply(pk, patch.dl), patch.origin))
hgs_ = np.append(hgs_, hg)

🧰 Tools
🪛 Ruff (0.14.13)
143-143: Avoid specifying long messages outside the exception class
(TRY003)
154-154: Avoid specifying long messages outside the exception class
(TRY003)
162-162: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/core/ufuncs.py` around lines 136 - 166, In peakIds, require
and validate the input 'height' parameter (not the output-only 'peak_heights'),
remove any 'peak_heights' key from kwargs before calling
scipy.signal.find_peaks, and pass the filtered kwargs to find_peaks;
specifically, update the validation that currently checks
kwargs.get('peak_heights') to check for 'height' instead, pop/delete
'peak_heights' from kwargs if present, then call find_peaks(ds,
**filtered_kwargs) and continue using pks[1]['peak_heights'] from the result for
peak magnitudes.
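The `height`-in / `peak_heights`-out asymmetry the comment describes can be checked in isolation with a minimal sketch (the signal values are made up):

```python
import numpy as np
from scipy.signal import find_peaks

x = np.array([0.0, 1.0, 0.0, 3.0, 0.0, 2.0, 0.0])
peaks, props = find_peaks(x, height=1.5)  # 'height' is the input filter

print(peaks)                  # [3 5] -- the peak at index 1 is below 1.5
print(props["peak_heights"])  # [3. 2.] -- 'peak_heights' exists only in the output dict
```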
```python
# time0 = list(hier.times())[0]
# level0 = 0
# p0 = 0
# pd0 = hier.levels(time0)[level0].patches[p0].patch_datas
# key0 = list(pd0.keys())[0]
# nb_ghosts = np.max(pd0[key0].ghosts_nbr)
```
nicolasaunai left a comment
@rochSmets question : I had in mind you told me this is still WIP as it may not work for hierarchies with multiple patches? Can you confirm ?
I would prefer merging code that works :)
Also we may want to split this in two and have the interpolator thing as a built-in internal thing to every hierarchy with call operator which allow users to slice hierarchy any way they'd want. TBD.
```python
def _compute_sqrt(patch_datas, **kwargs):
    ref_name = next(iter(patch_datas.keys()))
    # ref_name = next(iter(patch_datas.keys())) TODO this is always "value"
```
is there something to do here ?
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@pyphare/pyphare/core/ufuncs.py`:
- Around line 112-113: The boolean comparison on n_pad vs nb_ghosts (where
nb_ghosts is pdata.ghosts_nbr and may be an array) is inverted and invalid for
arrays; change the guard to explicitly check elementwise and only raise when
padding is strictly smaller than the ghost region (e.g. use numpy: if
np.any(n_pad < pdata.ghosts_nbr): raise ValueError('sigma value is too small')
), referencing the variables n_pad and pdata.ghosts_nbr so the array comparison
uses np.any/np.all as appropriate for the intended strictness.
- Around line 90-96: The sigma validation in gF is too lax and n_pad can become
a float; change the check from "if sigma == 1" to "if sigma < 2" to reject
values in (1,2), and ensure n_pad is an integer by computing it as n_pad = int(4
* sigma + 1) (or validate/coerce sigma to an integer >= 2 before use); update
any uses of sigma/n_pad in gF accordingly so slice() calls and numpy indexing
receive integer indices.
In `@pyphare/pyphare/pharesee/hierarchy/patch.py`:
- Around line 112-114: The Patch.__array_function__ currently prints debug info
and passes Patch instances straight to NumPy functions, causing
recursion/errors; remove the debug print and instead unwrap any Patch arguments
and kwargs to their underlying numpy array (check isinstance(x, Patch) and
extract the backing attribute e.g. x.array or x.data), call func on the
unwrapped args/kwargs, then if the result is a numpy.ndarray (or array-like)
wrap it back into a Patch (e.g. Patch(result) or Patch.from_array(result))
before returning; ensure kwargs are handled the same way to avoid passing Patch
objects into NumPy.
- Around line 72-110: Fix __array_ufunc__ in Patch: remove debug prints and
correct the log string, then build per-key grouped sequences (not a flat list)
from inputs by validating all inputs are Patch and have identical patch_datas
keys; for each key collect the corresponding values (FieldData or array-like)
across inputs, apply the ufunc to the tuple of those values to produce a new
value for that key, verify lengths/compatibility as needed, and construct a dict
mapping key->result (not a list) before returning Patch(result_dict,
patch_id=self.id, layout=self.layout, attrs=self.attrs); reference symbols:
__array_ufunc__, Patch, patch_datas, FieldData, Patch.__init__, id, layout,
attrs.
In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 268-271: ParticleData.__str__ assumes self.dataset has a shape
attribute and index 0, which can raise AttributeError/IndexError; update
ParticleData.__str__ to safely derive the particle count by first checking for
hasattr(self.dataset, "shape") and a valid shape[0], else fall back to using
len(self.dataset) if iterable, and finally use a safe sentinel like "unknown" if
neither works; reference the ParticleData.__str__, self.dataset and
self.pop_name symbols when locating and changing this method.
- Around line 198-218: Remove the debug print statements and fix the copy-paste
message in FieldData.__array_ufunc__; don't log with prints (use proper logging
if needed) and change the incorrect "__array_function__" text to
"__array_ufunc__" if you keep a message. Make both methods consistent by
returning non-ndarray results unchanged (i.e., if result is not an np.ndarray
return result) instead of implicitly returning None in __array_ufunc__; keep the
existing wrapping behavior that returns a new FieldData(self.layout,
"<func>@"+self.field_name, result, centering=self.centerings) when result is an
ndarray. Ensure you reference the same attributes (self.layout, self.field_name,
self.centerings) and functions (__array_ufunc__, __array_function__) when
editing.
🧹 Nitpick comments (2)
pyphare/pyphare/pharesee/hierarchy/patchdata.py (1)
194-196: Inconsistent indentation in `__array__` body. The method body is indented by 3 levels (12 spaces) instead of the expected 2 levels (8 spaces). While Python won't error on this, it's inconsistent with the rest of the class.

Fix indentation

```diff
     def __array__(self, dtype=None):
-            # numpy array protocol
-            return np.asarray(self.dataset, dtype=dtype)
+        # numpy array protocol
+        return np.asarray(self.dataset, dtype=dtype)
```

pyphare/pyphare/core/ufuncs.py (1)
62-75: Use `warnings.warn` instead of `print`. Line 75 uses `print()` to warn about a potentially dirty filter. This is easy to miss in production. Use `warnings.warn(...)` or `logging.warning(...)` for better observability.
```python
def gF(hier, **kwargs):
    sigma = kwargs.get("sigma", 4)
    if sigma == 1:
        raise ValueError("sigma value has to be at least 2")
    h_ = hier.__deepcopy__(memo={})
    ndim = hier.ndim
    n_pad = 4*sigma+1
```
Sigma validation is incomplete and n_pad may be a float.
- `sigma == 1` only rejects exactly `1`; any value in `(1, 2)` passes. The check should be `sigma < 2`.
- `n_pad = 4*sigma+1`: if `sigma` is a float (e.g. `4.5`), `n_pad` will be a float. This propagates to `np.arange` (fine) but also to the `slice()` calls at line 132, where float indices will raise a `TypeError` in NumPy.
Proposed fix
def gF(hier, **kwargs):
sigma = kwargs.get("sigma", 4)
- if sigma == 1:
- raise ValueError("sigma value has to be at least 2")
+ if not np.isscalar(sigma):
+ raise TypeError("sigma must be a scalar value")
+ if sigma < 2:
+ raise ValueError("sigma value has to be at least 2")
h_ = hier.__deepcopy__(memo={})
ndim = hier.ndim
- n_pad = 4*sigma+1
+ n_pad = int(4*sigma+1)

🧰 Tools
🪛 Ruff (0.15.0)
[warning] 93-93: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/core/ufuncs.py` around lines 90 - 96, The sigma validation in
gF is too lax and n_pad can become a float; change the check from "if sigma ==
1" to "if sigma < 2" to reject values in (1,2), and ensure n_pad is an integer
by computing it as n_pad = int(4 * sigma + 1) (or validate/coerce sigma to an
integer >= 2 before use); update any uses of sigma/n_pad in gF accordingly so
slice() calls and numpy indexing receive integer indices.
```python
if not n_pad > nb_ghosts:
    raise ValueError('sigma value is too small')
```
Inverted guard condition: not n_pad > nb_ghosts allows n_pad == nb_ghosts.
The check not n_pad > nb_ghosts is equivalent to n_pad <= nb_ghosts, which raises when n_pad equals nb_ghosts. Given the comment at lines 97-100 about needing padding beyond the ghost region, the intent seems to be n_pad <= nb_ghosts should be fine and only n_pad < nb_ghosts is problematic. But more importantly, nb_ghosts here is pdata.ghosts_nbr which may be an array — this comparison will produce an array, and using it in a boolean context will fail for multi-dimensional cases.
Consider clarifying the intent and using np.all() for the array comparison:
- if not n_pad > nb_ghosts:
- raise ValueError('sigma value is too small')
+ if np.any(n_pad <= nb_ghosts):
+ raise ValueError('sigma value is too small: n_pad must exceed nb_ghosts')🧰 Tools
🪛 Ruff (0.15.0)
[warning] 113-113: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/core/ufuncs.py` around lines 112 - 113, The boolean
comparison on n_pad vs nb_ghosts (where nb_ghosts is pdata.ghosts_nbr and may be
an array) is inverted and invalid for arrays; change the guard to explicitly
check elementwise and only raise when padding is strictly smaller than the ghost
region (e.g. use numpy: if np.any(n_pad < pdata.ghosts_nbr): raise
ValueError('sigma value is too small') ), referencing the variables n_pad and
pdata.ghosts_nbr so the array comparison uses np.any/np.all as appropriate for
the intended strictness.
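Both fixes from this comment and the previous one can be sketched together as a standalone check (`check_padding` is a hypothetical helper; the real code reads the ghost count from `pdata.ghosts_nbr`):

```python
import numpy as np

def check_padding(sigma, nb_ghosts):
    """Hypothetical guard mirroring the gF checks: coerce the pad width to an
    integer and reduce the elementwise comparison before raising."""
    n_pad = int(4 * sigma + 1)  # slice() and numpy indexing need integers
    # nb_ghosts may be a per-dimension array (like pdata.ghosts_nbr),
    # so collapse the comparison with np.any instead of a bare bool()
    if np.any(n_pad <= np.asarray(nb_ghosts)):
        raise ValueError("sigma value is too small")
    return n_pad

n_pad = check_padding(2, np.array([5, 5]))  # 9 > 5 in every dimension: OK
```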
```python
def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
    print(f"__array_function__ of Patch called for {ufunc.__name__}")
    if method != "__call__":
        return NotImplemented

    # Collect per-argument sequences
    seqs = []
    for x in inputs:
        print("zobi : ", type(x))
        if isinstance(x, Patch):
            print("zob", x.patch_datas)
            for k, v in x.patch_datas.items():
                print("toto", k, v, type(v))
                seqs.append(v)
                if isinstance(v, FieldData):
                    print("is a fieldData")
                    #seqs.append(x.patch_datas)
        else:
            # TODO seqs.append(x) ?
            raise TypeError("this arg should be a Patch")

    # print(type(seqs[0]))
    # print(seqs[0].dataset.shape)
    # # Outer length must match
    # n = len(seqs[0]) # length of the first patch
    # if not all(len(s) == n for s in seqs): # each of the other Patches have to be homogeneous
    #     raise ValueError("Patch length mismatch")

    print(seqs)
    for elem in zip(*seqs):
        print(elem)

    # Pairwise application
    result = [
        ufunc(*elems, **kwargs)
        for elems in zip(*seqs)
    ]

    return Patch(result, patch_id=self.id, layout=self.layout, attrs=self.attrs)
```
`__array_ufunc__` has broken logic and will crash at runtime.
Several issues compound here:

- Flat list instead of per-input grouping: the loop on lines 79-91 appends every value from every input Patch's `patch_datas` dict into a single flat `seqs` list. For two Patch inputs each having keys `{"Bx": …, "By": …}`, `seqs` becomes `[fd_Bx_1, fd_By_1, fd_Bx_2, fd_By_2]`. Then `zip(*seqs)` treats each `FieldData` as an iterable, which is not the intent. The correct structure should group corresponding keys across inputs.
- `Patch` constructor expects a `dict`, not a `list`: line 110 passes `result` (a list) to `Patch(...)`, but `Patch.__init__` calls `patch_datas.values()` on line 25, which will raise `AttributeError` on a list.
- Debug prints with informal names (`zobi`, `zob`, `toto`) and commented-out blocks must be removed.
- Copy-paste error in the log string (line 73): it says `__array_function__` but this is `__array_ufunc__`.
Sketch of a corrected implementation:

```diff
 def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
-    print(f"__array_function__ of Patch called for {ufunc.__name__}")
     if method != "__call__":
         return NotImplemented
-    # Collect per-argument sequences
-    seqs = []
-    for x in inputs:
-        print("zobi : ", type(x))
-        if isinstance(x, Patch):
-            print("zob", x.patch_datas)
-            for k, v in x.patch_datas.items():
-                print("toto", k, v, type(v))
-                seqs.append(v)
-                if isinstance(v, FieldData):
-                    print("is a fieldData")
-                    #seqs.append(x.patch_datas)
-        else:
-            # TODO seqs.append(x) ?
-            raise TypeError("this arg should be a Patch")
-
-    # print(type(seqs[0]))
-    # print(seqs[0].dataset.shape)
-    # # Outer length must match
-    # n = len(seqs[0]) # length of the first patch
-    # if not all(len(s) == n for s in seqs): # each of the other Patches have to be homogeneous
-    #     raise ValueError("Patch length mismatch")
-
-    print(seqs)
-    for elem in zip(*seqs):
-        print(elem)
-
-    # Pairwise application
-    result = [
-        ufunc(*elems, **kwargs)
-        for elems in zip(*seqs)
-    ]
-
-    return Patch(result, patch_id=self.id, layout=self.layout, attrs=self.attrs)
+    keys = list(self.patch_datas.keys())
+    result = {}
+    for key in keys:
+        elems = []
+        for x in inputs:
+            if isinstance(x, Patch):
+                elems.append(x.patch_datas[key])
+            else:
+                elems.append(x)
+        result[key] = ufunc(*elems, **kwargs)
+    return Patch(result, patch_id=self.id, layout=self.layout, attrs=self.attrs)
```

🧰 Tools
🪛 Ruff (0.15.0)
[warning] 91-91: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 101-101: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
[warning] 107-107: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/hierarchy/patch.py` around lines 72 - 110, Fix
__array_ufunc__ in Patch: remove debug prints and correct the log string, then
build per-key grouped sequences (not a flat list) from inputs by validating all
inputs are Patch and have identical patch_datas keys; for each key collect the
corresponding values (FieldData or array-like) across inputs, apply the ufunc to
the tuple of those values to produce a new value for that key, verify
lengths/compatibility as needed, and construct a dict mapping key->result (not a
list) before returning Patch(result_dict, patch_id=self.id, layout=self.layout,
attrs=self.attrs); reference symbols: __array_ufunc__, Patch, patch_datas,
FieldData, Patch.__init__, id, layout, attrs.
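The per-key grouping described above can be demonstrated with a toy stand-in (`MiniPatch` below is illustrative only, not PHARE's `Patch`; it holds a dict of plain numpy arrays instead of `FieldData`):

```python
import numpy as np

class MiniPatch:
    """Toy stand-in for Patch: a dict of named numpy arrays."""
    def __init__(self, patch_datas):
        self.patch_datas = patch_datas

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != "__call__":
            return NotImplemented
        result = {}
        for key in self.patch_datas:
            # group the operand for this key from each input;
            # plain scalars/arrays are passed through and broadcast
            elems = [x.patch_datas[key] if isinstance(x, MiniPatch) else x
                     for x in inputs]
            result[key] = ufunc(*elems, **kwargs)
        return MiniPatch(result)

a = MiniPatch({"Bx": np.array([1.0, 2.0]), "By": np.array([3.0, 4.0])})
b = MiniPatch({"Bx": np.array([10.0, 20.0]), "By": np.array([30.0, 40.0])})
c = np.add(a, b)  # NumPy dispatches here to MiniPatch.__array_ufunc__
```

Note that the result is a dict keyed like the inputs, which is what `Patch.__init__` expects.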
```python
def __array_function__(self, func, types, args, kwargs):
    print(f"__array_function__ of Patch {func.__name__} called for {[getattr(a, 'name', a) for a in args]}")
    return func(*args, **kwargs)
```
`__array_function__` is a no-op passthrough that doesn't unwrap `Patch` objects.
This delegates directly to `func(*args, **kwargs)`, which means `func` (a NumPy function) receives `Patch` objects it doesn't understand. This will either recurse infinitely or raise. Also, the debug print should be removed.
🤖 Prompt for AI Agents
In `@pyphare/pyphare/pharesee/hierarchy/patch.py` around lines 112 - 114, The
Patch.__array_function__ currently prints debug info and passes Patch instances
straight to NumPy functions, causing recursion/errors; remove the debug print
and instead unwrap any Patch arguments and kwargs to their underlying numpy
array (check isinstance(x, Patch) and extract the backing attribute e.g. x.array
or x.data), call func on the unwrapped args/kwargs, then if the result is a
numpy.ndarray (or array-like) wrap it back into a Patch (e.g. Patch(result) or
Patch.from_array(result)) before returning; ensure kwargs are handled the same
way to avoid passing Patch objects into NumPy.
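The unwrap/re-wrap pattern the prompt describes can be sketched with a minimal hypothetical wrapper (`Wrapped` is not PHARE code; nested containers are deliberately not handled in this sketch):

```python
import numpy as np

class Wrapped:
    """Hypothetical array wrapper showing the unwrap/re-wrap pattern."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # unwrap Wrapped positional args so NumPy sees plain arrays
        unwrapped = tuple(a.data if isinstance(a, Wrapped) else a for a in args)
        result = func(*unwrapped, **kwargs)
        # re-wrap array results; let scalars and other types pass through
        return Wrapped(result) if isinstance(result, np.ndarray) else result

w = Wrapped([3.0, 1.0, 2.0])
s = np.sort(w)   # array result, comes back as a Wrapped
m = np.mean(w)   # scalar result, returned unwrapped
```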
```python
def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
    print(f"__array_function__ of FieldData called for {ufunc.__name__}")
    if method != "__call__":
        raise NotImplementedError

    unwrapped = [i.dataset if isinstance(i, FieldData) else i for i in inputs]
    result = getattr(ufunc, method)(*unwrapped, **kwargs)

    if isinstance(result, np.ndarray):
        return FieldData(self.layout, ufunc.__name__+"@"+self.field_name, result, centering=self.centerings)

def __array_function__(self, func, types, args, kwargs):
    print(f"__array_function__ of FieldData {func.__name__} called with {[getattr(a, 'name', a) for a in args]}")

    unwrapped = [a.dataset if isinstance(a, FieldData) else a for a in args]
    result = func(*unwrapped, **kwargs)

    if isinstance(result, np.ndarray):
        return FieldData(self.layout, func.__name__+"@"+self.field_name, result, centering=self.centerings)
    else:
        return result
```
Remove debug print statements and fix copy-paste error in log message.

- Line 199 logs `"__array_function__"` but this is `__array_ufunc__`: a copy-paste bug.
- Both `print` calls (lines 199, 210) should be removed (or replaced with proper logging) before merge.
- `__array_ufunc__` returns `None` implicitly when the result is not an `ndarray` (e.g., a scalar from a reduction ufunc). This will silently swallow valid results. Similarly, `__array_function__` only wraps `ndarray` results but already returns non-`ndarray` results on line 218; the two methods should be consistent.
Proposed fix:

```diff
 def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
-    print(f"__array_function__ of FieldData called for {ufunc.__name__}")
     if method != "__call__":
         raise NotImplementedError
     unwrapped = [i.dataset if isinstance(i, FieldData) else i for i in inputs]
     result = getattr(ufunc, method)(*unwrapped, **kwargs)
     if isinstance(result, np.ndarray):
         return FieldData(self.layout, ufunc.__name__+"@"+self.field_name, result, centering=self.centerings)
+    return result

 def __array_function__(self, func, types, args, kwargs):
-    print(f"__array_function__ of FieldData {func.__name__} called with {[getattr(a, 'name', a) for a in args]}")
-
     unwrapped = [a.dataset if isinstance(a, FieldData) else a for a in args]
     result = func(*unwrapped, **kwargs)
     if isinstance(result, np.ndarray):
         return FieldData(self.layout, func.__name__+"@"+self.field_name, result, centering=self.centerings)
     else:
         return result
```
In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py` around lines 198 - 218,
Remove the debug print statements and fix the copy-paste message in
FieldData.__array_ufunc__; don't log with prints (use proper logging if needed)
and change the incorrect "__array_function__" text to "__array_ufunc__" if you
keep a message. Make both methods consistent by returning non-ndarray results
unchanged (i.e., if result is not an np.ndarray return result) instead of
implicitly returning None in __array_ufunc__; keep the existing wrapping
behavior that returns a new FieldData(self.layout, "<func>@"+self.field_name,
result, centering=self.centerings) when result is an ndarray. Ensure you
reference the same attributes (self.layout, self.field_name, self.centerings)
and functions (__array_ufunc__, __array_function__) when editing.
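The scalar-swallowing problem can be checked with a minimal stand-in (`MiniField` below is hypothetical, not PHARE's `FieldData`): ufuncs applied to 0-d data return NumPy scalars rather than `ndarray`s, so without an explicit `return result` the wrapper yields `None`.

```python
import numpy as np

class MiniField:
    """Hypothetical stand-in for FieldData holding a bare numpy dataset."""
    def __init__(self, dataset):
        self.dataset = dataset

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != "__call__":
            return NotImplemented
        unwrapped = [i.dataset if isinstance(i, MiniField) else i for i in inputs]
        result = ufunc(*unwrapped, **kwargs)
        if isinstance(result, np.ndarray):
            return MiniField(result)
        return result  # NumPy scalars pass through instead of becoming None

f = MiniField(np.array([1.0, 2.0]))
g = np.add(f, 1.0)                          # ndarray result -> re-wrapped
s = np.add(MiniField(np.array(2.0)), 3.0)   # 0-d inputs -> NumPy scalar
```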
```python
def __str__(self):
    return "ParticleData: (pop_name={}, #_of_part={})".format(
        self.pop_name, self.dataset.shape[0]  # TODO need to be tested
    )
```
`ParticleData.__str__` has an unverified assumption about `dataset.shape`.
The TODO on line 270 acknowledges this. If `self.dataset` doesn't have a `shape` attribute (or is not a 2D array), this will raise an `AttributeError` or `IndexError` at runtime. Consider a safer fallback.
Safer alternative:

```diff
 def __str__(self):
-    return "ParticleData: (pop_name={}, #_of_part={})".format(
-        self.pop_name, self.dataset.shape[0]  # TODO need to be tested
-    )
+    count = self.dataset.shape[0] if hasattr(self.dataset, "shape") else "?"
+    return f"ParticleData: (pop_name={self.pop_name}, #_of_part={count})"
```
In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py` around lines 268 - 271,
ParticleData.__str__ assumes self.dataset has a shape attribute and index 0,
which can raise AttributeError/IndexError; update ParticleData.__str__ to safely
derive the particle count by first checking for hasattr(self.dataset, "shape")
and a valid shape[0], else fall back to using len(self.dataset) if iterable, and
finally use a safe sentinel like "unknown" if neither works; reference the
ParticleData.__str__, self.dataset and self.pop_name symbols when locating and
changing this method.
```python
        interp_[qty] = build_interpolator(data, coords, interp, box, dl, qty, nbrGhosts)
    return interp_

def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
```

Check notice (Code scanning / CodeQL): Explicit returns mixed with implicit (fall through) returns.
```python
    data_files=self.data_files,
    selection_box=self.selection_box)

from .scalarfield import ScalarField
```

Check notice (Code scanning / CodeQL): Cyclic import.
```python
    selection_box=self.selection_box)

from .scalarfield import ScalarField
from .vectorfield import VectorField
```

Check notice (Code scanning / CodeQL): Cyclic import.
|
Is there a plot we can add to
Gaussian filter and find peaks are to be imported with `from pyphare.core.ufuncs import gF, peakIds`.

For example, for a `vectorField` like the ion velocity with `V = run.GetVi(time)`, it can be smoothed with a Gaussian filter with `sigma=6` as `v = gF(V, sigma=6)`. Hence `v` is a different `vectorField` with exactly the same structure, except that the contained datasets are smoothed.

One can also look at the peaks with `pks, hgs = peakIds(v, names=['x',], height=0.015)`, where we need to provide the name of the dataset (`value`, for example, for a `scalarField`). In this example, the kwarg `height` is forwarded to `find_peaks` from `scipy.signal`. The same goes for any other kwargs that `find_peaks` accepts. The returned quantities are the same as for `find_peaks`: a tuple with the peak locations and a tuple (of the same size) with their heights.
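Under the hood these wrap scipy primitives; a minimal sketch on synthetic 1D data (the profile, `sigma`, and `height` values here are made up for illustration, not taken from a PHARE run):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import find_peaks

# synthetic noisy profile with two bumps, at x = 3 and x = 7
x = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(0)
noisy = (np.exp(-((x - 3.0) ** 2)) + 0.5 * np.exp(-((x - 7.0) ** 2))
         + 0.05 * rng.standard_normal(x.size))

smooth = gaussian_filter(noisy, sigma=6)     # what gF applies to each dataset
ids, props = find_peaks(smooth, height=0.2)  # kwargs go straight to find_peaks
# ids: indices of the detected peaks; props["peak_heights"]: their heights
```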