📝 Walkthrough

Adds comprehensive 3D particle-splitting support and 3D tests, refactors pattern/dispatcher types, hardens the error API and MPI error handling, changes restart handling and timing instrumentation in the simulator, updates many APIs to const-ref or new helpers, and expands CMake/test gating for multi-dimension suites.

Changes
Sequence Diagram(s) (omitted)

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks | ❌ 3 failed checks (1 warning, 2 inconclusive)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Force-pushed: 29a16f4 to 57849e8, then 12cffd8 to 17fdb48.
Actionable comments posted: 4
♻️ Duplicate comments (7)
tests/core/data/gridlayout/deriv.py (1)
77-78: ⚠️ Potential issue | 🔴 Critical

`scipy.misc.derivative` is removed in SciPy ≥ 1.15 — moving the import inside `main()` does not fix the breakage.

Moving the `import scipy.misc` inside `main()` only defers the `ImportError` to runtime; it does not resolve the fact that `scipy.misc` and all its contents were entirely removed in SciPy 1.15.0. Since `primitive(coord) = math.cos(coord)`, its derivative is analytically `-math.sin(coord)`, so no numerical approximation is needed.

Fix: replace with the analytic derivative and remove the dead import

```diff
 def main(path="./"):
-    import scipy.misc  # see doc/conventions.md #2.2.1
     if len(sys.argv) == 2:
-        derivedValue = scipy.misc.derivative(
-            primitive, coord, dx, order=3
-        )
+        derivedValue = -math.sin(coord)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/core/data/gridlayout/deriv.py` around lines 77 - 78: the test imports scipy.misc and uses scipy.misc.derivative, which was removed in SciPy ≥ 1.15. Delete the scipy.misc import and replace the numerical derivative call with the analytic derivative of primitive(coord) = math.cos(coord), i.e. return -math.sin(coord). Update the function or call site (e.g., main() and any helper named derivative or primitive) to compute -math.sin(coord) instead of calling scipy.misc.derivative, and remove the unused import line.

tests/simulator/advance/CMakeLists.txt (1)
13-26: Same `PHARE_PYTEST_SIMULATORS` default-value concern as in tests/simulator/initialize/CMakeLists.txt.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/simulator/advance/CMakeLists.txt` around lines 13 - 26: PHARE_PYTEST_SIMULATORS is used without a guaranteed default here (affecting the conditional blocks that call phare_mpi_python3_exec with PHARE_MPI_PROCS). Mirror the fix from tests/simulator/initialize by either explicitly setting a sane default for PHARE_PYTEST_SIMULATORS at the top of this CMakeLists (e.g. via set(PHARE_PYTEST_SIMULATORS <default> CACHE STRING ...)) or guarding usage with if(DEFINED PHARE_PYTEST_SIMULATORS) or if(NOT PHARE_PYTEST_SIMULATORS) checks; ensure the conditional expressions that wrap the calls to phare_mpi_python3_exec reference the now-defined/defaulted PHARE_PYTEST_SIMULATORS to avoid unintended behavior when the variable is unset.

res/cmake/dep/samrai.cmake (1)
20-28: ⚠️ Potential issue | 🟠 Major

`SAMRAI_DEBUG` currently flips debug-check behavior.

When `SAMRAI_DEBUG` is ON, the three `DEBUG_*` cache vars are set to `Off`, and they are set to `On` in the else branch. This inverts the flag's intent.

Proposed fix

```diff
 if (SAMRAI_DEBUG)
-  set (DEBUG_INITIALIZE_UNDEFINED Off CACHE BOOL "YES!" FORCE)
-  set (DEBUG_CHECK_ASSERTIONS Off CACHE BOOL "YES!" FORCE)
-  set (DEBUG_CHECK_DIM_ASSERTIONS Off CACHE BOOL "YES!" FORCE)
+  set (DEBUG_INITIALIZE_UNDEFINED On CACHE BOOL "YES!" FORCE)
+  set (DEBUG_CHECK_ASSERTIONS On CACHE BOOL "YES!" FORCE)
+  set (DEBUG_CHECK_DIM_ASSERTIONS On CACHE BOOL "YES!" FORCE)
 else()
-  set (DEBUG_INITIALIZE_UNDEFINED On CACHE BOOL "NO!" FORCE)
-  set (DEBUG_CHECK_ASSERTIONS On CACHE BOOL "NO!" FORCE)
-  set (DEBUG_CHECK_DIM_ASSERTIONS On CACHE BOOL "NO!" FORCE)
+  set (DEBUG_INITIALIZE_UNDEFINED Off CACHE BOOL "NO!" FORCE)
+  set (DEBUG_CHECK_ASSERTIONS Off CACHE BOOL "NO!" FORCE)
+  set (DEBUG_CHECK_DIM_ASSERTIONS Off CACHE BOOL "NO!" FORCE)
 endif(SAMRAI_DEBUG)
```

Verification script:

```bash
#!/bin/bash
set -euo pipefail
nl -ba res/cmake/dep/samrai.cmake | sed -n '18,30p'
```

Expected result: current branch values show the inversion directly.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@res/cmake/dep/samrai.cmake` around lines 20 - 28: the SAMRAI_DEBUG conditional inverts the intended behavior: when SAMRAI_DEBUG is true it sets DEBUG_INITIALIZE_UNDEFINED, DEBUG_CHECK_ASSERTIONS, and DEBUG_CHECK_DIM_ASSERTIONS to Off, and sets them to On in the else branch. Change the branches so that when SAMRAI_DEBUG is true those three CACHE BOOL variables are set to On (with the CACHE/FORCE arguments as currently used) and to Off in the else branch, keeping the variable names and the existing CACHE/BOOL/FORCE arguments unchanged.

pyphare/pyphare/pharein/maxwellian_fluid_model.py (1)
353-360: ⚠️ Potential issue | 🔴 Critical

Bug: `dual_right` and `primal_right` still hardcode `domain[0]` instead of `domain[idir]`.

The `sim.dl[idir]` fix was applied to `dual_left`/`primal_left`, but the right-side offsets still use `domain[0]`. For Y (idir=1) and Z (idir=2) directions, the domain extent in the corresponding direction must be used. The same bug persists in `validate2d` (lines 270, 274).

Proposed fix for validate3d (apply same pattern to validate2d)

```diff
         dual_left = (np.arange(-nbrDualGhosts, nbrDualGhosts) + 0.5) * sim.dl[idir]
-        dual_right = dual_left + domain[0]
+        dual_right = dual_left + domain[idir]
         primal_left = np.arange(-nbrPrimalGhosts, nbrPrimalGhosts) * sim.dl[idir]
-        primal_right = primal_left + domain[0]
+        primal_right = primal_left + domain[idir]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py` around lines 353 - 360: dual_right and primal_right incorrectly use domain[0] instead of the dimension-specific domain extent; update their calculations to use domain[idir] (matching the existing sim.dl[idir] usage) in validate3d, and apply the same change in validate2d (the analogous occurrences around lines 270 and 274). Locate dual_left/dual_right and primal_left/primal_right in validate3d and validate2d and replace domain[0] with domain[idir] so the offsets use the correct directional domain extent.

tests/simulator/test_advance.py (1)
806-809: ⚠️ Potential issue | 🟠 Major

Mixed return type persists — `error_boxes` (list) vs `successful_test_nbr` (int).

When `rethrow_` is `False` and assertion errors occur, this returns a list to the caller at line 878, which then calls `self.assertGreater(successful_test_nbr, ...)` — producing a `TypeError`. Even if `rethrow_` is typically `True`, this is a latent bug.

🐛 Proposed fix

```diff
-        if len(error_boxes):
-            return error_boxes
-        return successful_test_nbr
+        if error_boxes:
+            self.fail(f"level ghost subcycle interpolation errors: {error_boxes}")
+        return successful_test_nbr
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 806 - 809: the function currently returns a list (error_boxes) when failures occurred, but the caller expects an int (successful_test_nbr), causing a TypeError. Modify the final return logic so the function always returns successful_test_nbr (an int) regardless of error_boxes when rethrow_ is False, and preserve error details by logging or storing error_boxes instead of returning them; update the block that checks len(error_boxes) to log/process error_boxes and then return successful_test_nbr.

tests/functional/harris/harris_3d.py (1)
170-171: Diag directory cleanup is not registered.

The `register_diag_dir_for_cleanup` call is absent (previously noted as commented out), so `"phare_outputs/harris_3d"` is never cleaned up after the test.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/functional/harris/harris_3d.py` around lines 170 - 171: add a call to register_diag_dir_for_cleanup("phare_outputs/harris_3d") so the test cleans its output directory; place the call in the module's main entry just before startMPI() (i.e., add register_diag_dir_for_cleanup(...) above the existing startMPI() invocation) so the diagnostic directory is registered for removal when the test runs.

src/amr/data/particles/refine/split_3d.hpp (1)
83-85: ⚠️ Potential issue | 🟠 Major

Fix LimePattern middle points: they are not edge midpoints.

Line 85 (`addSquare(8, 0)`) places the middle group on axis-aligned face points, not vertical edge midpoints. For the “1 per edge - centered” contract, these four points must be `{±delta, 0, ±delta}`.

Proposed fix

```diff
     addSquare(0, delta);   // top
     addSquare(4, -delta);  // bottom
-    addSquare(8, 0);       // middle
+    // vertical edge midpoints at y = 0
+    Super::deltas_[8]  = {delta, zero, delta};
+    Super::deltas_[9]  = {delta, zero, -delta};
+    Super::deltas_[10] = {-delta, zero, delta};
+    Super::deltas_[11] = {-delta, zero, -delta};
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/split_3d.hpp` around lines 83 - 85: the LimePattern middle group currently created by the call addSquare(8, 0) produces axis-aligned face points; change the middle-group generation so it creates the four vertical edge-midpoint positions {±delta, 0, ±delta} to satisfy the “1 per edge - centered” contract — update the call site that invokes addSquare(8, 0) (or alter addSquare to support an edge-midpoint mode) so the produced points are the vertical edge midpoints rather than face centers.
🧹 Nitpick comments (14)
src/amr/data/field/coarsening/coarsen_weighter.hpp (1)
19-22: Polish the new comment wording for clarity.

Nice context addition; a small wording tweak will read more clearly (“hard-coded to 2”).

✏️ Suggested wording

```diff
- * This implementation dates from the early phase of the project where
- * the refinement ratio was not hard-coded 2 and thus works for the general case
+ * This implementation dates from an early phase of the project, when
+ * the refinement ratio was not hard-coded to 2, and thus works for the general case.
```

As per coding guidelines, `**/*.hpp`: review C++ code for clean code and expressiveness.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/amr/data/field/coarsening/coarsen_weighter.hpp` around lines 19 - 22: update the explanatory comment to improve clarity: change the phrase "hard-coded 2" to "hard-coded to 2" and rephrase the sentence to read smoothly (e.g., "This implementation dates from the early phase of the project when the refinement ratio was not hard-coded to 2, so it supports the general case"). Locate the comment near the coarsening implementation in coarsen_weighter.hpp and replace the existing lines with the polished wording.

src/core/data/particles/particle.hpp (1)
5-6: Add an explicit `<limits>` include for `std::numeric_limits`.

`std::numeric_limits` is used on line 26, but `<limits>` is not explicitly included. Relying on transitive includes from other headers can break portability across different standard library implementations and compilers.

Proposed diff

```diff
 #include "core/def.hpp"
 #include "core/utilities/point/point.hpp"
 #include <array>
+#include <limits>
 #include <random>
 #include <iostream>
 #include <algorithm>
 #include <type_traits>
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/core/data/particles/particle.hpp` around lines 5 - 6: the file is missing an explicit include for <limits> despite using std::numeric_limits (around line 26); add an explicit #include <limits> alongside the other includes in particle.hpp so std::numeric_limits is resolved portably (update the include block near the existing "core/def.hpp" and "core/utilities/point/point.hpp" lines); no other code changes needed.

res/cmake/options.cmake (1)
74-75: Consider adding new options to `print_phare_options`.

Per the file's own guidance (line 2), both `useExceptionsInsteadOfMPIAbort` and `SAMRAI_DEBUG` are absent from the `print_phare_options` function, making it harder to audit the active build configuration at a glance.

♻️ Suggested addition to `print_phare_options`

```diff
 message("profile guided optimization use : " ${PGO_USE})
+message("Use exceptions instead of MPI abort : " ${useExceptionsInsteadOfMPIAbort})
+message("Build SAMRAI with assertions : " ${SAMRAI_DEBUG})
 message("MPI_LIBRARY_PATH : " ${MPI_LIBRARY_PATH})
```

Also applies to: 112-113

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@res/cmake/options.cmake` around lines 74 - 75: the print_phare_options function is missing the build options useExceptionsInsteadOfMPIAbort and SAMRAI_DEBUG; update print_phare_options to display these options' current values by referencing their CMake option names so they are printed alongside the other options, ensuring the function shows the ON/OFF state for both symbols when invoked.

src/amr/tagging/default_hybrid_tagger_strategy.hpp (1)
109-120: Hoist `field_diff` out of the innermost loops to improve code clarity and reduce unnecessary allocations in the hot path.

`field_diff` is recreated for each tagged cell in both the 2D (lines 109-120) and 3D (lines 154-167) branches. Moving the lambda definition outside the nested loops avoids repeated closure construction and improves code expressiveness. The lambda captures loop variables by reference, so it can safely be moved to the outer scope.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/amr/tagging/default_hybrid_tagger_strategy.hpp` around lines 109 - 120: the lambda field_diff is being redefined inside hot nested loops; move its definition out to the surrounding scope and change it to take explicit index arguments (e.g., (int ix, int iy) for the 2D case and a separate overload or a 3-arg variant for the 3D case) instead of capturing loop variables, then update the call sites inside the loops to pass the current indices; this removes repeated closure construction, avoids capturing loop locals, and preserves the same computations (delta_2x, delta_2y, delta_x, delta_y and the returned criter_x/criter_y) while keeping behavior identical.

tests/simulator/test_initialization.py (1)
385-390: Closure captures loop variables `select`, `patch`, `nbrGhosts`.

The `domain` inner function captures variables that change with each loop iteration. This is safe here because the function is called immediately, but Ruff B023 flags it as a footgun. Binding via default arguments would silence the warning and guard against future refactoring.

Proposed fix

```diff
-            def domain(patch_data):
-                if dim == 1:
-                    return patch_data.dataset[select]
+            def domain(patch_data, _s=select, _p=patch, _ng=nbrGhosts):
+                if dim == 1:
+                    return patch_data.dataset[_s]
                 return patch_data.dataset[:].reshape(
-                    patch.box.shape + (nbrGhosts * 2) + 1
-                )[select]
+                    _p.box.shape + (_ng * 2) + 1
+                )[_s]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_initialization.py` around lines 385 - 390: the inner function domain captures loop variables select, patch, nbrGhosts (and dim) which can change across iterations and trigger Ruff B023; fix by binding those loop-scoped values as default parameters on domain (e.g., def domain(patch_data, select=select, patch=patch, nbrGhosts=nbrGhosts, dim=dim): ...) and then use those local defaults inside the body so the function no longer closes over mutable loop variables.

src/core/utilities/mpi_utils.hpp (1)
252-265: Asymmetric implementations: `max` uses allgather, `min` uses allreduce.

`max()` collects all values across ranks (O(P) memory + allgather) then takes `std::max_element`, while `min()` uses a single `MPI_Allreduce` with `MPI_MIN` (O(1) extra memory). Consider using `MPI_Allreduce` with `MPI_MAX` for `max()` as well, for consistency and better scalability.

Proposed fix

```diff
 auto max(auto const local)
 {
-    auto const mpi_size = size();
-    auto const perMPI   = collect(local, mpi_size);
-    return *std::max_element(std::begin(perMPI), std::end(perMPI));
+    auto global = local;
+    MPI_Allreduce(&local, &global, 1, mpi_type_for(local), MPI_MAX, MPI_COMM_WORLD);
+    return global;
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/core/utilities/mpi_utils.hpp` around lines 252 - 265: the max() implementation currently uses collect(local, mpi_size) and std::max_element, which is asymmetric and O(P) memory; change max() to mirror min() by performing an MPI_Allreduce(&local, &global, 1, mpi_type_for(local), MPI_MAX, MPI_COMM_WORLD) (like min() does with MPI_MIN) and returning the reduced global value; keep the same helper symbols (mpi_type_for, size if needed) and remove the allgather/collect usage so max() scales and matches min()'s pattern.

tests/simulator/test_advance.py (2)
251-257: Unreachable comments inside `getHierarchy` after the final `return`.

These section-header comments (lines 251–257) sit after `return particle_hier` on line 249, making them dead code inside `getHierarchy`. They appear intended as a visual separator before the test definitions. Move them outside the function scope for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 251 - 257, The section-header comments are placed after the final "return particle_hier" inside the getHierarchy function, making them unreachable; move those comment blocks so they sit outside (after) the end of the getHierarchy function scope (i.e., dedent them to module/test level just after the function definition) so they act as a visual separator before the test definitions rather than dead code within getHierarchy.
308-316: Ambiguous variable name `l` (flagged by linter).

The comprehension uses `l` from `zip(upper, lower, dl)`, which is easy to confuse with `1`. Consider using `lo` or `low` instead.

♻️ Proposed fix

```diff
-        sizex, sizey = [
-            (u - l) * d for u, l, d in zip(upper, lower, dl)
-        ]
+        sizex, sizey = [
+            (u - lo) * d for u, lo, d in zip(upper, lower, dl)
+        ]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 308 - 316: in makerec, replace the ambiguous comprehension variable `l` in the sizex, sizey calculation with a clearer name (e.g., `lo` or `low`) to avoid confusion with the digit `1`; specifically update the list comprehension that currently reads `[(u - l) * d for u, l, d in zip(upper, lower, dl)]` to use `[(u - lo) * d for u, lo, d in zip(upper, lower, dl)]` (and update any matching variable occurrence in makerec) so the function signature and returned Rectangle code remain unchanged.

src/core/data/ndarray/ndarray_vector.hpp (2)
116-117: `zstart()`/`zend()` access `shape_[2]` unconditionally — UB if called on a 1D/2D `MaskedView`.

These accessors are only invoked within `if constexpr (dimension == 3)` branches, so they should never be instantiated for lower dimensions. However, nothing prevents an accidental explicit call. Consider adding a `static_assert` or a `requires` clause.

♻️ Suggested guard

```diff
-    NO_DISCARD auto zstart() const { return mask_.min(); }
-    NO_DISCARD auto zend() const { return shape_[2] - 1 - mask_.max(); }
+    NO_DISCARD auto zstart() const requires (dimension >= 3) { return mask_.min(); }
+    NO_DISCARD auto zend() const requires (dimension >= 3) { return shape_[2] - 1 - mask_.max(); }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/core/data/ndarray/ndarray_vector.hpp` around lines 116 - 117: the accessors zstart() and zend() read shape_[2] unconditionally, which is undefined for 1D/2D MaskedView; restrict their instantiation by adding a compile-time guard (e.g., a static_assert(dimension == 3, "zstart/zend only valid for 3D") inside each method, or a requires clause / if constexpr constraint so they only participate when dimension == 3) so accidental explicit calls on lower-dimensional instances fail to compile; update the zstart() and zend() definitions to include that constraint, referencing the existing dimension template parameter and mask_/shape_ members.
244-252: Double initialization when `value` is provided.

The vector is first value-initialized (zeroed for `double`), then immediately overwritten by `fill(*value)`. For test code this is fine, but if ever used on hot paths with large arrays, consider using the `std::vector(count, value, alloc)` constructor directly.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/core/data/ndarray/ndarray_vector.hpp` around lines 244 - 252: the NdArrayVector constructor currently value-initializes data_ then calls fill(*value), causing double initialization; change the constructor so data_ is constructed with the proper count and either the default value or the provided value up-front: compute the element count from ncells (the same way as now) and, in the member initializer for data_, use the single-vector constructor that takes (count, value, Allocator_t{}) when value.has_value(), otherwise the count-only constructor; update the explicit NdArrayVector(std::array<std::uint32_t, dim> const& ncells, std::optional<type> const& value) initializer to conditionally initialize data_ and avoid the extra fill call.

tests/simulator/initialize/test_fields_init_3d.py (1)
40-45: RAM concern is noted but unaddressed.

The comment on line 40 flags that this test uses too much RAM. For 3D with `np.asarray([20, 50, 75])` and `cells=10`, the largest case (ppc=75 in 3D → 75,000 particles) should be manageable, but confirm this doesn't cause CI failures on resource-constrained runners.

Would you like me to open an issue to track isolating this test for memory-constrained environments?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/initialize/test_fields_init_3d.py` around lines 40 - 45: the test uses large particle counts in 3D (np.asarray([20, 50, 75]) with cells=10), which can blow memory on CI; modify the test_density_decreases_as_1overSqrtN block to guard or reduce sizes: either replace the ppc array with smaller values for 3D (e.g., np.asarray([10, 20, 50]) or similar) or add a pytest.skip/pytest.mark.skipif that checks available RAM (use psutil.virtual_memory().available < threshold) or an environment flag (e.g., CI) before calling self._test_density_decreases_as_1overSqrtN, so the heavy case is skipped on resource-constrained runners while leaving the test logic in place.

tests/core/data/ndarray/test_main.cpp (1)
399-399: Unnecessary fully-qualified names given `using namespace PHARE::core` on line 9.

`PHARE::core::NdArrayMask` and `PHARE::core::NdArrayVector<3>` can be simplified to `NdArrayMask` and `NdArrayVector<3>` for consistency with the rest of the file.

Also applies to: 419-419, 446-446
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/data/ndarray/test_main.cpp` at line 399: remove the redundant PHARE::core:: qualifier from type aliases introduced after the file already imports the namespace (e.g., change PHARE::core::NdArrayMask to NdArrayMask and PHARE::core::NdArrayVector<3> to NdArrayVector<3> in the alias declarations, such as the one creating Mask, and any similar aliases at the other reported locations) so the code consistently uses the unqualified names provided by using namespace PHARE::core.

tests/functional/harris/harris_3d.py (1)
69-70: Ambiguous variable name `l` (E741).

Ruff flags `l` as ambiguous because it can be confused with `1` or `I`. Consider renaming to `length` or `l_scale`, though this is a common physics notation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/functional/harris/harris_3d.py` around lines 69 - 70: rename the ambiguous parameter l in function S(y, y0, l) to a clearer name (e.g., l_scale or length) and update the function signature and body to use that new name (for example change S(y, y0, l) to S(y, y0, l_scale) and replace (y - y0) / l with (y - y0) / l_scale); then update all callers of S to pass and/or reference the new parameter name so no usages break.

tests/simulator/test_diagnostics.py (1)
128-140: `permute()` mutates its `dic` argument in-place.

Line 130 `dic.update(simArgs.copy())` modifies the caller's dict. Currently safe because each call site passes a fresh dict literal, but this is fragile. Consider creating a local copy first.

♻️ Suggested improvement

```diff
 def permute(dic):
     interp_orders = [1, 2, 3]
-    dic.update(simArgs.copy())
+    merged = {**dic, **simArgs}
     return [
         dict(
             ndim=ndim,
             interp=interp_order,
-            simInput=deepcopy(dic),
+            simInput=deepcopy(merged),
         )
         for ndim, interp_order in itertools.product(
             supported_dimensions(), interp_orders
         )
     ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_diagnostics.py` around lines 128 - 140, The permute function mutates its dic parameter via dic.update(simArgs.copy()), which can change caller state; fix it by creating a local copy of dic (e.g., local_dic = a shallow/deep copy) and update that local copy with simArgs before using it to build the returned list; update references in permute to use this local copy (keep use of deepcopy(simInput) as-is) so callers are not affected.
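The hazard is easy to reproduce in isolation (standalone sketch, not PHARE code): `dict.update` mutates the caller's object, while merging into a new dict leaves it untouched.

```python
sim_args = {"time_step": 0.01}

def permute_mutating(dic):
    dic.update(sim_args)            # modifies the caller's dict in place
    return dic

def permute_pure(dic):
    return {**dic, **sim_args}      # builds a fresh dict; caller untouched

caller = {"cells": 40}
permute_mutating(caller)
print("time_step" in caller)        # True: the caller's dict was changed

caller2 = {"cells": 40}
permute_pure(caller2)
print("time_step" in caller2)       # False: the caller's dict is unchanged
```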
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/amr/samrai.hpp`:
- Around line 57-79: The static_assert(false, ...) in getVectorFromRestart and
putVectorToRestart is ill-formed; add a dependent-false trait (e.g.,
template<class> inline constexpr bool dependent_false = false;) and replace both
static_assert(false, "...") with static_assert(dependent_false<T>, "SAMRAI ...
Vector not supported, add it!") so the assertion is dependent on T and only
triggers at instantiation for unsupported types; update both functions
(getVectorFromRestart and putVectorToRestart) to use this trait in their else
branches.
In `@src/diagnostic/diagnostic_model_view.hpp`:
- Line 8: Remove the redundant duplicate include of "amr/amr_constants.hpp" in
diagnostic_model_view.hpp by deleting the second occurrence so the header is
only included once; locate the repeated `#include` "amr/amr_constants.hpp" and
remove it, ensuring no other references are changed and include guards or pragma
once remain intact.
In `@tests/core/data/ion_population/test_ion_population_fixtures.hpp`:
- Around line 169-178: The move constructor for UsableIons currently copies the
base via Super(that) because the named rvalue reference parameter `that` is an
lvalue; change the base construction to move by calling Super(std::move(that))
in the UsableIons(UsableIons&& that) ctor so the base is moved instead of
copied, keeping the rest of the member std::move(...) initializers as-is.
In `@tests/simulator/advance/test_fields_advance_3d.py`:
- Around line 64-66: The variable largest_patch_size returned from
check_patch_size is unused; update the binding to make intent explicit by
replacing largest_patch_size with a throwaway variable (e.g., _ ) while keeping
smallest_patch_size, so change the assignment involving check_patch_size(ndim,
interp_order=interp_order, cells=cells) to bind the unused first return to a
throwaway name and preserve smallest_patch_size.
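As a tiny standalone illustration of the suggested binding (the `check_patch_size` stand-in below is hypothetical, not the real test helper):

```python
def check_patch_size(ndim, interp_order, cells):
    """Hypothetical stand-in returning (largest, smallest) patch sizes."""
    return cells // 2, interp_order + 1

# bind the unused first return value to "_" to make intent explicit
_, smallest_patch_size = check_patch_size(3, interp_order=1, cells=40)
print(smallest_patch_size)
```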
---
Duplicate comments:
In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py`:
- Around line 353-360: dual_right and primal_right are incorrectly using
domain[0] instead of the dimension-specific domain extent; update their
calculations to use domain[idir] (matching the existing sim.dl[idir] usage) in
validate3d (where dual_right and primal_right are defined) and apply the same
change in validate2d (the analogous occurrences around the previous lines).
Locate the variables dual_left/dual_right and primal_left/primal_right in
validate3d and validate2d and replace domain[0] with domain[idir] so the offsets
use the correct directional domain extent.
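The bug is invisible on cubic domains and only shows up when extents differ per direction, which a small standalone check makes concrete (values are illustrative, not from a PHARE config):

```python
import numpy as np

domain = np.asarray([10.0, 20.0, 40.0])   # anisotropic: Lx != Ly != Lz
dl = np.asarray([0.1, 0.2, 0.4])
nbrDualGhosts = 2

for idir in range(3):
    dual_left = (np.arange(-nbrDualGhosts, nbrDualGhosts) + 0.5) * dl[idir]
    dual_right_bug = dual_left + domain[0]      # always shifts by Lx
    dual_right_fix = dual_left + domain[idir]   # shifts by the extent of idir
    print(idir, bool(np.allclose(dual_right_bug, dual_right_fix)))
```

Only idir=0 agrees; for Y and Z the buggy offsets land in the wrong place.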
In `@res/cmake/dep/samrai.cmake`:
- Around line 20-28: The SAMRAI_DEBUG conditional in samrai.cmake inverts the
intended behavior: when SAMRAI_DEBUG is true it sets DEBUG_INITIALIZE_UNDEFINED,
DEBUG_CHECK_ASSERTIONS, and DEBUG_CHECK_DIM_ASSERTIONS to Off, and sets them to
On in the else branch; change the branches so that when SAMRAI_DEBUG is true
those three CACHE BOOL variables are set to On (with appropriate CACHE/FORCE as
currently used) and set to Off in the else branch, keeping the variable names
DEBUG_INITIALIZE_UNDEFINED, DEBUG_CHECK_ASSERTIONS, DEBUG_CHECK_DIM_ASSERTIONS
and the existing CACHE/BOOL/FORCE arguments unchanged.
In `@src/amr/data/particles/refine/split_3d.hpp`:
- Around line 83-85: The LimePattern middle group currently created by the call
addSquare(8, 0) produces axis-aligned face points; change the middle-group
generation so it creates the four vertical edge-midpoint positions {±delta, 0,
±delta} to satisfy the “1 per edge - centered” contract—update the call/site
that invokes addSquare(8, 0) in split_3d.hpp (or alter addSquare to support an
edge-midpoint mode) so the produced points are the vertical edge midpoints
rather than face centers.
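Enumerating the contract directly shows what the four middle points should be (standalone sketch; the `delta` value is illustrative):

```python
delta = 0.25
# the four vertical edge midpoints required by the contract: {±delta, 0, ±delta}
middle = [(sx * delta, 0.0, sz * delta) for sx in (1, -1) for sz in (1, -1)]
print(middle)
```

Every point sits at y = 0, offset by ±delta along both x and z — none lies on an axis-aligned face center.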
In `@tests/core/data/gridlayout/deriv.py`:
- Around line 77-78: The test imports scipy.misc and attempts to use
scipy.misc.derivative which was removed in SciPy ≥1.15; move away from the dead
import by deleting the scipy.misc import and replace any numerical derivative
call with the analytic derivative for primitive(coord)=math.cos(coord): return
-math.sin(coord). Update the function or call site (e.g., main() and any helper
named derivative or primitive) to compute -math.sin(coord) instead of calling
scipy.misc.derivative, and remove the unused import line.
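A quick standalone check confirms the analytic form is a drop-in replacement for the removed call (the `central_diff` helper below is a hypothetical stand-in for `scipy.misc.derivative`):

```python
import math

def primitive(coord):
    return math.cos(coord)

def central_diff(f, x, dx=1e-5):
    # second-order central difference, standing in for scipy.misc.derivative
    return (f(x + dx) - f(x - dx)) / (2 * dx)

coord = 0.7
numerical = central_diff(primitive, coord)
analytic = -math.sin(coord)
print(abs(numerical - analytic) < 1e-9)
```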
In `@tests/functional/harris/harris_3d.py`:
- Around line 170-171: Add a call to
register_diag_dir_for_cleanup("phare_outputs/harris_3d") so the test cleans its
output directory; place the call in the module's main entry just before
startMPI() (i.e., add register_diag_dir_for_cleanup(...) above the existing
startMPI() invocation) so the diagnostic directory is registered for removal
when the test runs.
In `@tests/simulator/advance/CMakeLists.txt`:
- Around line 13-26: PHARE_PYTEST_SIMULATORS is used without a guaranteed
default here (affecting the conditional blocks that call phare_mpi_python3_exec
with PHARE_MPI_PROCS), so mirror the fix from tests/simulator/initialize by
either explicitly setting a sane default for PHARE_PYTEST_SIMULATORS at top of
this CMakeLists (e.g. via set(PHARE_PYTEST_SIMULATORS <default> CACHE STRING
...) or guarding usage with if(DEFINED PHARE_PYTEST_SIMULATORS) or if(NOT
PHARE_PYTEST_SIMULATORS) checks; ensure the conditional expressions that wrap
the calls to phare_mpi_python3_exec reference the now-defined/defaulted
PHARE_PYTEST_SIMULATORS to avoid unintended behavior when the variable is unset.
In `@tests/simulator/test_advance.py`:
- Around line 806-809: The function currently returns a list (error_boxes) when
failures occurred but elsewhere the caller expects an int (successful_test_nbr)
causing a TypeError; modify the final return logic so the function always
returns successful_test_nbr (an int) regardless of error_boxes when rethrow_ is
False, and preserve error details by logging or storing error_boxes instead of
returning them; update the block that checks len(error_boxes) to log/process
error_boxes and then return successful_test_nbr, keeping references to
error_boxes, successful_test_nbr and the rethrow_ flag to locate and change the
behavior.
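A minimal sketch of the suggested contract (hypothetical standalone function; the real method lives on the test class and would use `self.fail` rather than raising directly):

```python
def count_successes(results, rethrow_=False):
    # results: iterable of (box, passed) pairs; always returns an int
    successful_test_nbr = 0
    error_boxes = []
    for box, passed in results:
        if passed:
            successful_test_nbr += 1
        else:
            error_boxes.append(box)
    if error_boxes and rethrow_:
        # surface failures via an exception instead of a list return value
        raise AssertionError(f"errors: {error_boxes}")
    return successful_test_nbr
```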
---
Nitpick comments:
In `@res/cmake/options.cmake`:
- Around line 74-75: The print_phare_options function is missing the build
options useExceptionsInsteadOfMPIAbort and SAMRAI_DEBUG; update
print_phare_options to include and display these options' current values by
referencing their CMake option names (useExceptionsInsteadOfMPIAbort and
SAMRAI_DEBUG) so they are printed alongside the other options, ensuring the
function shows the ON/OFF state for both symbols when invoked.
In `@src/amr/data/field/coarsening/coarsen_weighter.hpp`:
- Around line 19-22: Update the explanatory comment in coarsening implementation
to improve clarity: change the phrase "hard-coded 2" to "hard-coded to 2" and
rephrase the sentence to read smoothly (e.g., "This implementation dates from
the early phase of the project when the refinement ratio was not hard-coded to
2, so it supports the general case"). Locate the comment near the coarsening
implementation in coarsen_weighter.hpp (the coarsening/weighter class or related
coarsen function) and replace the existing lines with the polished wording.
In `@src/amr/tagging/default_hybrid_tagger_strategy.hpp`:
- Around line 109-120: The lambda field_diff is being redefined inside hot
nested loops; move its definition out to the surrounding scope and change it to
take explicit index arguments (e.g., (int ix, int iy) for the 2D case and a
separate overload or a 3-arg variant for the 3D case) instead of capturing loop
variables, then update the call sites inside the loops to pass the current
indices; this removes repeated closure construction, avoids capturing loop
locals, and preserves the same computations (delta_2x, delta_2y, delta_x,
delta_y and returned criter_x/criter_y) while keeping behavior identical.
In `@src/core/data/ndarray/ndarray_vector.hpp`:
- Around line 116-117: The accessors zstart() and zend() call shape_[2]
unconditionally which is undefined for 1D/2D MaskedView; restrict their
instantiation by adding a compile-time guard (e.g., add a static_assert(
dimension == 3, "zstart/zend only valid for 3D") inside each method or add a
requires clause / if constexpr template constraint on the methods so they only
participate when dimension == 3) so accidental explicit calls on
lower-dimensional instances fail to compile; update the zstart() and zend()
definitions to include that constraint referencing the existing dimension
template parameter and mask_/shape_ members.
- Around line 244-252: The NdArrayVector constructor currently value-initializes
data_ then calls fill(*value), causing double initialization; change the
constructor so data_ is constructed with the proper count and either the default
value or the provided value up-front: compute the element count from ncells
(same way as now) and in the member initializer for data_ use the single-vector
constructor that takes (count, value, Allocator_t{}) when value.has_value(),
otherwise use the count-only constructor; update the explicit
NdArrayVector(std::array<std::uint32_t, dim> const& ncells, std::optional<type>
const& value) initializer to conditionally initialize data_ to avoid the extra
fill call.
In `@src/core/data/particles/particle.hpp`:
- Around line 5-6: The file is missing an explicit include for <limits> despite
using std::numeric_limits (around line 26); add an explicit `#include <limits>`
alongside the other includes in particle.hpp so std::numeric_limits is resolved
portably (update the include block near the existing "core/def.hpp" and
"core/utilities/point/point.hpp" lines); no other code changes needed.
In `@src/core/utilities/mpi_utils.hpp`:
- Around line 252-265: The max() implementation currently uses collect(local,
mpi_size) and std::max_element which is asymmetric and O(P) memory; change max()
to mirror min() by performing an MPI_Allreduce(&local, &global, 1,
mpi_type_for(local), MPI_MAX, MPI_COMM_WORLD) (like min() does with MPI_MIN),
returning the reduced global value; keep using the same helper symbols
(mpi_type_for, size if needed) and remove the allgather/collect usage so max()
scales and matches min()’s pattern.
In `@tests/core/data/ndarray/test_main.cpp`:
- Line 399: Remove the redundant PHARE::core:: qualifier from type aliases
introduced after the file already imports the namespace (e.g., change
PHARE::core::NdArrayMask to NdArrayMask and PHARE::core::NdArrayVector<3> to
NdArrayVector<3> in the alias declarations such as the one creating Mask and any
similar aliases at the other reported locations) so the code consistently uses
the unqualified names provided by using namespace PHARE::core.
In `@tests/functional/harris/harris_3d.py`:
- Around line 69-70: Rename the ambiguous parameter l in function S(y, y0, l) to
a clearer name (e.g., l_scale or length) and update the function signature and
body to use that new name (for example change S(y, y0, l) to S(y, y0, l_scale)
and replace (y - y0) / l with (y - y0) / l_scale); then update all callers of S
to pass and/or reference the new parameter name so no usages break.
In `@tests/simulator/initialize/test_fields_init_3d.py`:
- Around line 40-45: The test uses large particle counts in 3D (np.asarray([20,
50, 75]) with cells=10) which can blow memory on CI; modify the
test_density_decreases_as_1overSqrtN block to guard or reduce sizes: either
replace the ppc array with smaller values for 3D (e.g., np.asarray([10,20,50])
or similar) or add a pytest.skip/pytest.mark.skipif that checks available RAM
(use psutil.virtual_memory().available < threshold) or an environment flag
(e.g., CI) before calling self._test_density_decreases_as_1overSqrtN so the
heavy case is skipped on resource-constrained runners while leaving the test
logic in place.
In `@tests/simulator/test_advance.py`:
- Around line 251-257: The section-header comments are placed after the final
"return particle_hier" inside the getHierarchy function, making them
unreachable; move those comment blocks so they sit outside (after) the end of
the getHierarchy function scope (i.e., dedent them to module/test level just
after the function definition) so they act as a visual separator before the test
definitions rather than dead code within getHierarchy.
- Around line 308-316: In makerec replace the ambiguous comprehension variable
`l` in the sizex,sizey calculation with a clearer name (e.g., `lo` or `low`) to
avoid confusion with the digit `1`; specifically update the list comprehension
in makerec that currently reads `[(u - l) * d for u, l, d in zip(upper, lower,
dl)]` to use `[(u - lo) * d for u, lo, d in zip(upper, lower, dl)]` (and update
any matching variable occurrence in makerec) so the function signature and
returned Rectangle code (makerec, origin calculation) remain unchanged.
In `@tests/simulator/test_diagnostics.py`:
- Around line 128-140: The permute function mutates its dic parameter via
dic.update(simArgs.copy()), which can change caller state; fix it by creating a
local copy of dic (e.g., local_dic = a shallow/deep copy) and update that local
copy with simArgs before using it to build the returned list; update references
in permute to use this local copy (keep use of deepcopy(simInput) as-is) so
callers are not affected.
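The copy-first pattern looks like this (hypothetical minimal stand-in for `permute`; only the top-level dict is copied here):

```python
def permute_safe(dic, sim_args):
    # copy the caller's dict before merging so the caller's state is untouched
    local_dic = dict(dic)
    local_dic.update(sim_args)
    return local_dic

base = {"ndim": 2}
merged = permute_safe(base, {"cells": 40})
# base is unchanged; only the returned copy carries the merged keys
```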
In `@tests/simulator/test_initialization.py`:
- Around line 385-390: The inner function domain captures loop variables select,
patch, nbrGhosts (and dim) which can change across iterations and trigger Ruff
B023; fix by binding those loop-scoped values as default parameters on domain
(e.g., def domain(patch_data, select=select, patch=patch, nbrGhosts=nbrGhosts,
dim=dim): ...) and then use those local defaults inside the body so the function
no longer closes over mutable loop variables.
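The late-binding behavior Ruff B023 warns about, and the default-argument fix, in miniature:

```python
# each lambda closes over the loop variable i, so all of them see its final value
funcs_late = [lambda: i for i in range(3)]

# binding i as a default freezes its value at definition time, one per iteration
funcs_bound = [lambda i=i: i for i in range(3)]

late = [f() for f in funcs_late]
bound = [f() for f in funcs_bound]
```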
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (77)
.github/workflows/cmake_ubuntu.yml, pyphare/pyphare/pharein/__init__.py, pyphare/pyphare/pharein/maxwellian_fluid_model.py, pyphare/pyphare/simulator/simulator.py, res/amr/splitting.yml, res/cmake/def.cmake, res/cmake/dep/samrai.cmake, res/cmake/options.cmake, res/sim/all.txt, src/amr/data/field/coarsening/coarsen_weighter.hpp, src/amr/data/field/coarsening/electric_field_coarsener.hpp, src/amr/data/field/field_data.hpp, src/amr/data/particles/refine/particles_data_split.hpp, src/amr/data/particles/refine/split.hpp, src/amr/data/particles/refine/split_1d.hpp, src/amr/data/particles/refine/split_2d.hpp, src/amr/data/particles/refine/split_3d.hpp, src/amr/data/particles/refine/splitter.hpp, src/amr/data/tensorfield/tensor_field_data.hpp, src/amr/level_initializer/hybrid_level_initializer.hpp, src/amr/messengers/field_operate_transaction.hpp, src/amr/multiphysics_integrator.hpp, src/amr/resources_manager/resources_manager.hpp, src/amr/samrai.cpp, src/amr/samrai.hpp, src/amr/solvers/solver_ppc.hpp, src/amr/tagging/default_hybrid_tagger_strategy.hpp, src/core/data/grid/grid.hpp, src/core/data/grid/gridlayout.hpp, src/core/data/ndarray/ndarray_vector.hpp, src/core/data/particles/particle.hpp, src/core/data/particles/particle_array.hpp, src/core/data/tensorfield/tensorfield.hpp, src/core/errors.hpp, src/core/numerics/ion_updater/ion_updater.hpp, src/core/utilities/meta/meta_utilities.hpp, src/core/utilities/mpi_utils.cpp, src/core/utilities/mpi_utils.hpp, src/core/utilities/point/point.hpp, src/core/utilities/types.hpp, src/diagnostic/detail/h5writer.hpp, src/diagnostic/diagnostic_model_view.hpp, src/python3/data_wrangler.hpp, src/simulator/simulator.hpp, tests/amr/data/particles/refine/test_split.cpp, tests/core/data/gridlayout/deriv.py, tests/core/data/gridlayout/test_gridlayout.hpp, tests/core/data/ion_population/test_ion_population_fixtures.hpp, tests/core/data/ndarray/test_main.cpp, tests/diagnostic/CMakeLists.txt, tests/diagnostic/job_1d.py.in, tests/diagnostic/job_2d.py.in, tests/diagnostic/job_3d.py.in, tests/diagnostic/test-diagnostics_1d.cpp, tests/diagnostic/test-diagnostics_2d.cpp, tests/diagnostic/test-diagnostics_3d.cpp, tests/functional/harris/harris_3d.py, tests/simulator/CMakeLists.txt, tests/simulator/__init__.py, tests/simulator/advance/CMakeLists.txt, tests/simulator/advance/test_fields_advance_1d.py, tests/simulator/advance/test_fields_advance_2d.py, tests/simulator/advance/test_fields_advance_3d.py, tests/simulator/advance/test_particles_advance_1d.py, tests/simulator/advance/test_particles_advance_2d.py, tests/simulator/advance/test_particles_advance_3d.py, tests/simulator/initialize/CMakeLists.txt, tests/simulator/initialize/test_fields_init_2d.py, tests/simulator/initialize/test_fields_init_3d.py, tests/simulator/initialize/test_particles_init_3d.py, tests/simulator/per_test.hpp, tests/simulator/refined_particle_nbr.py, tests/simulator/refinement/test_2d_10_core.py, tests/simulator/test_advance.py, tests/simulator/test_diagnostics.py, tests/simulator/test_initialization.py, tests/simulator/test_restarts.py
💤 Files with no reviewable changes (2)
- tests/simulator/refined_particle_nbr.py
- src/amr/samrai.cpp
🚧 Files skipped from review as they are similar to previous changes (18)
- src/simulator/simulator.hpp
- src/amr/solvers/solver_ppc.hpp
- res/sim/all.txt
- src/diagnostic/detail/h5writer.hpp
- src/core/numerics/ion_updater/ion_updater.hpp
- tests/diagnostic/CMakeLists.txt
- src/core/utilities/point/point.hpp
- src/amr/data/field/coarsening/electric_field_coarsener.hpp
- tests/diagnostic/job_3d.py.in
- tests/simulator/__init__.py
- tests/core/data/gridlayout/test_gridlayout.hpp
- tests/simulator/initialize/test_particles_init_3d.py
- tests/amr/data/particles/refine/test_split.cpp
- res/amr/splitting.yml
- tests/simulator/initialize/test_fields_init_2d.py
- src/core/utilities/meta/meta_utilities.hpp
- src/amr/data/field/field_data.hpp
- src/core/data/particles/particle_array.hpp
tests/core/data/ion_population/test_ion_population_fixtures.hpp
eb39acf to 504acea Compare
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/utilities/types.hpp (1)
241-247: ⚠️ Potential issue | 🟡 Minor: Document (or guard) the throwing path through `from_string`.
`get_env_as` accepts a default `t`, which signals "safe fallback" semantics to callers. However, when the env var is present but contains an unparseable value, `from_string<T>` throws `std::runtime_error` and the default is never used. This silent mismatch between the function's implied contract and its actual behaviour is a potential surprise at call sites. Options:
- Wrap with a try/catch and return the default on parse failure.
- Leave the throw intentional but add a brief comment making the contract explicit.
🛡️ Option 1 – return default on bad parse
 template<typename T>
 NO_DISCARD inline T get_env_as(std::string const& key, T const& t)
 {
     if (auto e = get_env(key))
-        return from_string<T>(*e);
+    {
+        try { return from_string<T>(*e); }
+        catch (std::runtime_error const&) {}
+    }
     return t;
 }
📝 Option 2 – document the throwing contract
+// Returns t if key is absent; throws std::runtime_error if key is present but not parseable as T.
 template<typename T>
 NO_DISCARD inline T get_env_as(std::string const& key, T const& t)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/utilities/types.hpp` around lines 241 - 247, The get_env_as<T> function currently delegates parsing to from_string<T> which can throw, violating callers' expectation of a safe default; update get_env_as to catch exceptions from from_string<T> (std::exception or std::runtime_error) and on parse failure return the provided default t, and optionally log or swallow the error, or alternatively add a brief comment above get_env_as explaining that from_string<T> may throw and the default will not be used if parsing fails; reference the get_env_as template and the from_string<T> call when making the change.
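The two options differ only in who handles the parse failure. A Python model of Option 1's contract (hypothetical sketch mirroring `get_env_as`, not the C++ implementation):

```python
import os

def get_env_as(key, default, env=None):
    # present and parseable -> parsed value; absent or unparseable -> default
    env = os.environ if env is None else env
    raw = env.get(key)
    if raw is None:
        return default
    try:
        return type(default)(raw)
    except ValueError:
        return default
```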
♻️ Duplicate comments (6)
src/amr/tagging/default_hybrid_tagger_strategy.hpp (1)
169-173: CodeQL "unused local variable" warnings on lines 169–171 are false positives. All nine structured-binding names (
`Bx_x`, `Bx_y`, `Bx_z`, `By_x`, `By_y`, `By_z`, `Bz_x`, `Bz_y`, `Bz_z`) are directly consumed by the `std::max` initializer list on line 173. This is a known CodeQL limitation with structured bindings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/amr/tagging/default_hybrid_tagger_strategy.hpp` around lines 169 - 173, CodeQL reports false "unused local variable" warnings for the structured bindings Bx_x/Bx_y/.../Bz_z even though they are used in computing crit; to eliminate the warnings, avoid structured bindings and instead capture the three components into small arrays or a tuple variable and then reference elements when building the std::max initializer list. Concretely, replace the three structured-binding lines that call field_diff(Bx)/field_diff(By)/field_diff(Bz) with something like auto Bx_vals = field_diff(Bx), By_vals = field_diff(By), Bz_vals = field_diff(Bz) and then compute crit = std::max({Bx_vals[0], Bx_vals[1], Bx_vals[2], By_vals[0], By_vals[1], By_vals[2], Bz_vals[0], Bz_vals[1], Bz_vals[2]}); this preserves the same values used by crit (no functional change) and avoids the structured-binding false positive for Bx_x/.../Bz_z.
res/cmake/dep/samrai.cmake (1)
20-28: ⚠️ Potential issue | 🟠 Major: Inverted logic:
`SAMRAI_DEBUG=ON` disables SAMRAI debug checks; `SAMRAI_DEBUG=OFF` enables them.
if/elsebranches are swapped. WhenSAMRAI_DEBUGisON, the threeDEBUG_*flags should beOn; whenOFF, they should beOff. The current code does the opposite, meaning:
- Release/CI builds (
SAMRAI_DEBUG=OFF) will build SAMRAI withDEBUG_INITIALIZE_UNDEFINED=On,DEBUG_CHECK_ASSERTIONS=On,DEBUG_CHECK_DIM_ASSERTIONS=On— adding significant runtime overhead and likely failing assertion-guarded paths that are intentionally not covered.- Debug builds (
SAMRAI_DEBUG=ON) will have all SAMRAI assertions silently disabled.The CACHE BOOL description strings (
"YES!"/"NO!") are also on the wrong branches.🐛 Proposed fix
if (SAMRAI_DEBUG) - set (DEBUG_INITIALIZE_UNDEFINED Off CACHE BOOL "YES!" FORCE) - set (DEBUG_CHECK_ASSERTIONS Off CACHE BOOL "YES!" FORCE) - set (DEBUG_CHECK_DIM_ASSERTIONS Off CACHE BOOL "YES!" FORCE) + set (DEBUG_INITIALIZE_UNDEFINED On CACHE BOOL "YES!" FORCE) + set (DEBUG_CHECK_ASSERTIONS On CACHE BOOL "YES!" FORCE) + set (DEBUG_CHECK_DIM_ASSERTIONS On CACHE BOOL "YES!" FORCE) else() - set (DEBUG_INITIALIZE_UNDEFINED On CACHE BOOL "NO!" FORCE) - set (DEBUG_CHECK_ASSERTIONS On CACHE BOOL "NO!" FORCE) - set (DEBUG_CHECK_DIM_ASSERTIONS On CACHE BOOL "NO!" FORCE) + set (DEBUG_INITIALIZE_UNDEFINED Off CACHE BOOL "NO!" FORCE) + set (DEBUG_CHECK_ASSERTIONS Off CACHE BOOL "NO!" FORCE) + set (DEBUG_CHECK_DIM_ASSERTIONS Off CACHE BOOL "NO!" FORCE) endif(SAMRAI_DEBUG)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@res/cmake/dep/samrai.cmake` around lines 20 - 28, The if/else branch is inverted: when SAMRAI_DEBUG is true the three SAMRAI debug options (DEBUG_INITIALIZE_UNDEFINED, DEBUG_CHECK_ASSERTIONS, DEBUG_CHECK_DIM_ASSERTIONS) must be set On, and when false they must be set Off; update the SAMRAI_DEBUG conditional so the On/Off values are swapped to match intent and also correct the CACHE BOOL description strings ("YES!"/"NO!") to their matching branches so the messages align with the actual boolean values.pyphare/pyphare/pharein/maxwellian_fluid_model.py (1)
348-374:⚠️ Potential issue | 🔴 CriticalBug:
domain[0]is hardcoded for right-boundary offsets — should bedomain[idir].Lines 356 and 360 compute
dual_rightandprimal_rightusingdomain[0]regardless of direction. Foridir=1(Y) andidir=2(Z), this produces wrong boundary coordinates, so periodicity validation will check incorrect points. The same bug exists invalidate2dat lines 270 and 274.Proposed fix for validate3d
dual_left = (np.arange(-nbrDualGhosts, nbrDualGhosts) + 0.5) * sim.dl[ idir ] - dual_right = dual_left + domain[0] + dual_right = dual_left + domain[idir] primal_left = ( np.arange(-nbrPrimalGhosts, nbrPrimalGhosts) * sim.dl[idir] ) - primal_right = primal_left + domain[0] + primal_right = primal_left + domain[idir]Apply the same fix in
validate2d(lines 270 and 274):- dual_right = dual_left + domain[0] + dual_right = dual_left + domain[idir] ... - primal_right = primal_left + domain[0] + primal_right = primal_left + domain[idir]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py` around lines 348 - 374, In validate3d (and the analogous validate2d) the right-side offset for dual_right and primal_right is incorrectly using domain[0] for all directions; change those computations so dual_right = dual_left + domain[idir] and primal_right = primal_left + domain[idir] (update the assignments that define dual_right and primal_right in functions validate3d and validate2d so they use domain[idir] instead of domain[0]) so getCoord(...) and the periodicity checks for each idir use the correct per-direction domain size.tests/simulator/test_advance.py (1)
806-809:⚠️ Potential issue | 🟠 MajorMixed return type (
listvsint) still causesTypeErrorin caller.When
error_boxesis non-empty, the function returns alist. The caller at Line 878 doesself.assertGreater(successful_test_nbr, ...)on this value, which will raiseTypeError: '>' not supported between instances of 'list' and 'int'. This masks the actual test failure.🐛 Proposed fix
 if len(error_boxes):
-    return error_boxes
-return successful_test_nbr
+    self.fail(f"level ghost subcycle interpolation errors: {error_boxes}")
+return successful_test_nbr
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 806 - 809, The function currently returns either successful_test_nbr (int) or error_boxes (list), causing a TypeError in the caller that expects an int; change the behavior so the function always returns an int on success and raises an AssertionError (including error_boxes details) when error_boxes is non-empty. Locate the block that checks len(error_boxes) (using the variables error_boxes and successful_test_nbr) and replace the return of error_boxes with raising AssertionError(f"Simulation errors: {error_boxes}") so callers like the one asserting on successful_test_nbr receive an exception with error details instead of a list.tests/diagnostic/test-diagnostics_3d.cpp (1)
1-36: LGTM. Mirrors the 1D/2D diagnostic test structure;
the `core::mpi::size()`-based `out_dir` suffix for concurrent MPI runs is consistent with the existing pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/diagnostic/test-diagnostics_3d.cpp` around lines 1 - 36, No change required: the test file mirrors the 1D/2D structure correctly (TYPED_TEST cases: fluid, particles, electromag, allFromPython), initializes PHARE::SamraiLifeCycle in main, and appends core::mpi::size() to out_dir for concurrent MPI runs; leave the file as-is.src/amr/data/particles/refine/split_3d.hpp (1)
76-86: ⚠️ Potential issue | 🟠 Major: Fix LimePattern vertical-edge coordinates (current middle group is incorrect).
Line [85] (`addSquare(8, 0)`) generates `{0,0,±δ}` and `{±δ,0,0}`, but vertical edge centers are `{±δ,0,±δ}`. This breaks the "1 per edge - centered" contract and propagates to all 3D splitters that use `LimePattern`.
Proposed fix
-        addSquare(8, 0); // middle
+        // vertical edges
+        Super::deltas_[8]  = { delta, 0.0f,  delta};
+        Super::deltas_[9]  = { delta, 0.0f, -delta};
+        Super::deltas_[10] = {-delta, 0.0f,  delta};
+        Super::deltas_[11] = {-delta, 0.0f, -delta};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/split_3d.hpp` around lines 76 - 86, The middle "vertical-edge" group is wrong: calling addSquare(8, 0) produces {0,0,±δ} and {±δ,0,0} but the vertical-edge centers must be {±δ,0,±δ}; update the code that fills Super::deltas_ for the middle group (the entries written by addSquare at offset 8) so that Super::deltas_[8..11] are assigned the four combinations {+δ,0,+δ}, {-δ,0,+δ}, {+δ,0,-δ}, {-δ,0,-δ} (or equivalently implement a new helper instead of addSquare for the middle group) so LimePattern (and any 3D splitters using it) produce one center per edge as intended.
🧹 Nitpick comments (10)
src/amr/messengers/field_operate_transaction.hpp (1)
1-2: Header guard abbreviates `operate` as `op`, diverging from the filename.
`PHARE_AMR_MESSENGERS_FIELD_OP_TRANSACTION_HPP` should mirror the full filename `field_operate_transaction.hpp`.
♻️ Proposed fix
-#ifndef PHARE_AMR_MESSENGERS_FIELD_OP_TRANSACTION_HPP
-#define PHARE_AMR_MESSENGERS_FIELD_OP_TRANSACTION_HPP
+#ifndef PHARE_AMR_MESSENGERS_FIELD_OPERATE_TRANSACTION_HPP
+#define PHARE_AMR_MESSENGERS_FIELD_OPERATE_TRANSACTION_HPP
And at the bottom:
-#endif // PHARE_AMR_MESSENGERS_FIELD_OP_TRANSACTION_HPP
+#endif // PHARE_AMR_MESSENGERS_FIELD_OPERATE_TRANSACTION_HPP
Based on learnings: PhilipDeegan prefers header guard names that include the full directory path structure, following the pattern
`PHARE_[PATH_WITH_UNDERSCORES]_HPP`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/amr/messengers/field_operate_transaction.hpp` around lines 1 - 2, The header guard macro currently abbreviates "operate" as "op" (PHARE_AMR_MESSENGERS_FIELD_OP_TRANSACTION_HPP) which doesn't match the filename; rename the macro to PHARE_AMR_MESSENGERS_FIELD_OPERATE_TRANSACTION_HPP in both the `#ifndef/`#define at the top and the corresponding `#endif` comment/guard at the bottom of field_operate_transaction.hpp so the guard mirrors the full filename and follows the PHARE_[PATH_WITH_UNDERSCORES]_HPP pattern.res/amr/splitting.yml (1)
139-141: Missing trailing newline at end of file. All previous dimension sections conclude with a blank line; this file ends abruptly on line 141 without one. Add a trailing newline for consistency and POSIX compliance.
🔧 Proposed fix
 N_particles_27:
   delta: 1.276815
-  weight: .104047 .05564
\ No newline at end of file
+  weight: .104047 .05564
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@res/amr/splitting.yml` around lines 139 - 141, The file ends without a trailing newline (the last YAML node "N_particles_27" block terminates abruptly); update the end of file so there's a single blank line / newline character after the final line (after "weight: .104047 .05564") to match other dimension sections and ensure POSIX-compliant trailing newline.src/core/errors.hpp (1)
71-81: Pass `key` and `val` by `const&` to avoid unnecessary copies on every call to the error singleton.
The `log()` method is on a singleton path; receiving `std::string` by value forces a copy on each invocation even when the callers already hold `const std::string&` or string literals.
♻️ Proposed fix
-    void log(std::string const key, std::string const val)
+    void log(std::string const& key, std::string const& val)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/errors.hpp` around lines 71 - 81, The log function currently takes parameters by value causing unnecessary copies on each call; change the signature of log to take const std::string& key and const std::string& val (referencing the log method) and update any declarations/overloads accordingly so errors.emplace(key, val) and error_count[key] usage continue to work without additional copies; this avoids repeated allocations on the singleton error path and preserves existing behavior for errors and error_count maps.tests/simulator/per_test.hpp (1)
118-128: Template parameter name `Simulator` is inconsistent with `SimulatorParam` used in `Simulator1dTest` and `Simulator2dTest`.
♻️ Proposed fix
-template<typename Simulator>
-struct Simulator3dTest : public ::testing::Test
+template<typename SimulatorParam>
+struct Simulator3dTest : public ::testing::Test
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/per_test.hpp` around lines 118 - 128, The template parameter name in Simulator3dTest is inconsistent with the other test suites; change the template declaration template<typename Simulator> in struct Simulator3dTest to use the same name used elsewhere (e.g., template<typename SimulatorParam>) so it matches the pattern used by Simulator1dTest/Simulator2dTest and the Types alias (Simulator3d and SimulatorTestParam), update any references inside Simulator3dTest to use SimulatorParam if present, and ensure the TYPED_TEST_SUITE(Simulator3dTest, Simulator3d) remains valid.src/core/utilities/mpi_utils.cpp (1)
48-56: Use `std::cerr` instead of `std::cout` for the error message before abort.
`std::cerr` is the idiomatic stream for error output and is unbuffered by default, ensuring the message is visible even if the process terminates abruptly. While `std::endl` does flush `std::cout`, using `std::cerr` makes the intent clearer and is more robust.
Proposed fix
 void log_error(std::string const key, std::string const val)
 {
 #if PHARE_ERRORS_USE_MPI_ABORT
-    std::cout << "MPI ABORT: PHARE ERROR: " << key << " " << val << std::endl;
+    std::cerr << "MPI ABORT: PHARE ERROR: " << key << " " << val << std::endl;
     MPI_Abort(MPI_COMM_WORLD, -1);
 #else
     core::Errors::instance().log(key, val);
 #endif
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/utilities/mpi_utils.cpp` around lines 48 - 56, Replace the use of std::cout with std::cerr in the log_error function when PHARE_ERRORS_USE_MPI_ABORT is enabled so the error message is written to the unbuffered error stream before calling MPI_Abort; update the line inside the PHARE_ERRORS_USE_MPI_ABORT branch that currently prints via std::cout to use std::cerr (keeping the message content and flushing as now) so the message is reliably emitted even if the process aborts.tests/functional/harris/harris_3d.py (2)
34-59: Simulation has `max_nbr_levels=1` with `refinement="tagging"`; tagging will never produce a refined level.
With `max_nbr_levels=1`, only level 0 exists, so the tagging/refinement configuration (threshold, nesting_buffer, tag_buffer) is effectively dead. If this is intentional (just a smoke test for 3D), consider a brief comment; otherwise bump to `max_nbr_levels=2`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/functional/harris/harris_3d.py` around lines 34 - 59, The Simulation is configured with refinement="tagging" but max_nbr_levels=1 so no refinement can occur; update the Simulation invocation (symbol: Simulation, parameter: max_nbr_levels) to allow refinement by setting max_nbr_levels=2 OR, if this is intentionally a smoke test, add a brief comment next to the Simulation block explaining why tagging_threshold, nesting_buffer and tag_buffer are present but unused (symbols: tagging_threshold, nesting_buffer, tag_buffer, refinement) so reviewers understand the intent.
69-70: Ambiguous variable name `l` (Ruff E741).
`l` can be confused with `1` or `I` in many fonts. Consider renaming to `length` or `l_scale` for clarity.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/functional/harris/harris_3d.py` around lines 69 - 70, Rename the ambiguous parameter `l` in function S(y, y0, l) to a clearer name (e.g., `length` or `l_scale`) throughout the file: update the function signature S(y, y0, l_scale), replace all uses of `l` inside the function with the new name, and update any calls to S elsewhere in the test (or imports) to pass the renamed parameter; also update any related comments or docstrings to reflect the new parameter name.tests/simulator/test_initialization.py (1)
385-397: `domain()` closure captures `select`, which is reassigned later in the same scope.
The `domain` inner function captures `select` (line 383) by reference, and `select` is reassigned on line 397 to a meshgrid result. This works correctly because `domain()` is called on lines 392–395, before the reassignment. However, future edits that move the `domain()` calls after line 397 would silently break. Consider passing `select` as a parameter to `domain` to make the binding explicit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_initialization.py` around lines 385 - 397, The inner function domain currently closes over the outer variable select which is later reassigned (via select = pdatas["protons_Fx"].meshgrid(select)), making its binding fragile; change domain to accept select as an explicit parameter (def domain(patch_data, select): ...) and update all calls (ni = domain(pdatas["rho"], select), vxact/vyact/vzact calls) to pass the current select value so the function no longer relies on a captured variable and is safe if calls are reordered.src/amr/data/particles/refine/splitter.hpp (1)
146-165: Empty pattern struct declarations serve as tag types for specialization.
These are forward-declared tag types that will be specialized in `split_1d.hpp`, `split_2d.hpp`, and `split_3d.hpp`. The naming convention (color-based: Pink, Purple, Brown, Lime, White) matches the existing `BlackPattern`. Consider adding a brief comment explaining the color naming convention, as it's not immediately obvious what each pattern represents geometrically.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/splitter.hpp` around lines 146 - 165, Add a short comment above the empty tag structs explaining that PinkPattern, PurplePattern, BrownPattern, LimePattern, and WhitePattern are color-named tag types used for template specialization (like the existing BlackPattern) and briefly describe what each color represents geometrically or which split files specialize them (e.g., which patterns correspond to 1D/2D/3D splits in split_1d.hpp, split_2d.hpp, split_3d.hpp); place this comment near the struct declarations (PinkPattern, PurplePattern, BrownPattern, LimePattern, WhitePattern) so readers immediately understand the naming convention and where to find their specializations.src/amr/data/particles/refine/split_1d.hpp (1)
127-141: Constructor initializer names wrong base dispatcher alias.
The `Splitter<DimConst<1>, InterpConst<3>, RefinedParticlesConst<2>>` at Line 135 initializes `SplitPattern_1_2_2_Dispatcher` but its inheritance is `SplitPattern_1_3_2_Dispatcher`. Both aliases currently resolve to the same `PatternDispatcher<PinkPattern<DimConst<1>>>`, so this compiles and runs correctly, but the naming mismatch is confusing and fragile if the aliases ever diverge.
♻️ Proposed fix

```diff
     constexpr Splitter()
-        : SplitPattern_1_2_2_Dispatcher{{weight[0], delta[0]}}
+        : SplitPattern_1_3_2_Dispatcher{{weight[0], delta[0]}}
     {
     }
```
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/split_1d.hpp` around lines 127 - 141, The constructor for Splitter<DimConst<1>, InterpConst<3>, RefinedParticlesConst<2>> initializes the wrong base dispatcher alias (uses SplitPattern_1_2_2_Dispatcher) while the class inherits from SplitPattern_1_3_2_Dispatcher; change the constructor initializer to initialize SplitPattern_1_3_2_Dispatcher (keeping the same initializer list { { weight[0], delta[0] } }) so the initializer name matches the inherited base (refer to the Splitter<> specialization and the SplitPattern_1_3_2_Dispatcher alias).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/data/ndarray/ndarray_vector.hpp`:
- Around line 205-209: NdArrayView::operator[] forwards masks into MaskedView
which currently stores the mask as a reference (Mask const& mask_), causing
dangling references when callers pass temporaries; change MaskedView to hold the
mask by value (e.g., Mask mask_) and simplify its constructors to a single
value-taking constructor that copies/moves the mask into that member so
temporaries remain valid when the MaskedView is returned and later used by
methods like operator=, xstart(), ystart(), zstart().
- Around line 402-442: The loops in fill3D (and similar logic in fill1D and
fill2D) perform expressions like shape[d] - 1 - max_ which can underflow if max_
>= shape[d] or if min_ > max_; add explicit bounds validation at the start of
each fill function: check each relevant dimension from array.shape() to ensure
max_ < shape[d] and min_ <= max_; on violation either clamp indices safely or
return/throw an informative error. Use the validated/sanitized values for loop
start/end computations (or cast to signed before subtracting) to prevent
unsigned underflow; reference functions: fill3D, fill2D, fill1D and members
min_, max_ when making these checks.
In `@src/core/utilities/mpi_utils.hpp`:
- Around line 252-257: The max function currently gathers all ranks via
collect(local, mpi_size) and then does std::max_element, causing O(ranks)
overhead; replace that logic with an MPI_Allreduce using MPI_MAX (same pattern
as the min implementation) so the reduction happens in-place across processes
and returns the global maximum directly from max(auto const local). Locate the
function named max(auto const local) and swap the collect/std::max_element
sequence for an MPI_Allreduce call that reduces local with MPI_MAX and returns
the reduced scalar, ensuring types match the existing min/any patterns.
In `@src/core/utilities/point/point.hpp`:
- Around line 106-120: The to_string_with_precision helper is returning the
original ostringstream content and discarding the last-digit correction; update
to_string_with_precision so it returns the modified local string (the variable
named "str") instead of out.str(), ensuring the rounding/fixup applied to "str"
is what callers (including the Point::str method) receive; locate the
to_string_with_precision function and replace the return expression to return
the corrected "str" after the fixup is applied.
---
Outside diff comments:
In `@src/core/utilities/types.hpp`:
- Around line 241-247: The get_env_as<T> function currently delegates parsing to
from_string<T> which can throw, violating callers' expectation of a safe
default; update get_env_as to catch exceptions from from_string<T>
(std::exception or std::runtime_error) and on parse failure return the provided
default t, and optionally log or swallow the error, or alternatively add a brief
comment above get_env_as explaining that from_string<T> may throw and the
default will not be used if parsing fails; reference the get_env_as template and
the from_string<T> call when making the change.
---
Duplicate comments:
In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py`:
- Around line 348-374: In validate3d (and the analogous validate2d) the
right-side offset for dual_right and primal_right is incorrectly using domain[0]
for all directions; change those computations so dual_right = dual_left +
domain[idir] and primal_right = primal_left + domain[idir] (update the
assignments that define dual_right and primal_right in functions validate3d and
validate2d so they use domain[idir] instead of domain[0]) so getCoord(...) and
the periodicity checks for each idir use the correct per-direction domain size.
In `@res/cmake/dep/samrai.cmake`:
- Around line 20-28: The if/else branch is inverted: when SAMRAI_DEBUG is true
the three SAMRAI debug options (DEBUG_INITIALIZE_UNDEFINED,
DEBUG_CHECK_ASSERTIONS, DEBUG_CHECK_DIM_ASSERTIONS) must be set On, and when
false they must be set Off; update the SAMRAI_DEBUG conditional so the On/Off
values are swapped to match intent and also correct the CACHE BOOL description
strings ("YES!"/"NO!") to their matching branches so the messages align with the
actual boolean values.
In `@src/amr/data/particles/refine/split_3d.hpp`:
- Around line 76-86: The middle "vertical-edge" group is wrong: calling
addSquare(8, 0) produces {0,0,±δ} and {±δ,0,0} but the vertical-edge centers
must be {±δ,0,±δ}; update the code that fills Super::deltas_ for the middle
group (the entries written by addSquare at offset 8) so that
Super::deltas_[8..11] are assigned the four combinations {+δ,0,+δ}, {-δ,0,+δ},
{+δ,0,-δ}, {-δ,0,-δ} (or equivalently implement a new helper instead of
addSquare for the middle group) so LimePattern (and any 3D splitters using it)
produce one center per edge as intended.
In `@src/amr/tagging/default_hybrid_tagger_strategy.hpp`:
- Around line 169-173: CodeQL reports false "unused local variable" warnings for
the structured bindings Bx_x/Bx_y/.../Bz_z even though they are used in
computing crit; to eliminate the warnings, avoid structured bindings and instead
capture the three components into small arrays or a tuple variable and then
reference elements when building the std::max initializer list. Concretely,
replace the three structured-binding lines that call
field_diff(Bx)/field_diff(By)/field_diff(Bz) with something like auto Bx_vals =
field_diff(Bx), By_vals = field_diff(By), Bz_vals = field_diff(Bz) and then
compute crit = std::max({Bx_vals[0], Bx_vals[1], Bx_vals[2], By_vals[0],
By_vals[1], By_vals[2], Bz_vals[0], Bz_vals[1], Bz_vals[2]}); this preserves the
same values used by crit (no functional change) and avoids the
structured-binding false positive for Bx_x/.../Bz_z.
In `@tests/diagnostic/test-diagnostics_3d.cpp`:
- Around line 1-36: No change required: the test file mirrors the 1D/2D
structure correctly (TYPED_TEST cases: fluid, particles, electromag,
allFromPython), initializes PHARE::SamraiLifeCycle in main, and appends
core::mpi::size() to out_dir for concurrent MPI runs; leave the file as-is.
In `@tests/simulator/test_advance.py`:
- Around line 806-809: The function currently returns either successful_test_nbr
(int) or error_boxes (list), causing a TypeError in the caller that expects an
int; change the behavior so the function always returns an int on success and
raises an AssertionError (including error_boxes details) when error_boxes is
non-empty. Locate the block that checks len(error_boxes) (using the variables
error_boxes and successful_test_nbr) and replace the return of error_boxes with
raising AssertionError(f"Simulation errors: {error_boxes}") so callers like the
one asserting on successful_test_nbr receive an exception with error details
instead of a list.
---
Nitpick comments:
In `@res/amr/splitting.yml`:
- Around line 139-141: The file ends without a trailing newline (the last YAML
node "N_particles_27" block terminates abruptly); update the end of file so
there's a single blank line / newline character after the final line (after
"weight: .104047 .05564") to match other dimension sections and ensure
POSIX-compliant trailing newline.
In `@src/amr/data/particles/refine/split_1d.hpp`:
- Around line 127-141: The constructor for Splitter<DimConst<1>, InterpConst<3>,
RefinedParticlesConst<2>> initializes the wrong base dispatcher alias (uses
SplitPattern_1_2_2_Dispatcher) while the class inherits from
SplitPattern_1_3_2_Dispatcher; change the constructor initializer to initialize
SplitPattern_1_3_2_Dispatcher (keeping the same initializer list { { weight[0],
delta[0] } }) so the initializer name matches the inherited base (refer to the
Splitter<> specialization and the SplitPattern_1_3_2_Dispatcher alias).
In `@src/amr/data/particles/refine/splitter.hpp`:
- Around line 146-165: Add a short comment above the empty tag structs
explaining that PinkPattern, PurplePattern, BrownPattern, LimePattern, and
WhitePattern are color-named tag types used for template specialization (like
the existing BlackPattern) and briefly describe what each color represents
geometrically or which split files specialize them (e.g., which patterns
correspond to 1D/2D/3D splits in split_1d.hpp, split_2d.hpp, split_3d.hpp);
place this comment near the struct declarations (PinkPattern, PurplePattern,
BrownPattern, LimePattern, WhitePattern) so readers immediately understand the
naming convention and where to find their specializations.
In `@src/amr/messengers/field_operate_transaction.hpp`:
- Around line 1-2: The header guard macro currently abbreviates "operate" as
"op" (PHARE_AMR_MESSENGERS_FIELD_OP_TRANSACTION_HPP) which doesn't match the
filename; rename the macro to PHARE_AMR_MESSENGERS_FIELD_OPERATE_TRANSACTION_HPP
in both the `#ifndef/`#define at the top and the corresponding `#endif`
comment/guard at the bottom of field_operate_transaction.hpp so the guard
mirrors the full filename and follows the PHARE_[PATH_WITH_UNDERSCORES]_HPP
pattern.
In `@src/core/errors.hpp`:
- Around line 71-81: The log function currently takes parameters by value
causing unnecessary copies on each call; change the signature of log to take
const std::string& key and const std::string& val (referencing the log method)
and update any declarations/overloads accordingly so errors.emplace(key, val)
and error_count[key] usage continue to work without additional copies; this
avoids repeated allocations on the singleton error path and preserves existing
behavior for errors and error_count maps.
In `@src/core/utilities/mpi_utils.cpp`:
- Around line 48-56: Replace the use of std::cout with std::cerr in the
log_error function when PHARE_ERRORS_USE_MPI_ABORT is enabled so the error
message is written to the unbuffered error stream before calling MPI_Abort;
update the line inside the PHARE_ERRORS_USE_MPI_ABORT branch that currently
prints via std::cout to use std::cerr (keeping the message content and flushing
as now) so the message is reliably emitted even if the process aborts.
In `@tests/functional/harris/harris_3d.py`:
- Around line 34-59: The Simulation is configured with refinement="tagging" but
max_nbr_levels=1 so no refinement can occur; update the Simulation invocation
(symbol: Simulation, parameter: max_nbr_levels) to allow refinement by setting
max_nbr_levels=2 OR, if this is intentionally a smoke test, add a brief comment
next to the Simulation block explaining why tagging_threshold, nesting_buffer
and tag_buffer are present but unused (symbols: tagging_threshold,
nesting_buffer, tag_buffer, refinement) so reviewers understand the intent.
- Around line 69-70: Rename the ambiguous parameter `l` in function S(y, y0, l)
to a clearer name (e.g., `length` or `l_scale`) throughout the file: update the
function signature S(y, y0, l_scale), replace all uses of `l` inside the
function with the new name, and update any calls to S elsewhere in the test (or
imports) to pass the renamed parameter; also update any related comments or
docstrings to reflect the new parameter name.
In `@tests/simulator/per_test.hpp`:
- Around line 118-128: The template parameter name in Simulator3dTest is
inconsistent with the other test suites; change the template declaration
template<typename Simulator> in struct Simulator3dTest to use the same name used
elsewhere (e.g., template<typename SimulatorParam>) so it matches the pattern
used by Simulator1dTest/Simulator2dTest and the Types alias (Simulator3d and
SimulatorTestParam), update any references inside Simulator3dTest to use
SimulatorParam if present, and ensure the TYPED_TEST_SUITE(Simulator3dTest,
Simulator3d) remains valid.
In `@tests/simulator/test_initialization.py`:
- Around line 385-397: The inner function domain currently closes over the outer
variable select which is later reassigned (via select =
pdatas["protons_Fx"].meshgrid(select)), making its binding fragile; change
domain to accept select as an explicit parameter (def domain(patch_data,
select): ...) and update all calls (ni = domain(pdatas["rho"], select),
vxact/vyact/vzact calls) to pass the current select value so the function no
longer relies on a captured variable and is safe if calls are reordered.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (78)
- .github/workflows/cmake_ubuntu.yml
- pyphare/pyphare/pharein/__init__.py
- pyphare/pyphare/pharein/init.py
- pyphare/pyphare/pharein/maxwellian_fluid_model.py
- pyphare/pyphare/simulator/simulator.py
- res/amr/splitting.yml
- res/cmake/def.cmake
- res/cmake/dep/samrai.cmake
- res/cmake/options.cmake
- res/sim/all.txt
- src/amr/data/field/coarsening/coarsen_weighter.hpp
- src/amr/data/field/coarsening/electric_field_coarsener.hpp
- src/amr/data/field/field_data.hpp
- src/amr/data/particles/refine/particles_data_split.hpp
- src/amr/data/particles/refine/split.hpp
- src/amr/data/particles/refine/split_1d.hpp
- src/amr/data/particles/refine/split_2d.hpp
- src/amr/data/particles/refine/split_3d.hpp
- src/amr/data/particles/refine/splitter.hpp
- src/amr/data/tensorfield/tensor_field_data.hpp
- src/amr/level_initializer/hybrid_level_initializer.hpp
- src/amr/messengers/field_operate_transaction.hpp
- src/amr/multiphysics_integrator.hpp
- src/amr/resources_manager/resources_manager.hpp
- src/amr/samrai.cpp
- src/amr/samrai.hpp
- src/amr/solvers/solver_ppc.hpp
- src/amr/tagging/default_hybrid_tagger_strategy.hpp
- src/core/data/grid/grid.hpp
- src/core/data/grid/gridlayout.hpp
- src/core/data/ndarray/ndarray_vector.hpp
- src/core/data/particles/particle.hpp
- src/core/data/particles/particle_array.hpp
- src/core/data/tensorfield/tensorfield.hpp
- src/core/errors.hpp
- src/core/numerics/ion_updater/ion_updater.hpp
- src/core/utilities/meta/meta_utilities.hpp
- src/core/utilities/mpi_utils.cpp
- src/core/utilities/mpi_utils.hpp
- src/core/utilities/point/point.hpp
- src/core/utilities/types.hpp
- src/diagnostic/detail/h5writer.hpp
- src/diagnostic/diagnostic_model_view.hpp
- src/python3/data_wrangler.hpp
- src/simulator/simulator.hpp
- tests/amr/data/particles/refine/test_split.cpp
- tests/core/data/gridlayout/deriv.py
- tests/core/data/gridlayout/test_gridlayout.hpp
- tests/core/data/ndarray/test_main.cpp
- tests/diagnostic/CMakeLists.txt
- tests/diagnostic/job_1d.py.in
- tests/diagnostic/job_2d.py.in
- tests/diagnostic/job_3d.py.in
- tests/diagnostic/test-diagnostics_1d.cpp
- tests/diagnostic/test-diagnostics_2d.cpp
- tests/diagnostic/test-diagnostics_3d.cpp
- tests/functional/harris/harris_2d.py
- tests/functional/harris/harris_3d.py
- tests/simulator/CMakeLists.txt
- tests/simulator/__init__.py
- tests/simulator/advance/CMakeLists.txt
- tests/simulator/advance/test_fields_advance_1d.py
- tests/simulator/advance/test_fields_advance_2d.py
- tests/simulator/advance/test_fields_advance_3d.py
- tests/simulator/advance/test_particles_advance_1d.py
- tests/simulator/advance/test_particles_advance_2d.py
- tests/simulator/advance/test_particles_advance_3d.py
- tests/simulator/initialize/CMakeLists.txt
- tests/simulator/initialize/test_fields_init_2d.py
- tests/simulator/initialize/test_fields_init_3d.py
- tests/simulator/initialize/test_particles_init_3d.py
- tests/simulator/per_test.hpp
- tests/simulator/refined_particle_nbr.py
- tests/simulator/refinement/test_2d_10_core.py
- tests/simulator/test_advance.py
- tests/simulator/test_diagnostics.py
- tests/simulator/test_initialization.py
- tests/simulator/test_restarts.py
💤 Files with no reviewable changes (2)
- tests/simulator/refined_particle_nbr.py
- src/amr/samrai.cpp
✅ Files skipped from review due to trivial changes (1)
- src/diagnostic/detail/h5writer.hpp
🚧 Files skipped from review as they are similar to previous changes (21)
- src/amr/data/field/coarsening/coarsen_weighter.hpp
- res/sim/all.txt
- tests/simulator/advance/test_fields_advance_2d.py
- tests/simulator/initialize/test_particles_init_3d.py
- tests/simulator/advance/test_particles_advance_2d.py
- src/core/numerics/ion_updater/ion_updater.hpp
- tests/simulator/initialize/test_fields_init_2d.py
- tests/simulator/refinement/test_2d_10_core.py
- tests/simulator/advance/test_fields_advance_1d.py
- src/amr/resources_manager/resources_manager.hpp
- tests/diagnostic/test-diagnostics_2d.cpp
- src/amr/data/particles/refine/particles_data_split.hpp
- tests/diagnostic/job_1d.py.in
- src/amr/solvers/solver_ppc.hpp
- tests/simulator/CMakeLists.txt
- tests/simulator/init.py
- tests/simulator/advance/test_particles_advance_1d.py
- src/simulator/simulator.hpp
- src/amr/data/particles/refine/split.hpp
- res/cmake/options.cmake
- tests/core/data/gridlayout/deriv.py
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/core/utilities/types.hpp (1)
241-247:⚠️ Potential issue | 🟡 Minor
`get_env_as` has an asymmetric fallback contract. When the key is absent `t` is returned, but when the key is present and unparseable `from_string<T>` throws `std::runtime_error` rather than falling back to `t`. If this is intentional (fail-fast on misconfiguration), it should be documented; if not, a `try/catch` returning `t` on parse failure would give a consistent "get-with-default" contract.
🛠️ Proposed fix (consistent get-with-default)

```diff
 template<typename T>
 NO_DISCARD inline T get_env_as(std::string const& key, T const& t)
 {
     if (auto e = get_env(key))
-        return from_string<T>(*e);
+    {
+        try { return from_string<T>(*e); }
+        catch (...) { return t; }
+    }
     return t;
 }
```
Verify each finding against the current code and only fix it if needed. In `@src/core/utilities/types.hpp` around lines 241 - 247, The get_env_as template currently returns the provided default when the env key is missing but lets from_string<T> throw if parsing fails, producing an asymmetric contract; update get_env_as (the template function named get_env_as) to catch parsing errors (catch std::exception or std::runtime_error thrown by from_string<T>) and return the default parameter t on parse failure to provide a consistent "get-with-default" behavior, or alternatively add a clear documentation comment on get_env_as indicating that parsing errors are intentionally propagated (choose one approach and implement it consistently).src/core/errors.hpp (1)
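The symmetric get-with-default contract is easy to sketch outside C++; a Python analogue (the helper name is borrowed from the C++ function, and parsing via the default's type is an assumption of this sketch, not PHARE API):

```python
import os


def get_env_as(key, default):
    """Return the env var parsed as type(default), falling back to
    `default` both when the key is absent and when parsing fails."""
    raw = os.environ.get(key)
    if raw is None:
        return default
    try:
        return type(default)(raw)
    except (TypeError, ValueError):
        return default  # symmetric fallback instead of raising


os.environ["PHARE_DEMO_N"] = "4"
os.environ["PHARE_DEMO_BAD"] = "not-a-number"
print(get_env_as("PHARE_DEMO_N", 1))            # 4
print(get_env_as("PHARE_DEMO_BAD", 1))          # 1: unparseable -> default
print(get_env_as("PHARE_DEMO_MISSING_XYZ", 1))  # 1: absent -> default
```

Either contract is defensible; the point is that whichever one is chosen should hold on both the absent-key and bad-value paths.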
71-81:⚠️ Potential issue | 🟡 Minor
`errors` map silently drops subsequent values for the same key. When `log()` is called multiple times with the same `key` but different `val`, only the value from the first call is retained in `errors`. The count increments correctly, but any evolving context (e.g., different particle IDs or patch paths for the same error type) is lost. If the intent is strict deduplication by key this is fine, but it should be noted that diagnostics from `errors` will always reflect only the first occurrence's value.
Verify each finding against the current code and only fix it if needed. In `@src/core/errors.hpp` around lines 71 - 81, The current log() only stores the first val for a given key because of the if(!errors.count(key)) path; change it so the latest context is preserved by always assigning errors[key] = val (instead of only emplacing on first sight) while still initializing and incrementing error_count as needed (use error_count.emplace(key,0) if missing, then error_count[key]++). Update the function log to always overwrite errors[key] with the new val and ensure error_count initialization and increment logic (symbols: function log, maps errors and error_count) remain correct.src/amr/data/particles/refine/particles_data_split.hpp (1)
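The first-occurrence-wins behaviour maps directly onto `dict.setdefault`; a small Python sketch of the trade-off described above (names mirror the C++ members, the data strings are invented):

```python
from collections import Counter

errors = {}
error_count = Counter()


def log(key, val):
    # Mirrors the C++ emplace semantics: only the first value per key
    # is kept, while the counter keeps incrementing.
    errors.setdefault(key, val)
    error_count[key] += 1


log("bad_particle", "patch A, id 7")
log("bad_particle", "patch B, id 42")  # value silently dropped
print(errors["bad_particle"])          # patch A, id 7
print(error_count["bad_particle"])     # 2
```

If the latest context is preferred instead, replacing `setdefault` with a plain assignment (`errors[key] = val`) gives last-occurrence-wins at the same cost.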
181-183:⚠️ Potential issue | 🟠 Major
Replace the captured lambda with an explicit functor for predictable performance in this hot particle path.
Lines 181-183 use a capture (`[&destinationBox]`) inside nested loops in particle refinement. In this codebase, runtime particle operations should use explicit non-captured functors instead of lambdas with captures to avoid performance issues and improve inlining in hot paths.
♻️ Suggested refactor

```diff
+    struct DestinationBoxContains
+    {
+        SAMRAI::hier::Box const& destinationBox;
+
+        template<typename ParticleT>
+        bool operator()(ParticleT const& particle) const
+        {
+            return isInBox(destinationBox, particle);
+        }
+    };
+
     for (auto const& destinationBox : destBoxes)
     {
         auto const splitBox = getSplitBox(destinationBox);
-
-        auto const isInDest = [&destinationBox](auto const& particle) {
-            return isInBox(destinationBox, particle);
-        };
+        auto const isInDest = DestinationBoxContains{destinationBox};
```
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/particles_data_split.hpp` around lines 181 - 183, The inline lambda "isInDest" that captures destinationBox should be replaced with an explicit non-capturing functor struct (e.g., IsInDestFunctor) to avoid capture overhead in the hot particle path: define a struct with a constructor taking destinationBox (by const reference or pointer) and an operator()(const ParticleType& particle) that returns isInBox(destinationBox, particle), then replace the lambda usage with an instance of that functor (constructed once before the loops) so the code uses IsInDestFunctor(destinationBox) instead of the captured lambda.src/amr/data/particles/refine/split_1d.hpp (1)
127-141:⚠️ Potential issue | 🟡 Minor
Pre-existing naming mismatch: constructor references `SplitPattern_1_2_2_Dispatcher` instead of `SplitPattern_1_3_2_Dispatcher`.
Line 135 initializes with `SplitPattern_1_2_2_Dispatcher` but the base class (line 132) is `SplitPattern_1_3_2_Dispatcher`. Both happen to alias the same type (`PatternDispatcher<PinkPattern<DimConst<1>>>`), so there's no functional bug, but it's a confusing copy-paste artifact.
Suggested fix

```diff
     constexpr Splitter()
-        : SplitPattern_1_2_2_Dispatcher{{weight[0], delta[0]}}
+        : SplitPattern_1_3_2_Dispatcher{{weight[0], delta[0]}}
     {
     }
```
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/split_1d.hpp` around lines 127 - 141, The constructor for Splitter<DimConst<1>, InterpConst<3>, RefinedParticlesConst<2>> initializes the wrong dispatcher alias (SplitPattern_1_2_2_Dispatcher) which mismatches the base class SplitPattern_1_3_2_Dispatcher; change the member initializer to use SplitPattern_1_3_2_Dispatcher so the constructor consistently initializes the same base/dispatcher type (refer to Splitter<DimConst<1>, InterpConst<3>, RefinedParticlesConst<2>> and the alias SplitPattern_1_3_2_Dispatcher and update the initializer list accordingly).res/cmake/options.cmake (1)
139-139:⚠️ Potential issue | 🟡 Minor
Remove stale reference to undefined `lowResourceTests` option.
Line 139 references `${lowResourceTests}`, but this option is no longer defined in this file. The option was removed but the reference was not cleaned up. Since `lowResourceTests` is still used in `pyphare/pyphare_tests/test_pharesee/CMakeLists.txt:11`, either:

- Remove the line if the feature is no longer supported, or
- Restore the `option(lowResourceTests ...)` definition if still needed.

Note: Replacing with `useExceptionsInsteadOfMPIAbort` is not appropriate as they serve different purposes.
Verify each finding against the current code and only fix it if needed. In `@res/cmake/options.cmake` at line 139, The message line referencing ${lowResourceTests} ("message(\"Exclude heavier tests : \" ${lowResourceTests})") is stale; either remove that message entirely if lowResourceTests is no longer supported, or restore the option by adding an option(lowResourceTests "Enable low-resource test subset" OFF) (or similar description and default) alongside the other option(...) declarations so the variable is defined and the existing usage in pyphare/pyphare_tests/test_pharesee/CMakeLists.txt:11 continues to work; pick one of these two fixes and apply it consistently across the CMake options.src/core/data/grid/gridlayout.hpp (1)
784-799:⚠️ Potential issue | 🟠 Major
Fix mixed signed/unsigned arithmetic in the subtraction before assignment.
At line 795, the subexpression `(AMRBox_.lower[i] - localStart)` performs `int - std::uint32_t`, promoting the `int` to `std::uint32_t`. If the `int` value is smaller than `localStart` (as expected semantically), the subtraction wraps, producing a large unsigned value. The subsequent subtraction `AMRPoint[i] - <large_uint32_t>` can then underflow before assignment to `int`, making the assertion unreliable. Use explicit `int64_t` casts to avoid unwanted promotion:
Proposed fix

```diff
 template<typename T>
 NO_DISCARD auto AMRToLocal(Point<T, dimension> const& AMRPoint) const
 {
     static_assert(std::is_integral_v<T>, "Error, must be MeshIndex (integral Point)");
     Point<std::uint32_t, dimension> localPoint;
@@
     for (auto i = 0u; i < dimension; ++i)
     {
-        int local = AMRPoint[i] - (AMRBox_.lower[i] - localStart);
-        assert(local >= 0);
-        localPoint[i] = local;
+        auto const offset = static_cast<std::int64_t>(AMRBox_.lower[i])
+                            - static_cast<std::int64_t>(localStart);
+        auto const local = static_cast<std::int64_t>(AMRPoint[i]) - offset;
+        assert(local >= 0);
+        localPoint[i] = static_cast<std::uint32_t>(local);
     }
     return localPoint;
 }
```
Verify each finding against the current code and only fix it if needed. In `@src/core/data/grid/gridlayout.hpp` around lines 784 - 799, The subtraction in AMRToLocal causes unsigned promotion (AMRBox_.lower[i] - localStart) and potential wrap; fix by doing all intermediate arithmetic in a signed wider type: compute a signed int64_t delta = static_cast<int64_t>(AMRBox_.lower[i]) - static_cast<int64_t>(localStart) (using the physicalStartIndex(QtyCentering::dual, Direction::X) value stored in localStart), then compute a signed int64_t localSigned = static_cast<int64_t>(AMRPoint[i]) - delta and assign localPoint[i] = static_cast<std::uint32_t>(localSigned) after asserting localSigned >= 0; update uses of AMRPoint, AMRBox_.lower, localStart, localPoint and the function AMRToLocal accordingly.
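The promotion hazard can be reproduced numerically; a Python sketch emulating C++ `std::uint32_t` wrap-around with modulo arithmetic (the `lower`/`local_start` values are illustrative stand-ins for `AMRBox_.lower[i]` and `localStart`, not real layout data):

```python
U32 = 2**32


def u32(x):
    # Emulate C++ unsigned 32-bit wrap-around.
    return x % U32


# int - uint32_t: the int operand is converted to uint32_t first
# (usual arithmetic conversions), so a "negative" result wraps.
lower = 2        # e.g. AMRBox_.lower[i] (int)
local_start = 5  # e.g. localStart (std::uint32_t)
promoted = u32(lower - local_start)
print(promoted)  # 4294967293, not -3

# Doing the arithmetic in a wide signed type avoids the wrap:
offset = lower - local_start  # plain Python int, like int64_t here
print(offset)                 # -3
```

This is exactly why the proposed fix casts both operands to `std::int64_t` before subtracting.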
♻️ Duplicate comments (6)
src/amr/data/particles/refine/split_3d.hpp (1)
66-87: `addSquare(8, 0)` produces face-center midpoints, not vertical edge midpoints.
This was raised in a prior review and the discussion is still open. The "1 per edge – centered" contract requires 12 unique edge midpoints, but the middle group (`y=0`) generates `{0, 0, ±δ}` and `{±δ, 0, 0}`, which are 4 of the 6 face-center positions (overlapping with `PinkPattern`), instead of the 4 vertical edge midpoints `{±δ, 0, ±δ}`.
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/split_3d.hpp` around lines 66 - 87, The middle group call addSquare(8, 0) currently writes face-center midpoints (0,0,±δ) and (±δ,0,0) which duplicates PinkPattern; instead the LimePattern must produce the four vertical edge midpoints {±δ,0,±δ}. Replace the addSquare(8, 0) call (or add a new helper) so that the entries at Super::deltas_[8..11] are set to { delta, 0, delta }, { delta, 0, -delta }, { -delta, 0, delta }, { -delta, 0, -delta } respectively (keeping existing top and bottom addSquare calls intact) so LimePattern<DimConst<3>> yields 12 unique edge-centered deltas.pyphare/pyphare/pharein/maxwellian_fluid_model.py (1)
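The 12-edge-midpoint contract is easy to enumerate; a Python sketch (independent of the C++ pattern classes) that derives the expected deltas for a cube of half-width δ — an edge midpoint has exactly one zero coordinate and two coordinates at ±δ:

```python
from itertools import product


def edge_midpoints(delta):
    # An edge midpoint of the cube [-d, d]^3 has exactly one coordinate
    # equal to 0 and the other two equal to +/-d.
    pts = set()
    for zero_axis in range(3):
        for signs in product((-1, 1), repeat=2):
            p = [None, None, None]
            p[zero_axis] = 0
            others = [a for a in range(3) if a != zero_axis]
            for axis, s in zip(others, signs):
                p[axis] = s * delta
            pts.add(tuple(p))
    return pts


pts = edge_midpoints(1)
print(len(pts))  # 12 unique midpoints
# The four "vertical" edge midpoints the review asks for, {+/-d, 0, +/-d}:
vertical = {p for p in pts if p[1] == 0}
print(sorted(vertical))  # [(-1, 0, -1), (-1, 0, 1), (1, 0, -1), (1, 0, 1)]
# Face centers such as (0, 0, 1) are *not* edge midpoints:
print((0, 0, 1) in pts)  # False
```

Any middle-group output containing `{0, 0, ±δ}` therefore fails this enumeration, which is the defect the comment describes.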
270-270:⚠️ Potential issue | 🔴 Critical
`domain[0]` still used in `validate2d` for `dual_right`/`primal_right`; should be `domain[idir]`.
Lines 270 and 274 were not updated (not marked as changed), so `dual_right` and `primal_right` still use `domain[0]` regardless of direction. For directions Y (idir=1) the ghost coordinate range is computed against the X domain extent, making the Y and Z periodicity checks incorrect. This was flagged in a previous review and reportedly fixed in commit `ca4a5bc`, but the fix was not applied to these two lines.
🐛 Proposed fix

```diff
-            dual_right = dual_left + domain[0]
+            dual_right = dual_left + domain[idir]
             primal_left = (
                 np.arange(-nbrPrimalGhosts, nbrPrimalGhosts) * sim.dl[idir]
             )
-            primal_right = primal_left + domain[0]
+            primal_right = primal_left + domain[idir]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py` at line 270, In validate2d, dual_right and primal_right are incorrectly computed using domain[0] regardless of direction; change the calculations that set dual_right (currently using domain[0]) and primal_right (also using domain[0]) to use domain[idir] so the ghost coordinate range checks respect the selected direction (refer to the validate2d function and the dual_right/primal_right assignments).tests/simulator/test_advance.py (1)
807-809:⚠️ Potential issue | 🔴 Critical
Mixed return type (`list` vs `int`) still present; `TypeError` in caller when errors exist.
Line 808 (`return error_boxes`) was not changed and still returns a list. The caller at lines 873–880 unconditionally calls `self.assertGreater(successful_test_nbr, ...)` on the return value, which raises `TypeError: '>' not supported between instances of 'list' and 'int'` in Python 3 when any `error_boxes` are accumulated. The proposed fix from the previous review was not applied.
🐛 Proposed fix

```diff
         if len(error_boxes):
-            return error_boxes
+            self.assertEqual(error_boxes, [], "level ghost subcycle interpolation errors")
         return successful_test_nbr
```
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 807 - 809, The test currently returns a list when errors exist (error_boxes) which causes a TypeError where the caller expects an int; update the failing branch to return an integer (e.g., 0) instead of error_boxes so the function always returns the same type as successful_test_nbr; locate the conditional using error_boxes and successful_test_nbr in tests/simulator/test_advance.py and replace the return of error_boxes with a numeric failure value (or otherwise ensure the function always returns successful_test_nbr or an int) so callers like the assertGreater check work without type errors.src/core/data/ndarray/ndarray_vector.hpp (2)
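The failure mode is plain Python 3 typing; a reduced sketch (hypothetical helper functions, not the real test body) contrasting the list-returning path with an assertion-raising one:

```python
def run_checks_bad(error_boxes, successful_test_nbr):
    if len(error_boxes):
        return error_boxes  # a list leaks to an int-expecting caller
    return successful_test_nbr


def run_checks_good(error_boxes, successful_test_nbr):
    # Always an int on success; failures carry details in the exception.
    assert not error_boxes, f"Simulation errors: {error_boxes}"
    return successful_test_nbr


result = run_checks_bad(["box0"], 3)
try:
    assert result > 0  # list > int is a TypeError in Python 3
except TypeError as e:
    print("caller blows up with an unrelated error:", e)

try:
    run_checks_good(["box0"], 3)
except AssertionError as e:
    print("caller sees the actual details:", e)
```

Raising inside the helper keeps the return type stable and surfaces the error boxes where the failure happened rather than at a later comparison.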
364-373:⚠️ Potential issue | 🟠 MajorAdd mask bounds validation before unsigned index arithmetic.
Loops using
shape[d] - 1 - max_can underflow when mask inputs are invalid, leading to huge loop bounds and out-of-range writes/reads.Proposed fix
class NdArrayMask { public: + template<typename Shape> + void validate_(Shape const& shape) const + { + if (min_ > max_) + throw std::out_of_range("NdArrayMask: min must be <= max"); + for (std::size_t d = 0; d < shape.size(); ++d) + if (max_ >= shape[d]) + throw std::out_of_range("NdArrayMask: mask out of bounds"); + } @@ void fill1D(Array& array, typename Array::type val) const { auto shape = array.shape(); + validate_(shape); @@ void fill2D(Array& array, typename Array::type val) const { auto shape = array.shape(); + validate_(shape); @@ void fill3D(Array& array, typename Array::type val) const { auto shape = array.shape(); + validate_(shape); @@ NO_DISCARD auto nCells(Array const& array) { auto shape = array.shape(); + validate_(shape);As per coding guidelines:
`**/*.hpp`: Review the C++ code, point out issues relative to principles of clean code, expressiveness, and performance.

```shell
#!/bin/bash
# Check for unsigned-bound math usage and whether guard checks exist.
rg -n -C2 'shape\[[0-9]+\]\s*-\s*1\s*-\s*max_|for \(auto .*<= .*shape' src/core/data/ndarray/ndarray_vector.hpp
rg -n 'min_ > max_|max_ >= shape|validate_' src/core/data/ndarray/ndarray_vector.hpp
```

Also applies to: 376-444, 447-469
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/data/ndarray/ndarray_vector.hpp` around lines 364 - 373, The loops in fill1D use unsigned index arithmetic (shape[0] - 1 - max_) which can underflow if mask values (min_, max_) are invalid; add explicit bounds validation at the start of fill1D (and the similar routines in the same file covering ranges 376-444 and 447-469): check that min_ <= max_ and that max_ < shape[0] (or equivalently min_ < shape[0]), and return or throw a clear error on violation; after validation compute safe signed indices (or cast validated values to a size_t) before the for-loops so the expressions like shape[0] - 1 - max_ cannot underflow and prevent out-of-range reads/writes.
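The wrap-around that motivates the validation above can be sketched by emulating `std::uint32_t` arithmetic in Python (the `shape0`/`max_` values are illustrative, not taken from the code):

```python
UINT32_MOD = 2**32


def u32(x):
    # emulate C++ unsigned 32-bit wrap-around on subtraction
    return x % UINT32_MOD


shape0 = 5   # array extent in one dimension
max_ = 7     # invalid mask: max_ >= shape0

# the loop bound `shape[0] - 1 - max_` computed in unsigned arithmetic
bound = u32(shape0 - 1 - max_)
print(bound)  # 4294967293: a loop running up to this bound reads far out of range
```

With valid inputs the same expression stays small (`u32(5 - 1 - 2) == 2`), which is why the bug only shows up when the mask is out of bounds.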
80-92: ⚠️ Potential issue | 🔴 Critical
`MaskedView` can hold a dangling mask when `operator[]` is passed a temporary.

At Line 121, `mask_` is a reference, but Lines 203-207 forward temporaries (e.g. `array[Mask{0}]`). That creates UB after construction.

Proposed fix
```diff
+#include <type_traits>
@@
 private:
     Array& array_;
     std::array<std::uint32_t, dimension> shape_;
-    Mask const& mask_;
+    std::decay_t<Mask> mask_;
```

```shell
#!/bin/bash
# Verify the risky pattern is still present in current code.
rg -n -C2 'class MaskedView|Mask const& mask_|MaskedView\(Array& array, Mask&& mask\)|operator\[\]\(Mask&& mask\)' src/core/data/ndarray/ndarray_vector.hpp
```

Also applies to: 121-121, 203-207
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/data/ndarray/ndarray_vector.hpp` around lines 80 - 92, MaskedView currently stores mask_ as a reference which can dangle when operator[](Mask&&) forwards a temporary; change mask_ to own the Mask (replace the reference member mask_ with a value-type Mask) and update the two constructors MaskedView(Array& array, Mask const& mask) and MaskedView(Array& array, Mask&& mask) to initialize the owned mask (e.g., mask_{mask} and mask_{std::move(mask)} respectively); also review operator[](Mask&&) to continue forwarding temporaries safely to the value-taking constructor (or accept Mask by value) so no dangling reference can occur.src/core/data/grid/gridlayout.hpp (1)
808-817: No separate issue here; this inherits the point-conversion root cause.
`AMRToLocal(Box<...>)` delegates to `AMRToLocal(Point<...>)`, so fixing Line 795 addresses this path too.
Verify each finding against the current code and only fix it if needed. In `@src/core/data/grid/gridlayout.hpp` around lines 808 - 817, AMRToLocal(Box<T, dimension>) simply delegates to AMRToLocal(Point<...>), so the real fix is to correct the point-conversion behavior: ensure AMRToLocal(Point<T, dimension> const&) properly converts integral MeshIndex type T to a Box/Point with std::uint32_t coordinates (preserving semantics and clamping if needed) and returns the matching Point<std::uint32_t, dimension> so the Box overload works; also make sure AMRToLocal(Box<T, dimension>) calls the point overload unambiguously (e.g., AMRToLocal<Point<T, dimension>> or rely on return-type-compatible overload) and keep the static_assert on T being integral (as in AMRToLocal(Box...)) so both AMRToLocal(Point...) and AMRToLocal(Box...) consistently handle MeshIndex-to-uint32_t conversion.
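The signed-intermediate conversion suggested for `AMRToLocal` can be modelled in plain Python (a sketch of the index arithmetic only; the names and values are hypothetical, and the real code operates on `Point`/`Box` types):

```python
def amr_to_local(amr_point, amr_lower, local_start):
    # Do the intermediate arithmetic in plain (signed) integers, then
    # check non-negativity before treating the result as an unsigned index.
    local = []
    for p, lo in zip(amr_point, amr_lower):
        delta = lo - local_start        # signed offset of the AMR box
        value = p - delta               # local index, still signed
        assert value >= 0, "index would wrap if cast to unsigned"
        local.append(value)
    return local


print(amr_to_local(amr_point=[10, 12], amr_lower=[8, 8], local_start=5))  # [7, 9]
```

Keeping the subtraction signed and asserting before the final cast is exactly the ordering the prompt describes: compute `delta` wide and signed, then convert once at the end.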
🧹 Nitpick comments (13)
src/amr/multiphysics_integrator.hpp (1)
504-504: Consider keeping integrator hot-path logs at a less verbose scope.

Line 504 and Line 573 are both in high-frequency execution paths; using `PHARE_LOG_SCOPE(1, ...)` here can materially increase logging overhead/noise in large 3D runs. Consider keeping these at level `3` (or guarding level-1 logging behind a debug/profiling toggle).

♻️ Suggested adjustment
```diff
-        PHARE_LOG_SCOPE(1, "Multiphys::advanceLevel");
+        PHARE_LOG_SCOPE(3, "Multiphys::advanceLevel");
 ...
-        PHARE_LOG_SCOPE(1, "Multiphys::standardLevelSynchronization");
+        PHARE_LOG_SCOPE(3, "Multiphys::standardLevelSynchronization");
```

As per coding guidelines
- PHARE_LOG_SCOPE(1, "Multiphys::advanceLevel"); + PHARE_LOG_SCOPE(3, "Multiphys::advanceLevel"); ... - PHARE_LOG_SCOPE(1, "Multiphys::standardLevelSynchronization"); + PHARE_LOG_SCOPE(3, "Multiphys::standardLevelSynchronization");As per coding guidelines
**/*.hpp: "Review the C++ code, point out issues relative to principles of clean code, expressiveness, and performance."Also applies to: 573-574
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/amr/multiphysics_integrator.hpp` at line 504, The hot-path logging call PHARE_LOG_SCOPE(1, "Multiphys::advanceLevel") should be lowered to a less verbose scope to avoid overhead; replace the level 1 scope with level 3 (PHARE_LOG_SCOPE(3, ...)) or wrap the level-1 call behind a debug/profiling toggle so both occurrences in the Multiphys::advanceLevel code path use the reduced verbosity or conditional logging; update all matching PHARE_LOG_SCOPE calls in that function (the ones logging "Multiphys::advanceLevel") accordingly.src/python3/data_wrangler.hpp (1)
148-148: Redundant `MPI_Comm_size` call: `mpi_size` is already in scope.
`core::mpi::max(input.size())` uses the default `mpi_size = 0`, which triggers an internal `MPI_Comm_size(MPI_COMM_WORLD, &mpi_size)` (see `src/core/utilities/mpi_utils.hpp` lines 28–35). Since `mpi_size` was computed on line 123 and is still used by the `collect` lambda, passing it here avoids the redundant call.

♻️ Proposed fix
```diff
-    auto const max = core::mpi::max(input.size());
+    auto const max = core::mpi::max(input.size(), mpi_size);
```

Based on learnings, PhilipDeegan has a preference for minimal and efficient code, avoiding unnecessary operations where the value is already available.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/python3/data_wrangler.hpp` at line 148, The call to core::mpi::max(input.size()) causes an unnecessary MPI_Comm_size because it uses the default mpi_size; replace that with core::mpi::max(input.size(), mpi_size) (using the existing mpi_size computed earlier) so the function reuses the already-known mpi_size and avoids a redundant MPI_Comm_size; update the invocation near the collect lambda that uses input.size() to pass mpi_size explicitly.tests/simulator/test_diagnostics.py (3)
253-272: Minor naming inconsistency: `dim` vs `ndim`.
`test_dump_elapsed_time_diags` uses `dim` while the reworked `test_dump_diags` and `unique_diag_dir_for_test_case` use `ndim`. Not a functional issue since this test isn't parameterized, but aligning the naming would improve readability.
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_diagnostics.py` around lines 253 - 272, The test uses the parameter name dim while other tests and helpers use ndim; rename the parameter and all local uses in test_dump_elapsed_time_diags to ndim to match test_dump_diags and unique_diag_dir_for_test_case (update the function signature, the for loop that builds simInput entries, b0 construction, the call to unique_diag_dir_for_test_case, and the assert checking simulation.cells). Keep behavior identical—only rename the identifier for consistency.
218-240: Consider the test matrix size with 3D.

With 3 patch-size permutations × `supported_dimensions()` × 3 interpolation orders, the test matrix could be 27+ cases, each spinning up a full simulation. The 3D cases (40³ cells, 2 populations, ppc=10) may be heavy for CI. If test time becomes a concern, consider gating 3D or higher interp orders behind a marker/environment variable.
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_diagnostics.py` around lines 218 - 240, The test_matrix for test_dump_diags generates many heavy 3D cases; reduce CI load by gating expensive combinations (3D and/or high interp orders) — update test_dump_diags to skip or mark those cases using pytest markers or a skipif that checks an env var (e.g., CI_RUN_HEAVY) and reference the existing helpers/identifiers: test_dump_diags, permute, supported_dimensions (or ndim), interp, dump_all_diags, config, and unique_diag_dir_for_test_case; implement a conditional that either skips when ndim==3 and patch sizes/largest_patch_size lead to large grids or only runs those when an env var/marker (e.g., "run_heavy_tests") is set so CI runs the lighter matrix by default.
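One stdlib-only way to gate the heavy cases, sketched with `unittest` and a hypothetical `PHARE_RUN_HEAVY` environment variable (the project may prefer a pytest marker; the test-case name is illustrative):

```python
import os
import unittest

RUN_HEAVY = bool(os.environ.get("PHARE_RUN_HEAVY"))


class DumpDiags3d(unittest.TestCase):
    @unittest.skipUnless(RUN_HEAVY, "heavy 3D diagnostic dump; set PHARE_RUN_HEAVY=1")
    def test_dump_diags_3d(self):
        self.assertTrue(True)  # placeholder for the real 3D simulation run


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(DumpDiags3d).run(result)
# with PHARE_RUN_HEAVY unset, the case is reported as skipped rather than executed
print(len(result.skipped))
```

The lighter matrix then runs by default, and CI jobs that export the variable exercise the full 3D set.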
5-11: Nit: redundant `copy` imports.

Both `import copy` (line 5) and `from copy import deepcopy` (line 11) are present. Line 256 uses `copy.deepcopy(simArgs)` while line 135 uses the bare `deepcopy`. Consider using only `from copy import deepcopy` and replacing `copy.deepcopy` on line 256 with `deepcopy` to be consistent.
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_diagnostics.py` around lines 5 - 11, Remove the redundant import by deleting "import copy" and keep "from copy import deepcopy"; then replace the explicit "copy.deepcopy(simArgs)" call with the bare "deepcopy(simArgs)" so all deep-copy usage is consistent (search for copy.deepcopy in tests/simulator/test_diagnostics.py and update it to use deepcopy).src/core/utilities/mpi_utils.cpp (1)
57-65: Consider using `std::cerr` instead of `std::cout` for the abort path.

`core::Errors::instance().log()` (the non-abort path, Line 63) writes to `std::cerr`, but the abort path on Line 60 writes to `std::cout`. `std::cerr` is semantically correct for error output and is unbuffered by default, making it more reliable in crash/abort scenarios even without `std::endl`. The inconsistency could also confuse log capture pipelines that only redirect stderr.

♻️ Proposed fix
```diff
-        std::cout << "MPI ABORT: PHARE ERROR: " << key << " " << val << std::endl;
+        std::cerr << "MPI ABORT: PHARE ERROR: " << key << " " << val << std::endl;
```
Verify each finding against the current code and only fix it if needed. In `@src/core/utilities/mpi_utils.cpp` around lines 57 - 65, The abort path in log_error (guarded by PHARE_ERRORS_USE_MPI_ABORT) prints to std::cout while the non-abort path uses core::Errors::instance().log() which writes to std::cerr; change the abort path to write the same error message to std::cerr (instead of std::cout) and keep the existing flush (std::endl) before calling MPI_Abort(MPI_COMM_WORLD, -1) so error output goes to stderr and matches core::Errors::instance().log().src/core/numerics/ion_updater/ion_updater.hpp (2)
261-261: Magic number `3` in `grow` call.

The hardcoded growth radius `3` has no explanatory name. Even in a diagnostic context, a named constant clarifies intent (e.g., `partGhostWidth` which already exists on `UpdaterSelectionBoxing`, or a dedicated `debugSearchRadius`).

♻️ Suggested named constant
```diff
-        auto const nearbyBox = grow(Box(particle.iCell, particle.iCell), 3);
+        constexpr int debugSearchRadius = 3;
+        auto const nearbyBox = grow(Box(particle.iCell, particle.iCell), debugSearchRadius);
```
Verify each finding against the current code and only fix it if needed. In `@src/core/numerics/ion_updater/ion_updater.hpp` at line 261, Replace the magic literal 3 in the grow call with a named constant: introduce a descriptive constant (e.g., debugSearchRadius) and set it to the existing UpdaterSelectionBoxing partGhostWidth where appropriate (or to that value if accessible via UpdaterSelectionBoxing::partGhostWidth), then call grow(Box(particle.iCell, particle.iCell), debugSearchRadius) so the intent is clear; update the nearbyBox creation site (symbol nearbyBox, function grow and type Box) to use that named constant.
159-162: Consider replacing the captured lambda with a captureless functor (optional, debug-only path).
outsideGhostBoxfollows the same lambda-with-capture pattern as the existing sibling selectors. Because it is only ever invoked insidePHARE_DEBUG_DO, there is no runtime performance concern here — but for consistency with the broader codebase direction, a captureless functor struct would align better.♻️ Suggested captureless functor form
- Selector_t const outsideGhostBox = [ghostBox = ghostBox](auto& particleRange) { - return particleRange.array().partition( - particleRange, [&](auto const& cell) { return !isIn(cell, ghostBox); }); - }; + struct OutsideGhostBoxSelector { + Box_t const ghostBox; + auto operator()(auto& particleRange) const { + return particleRange.array().partition( + particleRange, [this](auto const& cell) { return !isIn(cell, ghostBox); }); + } + }; + OutsideGhostBoxSelector const outsideGhostBox{ghostBox};Based on learnings: "In the PHARE codebase, for runtime particle-based operations, avoid lambdas with captures due to performance issues… runtime particle operations benefit from using explicit non-captured functor classes instead of lambdas with capture." The concern is moot on the hot path here since
outsideGhostBoxis debug-only, but the convention still applies.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/numerics/ion_updater/ion_updater.hpp` around lines 159 - 162, Replace the capturing lambda outsideGhostBox with a captureless functor: define a struct OutsideGhostBox that stores a const reference to ghostBox, implements operator()(auto& particleRange) to call particleRange.array().partition(particleRange, Predicate) and define a small captureless Predicate functor (e.g., IsOutsideGhostBoxCell) holding the same ghostBox reference whose operator()(auto const& cell) returns !isIn(cell, ghostBox); then construct Selector_t const outsideGhostBox = OutsideGhostBox{ghostBox} and use it in the same PHARE_DEBUG_DO invocation. This keeps semantics identical (uses isIn, particleRange.array().partition, and ghostBox) but removes the lambda capture.src/amr/tagging/default_hybrid_tagger_strategy.hpp (1)
154-167: Hoist `field_diff` out of the innermost loop to eliminate unnecessary recreation in the hot path.

The lambda is rebuilt for every voxel in the triple-nested loop. Define it once before the `iTag_z` loop and pass `ix`, `iy`, `iz` as parameters instead of capturing them, keeping the inner loop leaner and avoiding repeated lambda object allocation.

♻️ Proposed refactor
- for (auto iTag_z = 0u, iz = start_z; iTag_z <= end_z; ++iz, ++iTag_z) - { - auto const field_diff = [&](auto const& F) { + auto const field_diff = [](auto const& F, auto ix, auto iy, auto iz) { auto const delta_2x = std::abs(F(ix + 2, iy, iz) - F(ix, iy, iz)); auto const delta_2y = std::abs(F(ix, iy + 2, iz) - F(ix, iy, iz)); auto const delta_2z = std::abs(F(ix, iy, iz + 2) - F(ix, iy, iz)); auto const delta_x = std::abs(F(ix + 1, iy, iz) - F(ix, iy, iz)); auto const delta_y = std::abs(F(ix, iy + 1, iz) - F(ix, iy, iz)); auto const delta_z = std::abs(F(ix, iy, iz + 1) - F(ix, iy, iz)); auto const criter_x = delta_2x / (1 + delta_x); auto const criter_y = delta_2y / (1 + delta_y); auto const criter_z = delta_2z / (1 + delta_z); return std::make_tuple(criter_x, criter_y, criter_z); - }; + }; + + for (auto iTag_z = 0u, iz = start_z; iTag_z <= end_z; ++iz, ++iTag_z) + { - auto const& [Bx_x, Bx_y, Bx_z] = field_diff(Bx); - auto const& [By_x, By_y, By_z] = field_diff(By); - auto const& [Bz_x, Bz_y, Bz_z] = field_diff(Bz); + auto const& [Bx_x, Bx_y, Bx_z] = field_diff(Bx, ix, iy, iz); + auto const& [By_x, By_y, By_z] = field_diff(By, ix, iy, iz); + auto const& [Bz_x, Bz_y, Bz_z] = field_diff(Bz, ix, iy, iz);Also applies to: 169-171
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/amr/tagging/default_hybrid_tagger_strategy.hpp` around lines 154 - 167, The lambda field_diff is being recreated per-voxel inside the triple loop; move its definition out of the innermost loop (define it once above the iTag_z loop) and change it to a small free/static lambda or helper function that accepts (ix, iy, iz, F) as parameters instead of capturing ix/iy/iz, then call it from the inner loop to avoid per-iteration allocation; do the same refactor for the similar lambdas referenced around lines 169-171 so all per-voxel computations reuse a single callable and only pass indices/fields as arguments.tests/simulator/test_advance.py (1)
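The hoisted, parameterized form of the tagging criterion can be sketched in Python/NumPy (a model of the stencil only; the array layout and values are hypothetical, while the criterion shape mirrors the C++ above):

```python
import numpy as np


def field_diff(F, ix, iy, iz):
    # second- and first-difference magnitudes along each axis
    d2x = abs(F[ix + 2, iy, iz] - F[ix, iy, iz])
    dx = abs(F[ix + 1, iy, iz] - F[ix, iy, iz])
    d2y = abs(F[ix, iy + 2, iz] - F[ix, iy, iz])
    dy = abs(F[ix, iy + 1, iz] - F[ix, iy, iz])
    d2z = abs(F[ix, iy, iz + 2] - F[ix, iy, iz])
    dz = abs(F[ix, iy, iz + 1] - F[ix, iy, iz])
    # relative jump criterion per axis, as in the tagger
    return d2x / (1 + dx), d2y / (1 + dy), d2z / (1 + dz)


# a field varying linearly in x only: the criterion fires along x, is zero elsewhere
F = np.fromfunction(lambda i, j, k: 1.0 * i, (4, 4, 4))
cx, cy, cz = field_diff(F, 0, 0, 0)
print(float(cx), float(cy), float(cz))  # 1.0 0.0 0.0
```

Because the helper takes `ix`, `iy`, `iz` as arguments, it is created once and reused for every voxel, which is the point of the refactor.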
407-408: Use bare `raise` to preserve the original traceback.

`raise e` re-raises the exception with an extra traceback frame added at this line; a bare `raise` propagates the original traceback unchanged. Same applies to line 658.

♻️ Proposed fix
```diff
-            if self.rethrow_:
-                raise e
+            if self.rethrow_:
+                raise
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 407 - 408, Replace the explicit "raise e" with a bare "raise" in the except blocks that check self.rethrow_ so the original traceback is preserved; specifically update the except clause using the exception variable "e" and the attribute "self.rethrow_" in tests/simulator/test_advance.py (apply the same change at the second occurrence around the other except block where "e" is re-raised, referenced in the review).tests/functional/harris/harris_3d.py (1)
69-70: Rename ambiguous variable `l` (easily confused with `1`).

♻️ Proposed rename
```diff
-    def S(y, y0, l):
-        return 0.5 * (1.0 + np.tanh((y - y0) / l))
+    def S(y, y0, width):
+        return 0.5 * (1.0 + np.tanh((y - y0) / width))
```
S(y, Ly * 0.3, L)andS(y, Ly * 0.7, L)—Lthere refers to the outerL = 0.5, which is fine.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/functional/harris/harris_3d.py` around lines 69 - 70, The helper function S(y, y0, l) uses an ambiguous parameter name "l" that can be mistaken for the digit 1; rename the parameter to a clearer name (e.g., "L" or "scale") in the function definition S and update all call sites (e.g., S(y, Ly * 0.3, L) and S(y, Ly * 0.7, L)) to pass the renamed parameter, and ensure any internal references inside S use the new name.tests/simulator/test_initialization.py (1)
577-581: Minor: unused loop variable `pi` and unused tuple element `edges`.

`pi` is never referenced in the loop body (line 577) and `edges` from `np.histogramdd` (line 580) is never used.

♻️ Proposed fix
```diff
-        for pi, patch in enumerate(datahier.level(0).patches):
+        for patch in datahier.level(0).patches:
             pd = patch.patch_datas["protons_particles"]
             icells = pd.dataset[patch.box].iCells
-            H, edges = np.histogramdd(icells, bins=patch.box.shape)
+            H, _ = np.histogramdd(icells, bins=patch.box.shape)
```
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_initialization.py` around lines 577 - 581, The loop currently uses an unused enumerate variable `pi` and unpacks an unused `edges` from np.histogramdd; change the loop to "for patch in datahier.level(0).patches:" (remove enumerate and `pi`) and call np.histogramdd so you only keep the histogram result (e.g., H = np.histogramdd(icells, bins=patch.box.shape)[0] or H, _ = np.histogramdd(...)) so `edges` is not assigned.res/cmake/def.cmake (1)
25-31: `FORCE` silently overrides an explicit `-DuseExceptionsInsteadOfMPIAbort=OFF` when `devMode=ON`.
`PHARE_PYTEST_SIMULATORS` (Lines 100–106) is correctly guarded with `if(NOT DEFINED ...)` so a CLI override wins. `useExceptionsInsteadOfMPIAbort` uses `CACHE ... FORCE` instead, so passing `-DuseExceptionsInsteadOfMPIAbort=OFF -DdevMode=ON` has no effect. If that's the intended lock-down for dev mode, it's fine, but it's worth being deliberate about the asymmetry. Also, the cache description `"YES!"` (Line 26) is not meaningful to anyone reading `cmake-gui` or `ccmake`.

♻️ Suggested tweak (only if override should be allowed in devMode)
```diff
-if(devMode) # set defaults
-  set (useExceptionsInsteadOfMPIAbort ON CACHE BOOL "YES!" FORCE)
-endif(devMode)
+if(devMode) # set defaults
+  if(NOT DEFINED useExceptionsInsteadOfMPIAbort)
+    set (useExceptionsInsteadOfMPIAbort ON CACHE BOOL "Use exceptions instead of MPI_Abort for error handling")
+  endif()
+endif(devMode)
```
Verify each finding against the current code and only fix it if needed. In `@res/cmake/def.cmake` around lines 25 - 31, The current devMode block force-sets the cache variable useExceptionsInsteadOfMPIAbort which silently overrides any CLI -D setting; change the logic so the cache variable is only given a default (remove FORCE) or only set when NOT DEFINED so users can override with -D; update the cache description from "YES!" to a meaningful string (e.g., "Use exceptions instead of MPI_Abort") and ensure the existing conditional that appends -DPHARE_ERRORS_USE_MPI_ABORT=0 to PHARE_FLAGS remains tied to useExceptionsInsteadOfMPIAbort so behavior is unchanged (refer to devMode, useExceptionsInsteadOfMPIAbort, PHARE_FLAGS, and PHARE_ERRORS_USE_MPI_ABORT).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py`:
- Around line 353-360: The bug: in validate3d (same pattern as validate2d)
dual_right and primal_right use domain[0] regardless of the loop index; update
the computations of dual_right and primal_right to use domain[idir] so the ghost
extents are correct for Y/Z directions; locate the block computing
dual_left/dual_right and primal_left/primal_right (references: dual_left,
dual_right, primal_left, primal_right, sim.dl[idir], nbrDualGhosts,
nbrPrimalGhosts) and replace domain[0] with domain[idir] for those two
assignments.
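The direction-indexed fix described in the item above can be sketched as follows (a hypothetical Python helper with illustrative values; the point is only that the right-hand edge must use `domain[idir]`):

```python
def ghost_extents(domain, dl, idir, nbr_dual_ghosts, nbr_primal_ghosts):
    # ghost coordinate range along direction `idir`; note domain[idir],
    # not domain[0], on the right-hand side
    dual_left = -dl[idir] * nbr_dual_ghosts
    dual_right = domain[idir] + dl[idir] * nbr_dual_ghosts
    primal_left = -dl[idir] * nbr_primal_ghosts
    primal_right = domain[idir] + dl[idir] * nbr_primal_ghosts
    return dual_left, dual_right, primal_left, primal_right


# y-direction of an anisotropic domain: the buggy domain[0] form would give 14
print(ghost_extents(domain=(10.0, 20.0), dl=(1.0, 2.0), idir=1,
                    nbr_dual_ghosts=2, nbr_primal_ghosts=1))
```

With an anisotropic domain the bug is visible immediately: using `domain[0]` in the Y/Z directions shifts the right ghost edge by the wrong extent.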
---
Outside diff comments:
In `@res/cmake/options.cmake`:
- Line 139: The message line referencing ${lowResourceTests} ("message(\"Exclude
heavier tests : \" ${lowResourceTests})") is stale; either
remove that message entirely if lowResourceTests is no longer supported, or
restore the option by adding an option(lowResourceTests "Enable low-resource
test subset" OFF) (or similar description and default) alongside the other
option(...) declarations so the variable is defined and the existing usage in
pyphare/pyphare_tests/test_pharesee/CMakeLists.txt:11 continues to work; pick
one of these two fixes and apply it consistently across the CMake options.
In `@src/amr/data/particles/refine/particles_data_split.hpp`:
- Around line 181-183: The inline lambda "isInDest" that captures destinationBox
should be replaced with an explicit non-capturing functor struct (e.g.,
IsInDestFunctor) to avoid capture overhead in the hot particle path: define a
struct with a constructor taking destinationBox (by const reference or pointer)
and an operator()(const ParticleType& particle) that returns
isInBox(destinationBox, particle), then replace the lambda usage with an
instance of that functor (constructed once before the loops) so the code uses
IsInDestFunctor(destinationBox) instead of the captured lambda.
In `@src/amr/data/particles/refine/split_1d.hpp`:
- Around line 127-141: The constructor for Splitter<DimConst<1>, InterpConst<3>,
RefinedParticlesConst<2>> initializes the wrong dispatcher alias
(SplitPattern_1_2_2_Dispatcher) which mismatches the base class
SplitPattern_1_3_2_Dispatcher; change the member initializer to use
SplitPattern_1_3_2_Dispatcher so the constructor consistently initializes the
same base/dispatcher type (refer to Splitter<DimConst<1>, InterpConst<3>,
RefinedParticlesConst<2>> and the alias SplitPattern_1_3_2_Dispatcher and update
the initializer list accordingly).
In `@src/core/data/grid/gridlayout.hpp`:
- Around line 784-799: The subtraction in AMRToLocal causes unsigned promotion
(AMRBox_.lower[i] - localStart) and potential wrap; fix by doing all
intermediate arithmetic in a signed wider type: compute a signed int64_t delta =
static_cast<int64_t>(AMRBox_.lower[i]) - static_cast<int64_t>(localStart) (using
the physicalStartIndex(QtyCentering::dual, Direction::X) value stored in
localStart), then compute a signed int64_t localSigned =
static_cast<int64_t>(AMRPoint[i]) - delta and assign localPoint[i] =
static_cast<std::uint32_t>(localSigned) after asserting localSigned >= 0; update
uses of AMRPoint, AMRBox_.lower, localStart, localPoint and the function
AMRToLocal accordingly.
In `@src/core/errors.hpp`:
- Around line 71-81: The current log() only stores the first val for a given key
because of the if(!errors.count(key)) path; change it so the latest context is
preserved by always assigning errors[key] = val (instead of only emplacing on
first sight) while still initializing and incrementing error_count as needed
(use error_count.emplace(key,0) if missing, then error_count[key]++). Update the
function log to always overwrite errors[key] with the new val and ensure
error_count initialization and increment logic (symbols: function log, maps
errors and error_count) remain correct.
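The always-overwrite semantics described above can be modelled in a few lines of Python (a sketch of the intended behaviour, not the C++ class):

```python
class Errors:
    def __init__(self):
        self.errors = {}       # key -> most recent context string
        self.error_count = {}  # key -> number of occurrences

    def log(self, key, val):
        self.error_count[key] = self.error_count.get(key, 0) + 1
        self.errors[key] = val  # always keep the latest context, not the first


errs = Errors()
errs.log("particle_leak", "patch 3")
errs.log("particle_leak", "patch 7")
print(errs.errors["particle_leak"], errs.error_count["particle_leak"])  # patch 7 2
```

The count still records every occurrence, while the stored context reflects the most recent failure, which is usually the one a user wants to see.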
In `@src/core/utilities/types.hpp`:
- Around line 241-247: The get_env_as template currently returns the provided
default when the env key is missing but lets from_string<T> throw if parsing
fails, producing an asymmetric contract; update get_env_as (the template
function named get_env_as) to catch parsing errors (catch std::exception or
std::runtime_error thrown by from_string<T>) and return the default parameter t
on parse failure to provide a consistent "get-with-default" behavior, or
alternatively add a clear documentation comment on get_env_as indicating that
parsing errors are intentionally propagated (choose one approach and implement
it consistently).
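The consistent "get-with-default" contract (the first of the two options named above) could look like this sketch, a Python stand-in for the C++ template with hypothetical variable names:

```python
import os


def get_env_as(key, default):
    # missing variable -> default; unparsable value -> default as well,
    # so the caller always receives the declared type
    raw = os.environ.get(key)
    if raw is None:
        return default
    try:
        return type(default)(raw)
    except (TypeError, ValueError):
        return default


os.environ["PHARE_SCRATCH"] = "not-an-int"
print(get_env_as("PHARE_SCRATCH", 4))    # 4: parse failure falls back to the default
os.environ["PHARE_SCRATCH"] = "12"
print(get_env_as("PHARE_SCRATCH", 4))    # 12: valid value is parsed
```

The alternative choice, propagating parse errors, is equally valid; what matters is documenting one contract and applying it to both failure modes.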
---
Duplicate comments:
In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py`:
- Line 270: In validate2d, dual_right and primal_right are incorrectly computed
using domain[0] regardless of direction; change the calculations that set
dual_right (currently using domain[0]) and primal_right (also using domain[0])
to use domain[idir] so the ghost coordinate range checks respect the selected
direction (refer to the validate2d function and the dual_right/primal_right
assignments).
In `@src/amr/data/particles/refine/split_3d.hpp`:
- Around line 66-87: The middle group call addSquare(8, 0) currently writes
face-center midpoints (0,0,±δ) and (±δ,0,0) which duplicates PinkPattern;
instead the LimePattern must produce the four vertical edge midpoints {±δ,0,±δ}.
Replace the addSquare(8, 0) call (or add a new helper) so that the entries at
Super::deltas_[8..11] are set to { delta, 0, delta }, { delta, 0, -delta }, {
-delta, 0, delta }, { -delta, 0, -delta } respectively (keeping existing top
and bottom addSquare calls intact) so LimePattern<DimConst<3>> yields 12 unique
edge-centered deltas.
In `@src/core/data/grid/gridlayout.hpp`:
- Around line 808-817: AMRToLocal(Box<T, dimension>) simply delegates to
AMRToLocal(Point<...>), so the real fix is to correct the point-conversion
behavior: ensure AMRToLocal(Point<T, dimension> const&) properly converts
integral MeshIndex type T to a Box/Point with std::uint32_t coordinates
(preserving semantics and clamping if needed) and returns the matching
Point<std::uint32_t, dimension> so the Box overload works; also make sure
AMRToLocal(Box<T, dimension>) calls the point overload unambiguously (e.g.,
AMRToLocal<Point<T, dimension>> or rely on return-type-compatible overload) and
keep the static_assert on T being integral (as in AMRToLocal(Box...)) so both
AMRToLocal(Point...) and AMRToLocal(Box...) consistently handle
MeshIndex-to-uint32_t conversion.
In `@src/core/data/ndarray/ndarray_vector.hpp`:
- Around line 364-373: The loops in fill1D use unsigned index arithmetic
(shape[0] - 1 - max_) which can underflow if mask values (min_, max_) are
invalid; add explicit bounds validation at the start of fill1D (and the similar
routines in the same file covering ranges 376-444 and 447-469): check that min_
<= max_ and that max_ < shape[0] (or equivalently min_ < shape[0]), and return
or throw a clear error on violation; after validation compute safe signed
indices (or cast validated values to a size_t) before the for-loops so the
expressions like shape[0] - 1 - max_ cannot underflow and prevent out-of-range
reads/writes.
- Around line 80-92: MaskedView currently stores mask_ as a reference which can
dangle when operator[](Mask&&) forwards a temporary; change mask_ to own the
Mask (replace the reference member mask_ with a value-type Mask) and update the
two constructors MaskedView(Array& array, Mask const& mask) and
MaskedView(Array& array, Mask&& mask) to initialize the owned mask (e.g.,
mask_{mask} and mask_{std::move(mask)} respectively); also review
operator[](Mask&&) to continue forwarding temporaries safely to the value-taking
constructor (or accept Mask by value) so no dangling reference can occur.
In `@tests/simulator/test_advance.py`:
- Around line 807-809: The test currently returns a list when errors exist
(error_boxes) which causes a TypeError where the caller expects an int; update
the failing branch to return an integer (e.g., 0) instead of error_boxes so the
function always returns the same type as successful_test_nbr; locate the
conditional using error_boxes and successful_test_nbr in
tests/simulator/test_advance.py and replace the return of error_boxes with a
numeric failure value (or otherwise ensure the function always returns
successful_test_nbr or an int) so callers like the assertGreater check work
without type errors.
---
Nitpick comments:
In `@res/cmake/def.cmake`:
- Around line 25-31: The current devMode block force-sets the cache variable
useExceptionsInsteadOfMPIAbort which silently overrides any CLI -D setting;
change the logic so the cache variable is only given a default (remove FORCE) or
only set when NOT DEFINED so users can override with -D; update the cache
description from "YES!" to a meaningful string (e.g., "Use exceptions instead of
MPI_Abort") and ensure the existing conditional that appends
-DPHARE_ERRORS_USE_MPI_ABORT=0 to PHARE_FLAGS remains tied to
useExceptionsInsteadOfMPIAbort so behavior is unchanged (refer to devMode,
useExceptionsInsteadOfMPIAbort, PHARE_FLAGS, and PHARE_ERRORS_USE_MPI_ABORT).
In `@src/amr/multiphysics_integrator.hpp`:
- Line 504: The hot-path logging call PHARE_LOG_SCOPE(1,
"Multiphys::advanceLevel") should be lowered to a less verbose scope to avoid
overhead; replace the level 1 scope with level 3 (PHARE_LOG_SCOPE(3, ...)) or
wrap the level-1 call behind a debug/profiling toggle so both occurrences in the
Multiphys::advanceLevel code path use the reduced verbosity or conditional
logging; update all matching PHARE_LOG_SCOPE calls in that function (the ones
logging "Multiphys::advanceLevel") accordingly.
In `@src/amr/tagging/default_hybrid_tagger_strategy.hpp`:
- Around line 154-167: The lambda field_diff is being recreated per-voxel inside
the triple loop; move its definition out of the innermost loop (define it once
above the iTag_z loop) and change it to a small free/static lambda or helper
function that accepts (ix, iy, iz, F) as parameters instead of capturing
ix/iy/iz, then call it from the inner loop to avoid per-iteration allocation; do
the same refactor for the similar lambdas referenced around lines 169-171 so all
per-voxel computations reuse a single callable and only pass indices/fields as
arguments.
In `@src/core/numerics/ion_updater/ion_updater.hpp`:
- Line 261: Replace the magic literal 3 in the grow call with a named constant:
introduce a descriptive constant (e.g., debugSearchRadius) and set it to the
existing UpdaterSelectionBoxing partGhostWidth where appropriate (or to that
value if accessible via UpdaterSelectionBoxing::partGhostWidth), then call
grow(Box(particle.iCell, particle.iCell), debugSearchRadius) so the intent is
clear; update the nearbyBox creation site (symbol nearbyBox, function grow and
type Box) to use that named constant.
- Around line 159-162: Replace the capturing lambda outsideGhostBox with a
captureless functor: define a struct OutsideGhostBox that stores a const
reference to ghostBox, implements operator()(auto& particleRange) to call
particleRange.array().partition(particleRange, Predicate) and define a small
captureless Predicate functor (e.g., IsOutsideGhostBoxCell) holding the same
ghostBox reference whose operator()(auto const& cell) returns !isIn(cell,
ghostBox); then construct Selector_t const outsideGhostBox =
OutsideGhostBox{ghostBox} and use it in the same PHARE_DEBUG_DO invocation. This
keeps semantics identical (uses isIn, particleRange.array().partition, and
ghostBox) but removes the lambda capture.
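The functor shape suggested above can be sketched outside PHARE. This is a minimal illustration under stated assumptions: `Box` is a toy 1D interval, `std::partition` stands in for the particle array's `partition`, and `IsOutsideGhostBoxCell` holds its state as a plain member rather than a lambda capture.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct Box
{
    int lower, upper;
};

inline bool isIn(int cell, Box const& b) { return cell >= b.lower && cell <= b.upper; }

// explicit functor: state is an ordinary member, not a lambda capture
struct IsOutsideGhostBoxCell
{
    Box ghostBox;
    bool operator()(int cell) const { return !isIn(cell, ghostBox); }
};

// partitions cells so out-of-box cells come first; returns how many there are
inline std::size_t partitionOutside(std::vector<int>& cells, Box const& ghostBox)
{
    auto const mid = std::partition(cells.begin(), cells.end(),
                                    IsOutsideGhostBoxCell{ghostBox});
    return static_cast<std::size_t>(mid - cells.begin());
}
```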
In `@src/core/utilities/mpi_utils.cpp`:
- Around line 57-65: The abort path in log_error (guarded by
PHARE_ERRORS_USE_MPI_ABORT) prints to std::cout while the non-abort path uses
core::Errors::instance().log() which writes to std::cerr; change the abort path
to write the same error message to std::cerr (instead of std::cout) and keep the
existing flush (std::endl) before calling MPI_Abort(MPI_COMM_WORLD, -1) so error
output goes to stderr and matches core::Errors::instance().log().
In `@src/python3/data_wrangler.hpp`:
- Line 148: The call to core::mpi::max(input.size()) causes an unnecessary
MPI_Comm_size because it uses the default mpi_size; replace that with
core::mpi::max(input.size(), mpi_size) (using the existing mpi_size computed
earlier) so the function reuses the already-known mpi_size and avoids a
redundant MPI_Comm_size; update the invocation near the collect lambda that uses
input.size() to pass mpi_size explicitly.
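The "pass the size you already have" pattern assumed by this item can be sketched with a pair of overloads. Everything here is a stand-in, not PHARE's real API: `mpi_size()` simulates an `MPI_Comm_size` round-trip by bumping a counter, and the single-argument `max_over_ranks` is the convenience overload that pays for an extra lookup.

```cpp
#include <cassert>
#include <cstddef>

inline int call_count = 0; // counts simulated MPI_Comm_size calls

inline int mpi_size()
{
    ++call_count; // stands in for an MPI_Comm_size round-trip
    return 4;
}

inline std::size_t max_over_ranks(std::size_t local, int size)
{
    (void)size; // in this single-process sketch the max of one value is itself
    return local;
}

inline std::size_t max_over_ranks(std::size_t local)
{
    return max_over_ranks(local, mpi_size()); // default path pays the lookup
}
```

Callers that already computed the communicator size once can keep reusing it, avoiding the redundant lookup on every reduction.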
In `@tests/functional/harris/harris_3d.py`:
- Around line 69-70: The helper function S(y, y0, l) uses an ambiguous parameter
name "l" that can be mistaken for the digit 1; rename the parameter to a clearer
name (e.g., "L" or "scale") in the function definition S and update all call
sites (e.g., S(y, Ly * 0.3, L) and S(y, Ly * 0.7, L)) to pass the renamed
parameter, and ensure any internal references inside S use the new name.
In `@tests/simulator/test_advance.py`:
- Around line 407-408: Replace the explicit "raise e" with a bare "raise" in the
except blocks that check self.rethrow_ so the original traceback is preserved;
specifically update the except clause using the exception variable "e" and the
attribute "self.rethrow_" in tests/simulator/test_advance.py (apply the same
change at the second occurrence around the other except block where "e" is
re-raised, referenced in the review).
In `@tests/simulator/test_diagnostics.py`:
- Around line 253-272: The test uses the parameter name dim while other tests
and helpers use ndim; rename the parameter and all local uses in
test_dump_elapsed_time_diags to ndim to match test_dump_diags and
unique_diag_dir_for_test_case (update the function signature, the for loop that
builds simInput entries, b0 construction, the call to
unique_diag_dir_for_test_case, and the assert checking simulation.cells). Keep
behavior identical—only rename the identifier for consistency.
- Around line 218-240: The test_matrix for test_dump_diags generates many heavy
3D cases; reduce CI load by gating expensive combinations (3D and/or high interp
orders) — update test_dump_diags to skip or mark those cases using pytest
markers or a skipif that checks an env var (e.g., CI_RUN_HEAVY) and reference
the existing helpers/identifiers: test_dump_diags, permute, supported_dimensions
(or ndim), interp, dump_all_diags, config, and unique_diag_dir_for_test_case;
implement a conditional that either skips when ndim==3 and patch
sizes/largest_patch_size lead to large grids or only runs those when an env
var/marker (e.g., "run_heavy_tests") is set so CI runs the lighter matrix by
default.
- Around line 5-11: Remove the redundant import by deleting "import copy" and
keep "from copy import deepcopy"; then replace the explicit
"copy.deepcopy(simArgs)" call with the bare "deepcopy(simArgs)" so all deep-copy
usage is consistent (search for copy.deepcopy in
tests/simulator/test_diagnostics.py and update it to use deepcopy).
In `@tests/simulator/test_initialization.py`:
- Around line 577-581: The loop currently uses an unused enumerate variable `pi`
and unpacks an unused `edges` from np.histogramdd; change the loop to "for patch
in datahier.level(0).patches:" (remove enumerate and `pi`) and call
np.histogramdd so you only keep the histogram result (e.g., H =
np.histogramdd(icells, bins=patch.box.shape)[0] or H, _ = np.histogramdd(...))
so `edges` is not assigned.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (71)
- .github/workflows/cmake_ubuntu.yml
- pyphare/pyphare/pharein/maxwellian_fluid_model.py
- pyphare/pyphare/simulator/simulator.py
- res/amr/splitting.yml
- res/cmake/def.cmake
- res/cmake/options.cmake
- res/sim/all.txt
- src/amr/data/field/coarsening/coarsen_weighter.hpp
- src/amr/data/field/coarsening/electric_field_coarsener.hpp
- src/amr/data/particles/refine/particles_data_split.hpp
- src/amr/data/particles/refine/split.hpp
- src/amr/data/particles/refine/split_1d.hpp
- src/amr/data/particles/refine/split_2d.hpp
- src/amr/data/particles/refine/split_3d.hpp
- src/amr/data/particles/refine/splitter.hpp
- src/amr/level_initializer/hybrid_level_initializer.hpp
- src/amr/messengers/field_operate_transaction.hpp
- src/amr/multiphysics_integrator.hpp
- src/amr/resources_manager/resources_manager.hpp
- src/amr/samrai.cpp
- src/amr/solvers/solver_ppc.hpp
- src/amr/tagging/default_hybrid_tagger_strategy.hpp
- src/core/data/grid/gridlayout.hpp
- src/core/data/ndarray/ndarray_vector.hpp
- src/core/data/particles/particle.hpp
- src/core/data/particles/particle_array.hpp
- src/core/data/tensorfield/tensorfield.hpp
- src/core/errors.hpp
- src/core/numerics/ion_updater/ion_updater.hpp
- src/core/utilities/meta/meta_utilities.hpp
- src/core/utilities/mpi_utils.cpp
- src/core/utilities/mpi_utils.hpp
- src/core/utilities/point/point.hpp
- src/core/utilities/types.hpp
- src/diagnostic/detail/h5writer.hpp
- src/diagnostic/diagnostic_model_view.hpp
- src/python3/data_wrangler.hpp
- src/simulator/simulator.hpp
- tests/amr/data/particles/refine/test_split.cpp
- tests/core/data/gridlayout/deriv.py
- tests/core/data/gridlayout/test_gridlayout.hpp
- tests/core/data/ndarray/test_main.cpp
- tests/diagnostic/CMakeLists.txt
- tests/diagnostic/job_1d.py.in
- tests/diagnostic/job_2d.py.in
- tests/diagnostic/job_3d.py.in
- tests/diagnostic/test-diagnostics_1d.cpp
- tests/diagnostic/test-diagnostics_2d.cpp
- tests/diagnostic/test-diagnostics_3d.cpp
- tests/functional/harris/harris_2d.py
- tests/functional/harris/harris_3d.py
- tests/simulator/CMakeLists.txt
- tests/simulator/__init__.py
- tests/simulator/advance/CMakeLists.txt
- tests/simulator/advance/test_fields_advance_1d.py
- tests/simulator/advance/test_fields_advance_2d.py
- tests/simulator/advance/test_fields_advance_3d.py
- tests/simulator/advance/test_particles_advance_1d.py
- tests/simulator/advance/test_particles_advance_2d.py
- tests/simulator/advance/test_particles_advance_3d.py
- tests/simulator/initialize/CMakeLists.txt
- tests/simulator/initialize/test_fields_init_2d.py
- tests/simulator/initialize/test_fields_init_3d.py
- tests/simulator/initialize/test_particles_init_3d.py
- tests/simulator/per_test.hpp
- tests/simulator/refined_particle_nbr.py
- tests/simulator/refinement/test_2d_10_core.py
- tests/simulator/test_advance.py
- tests/simulator/test_diagnostics.py
- tests/simulator/test_initialization.py
- tests/simulator/test_restarts.py
💤 Files with no reviewable changes (2)
- tests/simulator/refined_particle_nbr.py
- src/amr/samrai.cpp
✅ Files skipped from review due to trivial changes (1)
- src/diagnostic/detail/h5writer.hpp
🚧 Files skipped from review as they are similar to previous changes (23)
- tests/simulator/advance/test_particles_advance_2d.py
- tests/simulator/advance/test_fields_advance_1d.py
- tests/simulator/per_test.hpp
- tests/simulator/advance/test_particles_advance_3d.py
- src/amr/data/field/coarsening/electric_field_coarsener.hpp
- tests/amr/data/particles/refine/test_split.cpp
- src/core/data/particles/particle_array.hpp
- src/core/utilities/point/point.hpp
- tests/diagnostic/job_3d.py.in
- tests/diagnostic/CMakeLists.txt
- src/simulator/simulator.hpp
- res/sim/all.txt
- tests/diagnostic/test-diagnostics_2d.cpp
- src/diagnostic/diagnostic_model_view.hpp
- src/amr/level_initializer/hybrid_level_initializer.hpp
- tests/diagnostic/test-diagnostics_1d.cpp
- tests/diagnostic/job_1d.py.in
- tests/core/data/gridlayout/deriv.py
- tests/simulator/advance/test_fields_advance_3d.py
- src/amr/data/field/coarsening/coarsen_weighter.hpp
- src/amr/solvers/solver_ppc.hpp
- src/core/utilities/mpi_utils.hpp
- src/amr/data/particles/refine/split.hpp
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/amr/data/particles/refine/split_1d.hpp (1)
127-141: ⚠️ Potential issue | 🟡 Minor

Bug: wrong dispatcher alias in constructor initialization.

`Splitter<DimConst<1>, InterpConst<3>, RefinedParticlesConst<2>>` inherits from `SplitPattern_1_3_2_Dispatcher`, but the constructor initializes `SplitPattern_1_2_2_Dispatcher`. This compiles because both aliases resolve to the same type (`PatternDispatcher<PinkPattern<DimConst<1>>>`), but it's misleading and could cause issues if the types diverge in the future.

🐛 Proposed fix

```diff
 constexpr Splitter()
-    : SplitPattern_1_2_2_Dispatcher{{weight[0], delta[0]}}
+    : SplitPattern_1_3_2_Dispatcher{{weight[0], delta[0]}}
 {
 }
```

🤖 Prompt for AI Agents
constexpr Splitter() - : SplitPattern_1_2_2_Dispatcher{{weight[0], delta[0]}} + : SplitPattern_1_3_2_Dispatcher{{weight[0], delta[0]}} { }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/split_1d.hpp` around lines 127 - 141, The Splitter specialization for Splitter<DimConst<1>, InterpConst<3>, RefinedParticlesConst<2>> currently initializes the wrong base dispatcher alias in its constexpr constructor; change the initializer from SplitPattern_1_2_2_Dispatcher{{weight[0], delta[0]}} to use SplitPattern_1_3_2_Dispatcher{{weight[0], delta[0]}} so the constructor initializes the actual inherited dispatcher (refer to the Splitter struct and the aliases SplitPattern_1_3_2_Dispatcher and SplitPattern_1_2_2_Dispatcher).
src/amr/data/particles/refine/split_2d.hpp (1)
119-126: ⚠️ Potential issue | 🔴 Critical

This pattern initialization is incorrect and should use array subscripts instead of literal integers.

The initialization `{{1, 2}, {2, 1}}` uses literal integers, but the pattern constructors expect float values. Splitter<2,2,8> and Splitter<2,3,8> (which also dispatch 8 particles with PurplePattern and PinkPattern) correctly initialize as `{{weight[0], delta[0]}, {weight[1], delta[1]}}`. This should be corrected to match the established convention and pass the actual weight and delta array values instead of arbitrary integers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/split_2d.hpp` around lines 119 - 126, The Splitter default ctor currently initializes SplitPattern_2_1_8_Dispatcher with literal integers {{1, 2}, {2, 1}}; replace those literals with the actual weight/delta array subscripts so the dispatcher receives floats: use {{weight[0], delta[0]}, {weight[1], delta[1]}} (matching the pattern used in Splitter<2,2,8> and Splitter<2,3,8>) so the constructor for Splitter and the SplitPattern_2_1_8_Dispatcher get the correct float values from the static arrays delta and weight.
src/core/data/ndarray/ndarray_vector.hpp (1)
368-443: ⚠️ Potential issue | 🟠 Major

Add mask bounds validation before unsigned index arithmetic.

At lines 371/386/417/440 and similar loops, expressions like `shape[d] - 1 - max_` can underflow when `max_` is out of range, causing invalid loop ranges and potential out-of-bounds access.

🛡️ Proposed fix

```diff
 class NdArrayMask
 {
 public:
+    template<typename Array>
+    void validate_bounds_(Array const& array) const
+    {
+        auto const shape = array.shape();
+        if (min_ > max_)
+            throw std::out_of_range("NdArrayMask: min > max");
+
+        if constexpr (Array::dimension == 1)
+        {
+            if (max_ >= shape[0]) throw std::out_of_range("NdArrayMask: out of bounds");
+        }
+        else if constexpr (Array::dimension == 2)
+        {
+            if (max_ >= shape[0] || max_ >= shape[1])
+                throw std::out_of_range("NdArrayMask: out of bounds");
+        }
+        else if constexpr (Array::dimension == 3)
+        {
+            if (max_ >= shape[0] || max_ >= shape[1] || max_ >= shape[2])
+                throw std::out_of_range("NdArrayMask: out of bounds");
+        }
+    }
+
     template<typename Array>
     void fill(Array& array, typename Array::type val) const
     {
+        validate_bounds_(array);
         if constexpr (Array::dimension == 1)
             fill1D(array, val);
@@
     template<typename Array>
     NO_DISCARD auto nCells(Array const& array)
     {
+        validate_bounds_(array);
         auto shape = array.shape();
         std::size_t cells = 0;
```

Also applies to: 453-468
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/data/ndarray/ndarray_vector.hpp` around lines 368 - 443, The loops in fill2D/fill3D (and the 1D variant) perform unsigned arithmetic like shape[d] - 1 - max_ which can underflow if min_/max_ are out of range; before any loop compute and validate per-dimension bounds using the shape() values (e.g. ensure 0 <= min_ <= max_ <= shape[d]-1), return or clamp if invalid, and then compute safe start/end indices (use signed/size_t as appropriate) and iterate over those computed indices instead of doing direct shape[d]-1-max_ in loop headers; reference the methods fill2D, fill3D and the member variables min_/max_ and array.shape() when locating where to add the checks.
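The failure mode called out above is worth seeing in two lines: with `std::size_t` arithmetic, `shape - 1 - max` wraps around instead of going negative, turning an invalid mask into an enormous loop bound. A minimal sketch (function names here are illustrative, not the NdArrayMask API):

```cpp
#include <cassert>
#include <cstddef>

inline std::size_t loopEnd(std::size_t shape, std::size_t max)
{
    return shape - 1 - max; // wraps when max >= shape
}

// checked variant: validate before subtracting, as the proposed fix does
inline bool maskFits(std::size_t shape, std::size_t min, std::size_t max)
{
    return min <= max && max < shape;
}
```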
♻️ Duplicate comments (2)
tests/simulator/test_advance.py (1)
806-809: ⚠️ Potential issue | 🟠 Major

Mixed return type (`list` vs `int`) still present.

The function returns `error_boxes` (a list) when errors occur, but `successful_test_nbr` (an int) otherwise. The caller at lines 873-880 expects an int and calls `assertGreater` on it. If `error_boxes` is returned, this will cause a `TypeError`.

This was flagged in a previous review. The proposed fix was to always return `successful_test_nbr` and assert that `error_boxes` is empty.

🐛 Proposed fix

```diff
-        if len(error_boxes):
-            return error_boxes
-        return successful_test_nbr
+        self.assertEqual(error_boxes, [], "level ghost subcycle interpolation errors")
+        return successful_test_nbr
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 806 - 809, The function currently returns error_boxes (a list) when errors occur but an int (successful_test_nbr) otherwise, causing a TypeError for callers that expect an int; change the end of the function so it always returns successful_test_nbr and assert that error_boxes is empty before returning (e.g. replace the "if len(error_boxes): return error_boxes" branch with an assertion like "assert not error_boxes, f'Errors: {error_boxes}'" and then "return successful_test_nbr"), referencing the variables error_boxes and successful_test_nbr used in this diff.
src/amr/data/particles/refine/split_3d.hpp (1)
66-87: ⚠️ Potential issue | 🟠 Major

LimePattern vertical edge midpoints remain incorrect.

The `addSquare(8, 0)` call generates face-axis positions `{0,0,±δ}` and `{±δ,0,0}`, but a cube's four vertical edge midpoints are at `{±δ,0,±δ}`. This affects all 12-particle and 27-particle splitters. The past review noted this was addressed, but the current code still shows the same pattern.

🐛 Proposed fix for vertical edge midpoints

```diff
         addSquare(0, delta);  // top
         addSquare(4, -delta); // bottom
-        addSquare(8, 0);      // middle
+        // vertical edge midpoints at {±δ, 0, ±δ}
+        Super::deltas_[8]  = { delta, 0,  delta};
+        Super::deltas_[9]  = { delta, 0, -delta};
+        Super::deltas_[10] = {-delta, 0,  delta};
+        Super::deltas_[11] = {-delta, 0, -delta};
     }
```
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/particles/refine/split_3d.hpp` around lines 66 - 87, LimePattern currently populates the "middle" four particles incorrectly by calling addSquare(8, 0), which produces face-centers instead of the four vertical edge midpoints; replace that call with assignments that set Super::deltas_[8..11] to the four corner combinations {±delta, 0, ±delta} (i.e. {delta,0,delta}, {delta,0,-delta}, {-delta,0,delta}, {-delta,0,-delta}) or add a small helper (e.g., addVerticalEdges) that writes those values explicitly so the 12-particle and 27-particle splitters get the correct vertical edge midpoints; reference LimePattern<DimConst<3>>, the addSquare lambda, and Super::deltas_ when making the change.
🧹 Nitpick comments (14)
src/amr/data/field/coarsening/coarsen_weighter.hpp (1)
19-23: Clarify the historical note wording for readability.

Line 22 is a bit awkward ("not hard-coded 2"). A small rewording makes the intent clearer.

✏️ Suggested doc tweak

```diff
- * This implementation dates from the early phase of the project where
- * the refinement ratio was not hard-coded 2 and thus works for the general case
+ * This implementation dates from an early phase of the project where
+ * the refinement ratio was not fixed to 2, so it supports the general case.
```

As per coding guidelines, "**/*.hpp: Review the C++ code, point out issues relative to principles of clean code, expressiveness, and performance."
Verify each finding against the current code and only fix it if needed. In `@src/amr/data/field/coarsening/coarsen_weighter.hpp` around lines 19 - 23, Update the historical note in the header comment in coarsen_weighter.hpp to improve clarity: change the awkward phrase "not hard-coded 2" to a clearer wording such as "not hard-coded to 2" or "not fixed at 2" and rephrase the sentence so it reads smoothly (e.g., "This implementation dates from an early phase of the project when the refinement ratio was not fixed at 2 and thus works for the general case"). Keep the rest of the comment intact and ensure punctuation and tense are consistent.
pyphare/pyphare/pharesee/hierarchy/patchdata.py (1)
111-112: LGTM! Clean generalization for multi-dimensional support.

The refactoring correctly handles arbitrary dimensions (1D, 2D, 3D) with a unified code path.

Optional: for consistency with the similar pattern in `boxm.select`, you could use `zip` for a slightly more Pythonic style: `select = tuple(slice(l, u + 1) for l, u in zip(lower, upper))`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py` around lines 111 - 112, Replace the tuple comprehension that builds "select" using range(box.ndim) with a zip-based comprehension for consistency and readability: iterate over paired values from "lower" and "upper" (e.g., using zip(lower, upper)) to construct the slice tuple used to index "self.dataset" (the variables involved are select, lower, upper, box.ndim, and self.dataset).
src/core/errors.hpp (1)
71-81: Pass strings by const reference to avoid unnecessary copies.

The `key` and `val` parameters are passed by value, causing a copy on each call. Since they're only read, pass by const reference for efficiency.

♻️ Proposed fix

```diff
-    void log(std::string const key, std::string const val)
+    void log(std::string const& key, std::string const& val)
     {
         if (!errors.count(key))
         {
             std::cerr << key << " :" << val << std::endl;
             errors.emplace(key, val);
             error_count.emplace(key, 0);
         }
-        error_count[key] = error_count[key] + 1;
+        ++error_count[key];
     }
```
Verify each finding against the current code and only fix it if needed. In `@src/core/errors.hpp` around lines 71 - 81, The log method currently takes key and val by value causing unnecessary copies; change the signature of log to take const std::string& key and const std::string& val (both in the declaration and definition for the function named log) and update any declarations/overloads so callers remain compatible; leave the body logic unchanged (it already treats them as read-only) so the compiler will now avoid copying the strings.
src/core/utilities/mpi_utils.cpp (1)
57-65: Pass strings by const reference and consider stderr for error messages.

Two minor suggestions:
- The parameters are passed by value; use const reference to avoid copies.
- The abort path uses `std::cout` while `Errors::log()` writes to `std::cerr`. Using `std::cerr` for error messages is more conventional and ensures proper unbuffered output.

♻️ Proposed fix

```diff
-void log_error(std::string const key, std::string const val)
+void log_error(std::string const& key, std::string const& val)
 {
 #if PHARE_ERRORS_USE_MPI_ABORT
-    std::cout << "MPI ABORT: PHARE ERROR: " << key << " " << val << std::endl;
+    std::cerr << "MPI ABORT: PHARE ERROR: " << key << " " << val << std::endl;
     MPI_Abort(MPI_COMM_WORLD, -1);
 #else
     core::Errors::instance().log(key, val);
 #endif
 }
```
Verify each finding against the current code and only fix it if needed. In `@src/core/utilities/mpi_utils.cpp` around lines 57 - 65, Change log_error to take its parameters by const reference to avoid copying (use const std::string& key, const std::string& val) and switch the MPI abort branch to write the error message to std::cerr instead of std::cout so it matches core::Errors::instance().log() and uses the conventional unbuffered error stream; keep the MPI_Abort(MPI_COMM_WORLD, -1) call as-is after logging.
src/core/numerics/ion_updater/ion_updater.hpp (1)
159-162: Use an explicit functor instead of the captured lambda for this particle selector.

Line 159 uses a captured lambda (`ghostBox = ghostBox`) in a runtime particle-selection path. This pattern conflicts with project performance preferences for particle operations, which should use explicit functors instead of captured lambdas to avoid unnecessary overhead.

Suggested refactor

```diff
+    struct OutsideGhostBoxPredicate
+    {
+        Box_t ghostBox;
+        template<typename Cell>
+        bool operator()(Cell const& cell) const { return !isIn(cell, ghostBox); }
+    };
+
+    struct OutsideGhostBoxSelector
+    {
+        Box_t ghostBox;
+        template<typename ParticleRange_t>
+        auto operator()(ParticleRange_t& particleRange) const
+        {
+            return particleRange.array().partition(
+                particleRange, OutsideGhostBoxPredicate{ghostBox});
+        }
+    };
-
-    Selector_t const outsideGhostBox = [ghostBox = ghostBox](auto& particleRange) {
-        return particleRange.array().partition(
-            particleRange, [&](auto const& cell) { return !isIn(cell, ghostBox); });
-    };
+    Selector_t const outsideGhostBox = OutsideGhostBoxSelector{ghostBox};
```
Verify each finding against the current code and only fix it if needed. In `@src/core/numerics/ion_updater/ion_updater.hpp` around lines 159 - 162, Replace the captured lambda assigned to Selector_t outsideGhostBox with an explicit functor type: create a small struct (e.g., OutsideGhostBoxSelector) that stores ghostBox as a member, implements operator()(auto& particleRange) and returns particleRange.array().partition(particleRange, [&](auto const& cell){ return !isIn(cell, ghostBox); }); then construct Selector_t outsideGhostBox from that functor (or use the functor instance directly). Update references to use the new OutsideGhostBoxSelector to avoid the captured lambda in the hot particle-selection path.
pyphare/pyphare/pharein/maxwellian_fluid_model.py (1)
364-364: Consider adding `strict=True` to zip() for safety.

The `zip()` call here (and at line 278 in validate2d) lacks the `strict=` parameter. Adding `strict=True` would catch any accidental length mismatches between the tuples.

Suggested change

```diff
-    for b_i, b_name in zip((bx, by, bz), ("Bx", "By", "Bz")):
+    for b_i, b_name in zip((bx, by, bz), ("Bx", "By", "Bz"), strict=True):
```
Verify each finding against the current code and only fix it if needed. In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py` at line 364, The zip call iterating over (bx, by, bz) and ("Bx", "By", "Bz") should use strict=True to ensure the two tuples have the same length; update the for loop in maxwellian_fluid_model.py (the loop that zips bx/by/bz with "Bx"/"By"/"Bz") to call zip(..., strict=True), and do the same for the zip in validate2d so mismatched lengths raise an error early.
tests/simulator/test_advance.py (3)
308-316: Rename ambiguous variable `l` in the list comprehension.

Ruff E741 flags the variable `l` as ambiguous (can be confused with `1` or `I`).

♻️ Proposed fix

```diff
         sizex, sizey = [
-            (u - l) * d for u, l, d in zip(upper, lower, dl)
+            (u - lo) * d for u, lo, d in zip(upper, lower, dl)
         ]
```
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 308 - 316, The list comprehension in makerec uses a single-letter variable `l` which is ambiguous; rename it to a clearer identifier (e.g., `low`) to avoid Ruff E741 and improve readability. Update the comprehension in makerec where sizex, sizey are computed to use `for u, low, d in zip(upper, lower, dl)` and adjust any corresponding references in that expression, leaving the rest of makerec (origin calculation, print, and Rectangle return) unchanged.
656-658: Same `raise` style issue - use bare `raise`.

♻️ Proposed fix

```diff
-            raise e
+            raise
```
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 656 - 658, In the exception handler where the code prints the failing index (print("idx", list(idx))) and then re-raises the caught exception using "raise e", change it to use a bare "raise" to preserve the original traceback; locate the try/except block in tests/simulator/test_advance.py that references idx and replace the "raise e" re-raise with "raise" while keeping the print statement intact.
407-408: Use bare `raise` to preserve the original traceback.

Using `raise e` re-raises with a new traceback starting here. Using bare `raise` preserves the original exception context.

♻️ Proposed fix

```diff
         if self.rethrow_:
-            raise e
+            raise
```
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_advance.py` around lines 407 - 408, The current re-raise uses "raise e" which resets the traceback; update the exception re-raise in the block that checks self.rethrow_ (where variable e is caught) to use a bare "raise" so the original traceback and context are preserved (replace the "raise e" usage that references self.rethrow_ and e with a bare raise).
tests/simulator/test_restarts.py (2)
469-470: Floating-point equality check may be fragile.

Using `assertEqual` for floating-point comparison (0.007) relies on exact binary representation. While the comment at line 469 acknowledges this concern, consider using `assertAlmostEqual` for robustness against potential future changes.

♻️ Suggested fix

```diff
-        self.assertEqual(0.007, ph.restarts.restart_time(simput["restart_options"]))
+        self.assertAlmostEqual(0.007, ph.restarts.restart_time(simput["restart_options"]), places=10)
```
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_restarts.py` around lines 469 - 470, The test uses exact float equality which is fragile: change the assertion self.assertEqual(0.007, ph.restarts.restart_time(simput["restart_options"])) to use a tolerant comparison such as self.assertAlmostEqual(0.007, ph.restarts.restart_time(simput["restart_options"]), places=7) or provide a small delta (e.g. delta=1e-9) so ph.restarts.restart_time(...) is compared with tolerance instead of exact binary equality.
489-489: Same floating-point equality concern for the `0.01` assertion.

For consistency with the earlier comment and defensive coding, consider using `assertAlmostEqual` here as well.
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_restarts.py` at line 489, The test currently uses exact equality for a floating-point value; update the assertion in tests/simulator/test_restarts.py to use an approximate comparison by replacing self.assertEqual(0.01, ph.restarts.restart_time(simput["restart_options"])) with an assertAlmostEqual-style assertion so the call to ph.restarts.restart_time(simput["restart_options"]) is compared to 0.01 with an appropriate delta or number of places (e.g., self.assertAlmostEqual(..., 0.01, places=7) or a small delta) to avoid fragile float-equality failures.
tests/functional/harris/harris_3d.py (1)
69-70: Rename ambiguous variable `l` to a more descriptive name.

The parameter `l` in function `S(y, y0, l)` is flagged as ambiguous (E741) by Ruff because it can be confused with the digit `1` or uppercase `I`. Consider renaming to something more descriptive like `length_scale` or `scale_l`.

♻️ Proposed fix

```diff
-    def S(y, y0, l):
-        return 0.5 * (1.0 + np.tanh((y - y0) / l))
+    def S(y, y0, length_scale):
+        return 0.5 * (1.0 + np.tanh((y - y0) / length_scale))
```
Verify each finding against the current code and only fix it if needed. In `@tests/functional/harris/harris_3d.py` around lines 69 - 70, Rename the ambiguous parameter name `l` in function S(y, y0, l) to a descriptive identifier (e.g., `length_scale` or `scale_l`) throughout the function signature and all internal references; update any callers/tests that invoke S(...) to pass the new parameter name or positional argument accordingly so that usage of the symbol `l` is eliminated and the function remains behaviorally identical.
tests/simulator/test_initialization.py (1)
577-581: Fix unused loop variable and unpacked variable.

Static analysis correctly identifies unused variables:
- `pi` in the loop is never used
- `edges` from `np.histogramdd` is never used

♻️ Proposed fix

```diff
-        for pi, patch in enumerate(datahier.level(0).patches):
+        for _pi, patch in enumerate(datahier.level(0).patches):
             pd = patch.patch_datas["protons_particles"]
             icells = pd.dataset[patch.box].iCells
-            H, edges = np.histogramdd(icells, bins=patch.box.shape)
+            H, _edges = np.histogramdd(icells, bins=patch.box.shape)
             self.assertTrue((H == ppc).all())
```
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/test_initialization.py` around lines 577 - 581, The loop in tests/simulator/test_initialization.py currently uses an unused loop index `pi` and unpacks `edges` from `np.histogramdd` without using it; change the loop to "for patch in datahier.level(0).patches:" (or keep enumerate but use `_` for the unused index) and stop unpacking `edges` by either doing "H = np.histogramdd(icells, bins=patch.box.shape)[0]" or "H, _ = np.histogramdd(icells, bins=patch.box.shape)"; update references to `pi`/`edges` accordingly so only `patch`, `H`, and the call to `np.histogramdd` remain.
tests/simulator/advance/test_fields_advance_3d.py (1)
105-120: Skipped test noted - track for future implementation.

The test `test_field_level_ghosts_via_subcycles_and_coarser_interpolation` is skipped with the note "should change to work on moments". This aligns with the docstring in `AdvanceTestBase._test_field_level_ghosts_via_subcycles_and_coarser_interpolation` which mentions that E and B are no longer time-refined since the divB correction.

Would you like me to open an issue to track updating this test to work with moments (n, Vi, J) instead of E and B?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/simulator/advance/test_fields_advance_3d.py` around lines 105 - 120, This test is intentionally skipped because fields are now handled via moments; open a tracking issue to update test_field_level_ghosts_via_subcycles_and_coarser_interpolation in tests/simulator/advance/test_fields_advance_3d.py to validate moments (n, Vi, J) instead of E and B, reference AdvanceTestBase._test_field_level_ghosts_via_subcycles_and_coarser_interpolation in the issue, and update the skip message to include the issue number (or add a TODO with the issue link) so future work implements the assertions against moments rather than E/B.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/data/ndarray/ndarray_vector.hpp`:
- Around line 203-207: The template parameter Mask for NdArrayVector::operator[]
should be stabilized to avoid deducing reference types; change the signature
from template<typename Mask> auto operator[](Mask&& mask) to take Mask by value
(template<typename Mask> auto operator[](Mask mask)) or use a
deduced-non-reference type like std::remove_cvref_t<Mask> so Mask in MaskedView
is always a plain type (e.g., MaskedView<..., Mask>) rather than U&; update the
MaskedView construction site in operator[] to forward this value and ensure the
member Mask const mask_ becomes a stable non-reference type.
---
Outside diff comments:
In `@src/amr/data/particles/refine/split_1d.hpp`:
- Around line 127-141: The Splitter specialization for Splitter<DimConst<1>,
InterpConst<3>, RefinedParticlesConst<2>> currently initializes the wrong base
dispatcher alias in its constexpr constructor; change the initializer from
SplitPattern_1_2_2_Dispatcher{{weight[0], delta[0]}} to use
SplitPattern_1_3_2_Dispatcher{{weight[0], delta[0]}} so the constructor
initializes the actual inherited dispatcher (refer to the Splitter struct and
the aliases SplitPattern_1_3_2_Dispatcher and SplitPattern_1_2_2_Dispatcher).
In `@src/amr/data/particles/refine/split_2d.hpp`:
- Around line 119-126: The Splitter default ctor currently initializes
SplitPattern_2_1_8_Dispatcher with literal integers {{1, 2}, {2, 1}}; replace
those literals with the actual weight/delta array subscripts so the dispatcher
receives floats: use {{weight[0], delta[0]}, {weight[1], delta[1]}} (matching
the pattern used in Splitter<2,2,8> and Splitter<2,3,8>) so the constructor for
Splitter and the SplitPattern_2_1_8_Dispatcher get the correct float values from
the static arrays delta and weight.
In `@src/core/data/ndarray/ndarray_vector.hpp`:
- Around line 368-443: The loops in fill2D/fill3D (and the 1D variant) perform
unsigned arithmetic like shape[d] - 1 - max_ which can underflow if min_/max_
are out of range; before any loop compute and validate per-dimension bounds
using the shape() values (e.g. ensure 0 <= min_ <= max_ <= shape[d]-1), return
or clamp if invalid, and then compute safe start/end indices (use signed/size_t
as appropriate) and iterate over those computed indices instead of doing direct
shape[d]-1-max_ in loop headers; reference the methods fill2D, fill3D and the
member variables min_/max_ and array.shape() when locating where to add the
checks.
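The guard described above can be sketched as follows, assuming the masked loop in each dimension runs over `[min_, shape[d] - 1 - max_]`; `Bounds` and `safe_bounds` are illustrative names, not the codebase's.

```cpp
#include <cassert>
#include <cstddef>
#include <optional>

struct Bounds { std::size_t first, last; }; // inclusive index range

// Validate the margins before forming shape_d - 1 - max_, which underflows
// std::size_t whenever max_ > shape_d - 1 (or shape_d == 0).
inline std::optional<Bounds> safe_bounds(std::size_t shape_d,
                                         std::size_t min_, std::size_t max_)
{
    if (shape_d == 0 || min_ + max_ >= shape_d)
        return std::nullopt;
    return Bounds{min_, shape_d - 1 - max_};
}
```

The loop headers then iterate over the pre-validated `first`/`last` instead of computing `shape[d] - 1 - max_` inline.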
---
Duplicate comments:
In `@src/amr/data/particles/refine/split_3d.hpp`:
- Around line 66-87: LimePattern currently populates the "middle" four particles
incorrectly by calling addSquare(8, 0), which produces face-centers instead of
the four vertical edge midpoints; replace that call with assignments that set
Super::deltas_[8..11] to the four corner combinations {±delta, 0, ±delta} (i.e.
{delta,0,delta}, {delta,0,-delta}, {-delta,0,delta}, {-delta,0,-delta}) or add a
small helper (e.g., addVerticalEdges) that writes those values explicitly so the
12-particle and 27-particle splitters get the correct vertical edge midpoints;
reference LimePattern<DimConst<3>>, the addSquare lambda, and Super::deltas_
when making the change.
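For reference, the four offsets in question can be written out directly. This is a sketch with a hypothetical free helper, not `LimePattern`'s actual code; the ordering of the four combinations is an assumption.

```cpp
#include <array>
#include <cassert>

using Delta3 = std::array<float, 3>;

// The four {±delta, 0, ±delta} combinations that Super::deltas_[8..11]
// should receive; addSquare(8, 0) instead yields face centers.
inline std::array<Delta3, 4> vertical_edge_deltas(float delta)
{
    return {{{delta, 0.f, delta},
             {delta, 0.f, -delta},
             {-delta, 0.f, delta},
             {-delta, 0.f, -delta}}};
}
```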
In `@tests/simulator/test_advance.py`:
- Around line 806-809: The function currently returns error_boxes (a list) when
errors occur but an int (successful_test_nbr) otherwise, causing a TypeError for
callers that expect an int; change the end of the function so it always returns
successful_test_nbr and assert that error_boxes is empty before returning (e.g.
replace the "if len(error_boxes): return error_boxes" branch with an assertion
like "assert not error_boxes, f'Errors: {error_boxes}'" and then "return
successful_test_nbr"), referencing the variables error_boxes and
successful_test_nbr used in this diff.
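The single-return-type convention suggested above can be sketched like this; `run_checks`, `error_boxes`, and `successful_test_nbr` are stand-ins for the test's actual code.

```python
# Fail loudly on errors instead of switching the return type between
# list and int depending on the outcome.
def run_checks(checks):
    successful_test_nbr = 0
    error_boxes = []
    for passed, box in checks:
        if passed:
            successful_test_nbr += 1
        else:
            error_boxes.append(box)
    # Previously: `if len(error_boxes): return error_boxes` made callers
    # expecting an int crash later with a TypeError.
    assert not error_boxes, f"Errors: {error_boxes}"
    return successful_test_nbr
```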
---
Nitpick comments:
In `@pyphare/pyphare/pharein/maxwellian_fluid_model.py`:
- Line 364: The zip call iterating over (bx, by, bz) and ("Bx", "By", "Bz")
should use strict=True to ensure the two tuples have the same length; update the
for loop in maxwellian_fluid_model.py (the loop that zips bx/by/bz with
"Bx"/"By"/"Bz") to call zip(..., strict=True), and do the same for the zip in
validate2d so mismatched lengths raise an error early.
In `@pyphare/pyphare/pharesee/hierarchy/patchdata.py`:
- Around line 111-112: Replace the tuple comprehension that builds "select"
using range(box.ndim) with a zip-based comprehension for consistency and
readability: iterate over paired values from "lower" and "upper" (e.g., using
zip(lower, upper)) to construct the slice tuple used to index "self.dataset"
(the variables involved are select, lower, upper, box.ndim, and self.dataset).
In `@src/amr/data/field/coarsening/coarsen_weighter.hpp`:
- Around line 19-23: Update the historical note in the header comment in
coarsen_weighter.hpp to improve clarity: change the awkward phrase "not
hard-coded 2" to a clearer wording such as "not hard-coded to 2" or "not fixed
at 2" and rephrase the sentence so it reads smoothly (e.g., "This implementation
dates from an early phase of the project when the refinement ratio was not fixed
at 2 and thus works for the general case"). Keep the rest of the comment intact
and ensure punctuation and tense are consistent.
In `@src/core/errors.hpp`:
- Around line 71-81: The log method currently takes key and val by value causing
unnecessary copies; change the signature of log to take const std::string& key
and const std::string& val (both in the declaration and definition for the
function named log) and update any declarations/overloads so callers remain
compatible; leave the body logic unchanged (it already treats them as read-only)
so the compiler will now avoid copying the strings.
In `@src/core/numerics/ion_updater/ion_updater.hpp`:
- Around line 159-162: Replace the captured lambda assigned to Selector_t
outsideGhostBox with an explicit functor type: create a small struct (e.g.,
OutsideGhostBoxSelector) that stores ghostBox as a member, implements
operator()(auto& particleRange) and returns
particleRange.array().partition(particleRange, [&](auto const& cell){ return
!isIn(cell, ghostBox); }); then construct Selector_t outsideGhostBox from that
functor (or use the functor instance directly). Update references to use the new
OutsideGhostBoxSelector to avoid the captured lambda in the hot
particle-selection path.
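A minimal sketch of the suggested shape, with simplified stand-in types: the real selector partitions a particle range against a ghost box, not a vector of ints, and `GhostBox1D` is hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct GhostBox1D { int lower, upper; };

inline bool isIn(int cell, GhostBox1D const& box)
{
    return cell >= box.lower && cell <= box.upper;
}

// Named functor: the predicate's state lives in a plain struct instead of
// a captured lambda, which keeps the hot-path call site explicit.
struct OutsideGhostBoxSelector
{
    GhostBox1D ghostBox;

    // Partition so cells inside the box come first; returns the iterator
    // to the first "outside" element.
    auto operator()(std::vector<int>& cells) const
    {
        return std::partition(cells.begin(), cells.end(),
                              [&](int cell) { return isIn(cell, ghostBox); });
    }
};

inline std::ptrdiff_t count_inside(std::vector<int> cells, GhostBox1D box)
{
    OutsideGhostBoxSelector select{box};
    return select(cells) - cells.begin();
}
```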
In `@src/core/utilities/mpi_utils.cpp`:
- Around line 57-65: Change log_error to take its parameters by const reference
to avoid copying (use const std::string& key, const std::string& val) and switch
the MPI abort branch to write the error message to std::cerr instead of
std::cout so it matches core::Errors::instance().log() and uses the conventional
unbuffered error stream; keep the MPI_Abort(MPI_COMM_WORLD, -1) call as-is after
logging.
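A sketch of the suggested signature; the names are illustrative and the `MPI_Abort(MPI_COMM_WORLD, -1)` call that follows in the real code is elided.

```cpp
#include <cassert>
#include <iostream>
#include <string>

// Const references avoid copying the strings, and the message goes to
// std::cerr (the conventional unbuffered error stream), matching the
// non-MPI core::Errors path.
inline std::string format_error(std::string const& key, std::string const& val)
{
    return key + ": " + val;
}

inline void log_error(std::string const& key, std::string const& val)
{
    std::cerr << format_error(key, val) << "\n";
    // ...MPI_Abort(MPI_COMM_WORLD, -1) would follow here
}
```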
In `@tests/functional/harris/harris_3d.py`:
- Around line 69-70: Rename the ambiguous parameter name `l` in function S(y,
y0, l) to a descriptive identifier (e.g., `length_scale` or `scale_l`)
throughout the function signature and all internal references; update any
callers/tests that invoke S(...) to pass the new parameter name or positional
argument accordingly so that usage of the symbol `l` is eliminated and the
function remains behaviorally identical.
In `@tests/simulator/advance/test_fields_advance_3d.py`:
- Around line 105-120: This test is intentionally skipped because fields are now
handled via moments; open a tracking issue to update
test_field_level_ghosts_via_subcycles_and_coarser_interpolation in
tests/simulator/advance/test_fields_advance_3d.py to validate moments (n, Vi, J)
instead of E and B, reference
AdvanceTestBase._test_field_level_ghosts_via_subcycles_and_coarser_interpolation
in the issue, and update the skip message to include the issue number (or add a
TODO with the issue link) so future work implements the assertions against
moments rather than E/B.
In `@tests/simulator/test_advance.py`:
- Around line 308-316: The list comprehension in makerec uses a single-letter
variable `l` which is ambiguous; rename it to a clearer identifier (e.g., `low`)
to avoid Ruff E741 and improve readability. Update the comprehension in makerec
where sizex, sizey are computed to use `for u, low, d in zip(upper, lower, dl)`
and adjust any corresponding references in that expression, leaving the rest of
makerec (origin calculation, print, and Rectangle return) unchanged.
- Around line 656-658: In the exception handler where the code prints the
failing index (print("idx", list(idx))) and then re-raises the caught exception
using "raise e", change it to use a bare "raise" to preserve the original
traceback; locate the try/except block in tests/simulator/test_advance.py that
references idx and replace the "raise e" re-raise with "raise" while keeping the
print statement intact.
- Around line 407-408: The current re-raise uses "raise e" which resets the
traceback; update the exception re-raise in the block that checks self.rethrow_
(where variable e is caught) to use a bare "raise" so the original traceback and
context are preserved (replace the "raise e" usage that references self.rethrow_
and e with a bare raise).
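The difference between `raise e` and a bare `raise` shows up directly in the traceback; a small self-contained illustration of the two re-raise suggestions above:

```python
import traceback

# `raise e` re-raises at the `raise e` line and inserts an extra frame for
# it; a bare `raise` leaves the original traceback untouched.
def failing_step():
    raise ValueError("boom")

def rethrow_bare():
    try:
        failing_step()
    except ValueError:
        raise                  # original traceback preserved

def rethrow_with_e():
    try:
        failing_step()
    except ValueError as e:
        raise e                # adds a frame pointing at this line

def frame_names(fn):
    try:
        fn()
    except ValueError as e:
        return [f.name for f in traceback.extract_tb(e.__traceback__)]
```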
In `@tests/simulator/test_initialization.py`:
- Around line 577-581: The loop in tests/simulator/test_initialization.py
currently uses an unused loop index `pi` and unpacks `edges` from
`np.histogramdd` without using it; change the loop to "for patch in
datahier.level(0).patches:" (or keep enumerate but use `_` for the unused index)
and stop unpacking `edges` by either doing "H = np.histogramdd(icells,
bins=patch.box.shape)[0]" or "H, _ = np.histogramdd(icells,
bins=patch.box.shape)"; update references to `pi`/`edges` accordingly so only
`patch`, `H`, and the call to `np.histogramdd` remain.
In `@tests/simulator/test_restarts.py`:
- Around line 469-470: The test uses exact float equality which is fragile:
change the assertion self.assertEqual(0.007,
ph.restarts.restart_time(simput["restart_options"])) to use a tolerant
comparison such as self.assertAlmostEqual(0.007,
ph.restarts.restart_time(simput["restart_options"]), places=7) or provide a
small delta (e.g. delta=1e-9) so ph.restarts.restart_time(...) is compared with
tolerance instead of exact binary equality.
- Line 489: The test currently uses exact equality for a floating-point value;
update the assertion in tests/simulator/test_restarts.py to use an approximate
comparison by replacing self.assertEqual(0.01,
ph.restarts.restart_time(simput["restart_options"])) with an
assertAlmostEqual-style assertion so the call to
ph.restarts.restart_time(simput["restart_options"]) is compared to 0.01 with an
appropriate delta or number of places (e.g., self.assertAlmostEqual(..., 0.01,
places=7) or a small delta) to avoid fragile float-equality failures.
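Why exact equality is fragile, and what `assertAlmostEqual(..., places=7)` actually checks, can be shown with the classic example:

```python
import math

# 0.1 + 0.2 is not 0.3 in binary floating point, which is exactly why
# assertEqual on computed restart times like 0.007 is fragile.
a = 0.1 + 0.2
b = 0.3

def almost_equal(first, second, places=7):
    # what unittest's assertAlmostEqual(first, second, places=...) checks
    return round(first - second, places) == 0
```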
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (72)
- .github/workflows/cmake_ubuntu.yml
- pyphare/pyphare/pharein/maxwellian_fluid_model.py
- pyphare/pyphare/pharesee/hierarchy/patchdata.py
- pyphare/pyphare/simulator/simulator.py
- res/amr/splitting.yml
- res/cmake/def.cmake
- res/cmake/options.cmake
- res/sim/all.txt
- src/amr/data/field/coarsening/coarsen_weighter.hpp
- src/amr/data/field/coarsening/electric_field_coarsener.hpp
- src/amr/data/particles/refine/particles_data_split.hpp
- src/amr/data/particles/refine/split.hpp
- src/amr/data/particles/refine/split_1d.hpp
- src/amr/data/particles/refine/split_2d.hpp
- src/amr/data/particles/refine/split_3d.hpp
- src/amr/data/particles/refine/splitter.hpp
- src/amr/level_initializer/hybrid_level_initializer.hpp
- src/amr/messengers/field_operate_transaction.hpp
- src/amr/multiphysics_integrator.hpp
- src/amr/resources_manager/resources_manager.hpp
- src/amr/samrai.cpp
- src/amr/solvers/solver_ppc.hpp
- src/amr/tagging/default_hybrid_tagger_strategy.hpp
- src/core/data/grid/gridlayout.hpp
- src/core/data/ndarray/ndarray_vector.hpp
- src/core/data/particles/particle.hpp
- src/core/data/particles/particle_array.hpp
- src/core/data/tensorfield/tensorfield.hpp
- src/core/errors.hpp
- src/core/numerics/ion_updater/ion_updater.hpp
- src/core/utilities/meta/meta_utilities.hpp
- src/core/utilities/mpi_utils.cpp
- src/core/utilities/mpi_utils.hpp
- src/core/utilities/point/point.hpp
- src/core/utilities/types.hpp
- src/diagnostic/detail/h5writer.hpp
- src/diagnostic/diagnostic_model_view.hpp
- src/python3/data_wrangler.hpp
- src/simulator/simulator.hpp
- tests/amr/data/particles/refine/test_split.cpp
- tests/core/data/gridlayout/deriv.py
- tests/core/data/gridlayout/test_gridlayout.hpp
- tests/core/data/ndarray/test_main.cpp
- tests/diagnostic/CMakeLists.txt
- tests/diagnostic/job_1d.py.in
- tests/diagnostic/job_2d.py.in
- tests/diagnostic/job_3d.py.in
- tests/diagnostic/test-diagnostics_1d.cpp
- tests/diagnostic/test-diagnostics_2d.cpp
- tests/diagnostic/test-diagnostics_3d.cpp
- tests/functional/harris/harris_2d.py
- tests/functional/harris/harris_3d.py
- tests/simulator/CMakeLists.txt
- tests/simulator/__init__.py
- tests/simulator/advance/CMakeLists.txt
- tests/simulator/advance/test_fields_advance_1d.py
- tests/simulator/advance/test_fields_advance_2d.py
- tests/simulator/advance/test_fields_advance_3d.py
- tests/simulator/advance/test_particles_advance_1d.py
- tests/simulator/advance/test_particles_advance_2d.py
- tests/simulator/advance/test_particles_advance_3d.py
- tests/simulator/initialize/CMakeLists.txt
- tests/simulator/initialize/test_fields_init_2d.py
- tests/simulator/initialize/test_fields_init_3d.py
- tests/simulator/initialize/test_particles_init_3d.py
- tests/simulator/per_test.hpp
- tests/simulator/refined_particle_nbr.py
- tests/simulator/refinement/test_2d_10_core.py
- tests/simulator/test_advance.py
- tests/simulator/test_diagnostics.py
- tests/simulator/test_initialization.py
- tests/simulator/test_restarts.py
💤 Files with no reviewable changes (2)
- tests/simulator/refined_particle_nbr.py
- src/amr/samrai.cpp
✅ Files skipped from review due to trivial changes (1)
- src/diagnostic/detail/h5writer.hpp
🚧 Files skipped from review as they are similar to previous changes (25)
- src/core/data/particles/particle_array.hpp
- tests/core/data/gridlayout/test_gridlayout.hpp
- res/sim/all.txt
- tests/diagnostic/test-diagnostics_1d.cpp
- src/simulator/simulator.hpp
- src/amr/data/particles/refine/particles_data_split.hpp
- tests/simulator/per_test.hpp
- src/diagnostic/diagnostic_model_view.hpp
- tests/diagnostic/job_1d.py.in
- src/amr/data/particles/refine/split.hpp
- tests/core/data/gridlayout/deriv.py
- tests/diagnostic/CMakeLists.txt
- tests/simulator/advance/test_fields_advance_2d.py
- res/amr/splitting.yml
- tests/simulator/advance/test_particles_advance_1d.py
- src/core/utilities/mpi_utils.hpp
- tests/simulator/initialize/test_fields_init_2d.py
- src/core/utilities/meta/meta_utilities.hpp
- src/amr/level_initializer/hybrid_level_initializer.hpp
- tests/simulator/__init__.py
- res/cmake/options.cmake
- src/python3/data_wrangler.hpp
- res/cmake/def.cmake
- .github/workflows/cmake_ubuntu.yml
- tests/functional/harris/harris_2d.py
```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

box = ovrlp_box
```
all this exceptions plotting can probably live somewhere else to be reusable
```diff
@@ -1,11 +1,12 @@
 #!/usr/bin/env python3
```
pretty sure all these *.in files can just be .py
```cpp
NO_DISCARD auto begin() { return std::begin(components_); }
NO_DISCARD auto cbegin() const { return std::cbegin(components_); }
NO_DISCARD auto begin() const { return std::cbegin(components_); }
```
I guess we never looped over a const tensorfield until now.
Without this change, `for (auto const& xyz : em.E)` in the ion_updater particle ghost box check fails to compile:
```
src/core/numerics/ion_updater/ion_updater.hpp:262:17: error: passing ‘const PHARE::core::TensorField<PHARE::core::Field<1, PHARE::core::HybridQuantity::Scalar, double>, PHARE::core::HybridQuantity, 1>’ as ‘this’ argument discards qualifiers [-fpermissive]
  262 | for (auto const& xyz : em.E)
```
```cpp
NdArrayVector<dim> outer{{size0, size0, size0}, 0.};
NdArrayVector<dim> inner{{size0, size0, size0}, 0.};

// Fill outer's Mask{1} region (second ghost layer) with 1.0
outer[Mask{1}] = 1.0;

// Copy from outer's Mask{1} to inner's Mask{0}
// This exercises all 6 face sections of operator>>
outer[Mask{1}] >> inner[Mask{0}];
```
there was a logical issue that the rabbit spotted, but I don't think this test makes sense in terms of naming

```cpp
template<typename Array, typename Mask>
void operator>>(MaskedView<Array, Mask>&& inner, MaskedView<Array, Mask>&& outer)
```

is setting the outer from the inner, but this test says it's setting the inner from the outer :\
as you can probably tell from the comments I didn't write this!
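For what it's worth, the direction is easy to pin down with a toy version: in `a >> b`, the left operand is the source and the right the destination, so source/destination parameter names would avoid the inner/outer ambiguity. These are hypothetical stand-in types, not PHARE's `MaskedView`.

```cpp
#include <cassert>
#include <vector>

struct View { std::vector<double>& data; };

// Data flows left-to-right: the left-hand operand feeds the right-hand one.
inline void operator>>(View&& source, View&& destination)
{
    destination.data = source.data;
}

inline std::vector<double> copy_through(std::vector<double> src)
{
    std::vector<double> dst(src.size(), 0.);
    View{src} >> View{dst};
    return dst;
}
```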
No description provided.