Merged
8 changes: 6 additions & 2 deletions pyphare/pyphare/pharein/simulation.py
@@ -401,12 +401,16 @@
grid.nbrGhosts(kwargs["interp_order"], x) for x in ["primal", "dual"]
)

interp = kwargs["interp_order"]
max_ghosts = get_max_ghosts()
small_invalid_patch_size = phare_utilities.np_array_ify(max_ghosts, ndim)
largest_patch_size = kwargs.get("largest_patch_size", None)

# to prevent primal ghost overlaps of non adjacent patches, we need smallest_patch_size+=1
smallest_patch_size = phare_utilities.np_array_ify(max_ghosts, ndim) + 1
# to prevent primal ghost box overlaps of non adjacent patches, we need smallest_patch_size * 2 + 1
smallest_patch_size = phare_utilities.np_array_ify(max_ghosts, ndim) * 2 + 1
# TORM next lines after https://github.com/llnl/SAMRAI/issues/311
min_per_interp = [6, 9, 9] # SAMRAI BORDER BUG
smallest_patch_size = phare_utilities.np_array_ify(min_per_interp[interp - 1], ndim)
Comment on lines +409 to +413

coderabbitai (bot), Feb 9, 2026

⚠️ Potential issue | 🟡 Minor

Dead assignment on Line 410 — value is immediately overwritten on Line 413.

The smallest_patch_size computed on Line 410 is never used because it's unconditionally reassigned on Line 413. This was also flagged by CodeQL. If the Line 410 computation is kept as documentation of the "correct" formula (for when the SAMRAI bug is fixed), consider commenting it out to make the intent clearer and avoid confusion.

Suggested cleanup
-    # to prevent primal ghost box overlaps of non adjacent patches, we need smallest_patch_size * 2 + 1
-    smallest_patch_size = phare_utilities.np_array_ify(max_ghosts, ndim) * 2 + 1
+    # to prevent primal ghost box overlaps of non adjacent patches, we'd need smallest_patch_size * 2 + 1
+    # smallest_patch_size = phare_utilities.np_array_ify(max_ghosts, ndim) * 2 + 1
     # TORM next lines after https://github.com/llnl/SAMRAI/issues/311
     min_per_interp = [6, 9, 9]  # SAMRAI BORDER BUG
     smallest_patch_size = phare_utilities.np_array_ify(min_per_interp[interp - 1], ndim)
🧰 Tools
🪛 GitHub Check: CodeQL

[warning] 409-409: Variable defined multiple times
This assignment to 'smallest_patch_size' is unnecessary as it is redefined before this value is used.

🤖 Prompt for AI Agents
In `pyphare/pyphare/pharein/simulation.py`, around lines 409-413: the first
assignment to smallest_patch_size (using
phare_utilities.np_array_ify(max_ghosts, ndim) * 2 + 1) is dead because it is
overwritten unconditionally by the SAMRAI workaround assignment using
min_per_interp. Remove or comment out the first assignment to avoid confusion,
or wrap it behind a clear conditional/feature flag (e.g. a constant like
USE_SAMRAI_WORKAROUND) so that smallest_patch_size is assigned from
phare_utilities.np_array_ify(max_ghosts, ndim) * 2 + 1 once the SAMRAI bug is
resolved. References to smallest_patch_size, phare_utilities.np_array_ify,
max_ghosts, min_per_interp, interp and ndim should remain to locate the change.
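The feature-flag variant suggested above can be sketched as follows. This is a minimal standalone sketch, not PHARE code: `np_array_ify` is stood in for with a plain list (the real `phare_utilities.np_array_ify` returns a numpy array), and `USE_SAMRAI_WORKAROUND` is a hypothetical flag name taken from the suggestion.

```python
# Minimal sketch of the suggested feature-flag structure (not PHARE code).
# np_array_ify is mocked with a plain list; the real helper returns a numpy array.

USE_SAMRAI_WORKAROUND = True  # hypothetical flag; flip to False once SAMRAI#311 is fixed


def np_array_ify(value, ndim):
    # stand-in for phare_utilities.np_array_ify: broadcast a scalar to ndim entries
    return [value] * ndim


def smallest_patch_size(max_ghosts, interp, ndim, use_workaround=USE_SAMRAI_WORKAROUND):
    if use_workaround:
        min_per_interp = [6, 9, 9]  # SAMRAI border-bug minimum per interp order
        return np_array_ify(min_per_interp[interp - 1], ndim)
    # intended formula: prevent primal ghost box overlaps of non-adjacent patches
    return [v * 2 + 1 for v in np_array_ify(max_ghosts, ndim)]


print(smallest_patch_size(max_ghosts=2, interp=1, ndim=2))                        # [6, 6]
print(smallest_patch_size(max_ghosts=2, interp=1, ndim=2, use_workaround=False))  # [5, 5]
```

Once the SAMRAI fix lands, only the flag (or the workaround branch) needs removing and the intended formula takes effect.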

PhilipDeegan (Member, Author)


in future it will only require removing code, this is clear.

coderabbitai (bot)

@PhilipDeegan, understood! The current structure makes the future fix straightforward—just remove the SAMRAI workaround lines and the correct computation will take effect. Thanks for clarifying.


✏️ Learnings added
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1146
File: pyphare/pyphare/pharein/simulation.py:409-413
Timestamp: 2026-02-09T17:54:56.848Z
Learning: In Python file pyphare/pyphare/pharein/simulation.py, PhilipDeegan prefers to keep the correct/intended implementation uncommented above temporary workarounds (even if it results in dead code), as this makes future fixes clearer—simply removing the workaround section activates the correct code.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 784
File: tests/simulator/test_restarts.py:333-339
Timestamp: 2024-10-09T08:32:15.667Z
Learning: PhilipDeegan has indicated a preference for minimal and efficient code, even after making changes to get a function working. There may be opportunities to remove or optimize parts of the code such as sleep/wait/advances.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 1068
File: src/amr/data/field/coarsening/electric_field_coarsener.hpp:1-2
Timestamp: 2025-09-17T13:35:11.533Z
Learning: PhilipDeegan prefers header guard names that include the full directory path structure, following the pattern PHARE_[PATH_WITH_UNDERSCORES]_HPP. For example, a file at src/amr/data/field/coarsening/electric_field_coarsener.hpp should use PHARE_AMR_DATA_FIELD_COARSENING_ELECTRIC_FIELD_COARSENER_HPP as the header guard.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: src/hdf5/detail/h5/h5_file.hpp:78-79
Timestamp: 2024-10-09T08:32:15.667Z
Learning: Avoid commenting on draft PRs for the user PhilipDeegan.

Learnt from: PhilipDeegan
Repo: PHAREHUB/PHARE PR: 888
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:566-568
Timestamp: 2024-09-09T13:57:02.285Z
Learning: Avoid commenting on draft PRs for PhilipDeegan unless explicitly requested.

if "smallest_patch_size" in kwargs and kwargs["smallest_patch_size"] is not None:
smallest_patch_size = phare_utilities.np_array_ify(
kwargs["smallest_patch_size"], ndim
Expand Down
4 changes: 2 additions & 2 deletions pyphare/pyphare/simulator/simulator.py
@@ -75,7 +75,7 @@ class Simulator:

These arguments have good default, change them at your own risk.

* **print_one_line**: (``bool``), default True, will print simulator info per advance on one line (erasing the previous)
* **print_one_line**: (``bool``), default False, will print simulator info per advance on one line (erasing the previous)
* **auto_dump**: (``bool``), if True (default), will dump diagnostics automatically at requested timestamps
* **post_advance**: (``Function``), default None. A python function to execute after each advance()
* **log_to_file**: if True (default), will log prints made from C++ code per MPI rank to the .log directory
@@ -91,7 +91,7 @@ def __init__(self, simulation, auto_dump=True, **kwargs):
self.post_advance = kwargs.get("post_advance", None)
self.initialized = False
self.print_eol = "\n"
if kwargs.get("print_one_line", True):
if kwargs.get("print_one_line", False):
self.print_eol = "\r"
self.print_eol = kwargs.get("print_eol", self.print_eol)
self.log_to_file = kwargs.get("log_to_file", True)
Expand Down
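The effect of the `print_one_line` default change above can be sketched by extracting the constructor's end-of-line selection into a small helper (a sketch only; `choose_print_eol` is a hypothetical name, not the Simulator API):

```python
# Sketch of the Simulator constructor's eol selection after this change.
# choose_print_eol is a hypothetical helper name, not the actual API.

def choose_print_eol(**kwargs):
    print_eol = "\n"
    if kwargs.get("print_one_line", False):  # default was True before this PR; now opt-in
        print_eol = "\r"
    # an explicit print_eol kwarg still wins, as in the original code
    return kwargs.get("print_eol", print_eol)


print(repr(choose_print_eol()))                     # one line printed per advance
print(repr(choose_print_eol(print_one_line=True)))  # carriage return overwrites the previous line
```

With the new default, per-advance output scrolls normally unless the caller explicitly opts in to the single-line (carriage-return) display.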
54 changes: 28 additions & 26 deletions src/amr/data/field/field_data.hpp
@@ -27,8 +27,12 @@ namespace amr
typename PhysicalQuantity = decltype(std::declval<Grid_t>().physicalQuantity())>
class FieldData : public SAMRAI::hier::PatchData
{
using Super = SAMRAI::hier::PatchData;
using Super = SAMRAI::hier::PatchData;

public:
using value_type = Grid_t::value_type;

private:
using SetEqualOp = core::Equals<value_type>;

public:
@@ -98,7 +102,7 @@
* The data will be copied from the interior and ghost of the source to the interior and
* ghost of the destination, where there is an overlap in the underlying index space
*/
void copy(const SAMRAI::hier::PatchData& source) final
void copy(SAMRAI::hier::PatchData const& source) final
{
PHARE_LOG_SCOPE(3, "FieldData::copy");

@@ -154,8 +158,8 @@
* give the necessary transformation to apply to the source, to perform the copy (ie :
* translation for periodics condition)
*/
void copy(const SAMRAI::hier::PatchData& source,
const SAMRAI::hier::BoxOverlap& overlap) final
void copy(SAMRAI::hier::PatchData const& source,
SAMRAI::hier::BoxOverlap const& overlap) final
{
PHARE_LOG_SCOPE(3, "FieldData::copy");

@@ -172,7 +176,7 @@
/*** \brief This form should not be called since we cannot derive from FieldData
*/
void copy2([[maybe_unused]] SAMRAI::hier::PatchData& destination,
[[maybe_unused]] const SAMRAI::hier::BoxOverlap& overlap) const final
[[maybe_unused]] SAMRAI::hier::BoxOverlap const& overlap) const final
{
throw std::runtime_error("Error cannot cast the PatchData to FieldData");
}
@@ -193,7 +197,7 @@
/*** \brief Compute the maximum amount of memory needed to hold FieldData information on
* the specified overlap
*/
std::size_t getDataStreamSize(const SAMRAI::hier::BoxOverlap& overlap) const final
std::size_t getDataStreamSize(SAMRAI::hier::BoxOverlap const& overlap) const final
{
return getDataStreamSize_(overlap);
}
@@ -205,7 +209,7 @@
* overlap, and put it on the stream.
*/
void packStream(SAMRAI::tbox::MessageStream& stream,
const SAMRAI::hier::BoxOverlap& overlap) const final
SAMRAI::hier::BoxOverlap const& overlap) const final
{
PHARE_LOG_SCOPE(3, "packStream");

@@ -243,14 +247,14 @@
* by the overlap, and fill the data where is needed.
*/
void unpackStream(SAMRAI::tbox::MessageStream& stream,
const SAMRAI::hier::BoxOverlap& overlap) final
SAMRAI::hier::BoxOverlap const& overlap) final
{
unpackStream(stream, overlap, field);
}

template<typename Operator = SetEqualOp>
void unpackStream(SAMRAI::tbox::MessageStream& stream,
const SAMRAI::hier::BoxOverlap& overlap, Grid_t& dst_grid)
SAMRAI::hier::BoxOverlap const& overlap, Grid_t& dst_grid)
{
PHARE_LOG_SCOPE(3, "unpackStream");

@@ -308,9 +312,14 @@
}


void sum(SAMRAI::hier::PatchData const& src, SAMRAI::hier::BoxOverlap const& overlap);
void unpackStreamAndSum(SAMRAI::tbox::MessageStream& stream,
SAMRAI::hier::BoxOverlap const& overlap);
template<typename Operation>
void operate(SAMRAI::hier::PatchData const& src, SAMRAI::hier::BoxOverlap const& overlap);


template<typename Operation>
void unpackStreamAnd(SAMRAI::tbox::MessageStream& stream,
SAMRAI::hier::BoxOverlap const& overlap);


GridLayoutT gridLayout;
Grid_t field;
@@ -367,9 +376,6 @@
{
SAMRAI::hier::BoxContainer const& boxList = overlap.getDestinationBoxContainer();

SAMRAI::hier::IntVector const zeroOffset{
SAMRAI::hier::IntVector::getZero(SAMRAI::tbox::Dimension{dimension})};

if (transformation.getBeginBlock() == transformation.getEndBlock())
{
for (auto const& box : boxList)
@@ -434,28 +440,24 @@ namespace PHARE::amr


template<typename GridLayoutT, typename Grid_t, typename PhysicalQuantity>
void FieldData<GridLayoutT, Grid_t, PhysicalQuantity>::unpackStreamAndSum(
template<typename Operation>
void FieldData<GridLayoutT, Grid_t, PhysicalQuantity>::unpackStreamAnd(
SAMRAI::tbox::MessageStream& stream, SAMRAI::hier::BoxOverlap const& overlap)
{
using PlusEqualOp = core::PlusEquals<value_type>;

unpackStream<PlusEqualOp>(stream, overlap, field);
unpackStream<Operation>(stream, overlap, field);
}



template<typename GridLayoutT, typename Grid_t, typename PhysicalQuantity>
void FieldData<GridLayoutT, Grid_t, PhysicalQuantity>::sum(SAMRAI::hier::PatchData const& src,
SAMRAI::hier::BoxOverlap const& overlap)
template<typename Operation>
void FieldData<GridLayoutT, Grid_t, PhysicalQuantity>::operate(
SAMRAI::hier::PatchData const& src, SAMRAI::hier::BoxOverlap const& overlap)
{
using PlusEqualOp = core::PlusEquals<value_type>;

TBOX_ASSERT_OBJDIM_EQUALITY2(*this, src);

auto& fieldOverlap = dynamic_cast<FieldOverlap const&>(overlap);
auto& fieldSource = dynamic_cast<FieldData const&>(src);

copy_<PlusEqualOp>(fieldSource, fieldOverlap, field);
copy_<Operation>(fieldSource, fieldOverlap, field);
}


Expand Down
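The refactor in this file replaces the hard-coded `sum`/`unpackStreamAndSum` pair with operation-parameterized `operate`/`unpackStreamAnd` templates, so the assignment operator (`core::Equals` vs `core::PlusEquals`) becomes a template argument instead of being baked in. A minimal Python analogue of that pattern (hypothetical names; this is an illustration, not the PHARE API):

```python
# Python analogue (hypothetical) of replacing a hard-coded "sum" copy with an
# operation-parameterized "operate": the assignment becomes a pluggable functor.

def equals(dst, i, value):       # analogue of core::Equals<value_type>
    dst[i] = value


def plus_equals(dst, i, value):  # analogue of core::PlusEquals<value_type>
    dst[i] += value


def operate(dst, src, overlap_indices, op=equals):
    # apply op to dst at every index of the overlap, reading from src
    for i in overlap_indices:
        op(dst, i, src[i])


dst = [0, 0, 0, 10]
src = [1, 2, 3, 4]
operate(dst, src, [0, 1], op=equals)    # plain copy over indices 0 and 1
operate(dst, src, [3], op=plus_equals)  # accumulate at index 3
print(dst)  # [1, 2, 0, 14]
```

The design benefit mirrors the C++ change: new accumulation semantics need only a new functor, not another near-duplicate copy/unpack method.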
27 changes: 14 additions & 13 deletions src/amr/data/tensorfield/tensor_field_data.hpp
@@ -46,7 +46,10 @@ class TensorFieldData : public SAMRAI::hier::PatchData
[&](auto i) { return Grid_t{compNames[i], qts[i], layout.allocSize(qts[i])}; });
}

public:
using value_type = Grid_t::value_type;

private:
using SetEqualOp = core::Equals<value_type>;

public:
@@ -336,10 +339,12 @@ class TensorFieldData : public SAMRAI::hier::PatchData
return patchData->grids;
}

void sum(SAMRAI::hier::PatchData const& src, SAMRAI::hier::BoxOverlap const& overlap);
void unpackStreamAndSum(SAMRAI::tbox::MessageStream& stream,
SAMRAI::hier::BoxOverlap const& overlap);

template<typename Operation>
void operate(SAMRAI::hier::PatchData const& src, SAMRAI::hier::BoxOverlap const& overlap);
template<typename Operation>
void unpackStreamAnd(SAMRAI::tbox::MessageStream& stream,
SAMRAI::hier::BoxOverlap const& overlap);


GridLayoutT gridLayout;
@@ -481,30 +486,26 @@




template<std::size_t rank, typename GridLayoutT, typename Grid_t, typename PhysicalQuantity>
void TensorFieldData<rank, GridLayoutT, Grid_t, PhysicalQuantity>::unpackStreamAndSum(
template<typename Operation>
void TensorFieldData<rank, GridLayoutT, Grid_t, PhysicalQuantity>::unpackStreamAnd(
SAMRAI::tbox::MessageStream& stream, SAMRAI::hier::BoxOverlap const& overlap)
{
using PlusEqualOp = core::PlusEquals<value_type>;

unpackStream<PlusEqualOp>(stream, overlap, grids);
unpackStream<Operation>(stream, overlap, grids);
}



template<std::size_t rank, typename GridLayoutT, typename Grid_t, typename PhysicalQuantity>
void TensorFieldData<rank, GridLayoutT, Grid_t, PhysicalQuantity>::sum(
template<typename Operation>
void TensorFieldData<rank, GridLayoutT, Grid_t, PhysicalQuantity>::operate(
SAMRAI::hier::PatchData const& src, SAMRAI::hier::BoxOverlap const& overlap)
{
using PlusEqualOp = core::PlusEquals<value_type>;

TBOX_ASSERT_OBJDIM_EQUALITY2(*this, src);

auto& fieldOverlap = dynamic_cast<TensorFieldOverlap_t const&>(overlap);
auto& fieldSource = dynamic_cast<TensorFieldData const&>(src);

copy_<PlusEqualOp>(fieldSource, fieldOverlap, *this);
copy_<Operation>(fieldSource, fieldOverlap, *this);
}


Expand Down
13 changes: 6 additions & 7 deletions src/amr/level_initializer/hybrid_level_initializer.hpp
@@ -128,6 +128,7 @@ namespace solver
ions.computeChargeDensity();
ions.computeBulkVelocity();
}
hybMessenger.fillIonBorders(ions, level, initDataTime);

// on level i>0, this relies on 'prepareStep' having been called on when
// level i-1 was initialized (at the end of this function)
@@ -145,38 +146,36 @@
// this only needs to be done for the root level
// since otherwise initLevel has done it already
// TODO NICO comment! E is regridded, we only needed J for E

if (!isRegriddingL0)
if (isRootLevel(levelNumber))
{
auto& B = hybridModel.state.electromag.B;
auto& J = hybridModel.state.J;

for (auto& patch : level)
for (auto& patch : rm.enumerate(level, B, J))
{
auto _ = hybridModel.resourcesManager->setOnPatch(*patch, B, J);
auto layout = PHARE::amr::layoutFromPatch<GridLayoutT>(*patch);
auto __ = core::SetLayout(&layout, ampere_);
ampere_(B, J);

hybridModel.resourcesManager->setTime(J, *patch, 0.);
rm.setTime(J, *patch, 0.);
}
hybMessenger.fillCurrentGhosts(J, level, 0.);

auto& electrons = hybridModel.state.electrons;
auto& E = hybridModel.state.electromag.E;

for (auto& patch : level)
for (auto& patch : rm.enumerate(level, B, E, J, electrons))
{
auto layout = PHARE::amr::layoutFromPatch<GridLayoutT>(*patch);
auto _
= hybridModel.resourcesManager->setOnPatch(*patch, B, E, J, electrons);
electrons.update(layout);
auto& Ve = electrons.velocity();
auto& Ne = electrons.density();
auto& Pe = electrons.pressure();
auto __ = core::SetLayout(&layout, ohm_);
ohm_(Ne, Ve, Pe, B, J, E);
hybridModel.resourcesManager->setTime(E, *patch, 0.);
rm.setTime(E, *patch, 0.);
}

hybMessenger.fillElectricGhosts(E, level, 0.);
Expand Down
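The loop changes in this file swap manual `resourcesManager->setOnPatch(...)` guards for `rm.enumerate(level, resources...)`. The real C++ helper is not shown in this diff, but its shape can be sketched hypothetically in Python: a generator that yields each patch with the resources bound for the duration of the loop body.

```python
# Hypothetical sketch of what an rm.enumerate(level, *resources) helper could do:
# bind resources on each patch for the scope of the iteration, then unbind.
from contextlib import contextmanager


class FakeResourcesManager:
    def __init__(self):
        self.log = []  # records bind/unbind order for illustration

    @contextmanager
    def set_on_patch(self, patch, *resources):
        self.log.append(("bind", patch))
        try:
            yield
        finally:
            self.log.append(("unbind", patch))

    def enumerate(self, level, *resources):
        for patch in level:
            with self.set_on_patch(patch, *resources):
                yield patch


rm = FakeResourcesManager()
visited = [p for p in rm.enumerate(["p0", "p1"], "B", "J")]
print(visited)  # ['p0', 'p1']
```

The appeal of such a helper matches the diff: the per-patch `auto _ = ... setOnPatch(...)` boilerplate disappears from every loop body.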