Conversation

- fine_boundary_represents_var for copy **before** refinement
- PatchData NaN initialized on construction
- fix tests failing as result of above
- comment a field refinement test (useless, wrong refinement op for E,B)
- debug plots for advance field overlap test
- copy done before refinement (boolean false in variable)
- overwrite_interior false also for refinement is default for FieldFillPattern
- J manually init to zero in model init, fine init and regrid init (Jx unused in ampere but used in Ohm with its now NaN values)
- Grid/NdarrayVector take default value overrides (for test)
- UsableTensorField is default constructed with zero init.
- TensorFieldData - with refluxing
📝 Walkthrough

Introduces tensor-field support across AMR (data, geometry, overlap, variables, factories), refactors field coarsen/refine/time-interpolate operators to tensor-aware forms, replaces the magnetic-field coarsener with an electric-field coarsener, adds a magnetic regrider, changes ghost-filling APIs to pass PatchLevel, and adds a reflux/flux-sum workflow in the solvers/integrator. Broad IWYU/include cleanups and test updates.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant MPI as MultiPhysicsIntegrator
    participant HM as HybridMessenger
    participant HMS as HybridMessengerStrategy
    participant Sol as Solver
    participant PH as PatchHierarchy
    participant Lv as PatchLevel
    rect rgb(245,248,255)
    note right of MPI: StandardLevelSynchronization
    MPI->>HM: reflux(coarserLevel, fineLevel, syncTime)
    HM->>HMS: reflux(coarserLevel, fineLevel, syncTime)
    HMS-->>HM: done
    HM-->>MPI: done
    MPI->>Sol: reflux(model, coarseLevel, syncTime)
    Sol-->>MPI: done
    end
    rect rgb(242,255,245)
    note right of MPI: AdvanceLevel subcycle
    MPI->>Sol: prepareStep(model, Lv, currentTime)
    Sol-->>MPI: done
    par per substep
    MPI->>Sol: advanceLevel(...)
    Sol-->>MPI: done
    MPI->>Sol: accumulateFluxSum(model, Lv, coef)
    Sol-->>MPI: done
    end
    MPI->>Sol: resetFluxSum(model, Lv)
    Sol-->>MPI: done
    end
```
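The per-substep flux accumulation shown in the diagram can be pictured as a coefficient-weighted running sum that is reset after the coarser-level synchronization. The following is only an illustrative sketch under that reading — `accumulate_flux_sum` and `reset_flux_sum` are invented names, not PHARE's actual solver API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Each substep contributes its flux weighted by a time-step coefficient.
void accumulate_flux_sum(std::vector<double>& fluxSum,
                         std::vector<double> const& substepFlux, double coef)
{
    for (std::size_t i = 0; i < fluxSum.size(); ++i)
        fluxSum[i] += coef * substepFlux[i];
}

// Once the coarser level has been synchronized, the sum starts over.
void reset_flux_sum(std::vector<double>& fluxSum)
{
    std::fill(fluxSum.begin(), fluxSum.end(), 0.0);
}
```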
```mermaid
sequenceDiagram
    autonumber
    participant LI as LevelInitializer
    participant HM as HybridMessenger
    participant HMS as HybridMessengerStrategy
    participant Lv as PatchLevel
    rect rgb(255,249,240)
    note right of LI: Ghost fills now use Level
    LI->>HM: fillElectricGhosts(E, Lv, time)
    HM->>HMS: fillElectricGhosts(E, Lv, time)
    HMS-->>HM: done
    HM-->>LI: done
    LI->>HM: fillCurrentGhosts(J, Lv, time)
    HM->>HMS: fillCurrentGhosts(J, Lv, time)
    HMS-->>HM: done
    HM-->>LI: done
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~180 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
```diff
 #include <SAMRAI/hier/Box.h>
+#include "amr/resources_manager/amr_utils.hpp"
```

```diff
 template<typename Dst>
-void linear_time_interpolate(Dst& fieldDest, auto& fieldSrcOld, auto& fieldSrcNew, auto&&... args)
+void linear_time_interpolate(Dst& fieldDest, auto const& fieldSrcOld, auto const& fieldSrcNew,
+                             auto&&... args)
```
Actionable comments posted: 22
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (14)
src/amr/tagging/hybrid_tagger.hpp (2)
65-67: Null checks missing before dereferencing PatchData/tag buffer

If tag_index is invalid or the data is not CellData, `pd` will be null and dereferenced. Similarly, `getPointer()` can return null. Guard both to fail fast with a clear message.

Apply this diff:

```diff
-        auto pd = dynamic_cast<SAMRAI::pdat::CellData<int>*>(patch.getPatchData(tag_index).get());
-        auto tags = pd->getPointer();
+        auto* pd = dynamic_cast<SAMRAI::pdat::CellData<int>*>(patch.getPatchData(tag_index).get());
+        if (!pd)
+            throw std::runtime_error("HybridTagger: PatchData at tag_index is not CellData<int> (or is null)");
+        auto* tags = pd->getPointer();
+        if (!tags)
+            throw std::runtime_error("HybridTagger: CellData<int>::getPointer() returned null");
```
94-104: Tag copy is 2D-only; generalize to N-D or flatten for correctness

When dimension != 2, no tags are copied, which is incorrect for 1D/3D configurations. Use a flat copy over the total number of cells (contiguous layouts) to cover all dimensions.

Apply this diff:

```diff
-        if constexpr (HybridModel::dimension == 2)
-        {
-            for (auto iTag_x = 0u; iTag_x < nbrCells[0]; ++iTag_x)
-            {
-                for (auto iTag_y = 0u; iTag_y < nbrCells[1]; ++iTag_y)
-                {
-                    tagsv(iTag_x, iTag_y) = tagsvF(iTag_x, iTag_y);
-                }
-            }
-        }
+        {
+            const auto n = core::product(nbrCells);
+            // Both buffers are contiguous in row-major order.
+            std::copy_n(tags, n, hybridModel.tags[key]->data());
+        }
```

src/amr/wrappers/integrator.hpp (1)
107-121: Guard against divide-by-zero when normalizing workloads.

max_value can be 0.0 (e.g., no patches or zero workloads), leading to division by zero and NaNs. Add an early guard.

Apply this diff:

```diff
         PHARE_LOG_SCOPE(1, "Integrator::_should_rebalance_now::automatic");
-        auto workLoads       = core::mpi::collect(computeNonUniformWorkLoadForLevel0());
-        auto const max_value = *std::max_element(workLoads.begin(), workLoads.end());
+        auto workLoads       = core::mpi::collect(computeNonUniformWorkLoadForLevel0());
+        auto const max_value = *std::max_element(workLoads.begin(), workLoads.end());
+        if (max_value <= 0.0)
+        {
+            // Nothing to normalize; treat as balanced and back off.
+            rebalance_coarsest_auto_back_off_by *= lb_info_.next_rebalance_backoff_multiplier;
+            rebalance_coarsest_auto_back_off = rebalance_coarsest_auto_back_off_by;
+            if (rebalance_coarsest_auto_back_off > lb_info_.max_next_rebalance)
+                rebalance_coarsest_auto_back_off_by = rebalance_coarsest_auto_back_off
+                    = lb_info_.max_next_rebalance;
+            return false;
+        }
         for (auto& workload : workLoads)
             workload /= max_value;
         auto const min_value = *std::min_element(workLoads.begin(), workLoads.end());
         assert(min_value <= 1);
```

src/core/utilities/mpi_utils.hpp (3)
64-96: Broadcast root hardcoded to 0; vectors/strings use reserve() then write into data() ⇒ UB.
`all_get_from` must broadcast from `rank_` (not 0). For `std::string`/`std::vector`, you must `resize()` before `MPI_Bcast` so the buffer is valid. Current code risks undefined behavior and wrong sender.

```diff
 template<typename Fn, typename... Args>
 auto all_get_from(int const& rank_, Fn&& fn, Args&&... args)
 {
     using Data = std::decay_t<std::invoke_result_t<Fn&, Args&...>>;
     Data var;
     auto local_rank = rank();
     if (local_rank == rank_)
         var = fn(args...);
-    void* data = &var;
-
-    int count = 1; // default
-    MPI_Datatype sendtype;
-    if constexpr (std::is_same_v<std::string, Data> or core::is_std_vector_v<Data>)
-    {
-        sendtype = mpi_type_for<typename Data::value_type>();
-        count    = all_get_from(rank_, [&]() { return var.size(); });
-        if (local_rank != rank_)
-            var.reserve(count);
-        data = var.data();
-    }
-    else
-        sendtype = mpi_type_for<Data>();
-
-    MPI_Bcast(data, count, sendtype, 0, MPI_COMM_WORLD);
+    // Scalars
+    if constexpr (!(std::is_same_v<std::string, Data> || core::is_std_vector_v<Data>))
+    {
+        MPI_Bcast(&var, 1, mpi_type_for<Data>(), rank_, MPI_COMM_WORLD);
+        return var;
+    }
+
+    // Strings / vectors
+    MPI_Datatype dtype = mpi_type_for<typename Data::value_type>();
+    int count          = 0;
+    if (local_rank == rank_)
+        count = static_cast<int>(var.size());
+    MPI_Bcast(&count, 1, MPI_INT, rank_, MPI_COMM_WORLD);
+
+    // Ensure writable buffer
+    var.resize(count);
+    void* data = var.data();
+    MPI_Bcast(data, count, dtype, rank_, MPI_COMM_WORLD);
     return var;
 }
```
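The core hazard in the original code is that `reserve()` changes capacity but not size, so writing through `data()` into the reserved region is undefined behavior; `resize()` must be used before handing the buffer to `MPI_Bcast`. A plain (non-MPI) illustration of the difference — the function name here is mine, not part of the codebase:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// resize() yields a valid, writable buffer of n elements; reserve() does not.
std::vector<int> make_recv_buffer(std::size_t n)
{
    std::vector<int> buf;
    buf.resize(n); // size() == n, elements value-initialized to 0
    // buf.reserve(n) alone would leave size() == 0: writing buf.data()[i]
    // would be UB, and the vector would still appear empty afterwards.
    return buf;
}
```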
53-56: Wrong MPI type for std::uint8_t.
`MPI_UNSIGNED_SHORT` is 16-bit. Use `MPI_UNSIGNED_CHAR` (or `MPI_UINT8_T` if guaranteed).

```diff
-        else if constexpr (std::is_same_v<std::uint8_t, Data>)
-            return MPI_UNSIGNED_SHORT;
+        else if constexpr (std::is_same_v<std::uint8_t, Data>)
+            return MPI_UNSIGNED_CHAR;
```
195-198: memcpy uses element count instead of bytes.

Third arg must be bytes; current code truncates for sizeof(T) != 1.

```diff
-            std::memcpy(&values[i], &datas[maxMPISize * i], maxMPISize);
+            std::memcpy(&values[i], &datas[maxMPISize * i], maxMPISize * sizeof(Data));
```

src/core/data/grid/grid.hpp (1)
86-91: Copy constructor drops data; either deep-copy or delete copy.

This ctor allocates the same shape but does not copy the payload, which is a surprising and dangerous deviation from standard copy semantics.

Apply one of the following:

Option A (deep copy):

```diff
-    Grid(Grid const& source) // let field_ default
-        : Super{source.shape()}
-        , name_{source.name()}
-        , qty_{source.physicalQuantity()}
-    {
-    }
+    Grid(Grid const& source)
+        : Super{source.shape()}
+        , name_{source.name()}
+        , qty_{source.physicalQuantity()}
+    {
+        this->copyData(source);
+    }
```

Option B (forbid copies explicitly and rely on moves):

```diff
-    Grid(Grid const& source) // let field_ default
-        : Super{source.shape()}
-        , name_{source.name()}
-        , qty_{source.physicalQuantity()}
-    {
-    }
+    Grid(Grid const& source) = delete;
```

src/amr/data/field/refine/electric_field_refiner.hpp (1)
14-16: Missing <cmath> for std::isnan

Compilation may rely on transitive includes; add <cmath> explicitly.

```diff
 #include <cstddef>
+#include <cmath>
```

src/amr/data/field/field_data.hpp (3)
31-33: Dependent type needs typename; this won't compile

Add typename to the value_type alias.

```diff
-    using value_type = Grid_t::value_type;
+    using value_type = typename Grid_t::value_type;
```
218-239: Reserve uses sizeof(double); use value_type consistently

The reserve count should be in elements of value_type.

```diff
-        std::vector<value_type> buffer;
-        buffer.reserve(getDataStreamSize_(overlap) / sizeof(double));
+        std::vector<value_type> buffer;
+        buffer.reserve(getDataStreamSize_(overlap) / sizeof(value_type));
```
262-267: Packing/unpacking buffer type mismatch (double vs value_type)

Unpacking uses std::vector<double> but packing uses std::vector<value_type>. This can corrupt data when value_type != double.

```diff
-        std::vector<double> buffer(getDataStreamSize(overlap) / sizeof(value_type), 0.);
+        std::vector<value_type> buffer(getDataStreamSize(overlap) / sizeof(value_type),
+                                       value_type{0});
```

src/amr/data/field/coarsening/electric_field_coarsener.hpp (1)
37-45: Undefined identifier 'refinementRatio' and ignored ctor ratio — breaks build and logic.

The ctor discards the ratio and the body uses an undefined symbol. Store the ratio and use it to compute fineStartIndex.

```diff
 class ElectricFieldCoarsener
 {
 public:
-    ElectricFieldCoarsener(std::array<core::QtyCentering, dimension> const centering,
+    ElectricFieldCoarsener(std::array<core::QtyCentering, dimension> const centering,
                            SAMRAI::hier::Box const& sourceBox,
                            SAMRAI::hier::Box const& destinationBox,
-                           SAMRAI::hier::IntVector const& /*ratio*/)
+                           SAMRAI::hier::IntVector const& ratio)
         : centering_{centering}
         , sourceBox_{sourceBox}
         , destinationBox_{destinationBox}
+        , ratio_{ratio}
     {
     }
 ...
     for (auto i = std::size_t{0}; i < dimension; ++i)
     {
-        fineStartIndex[i] = coarseIndex[i] * refinementRatio;
+        fineStartIndex[i] = coarseIndex[i] * ratio_(static_cast<int>(i));
     }
 ...
 private:
     std::array<core::QtyCentering, dimension> const centering_;
     SAMRAI::hier::Box const sourceBox_;
     SAMRAI::hier::Box const destinationBox_;
+    SAMRAI::hier::IntVector const ratio_;
```

Also applies to: 57-60, 147-150
src/amr/multiphysics_integrator.hpp (1)
524-532: Avoid negative indexing for root level (iLevel == 0)

subcycleStartTimes_[iLevel - 1] and subcycleEndTimes_[iLevel - 1] are undefined for level 0; they create bogus map entries.

```diff
-            fromCoarser.firstStep(model, *level, hierarchy, currentTime,
-                                  subcycleStartTimes_[iLevel - 1],
-                                  subcycleEndTimes_[iLevel - 1]);
+            auto prevCoarserTime = (iLevel == 0) ? currentTime : subcycleStartTimes_[iLevel - 1];
+            auto newCoarserTime  = (iLevel == 0) ? newTime : subcycleEndTimes_[iLevel - 1];
+            fromCoarser.firstStep(model, *level, hierarchy, currentTime,
+                                  prevCoarserTime, newCoarserTime);
```

src/amr/resources_manager/resources_manager.hpp (1)
536-545: Missing typename on dependent type; breaks compilation

ResourceResolver_t::type is dependent; it needs typename.

```diff
-        using ResourcesType = ResourceResolver_t::type;
+        using ResourcesType = typename ResourceResolver_t::type;
```
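For reference, the dependent-name rule is easy to reproduce in isolation. In this sketch (all names invented for illustration), `T::type` inside the template cannot be treated as a type without the `typename` keyword:

```cpp
#include <cassert>

// `typename T::type` tells the compiler the dependent name T::type is a type,
// not a static member; omitting `typename` here fails to compile.
template<typename T>
typename T::type first_of(typename T::type a, typename T::type /*b*/)
{
    return a;
}

// A resolver-like type exposing a nested `type` alias.
struct IntResolver
{
    using type = int;
};
```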
```diff
         if constexpr (dimension == 1)
         {
-            // in 1D div(B) is automatically satisfied so using this coarsening
-            // opertor is probably not better than the default one, but we do that
-            // for a kind of consistency...
-            // coarse flux is equal to fine flux and we're 1D so there is flux partitioned
-            // only for By and Bz, Bx is equal to the fine value
-            if (centering_[dirX] == core::QtyCentering::primal) // bx
-            {
-                coarseField(coarseIndex[dirX]) = fineField(fineStartIndex[dirX]);
-            }
-            else if (centering_[dirX] == core::QtyCentering::dual) // by and bz
+            if (centering_[dirX] == core::QtyCentering::dual) // ex
             {
                 coarseField(coarseIndex[dirX])
                     = 0.5 * (fineField(fineStartIndex[dirX] + 1) + fineField(fineStartIndex[dirX]));
             }
+            else if (centering_[dirX] == core::QtyCentering::primal) // ey, ez
+            {
+                coarseField(coarseIndex[dirX]) = fineField(fineStartIndex[dirX]);
+            }
         }
```
Coarsening assumes ratio==2 via 0.5 average; generalize to any refinement ratio along the dual axis.
Average over ratio_(X) fine edges for Ex in 1D; inject for primal. Current code is incorrect for ratios ≠ 2.
```diff
-            if (centering_[dirX] == core::QtyCentering::dual) // ex
-            {
-                coarseField(coarseIndex[dirX])
-                    = 0.5 * (fineField(fineStartIndex[dirX] + 1) + fineField(fineStartIndex[dirX]));
-            }
+            if (centering_[dirX] == core::QtyCentering::dual) // Ex
+            {
+                auto const rx = ratio_(dirX);
+                double acc    = 0.;
+                for (int sx = 0; sx < rx; ++sx)
+                    acc += fineField(fineStartIndex[dirX] + sx);
+                coarseField(coarseIndex[dirX]) = acc / rx;
+            }
             else if (centering_[dirX] == core::QtyCentering::primal) // ey, ez
             {
                 coarseField(coarseIndex[dirX]) = fineField(fineStartIndex[dirX]);
             }
```

📝 Committable suggestion

```suggestion
            if constexpr (dimension == 1)
            {
                if (centering_[dirX] == core::QtyCentering::dual) // Ex
                {
                    auto const rx = ratio_(dirX);
                    double acc    = 0.;
                    for (int sx = 0; sx < rx; ++sx)
                        acc += fineField(fineStartIndex[dirX] + sx);
                    coarseField(coarseIndex[dirX]) = acc / rx;
                }
                else if (centering_[dirX] == core::QtyCentering::primal) // ey, ez
                {
                    coarseField(coarseIndex[dirX]) = fineField(fineStartIndex[dirX]);
                }
            }
```
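All of these branch fixes follow the same pattern: a mean over the refinement ratio along the dual axis, which reduces to the 0.5 two-point average exactly when the ratio is 2. A standalone sketch on a 1D array — the function name is illustrative, not the coarsener's API:

```cpp
#include <cassert>
#include <vector>

// Mean of `ratio` consecutive fine values starting at `fineStart`;
// equals 0.5 * (f[i] + f[i + 1]) when ratio == 2.
double coarsen_dual(std::vector<double> const& fine, int fineStart, int ratio)
{
    double acc = 0.;
    for (int s = 0; s < ratio; ++s)
        acc += fine[fineStart + s];
    return acc / ratio;
}
```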
```diff
             if (centering_[dirX] == core::QtyCentering::dual
                 and centering_[dirY] == core::QtyCentering::primal) // ex
             {
                 coarseField(coarseIndex[dirX], coarseIndex[dirY])
                     = 0.5
                       * (fineField(fineStartIndex[dirX], fineStartIndex[dirY])
-                         + fineField(fineStartIndex[dirX], fineStartIndex[dirY] + 1));
+                         + fineField(fineStartIndex[dirX] + 1, fineStartIndex[dirY]));
             }
```
2D Ex: average across X over ratio_(X), not a hardcoded 0.5 of two points.
```diff
-            if (centering_[dirX] == core::QtyCentering::dual
-                and centering_[dirY] == core::QtyCentering::primal) // ex
+            if (centering_[dirX] == core::QtyCentering::dual
+                and centering_[dirY] == core::QtyCentering::primal) // Ex
             {
-                coarseField(coarseIndex[dirX], coarseIndex[dirY])
-                    = 0.5
-                      * (fineField(fineStartIndex[dirX], fineStartIndex[dirY])
-                         + fineField(fineStartIndex[dirX] + 1, fineStartIndex[dirY]));
+                auto const rx = ratio_(dirX);
+                double acc    = 0.;
+                for (int sx = 0; sx < rx; ++sx)
+                    acc += fineField(fineStartIndex[dirX] + sx, fineStartIndex[dirY]);
+                coarseField(coarseIndex[dirX], coarseIndex[dirY]) = acc / rx;
             }
```

📝 Committable suggestion

```suggestion
            if (centering_[dirX] == core::QtyCentering::dual
                and centering_[dirY] == core::QtyCentering::primal) // Ex
            {
                auto const rx = ratio_(dirX);
                double acc    = 0.;
                for (int sx = 0; sx < rx; ++sx)
                    acc += fineField(fineStartIndex[dirX] + sx, fineStartIndex[dirY]);
                coarseField(coarseIndex[dirX], coarseIndex[dirY]) = acc / rx;
            }
```
🤖 Prompt for AI Agents
In src/amr/data/field/coarsening/electric_field_coarsener.hpp around lines 80 to
87, the coarse-field assignment for the 2D Ex case currently uses a hardcoded
0.5 average of two points; replace that with a proper average across
ratio_[dirX] fine cells. Compute the sum of fineField over fineStartIndex[dirX]
.. fineStartIndex[dirX] + ratio_[dirX] - 1 (at the same fineStartIndex[dirY])
and divide by static_cast<double>(ratio_[dirX]) (or the appropriate scalar
type), or implement an explicit loop accumulating the values, then assign
coarseField = sum / ratio_[dirX]; ensure you reference ratio_[dirX] (not a
hardcoded 2) and preserve types/centering logic.
```diff
             else if (centering_[dirX] == core::QtyCentering::primal
                      and centering_[dirY] == core::QtyCentering::dual) // ey
             {
                 coarseField(coarseIndex[dirX], coarseIndex[dirY])
                     = 0.5
                       * (fineField(fineStartIndex[dirX], fineStartIndex[dirY])
-                         + fineField(fineStartIndex[dirX] + 1, fineStartIndex[dirY]));
+                         + fineField(fineStartIndex[dirX], fineStartIndex[dirY] + 1));
             }
```
2D Ey: average across Y over ratio_(Y).
```diff
-            else if (centering_[dirX] == core::QtyCentering::primal
-                     and centering_[dirY] == core::QtyCentering::dual) // ey
+            else if (centering_[dirX] == core::QtyCentering::primal
+                     and centering_[dirY] == core::QtyCentering::dual) // Ey
             {
-                coarseField(coarseIndex[dirX], coarseIndex[dirY])
-                    = 0.5
-                      * (fineField(fineStartIndex[dirX], fineStartIndex[dirY])
-                         + fineField(fineStartIndex[dirX], fineStartIndex[dirY] + 1));
+                auto const ry = ratio_(dirY);
+                double acc    = 0.;
+                for (int sy = 0; sy < ry; ++sy)
+                    acc += fineField(fineStartIndex[dirX], fineStartIndex[dirY] + sy);
+                coarseField(coarseIndex[dirX], coarseIndex[dirY]) = acc / ry;
             }
```

📝 Committable suggestion

```suggestion
            else if (centering_[dirX] == core::QtyCentering::primal
                     and centering_[dirY] == core::QtyCentering::dual) // Ey
            {
                auto const ry = ratio_(dirY);
                double acc    = 0.;
                for (int sy = 0; sy < ry; ++sy)
                    acc += fineField(fineStartIndex[dirX], fineStartIndex[dirY] + sy);
                coarseField(coarseIndex[dirX], coarseIndex[dirY]) = acc / ry;
            }
```
🤖 Prompt for AI Agents
In src/amr/data/field/coarsening/electric_field_coarsener.hpp around lines
88-95, the 2D Ey case currently averages only two fine cells with a fixed factor
0.5 but must average across the full refinement ratio in Y (ratio_(Y)). Replace
the fixed two-point average with a sum over k=0..ratio_(Y)-1 of
fineField(fineStartIndex[dirX], fineStartIndex[dirY] + k) divided by
static_cast<Real>(ratio_(dirY)) (or 1.0/ratioY multiplied after the sum);
implement this with a small loop or std::accumulate to compute the sum and then
assign coarseField(coarseIndex[dirX], coarseIndex[dirY]) = sum * (1.0/ratioY).
Ensure you use the correct ratio variable for dirY and proper indexing types.
```cpp
            if (centering_[dirX] == core::QtyCentering::dual
                and centering_[dirY] == core::QtyCentering::primal
                and centering_[dirZ] == core::QtyCentering::primal) // ex
            {
                coarseField(coarseIndex[dirX], coarseIndex[dirY], coarseIndex[dirZ])
                    = 0.5
                      * (fineField(fineStartIndex[dirX], fineStartIndex[dirY], fineStartIndex[dirZ])
                         + fineField(fineStartIndex[dirX] + 1, fineStartIndex[dirY],
                                     fineStartIndex[dirZ]));
            }
```
3D Ex: average across X over ratio_(X).
```diff
             if (centering_[dirX] == core::QtyCentering::dual
                 and centering_[dirY] == core::QtyCentering::primal
                 and centering_[dirZ] == core::QtyCentering::primal) // ex
             {
-                coarseField(coarseIndex[dirX], coarseIndex[dirY], coarseIndex[dirZ])
-                    = 0.5
-                      * (fineField(fineStartIndex[dirX], fineStartIndex[dirY], fineStartIndex[dirZ])
-                         + fineField(fineStartIndex[dirX] + 1, fineStartIndex[dirY],
-                                     fineStartIndex[dirZ]));
+                auto const rx = ratio_(dirX);
+                double acc    = 0.;
+                for (int sx = 0; sx < rx; ++sx)
+                    acc += fineField(fineStartIndex[dirX] + sx, fineStartIndex[dirY],
+                                     fineStartIndex[dirZ]);
+                coarseField(coarseIndex[dirX], coarseIndex[dirY], coarseIndex[dirZ]) = acc / rx;
             }
```

📝 Committable suggestion

```suggestion
            if (centering_[dirX] == core::QtyCentering::dual
                and centering_[dirY] == core::QtyCentering::primal
                and centering_[dirZ] == core::QtyCentering::primal) // ex
            {
                auto const rx = ratio_(dirX);
                double acc    = 0.;
                for (int sx = 0; sx < rx; ++sx)
                    acc += fineField(fineStartIndex[dirX] + sx, fineStartIndex[dirY],
                                     fineStartIndex[dirZ]);
                coarseField(coarseIndex[dirX], coarseIndex[dirY], coarseIndex[dirZ]) = acc / rx;
            }
```
🤖 Prompt for AI Agents
In src/amr/data/field/coarsening/electric_field_coarsener.hpp around lines
109-118 the code always averages exactly two fine cells in X using a hardcoded
0.5, but in 3D Ex coarsening you must average across ratio_[dirX] fine cells.
Replace the two-point average with a loop that accumulates
fineField(fineStartIndex[dirX] + i, fineStartIndex[dirY], fineStartIndex[dirZ])
for i = 0 .. ratio_[dirX]-1 and then assign coarseField = sum / ratio_[dirX];
ensure you use the integer ratio_ array (ratio_[dirX]) and appropriate numeric
type for the division to avoid integer truncation.
```cpp
            else if (centering_[dirX] == core::QtyCentering::primal
                     and centering_[dirY] == core::QtyCentering::dual
                     and centering_[dirZ] == core::QtyCentering::primal) // ey
            {
                coarseField(coarseIndex[dirX], coarseIndex[dirY], coarseIndex[dirZ])
                    = 0.5
                      * (fineField(fineStartIndex[dirX], fineStartIndex[dirY], fineStartIndex[dirZ])
                         + fineField(fineStartIndex[dirX], fineStartIndex[dirY] + 1,
                                     fineStartIndex[dirZ]));
            }
```
3D Ey: average across Y over ratio_(Y).
```diff
             else if (centering_[dirX] == core::QtyCentering::primal
                      and centering_[dirY] == core::QtyCentering::dual
                      and centering_[dirZ] == core::QtyCentering::primal) // ey
             {
-                coarseField(coarseIndex[dirX], coarseIndex[dirY], coarseIndex[dirZ])
-                    = 0.5
-                      * (fineField(fineStartIndex[dirX], fineStartIndex[dirY], fineStartIndex[dirZ])
-                         + fineField(fineStartIndex[dirX], fineStartIndex[dirY] + 1,
-                                     fineStartIndex[dirZ]));
+                auto const ry = ratio_(dirY);
+                double acc    = 0.;
+                for (int sy = 0; sy < ry; ++sy)
+                    acc += fineField(fineStartIndex[dirX], fineStartIndex[dirY] + sy,
+                                     fineStartIndex[dirZ]);
+                coarseField(coarseIndex[dirX], coarseIndex[dirY], coarseIndex[dirZ]) = acc / ry;
             }
```

📝 Committable suggestion

```suggestion
            else if (centering_[dirX] == core::QtyCentering::primal
                     and centering_[dirY] == core::QtyCentering::dual
                     and centering_[dirZ] == core::QtyCentering::primal) // ey
            {
                auto const ry = ratio_(dirY);
                double acc    = 0.;
                for (int sy = 0; sy < ry; ++sy)
                    acc += fineField(fineStartIndex[dirX], fineStartIndex[dirY] + sy,
                                     fineStartIndex[dirZ]);
                coarseField(coarseIndex[dirX], coarseIndex[dirY], coarseIndex[dirZ]) = acc / ry;
            }
```
🤖 Prompt for AI Agents
In src/amr/data/field/coarsening/electric_field_coarsener.hpp around lines 119
to 128, the Ey coarsening uses a hardcoded average of two fine cells (0.5 and
+1) which only works when ratio_[dirY]==2; change it to sum the fineField values
across the full refinement ratio in Y and divide by ratio_[dirY]. Implement a
small loop over ry from 0 to ratio_[dirY]-1, accumulate
fineField(fineStartIndex[dirX], fineStartIndex[dirY]+ry, fineStartIndex[dirZ])
into a local sum, then set coarseField(...) = sum /
static_cast<Real>(ratio_[dirY]) (keeping types consistent).
```cpp
    template<FloatingPoint U = DataType, typename... Nodes>
    explicit NdArrayVector(Nodes... nodes)
        : nCells_{nodes...}
        , data_((... * nodes), static_cast<U>(std::nan("")))
    {
        static_assert(sizeof...(Nodes) == dim);
    }
```
Safer size computation and NaN default for FP ctor.
- (... * nodes) multiplies in the type of Nodes and can overflow int; prefer std::size_t.
- Use std::numeric_limits::quiet_NaN() instead of std::nan("").
```diff
-    explicit NdArrayVector(Nodes... nodes)
-        : nCells_{nodes...}
-        , data_((... * nodes), static_cast<U>(std::nan("")))
+    explicit NdArrayVector(Nodes... nodes)
+        : nCells_{static_cast<std::uint32_t>(nodes)...}
+        , data_((std::size_t{1} * ... * static_cast<std::size_t>(nodes)),
+                std::numeric_limits<U>::quiet_NaN())
     {
         static_assert(sizeof...(Nodes) == dim);
     }
```

📝 Committable suggestion

```suggestion
    template<FloatingPoint U = DataType, typename... Nodes>
    explicit NdArrayVector(Nodes... nodes)
        : nCells_{static_cast<std::uint32_t>(nodes)...}
        , data_((std::size_t{1} * ... * static_cast<std::size_t>(nodes)),
                std::numeric_limits<U>::quiet_NaN())
    {
        static_assert(sizeof...(Nodes) == dim);
    }
```
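The intent of NaN-by-default construction (per the PR summary) is to make reads of never-written cells fail loudly downstream; `quiet_NaN()` is preferred over `std::nan("")` because it is explicit about the element type. A reduced sketch of the pattern — `make_nan_grid` is an invented name, not the NdArrayVector API:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Size computed in std::size_t to avoid int overflow; cells default to NaN
// so uses of uninitialized data surface as NaN in results.
std::vector<double> make_nan_grid(std::size_t nx, std::size_t ny)
{
    return std::vector<double>(nx * ny, std::numeric_limits<double>::quiet_NaN());
}
```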
```cpp
    template<FloatingPoint U = DataType>
    explicit NdArrayVector(std::array<std::uint32_t, dim> const& ncells,
                           type const& value = static_cast<U>(std::nan("")))
        : nCells_{ncells}
        , data_(std::accumulate(ncells.begin(), ncells.end(), 1, std::multiplies<int>()), value)
    {
    }
```
Fix accumulate type and NaN default in array‑shape ctor.
Initial value is int and can overflow; use std::size_t. Also prefer quiet_NaN().
```diff
-    explicit NdArrayVector(std::array<std::uint32_t, dim> const& ncells,
-                           type const& value = static_cast<U>(std::nan("")))
+    explicit NdArrayVector(std::array<std::uint32_t, dim> const& ncells,
+                           type const& value = std::numeric_limits<U>::quiet_NaN())
         : nCells_{ncells}
-        , data_(std::accumulate(ncells.begin(), ncells.end(), 1, std::multiplies<int>()), value)
+        , data_(std::accumulate(ncells.begin(), ncells.end(), std::size_t{1},
+                                std::multiplies<std::size_t>()),
+                value)
     {
     }
```

📝 Committable suggestion

```suggestion
    template<FloatingPoint U = DataType>
    explicit NdArrayVector(std::array<std::uint32_t, dim> const& ncells,
                           type const& value = std::numeric_limits<U>::quiet_NaN())
        : nCells_{ncells}
        , data_(std::accumulate(ncells.begin(), ncells.end(), std::size_t{1},
                                std::multiplies<std::size_t>()),
                value)
    {
    }
```
🤖 Prompt for AI Agents
In src/core/data/ndarray/ndarray_vector.hpp around lines 250 to 256, the
array-shape ctor uses std::accumulate with an int initial value (risking
overflow) and static_cast<U>(std::nan("")) for the default NaN; change the
accumulate call to use a std::size_t initial value and
std::multiplies<std::size_t> (so the product is computed in size_t), and replace
the NaN default with std::numeric_limits<U>::quiet_NaN(). Ensure needed headers
are present (e.g., <limits>) if not already included.
| // IWYU pragma: private, include "core/logger.hpp" | ||
|
|
🧩 Analysis chain
IWYU private mapping looks good; verify umbrella include and direct includes.
Good addition. Please ensure:
- core/logger.hpp includes this header (directly or via an umbrella).
- No files include logger_defaults.hpp directly anymore; if they do, switch them to core/logger.hpp to honor the private mapping.
Run:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "1) Find any direct includes of logger_defaults.hpp (should be none):"
rg -nP '#\s*include\s*["<][^">]*logger_defaults\.hpp[">]' -C2 || true
echo
echo "2) Locate core/logger.hpp candidates:"
fd -a 'logger.hpp' | rg -n 'core/logger\.hpp' || true
echo
echo "3) Check if core/logger.hpp includes logger_defaults.hpp:"
fd -a 'core/logger.hpp' | xargs -r -I{} rg -n 'logger_defaults\.hpp' {} || true
Length of output: 790
Replace direct includes of logger_defaults.hpp with core/logger.hpp
core/logger.hpp already includes core/utilities/logger/logger_defaults.hpp; found a direct include in src/amr/messengers/field_sum_transaction.hpp — change that to #include "core/logger.hpp".
🤖 Prompt for AI Agents
In src/core/utilities/logger/logger_defaults.hpp around lines 1-2: a direct
include of this header has been used elsewhere (specifically
src/amr/messengers/field_sum_transaction.hpp); replace that direct include with
#include "core/logger.hpp" so the public logger API is used; update the include
line in src/amr/messengers/field_sum_transaction.hpp to #include
"core/logger.hpp" and remove any #include
"core/utilities/logger/logger_defaults.hpp" occurrences.
🧩 Analysis chain
Looking at the current implementation, I can see a few issues:
- Missing core/logger.hpp: The IWYU pragma points to "core/logger.hpp" but this file doesn't exist in the expected location
- Direct includes still present: There are still direct includes of logger_defaults.hpp in the codebase
- Incomplete IWYU setup: The pragma is correctly added, but the umbrella header it references doesn't exist
Let me check what logger headers actually exist:
@PhilipDeegan Looking at the current implementation, I can see the IWYU pragma has been added correctly to logger_defaults.hpp, but there are some issues with the setup:
Current State:
- ✅ IWYU pragma correctly added: `// IWYU pragma: private, include "core/logger.hpp"`
- ❌ The referenced core/logger.hpp doesn't exist
- ❌ Direct includes of logger_defaults.hpp still exist (e.g., in field_sum_transaction.hpp)
Issues Found:
- The IWYU pragma points to core/logger.hpp, but this file doesn't exist in the codebase
- Files are still directly including logger_defaults.hpp instead of using a public API
Recommendations:
Either:
- Create the missing umbrella header: Create src/core/logger.hpp that includes logger_defaults.hpp and serves as the public API, or
- Update the pragma: Change the pragma to point to an existing umbrella header, or
- Remove the pragma: If there's no intention to create a public logger API, remove the IWYU pragma
The current implementation is incomplete - the pragma references a non-existent header, making it ineffective.
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Checking IWYU pragma implementation ==="
echo
echo "1) Verify the IWYU pragma in logger_defaults.hpp:"
head -5 src/core/utilities/logger/logger_defaults.hpp
echo
echo "2) Check if core/logger.hpp exists and includes logger_defaults.hpp:"
if [ -f "src/core/logger.hpp" ]; then
echo "core/logger.hpp exists"
rg -n 'logger_defaults\.hpp' src/core/logger.hpp || echo "logger_defaults.hpp not found in core/logger.hpp"
else
echo "core/logger.hpp does not exist"
fi
echo
echo "3) Find all direct includes of logger_defaults.hpp:"
rg -nP '#\s*include\s*["<][^">]*logger_defaults\.hpp[">]' --type cpp --type c -C1
echo
echo "4) Check field_sum_transaction.hpp specifically:"
if [ -f "src/amr/messengers/field_sum_transaction.hpp" ]; then
echo "Contents of field_sum_transaction.hpp includes:"
rg -n '#\s*include' src/amr/messengers/field_sum_transaction.hpp | head -10
else
echo "field_sum_transaction.hpp not found"
fi
echo
echo "5) Look for any logger.hpp files in the codebase:"
fd -a 'logger.hpp'
Length of output: 1207
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Finding all logger-related headers ==="
echo
echo "1) Find all logger headers:"
fd -t f -e hpp -e h . | rg -i logger
echo
echo "2) Check if there's a top-level logger include:"
fd -t f -e hpp -e h . | rg 'logger\.hpp$'
echo
echo "3) Check the structure of includes in logger_defaults.hpp:"
cat src/core/utilities/logger/logger_defaults.hpp
echo
echo "4) Look for any files that might serve as the public logger API:"
fd -t f -e hpp -e h . | xargs rg -l 'logger_defaults\.hpp' 2>/dev/null || true
Length of output: 1443
@PhilipDeegan After reanalyzing with the script results, I need to correct my previous assessment. The IWYU pragma implementation is actually properly set up:
Current State (Corrected):
- ✅ IWYU pragma correctly added: `// IWYU pragma: private, include "core/logger.hpp"`
- ✅ The referenced core/logger.hpp does exist at src/core/logger.hpp
- ✅ core/logger.hpp properly includes logger_defaults.hpp (line 34)
- ❌ Only one file still directly includes logger_defaults.hpp: src/amr/messengers/field_sum_transaction.hpp
The Issue:
The IWYU pragma is working correctly. There's just one remaining violation: field_sum_transaction.hpp line 4 has:
#include <core/utilities/logger/logger_defaults.hpp>
The Fix:
Simply update src/amr/messengers/field_sum_transaction.hpp line 4 to:
#include "core/logger.hpp"
The IWYU pragma implementation is actually correct - it was my initial analysis that was wrong. The original CodeRabbit suggestion to replace the direct include with the public API was spot-on.
🧠 Learnings used
Learnt from: PhilipDeegan
PR: PHAREHUB/PHARE#784
File: tests/simulator/test_restarts.py:333-339
Timestamp: 2024-10-09T08:32:15.667Z
Learning: PhilipDeegan has indicated a preference for minimal and efficient code, even after making changes to get a function working. There may be opportunities to remove or optimize parts of the code such as sleep/wait/advances.
nesting_buffer=1,
)
🛠️ Refactor suggestion
🧩 Analysis chain
Fix type mismatch for nesting_buffer.
C++ consumes AMR.nesting_buffer as a vector; here it’s an int. This will likely fail conversion at runtime. Use a per-dimension vector (2D test).
Apply this diff:
- strict=True,
- nesting_buffer=1,
+ strict=True,
+ nesting_buffer=[1, 1],
Run to confirm no remaining int usage:
🏁 Script executed:
#!/bin/bash
# Find nesting_buffer occurrences and expected types.
rg -n -C2 'nesting_buffer' tests src | sed -n '1,200p'
rg -n -C2 'proper_nesting_buffer' src | sed -n '1,200p'
Length of output: 8416
Fix nesting_buffer type: pass per-dimension vector (e.g. [1, 1])
C++ wrapper expects amr["nesting_buffer"] → std::vector<int> (src/amr/wrappers/hierarchy.hpp:350); many tests/defaults pass an int — update to vectors of length == dimension.
Files to update (examples found): tests/functional/harris/harris_2d.py (lines ~41–44), tests/functional/shock/shock.py, tests/simulator/test_run.py, tests/simulator/test_init_periodicity.py, tests/simulator/init.py, tests/simulator/test_validation.py (multiple cases).
Apply this diff in the 2D test (and same pattern elsewhere):
- strict=True,
- nesting_buffer=1,
+ strict=True,
+ nesting_buffer=[1, 1],📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| nesting_buffer=1, | |
| ) | |
| strict=True, | |
| nesting_buffer=[1, 1], | |
| ) |
🤖 Prompt for AI Agents
In tests/functional/harris/harris_2d.py around lines 43–44, the nesting_buffer
is passed as an int (1) but the C++ wrapper expects a per-dimension
std::vector<int>; change nesting_buffer to a list with one entry per spatial
dimension (for 2D use [1, 1]) so the length equals the test's dimension and
matches the C++ expectation; apply the same pattern to other test files noted
(replace scalar nesting_buffer values with vectors of length == dimension).
# if self.rethrow_:
# raise e
# return diff_boxes(slice1, slice2, box)
Don't swallow assertion: re-raise to fail the test.
Masking failures here can let regressions slip. Re-raise after diagnostics.
Apply this diff:
- # if self.rethrow_:
- # raise e
- # return diff_boxes(slice1, slice2, box)
+ if getattr(self, "rethrow_", True):
+ raise eCommittable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In tests/simulator/test_advance.py around lines 402 to 405, the test code
currently swallows AssertionError by commenting out the re-raise and returning a
diagnostic value; change the block so that after logging or performing any
diagnostic work you immediately re-raise the caught exception (e.g., using
"raise") so the test fails as intended; remove the return that masks the
assertion and ensure rethrowing happens unconditionally when self.rethrow_ is
True or after diagnostics.
- auto& bx = FieldDataT::getField(fine, bx_id_);
- auto& by = FieldDataT::getField(fine, by_id_);
- auto& bz = FieldDataT::getField(fine, bz_id_);
+ auto& fields = TensorFieldDataT::getFields(fine, b_id_);
Check notice
Code scanning / CodeQL
Unused local variable Note
template<typename Dst>
void linear_time_interpolate(Dst& fieldDest, auto const& fieldSrcOld, auto const& fieldSrcNew,
                             auto&&... args)
{
Check notice
Code scanning / CodeQL
Unused local variable Note
overwriteInteriorTFfillPattern);

- magneticRefinePatchStrategy_.registerIDs(*bx_id, *by_id, *bz_id);
+ auto&& [e_id] = resourcesManager_->getIDsList(hybridInfo->modelElectric);
Check notice
Code scanning / CodeQL
Unused local variable Note
- auto ex_id = resourcesManager_->getID(hybridInfo->modelElectric.xName);
- auto ey_id = resourcesManager_->getID(hybridInfo->modelElectric.yName);
- auto ez_id = resourcesManager_->getID(hybridInfo->modelElectric.zName);
+ auto&& [e_reflux_id] = resourcesManager_->getIDsList(hybridInfo->refluxElectric);
Check notice
Code scanning / CodeQL
Unused local variable Note
fluxSumE_.zero();
}
}
Check notice
Code scanning / CodeQL
Unused local variable Note
- TYPED_TEST(SimulatorTest, knowsWhichSolverisOnAGivenLevel)
+ TYPED_TEST(SimulatorTest, knowsWhichSolverIsOnAGivenLevel)
Check notice
Code scanning / CodeQL
Unused static variable Note test
// auto& mhdModel = *sim.getMHDModel();
//
Check notice
Code scanning / CodeQL
Commented-out code Note test
// if (isInMHDdRange(iLevel))
// {
//     auto Bid = mhdModel.resourcesManager->getIDs(mhdModel.state.B);
//     auto Vid = mhdModel.resourcesManager->getIDs(mhdModel.state.V);
//
//     std::array<std::vector<int> const*, 2> allIDs{{&Bid, &Vid}};
//
//     for (auto& idVec : allIDs)
//     {
//         for (auto& id : *idVec)
//         {
//             auto level = hierarchy.getPatchLevel(iLevel);
//             auto patch = level->begin();
//             EXPECT_TRUE(patch->checkAllocated(id));
//         }
//     }
// }
Check notice
Code scanning / CodeQL
Commented-out code Note test
- TYPED_TEST(SimulatorTest, returnsCorrecMessengerForEachLevel)
+ TYPED_TEST(SimulatorTest, returnsCorrectMessengerForEachLevel)
Check notice
Code scanning / CodeQL
Unused static variable Note test
// magPatchGhostsRefineSchedules[levelNumber]->fillData(initDataTime);
// elecPatchGhostsRefineSchedules[levelNumber]->fillData(initDataTime);
If we do not switch to the SAMRAI version https://github.com/nicolasaunai/SAMRAI/tree/feature/srcmask
we will likely see div B errors in Harris on merge.
New version of #1068 after PR comments.
fine_boundary_represents_var changed so that overlapped data is copied before refinement of non-overlapped data when SAMRAI executes a schedule.
PatchData is NaN-initialized on construction,
fix tests failing as a result of the above
comment out a field refinement test (useless: wrong refinement op for E, B)
overwrite_interior false is now also the default for refinement in FieldFillPattern
J manually initialized to zero in model init, fine init and regrid init (Jx is unused in Ampere but used in Ohm, where it would otherwise carry its now-NaN values)
Grid/NdArrayVector take default value overrides (for tests)
UsableTensorField is default-constructed with zero init.
TensorFieldData overrides PatchData and is now how vectors (B, E, ...) are seen from SAMRAI's viewpoint. This allows having all components of a vector available when executing a schedule (e.g. refinement), instead of registering N components individually with the same RefineAlgorithm (which does not work well with a single PatchData type).
Refluxing now recalculates the magnetic field via the Faraday equation after coarsening of the finer accumulated electric fields. Thus B is no longer coarsened, and div B is preserved at the coarse-fine interface.
improve in-code documentation/comments, in particular for messenger/refluxing/regridding code.
improved debug plots for the advance field overlap test (see plot below)