Merge pull request #911 from parthenon-hpc-lab/lroberts36/add-multi-grid
Geometric Multigrid
lroberts36 authored Nov 7, 2023
2 parents 0279e1d + 45a6b27 commit 0e0e06d
Showing 58 changed files with 3,441 additions and 784 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -3,6 +3,7 @@
## Current develop

### Added (new features/APIs/variables/...)
- [[PR 911]](https://github.com/parthenon-hpc-lab/parthenon/pull/911) Add infrastructure for geometric multi-grid
- [[PR 971]](https://github.com/parthenon-hpc-lab/parthenon/pull/971) Add UserWorkBeforeLoop
- [[PR 907]](https://github.com/parthenon-hpc-lab/parthenon/pull/907) PEP1: Allow subclassing StateDescriptor
- [[PR 932]](https://github.com/parthenon-hpc-lab/parthenon/pull/932) Add GetOrAddFlag to metadata
54 changes: 51 additions & 3 deletions doc/sphinx/src/boundary_communication.rst
@@ -1,3 +1,5 @@
.. _boundary_communication:

Boundary communication-in-one concepts
======================================

@@ -22,20 +24,60 @@ array of all the kinds of
boundary operations performed.

We note that this infrastructure is more general than just ghost halos.
The same machinery is used for communicating the interiors of meshblocks
that are restricted and/or prolongated between geometric multi-grid levels.
Additionally, with the ownership model for non-face fields, the basic
communication infrastructure could deal with flux correction as well
(although currently flux correction uses a somewhat separate code path).

Buffer subsets
--------------

Sometimes it is desirable, for example for custom prolongation and
restriction or to communicate on a single geometric multi-grid level,
to loop over a subset of the ghost sub-halos, rather than
all halos at once. This is enabled by the ``buffer_subsets`` and
``buffer_subsets_h`` arrays, which are contained in ``BvarsSubCache_t``.
The ``buffer_subsets`` array is a matrix, where the rows index the
subset, and the columns point to the indices in the ``bnd_info`` array
containing the subset of sub-halos you wish to operate on.
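
As a rough sketch of how this can be used (here ``cache`` is a
``BvarsSubCache_t``; the ``bnd_info_h`` host array and the loop structure
are illustrative assumptions, not the exact implementation), operating on
subset row ``s`` might look like:

.. code:: cpp

   // Illustrative sketch: visit only the sub-halos belonging to subset s.
   // Assumes buffer_subsets_h is a host-side 2D array [subset][entry] and
   // bnd_info_h is the matching host copy of bnd_info.
   for (int n = 0; n < cache.buffer_subsets_h.extent_int(1); ++n) {
     const int idx = cache.buffer_subsets_h(s, n); // column entry -> bnd_info index
     auto &binfo = cache.bnd_info_h(idx);
     // ... pack, unpack, prolongate, or restrict using binfo ...
   }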

To communicate across a particular boundary type, the templated
boundary communication routines (see :ref:`boundary_comm_tasks`)
should be instantiated with the desired ``BoundaryType``, i.e.

.. code:: cpp

   SendBoundBufs<BoundaryType::gmg_restrict_send>(md);
The different ``BoundaryType``\ s are:

- ``any``: Communications are performed between all leaf blocks (i.e. the
  standard Parthenon grid that does not include multi-grid related blocks).
- ``local``: Communications are performed between all leaf blocks that
  are on the current rank. *Currently, this option should not be used
  because it may have associated bugs. ``local`` and ``nonlocal`` are
  intended only as a potential performance enhancement; calling both
  should result in the same communication as just calling
  ``BoundaryType::any``.*
- ``nonlocal``: Communications are performed between all leaf blocks that
  are on different ranks than the current rank. *Currently, this option
  should not be used because it may have associated bugs.*
- ``flxcor_send`` and ``flxcor_recv``: Used only for flux correction
  routines; currently these cannot be passed to the regular boundary
  communication routines.
- ``gmg_same``: Communicates ghost halos between blocks in the
  same geometric multi-grid level (see the sketch after this list).
- ``gmg_restrict_send`` and ``gmg_restrict_recv``: For restricting
  block interiors between geometric multi-grid levels, i.e. inter-grid
  communication. *It is probably not necessary to have separate
  communicators for sending and receiving, but for now this is the way
  it was written.*
- ``gmg_prolongate_send`` and ``gmg_prolongate_recv``: For prolongating
  block interiors between geometric multi-grid levels, i.e. inter-grid
  communication. *It is probably not necessary to have separate
  communicators for sending and receiving, but for now this is the way
  it was written.*
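
For instance, a minimal sketch of same-level ghost exchange on a single GMG
level, assuming the send/receive/set task triple described in
:ref:`boundary_comm_tasks`, a task list ``tl``, an empty ``TaskID`` ``none``,
and a GMG-level ``MeshData`` partition ``md``:

.. code:: cpp

   // Sketch: exchange ghost halos only among blocks on one GMG level.
   auto send = tl.AddTask(none, SendBoundBufs<BoundaryType::gmg_same>, md);
   auto recv = tl.AddTask(none, ReceiveBoundBufs<BoundaryType::gmg_same>, md);
   auto set = tl.AddTask(recv, SetBounds<BoundaryType::gmg_same>, md);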

.. _sparse boundary comm:

Sparse boundary communication
@@ -327,6 +369,8 @@ generalizes to more realistic problems not being run with all ranks on
the same node. See ``InitializeBufferCache(...)`` for how to choose the
ordering.*

.. _boundary_comm_tasks:

Boundary Communication Tasks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -426,3 +470,7 @@ caching if desired.
- ``LoadAndSendFluxCorrections(std::shared_ptr<MeshData<Real>>&)``
- ``ReceiveFluxCorrections(std::shared_ptr<MeshData<Real>>&)``
- ``SetFluxCorrections(std::shared_ptr<MeshData<Real>>&)``

*Now that non-cell-centered fields are implemented in Parthenon, the
flux correction tasks can be unified with the boundary communication
above.*
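
In the meantime, a minimal sketch of chaining the flux-correction routines
listed above, assuming a task list ``tl``, a preceding flux-computation task
ID ``flux_calc``, and a ``MeshData`` pointer ``md``:

.. code:: cpp

   // Sketch: send, receive, and apply flux corrections as dependent tasks.
   auto send = tl.AddTask(flux_calc, LoadAndSendFluxCorrections, md);
   auto recv = tl.AddTask(send, ReceiveFluxCorrections, md);
   auto set = tl.AddTask(recv, SetFluxCorrections, md);
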
61 changes: 61 additions & 0 deletions doc/sphinx/src/mesh/mesh.rst
@@ -50,3 +50,64 @@ member.
time-integration advance. The default behavior calls each package's
(``StateDescriptor``'s) ``PreStepDiagnostics`` method which, in turn,
delegates to a ``std::function`` member that defaults to a no-op.

Multi-grid Grids Stored in ``Mesh``
-----------------------------------

If the parameter ``parthenon/mesh/multigrid`` is set to ``true``, the ``Mesh``
constructor and AMR routines populate both
``std::vector<LogicalLocMap_t> Mesh::gmg_grid_locs`` and
``std::vector<BlockList_t> Mesh::gmg_block_lists``, where each entry in the
vectors describes one level of the geometric multi-grid (GMG) mesh. For refined
meshes, each GMG level only includes blocks that are at a given logical level
(starting from the finest logical level on the grid and including both internal
and leaf nodes in the refinement tree) as well as leaf blocks on the next coarser
level that are neighbors to finer blocks, which implies that below the root grid
level the blocks may not cover the entire mesh. For levels above the root grid,
blocks may change shape so that they only cover the domain of the root grid. Note
that leaf blocks may be contained in multiple block lists, and the lists all point
to the same block (not a separate copy). To be explicit, when
``parthenon/mesh/multigrid`` is set to ``true``, blocks corresponding to *all*
internal nodes of the refinement tree are created, in addition to the leaf node
blocks that are normally created.
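
For orientation, a minimal sketch of walking this hierarchy from a ``Mesh``
pointer (the loop body here is illustrative):

.. code:: c++

   // Sketch: iterate over the GMG levels stored in the Mesh.
   const int nlevels = static_cast<int>(pmesh->gmg_block_lists.size());
   for (int level = 0; level < nlevels; ++level) {
     auto &locs = pmesh->gmg_grid_locs[level];     // logical locations on this level
     auto &blocks = pmesh->gmg_block_lists[level]; // blocks, including coarser neighbors
   }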

*GMG Implementation Note:*
The reason for including two levels in the GMG block lists is to deal with
accurately setting the boundary values of the fine blocks. Convergence can be poor
or non-existent if the fine-coarse boundaries of a given level are not
self-consistently updated (since the boundary prolongation from the coarse grid to
the fine grid also depends on interior values of the fine grid that are being updated
by a smoothing operation). This means that at each smoothing step, boundary communication
must occur between all blocks corresponding to all internal and leaf nodes at a given
level in the tree and with all leaf nodes at the next coarser level which abut blocks
at the current level. Therefore, the GMG block lists contain blocks at two levels, but
smoothing operations etc. should usually only occur on the subset of those blocks that
are at the fine level.

To work with these GMG levels, ``MeshData`` objects containing these blocks can
be recovered from a ``Mesh`` pointer using

.. code:: c++

   auto &md = pmesh->gmg_mesh_data[level].GetOrAdd(level, "base", partition_idx);

This ``MeshData`` will include blocks at the current level and possibly some
blocks at the next coarser level. Often, one will only want to operate on blocks
on the finer level (the coarser blocks are required mainly for boundary
communication). To make packs containing only a subset of blocks from a
GMG ``MeshData`` pointer ``md``, one would use

.. code:: c++

   int nblocks = md->NumBlocks();
   std::vector<bool> include_block(nblocks, true);
   for (int b = 0; b < nblocks; ++b)
     include_block[b] =
         (md->grid.logical_level == md->GetBlockData(b)->GetBlockPointer()->loc.level());

   auto desc = parthenon::MakePackDescriptor<in, out>(md.get());
   auto pack = desc.GetPack(md.get(), include_block);

In addition to creating the ``LogicalLocation`` and block lists for the GMG levels,
``Mesh`` fills neighbor arrays in ``MeshBlock`` for intra- and inter-GMG block list
communication (i.e. boundary communication and internal prolongation/restriction,
respectively). Communication within and between GMG levels can be done by calling
boundary communication routines with the boundary tags ``gmg_same``,
``gmg_restrict_send``, ``gmg_restrict_recv``, ``gmg_prolongate_send``,
``gmg_prolongate_recv`` (see :ref:`boundary_communication`).
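
For example, a minimal sketch of restricting block interiors from a finer GMG
level to the next coarser one, assuming ``MeshData`` partitions ``md_fine`` and
``md_coarse`` on adjacent levels, a task list ``tl``, and an empty ``TaskID``
``none``:

.. code:: c++

   // Sketch: fine blocks send restricted interiors; the coarser level receives.
   auto send = tl.AddTask(none, SendBoundBufs<BoundaryType::gmg_restrict_send>, md_fine);
   auto recv = tl.AddTask(none, ReceiveBoundBufs<BoundaryType::gmg_restrict_recv>, md_coarse);
   auto set = tl.AddTask(recv, SetBounds<BoundaryType::gmg_restrict_recv>, md_coarse);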

17 changes: 14 additions & 3 deletions doc/sphinx/src/solvers.rst
@@ -3,9 +3,20 @@
Solvers
=======

Parthenon does not yet provide an exhaustive set of plug-and-play solvers.
Nevertheless, the building blocks required for implementing Krylov subspace
methods (i.e. global reductions for vector dot products) like CG, BiCGStab,
and GMRES are available. An example of a Parthenon-based implementation of
BiCGStab can be found in ``example/poisson_gmg``. Additionally, the
infrastructure required for implementing multi-grid solvers is included in
Parthenon. The requisite hierarchy of grids is produced if
``parthenon/mesh/multigrid=true`` is set in the parameter input. An example
of a multi-grid based linear solver in Parthenon is also given in
``example/poisson_gmg``, as is an example of using multi-grid as a
preconditioner for BiCGStab. We plan to build wrappers that simplify the
use of these methods in downstream codes in the future. Note that the
example code does not currently rely on the ``Stencil`` and
``SparseMatrixAccessor`` code described below.
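
The global reduction at the heart of these Krylov methods is just a local dot
product followed by a sum across ranks; below is a minimal sketch in plain
Kokkos/MPI (not a Parthenon API; ``u`` and ``v`` are illustrative views of the
local degrees of freedom):

.. code:: cpp

   // Sketch: local dot product on device, then global sum across MPI ranks.
   double local_dot = 0.0;
   Kokkos::parallel_reduce(
       "local dot", u.extent(0),
       KOKKOS_LAMBDA(const int i, double &lsum) { lsum += u(i) * v(i); },
       local_dot);
   double global_dot = local_dot;
   #ifdef MPI_PARALLEL
   MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
   #endif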

Stencil
-------
1 change: 1 addition & 0 deletions example/CMakeLists.txt
@@ -19,4 +19,5 @@ add_subdirectory(particles)
add_subdirectory(particle_leapfrog)
add_subdirectory(particle_tracers)
add_subdirectory(poisson)
add_subdirectory(poisson_gmg)
add_subdirectory(sparse_advection)
27 changes: 27 additions & 0 deletions example/poisson_gmg/CMakeLists.txt
@@ -0,0 +1,27 @@
#=========================================================================================
# (C) (or copyright) 2023. Triad National Security, LLC. All rights reserved.
#
# This program was produced under U.S. Government contract 89233218CNA000001 for Los
# Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC
# for the U.S. Department of Energy/National Nuclear Security Administration. All rights
# in the program are reserved by Triad National Security, LLC, and the U.S. Department
# of Energy/National Nuclear Security Administration. The Government is granted for
# itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide
# license in this material to reproduce, prepare derivative works, distribute copies to
# the public, perform publicly and display publicly, and to permit others to do so.
#=========================================================================================

get_property(DRIVER_LIST GLOBAL PROPERTY DRIVERS_USED_IN_TESTS)
if( "poisson-gmg-example" IN_LIST DRIVER_LIST OR NOT PARTHENON_DISABLE_EXAMPLES)
  add_executable(
    poisson-gmg-example
    poisson_driver.cpp
    poisson_driver.hpp
    poisson_package.cpp
    poisson_package.hpp
    main.cpp
    parthenon_app_inputs.cpp
  )
  target_link_libraries(poisson-gmg-example PRIVATE Parthenon::parthenon)
  lint_target(poisson-gmg-example)
endif()
93 changes: 93 additions & 0 deletions example/poisson_gmg/main.cpp
@@ -0,0 +1,93 @@
//========================================================================================
// (C) (or copyright) 2023. Triad National Security, LLC. All rights reserved.
//
// This program was produced under U.S. Government contract 89233218CNA000001 for Los
// Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC
// for the U.S. Department of Energy/National Nuclear Security Administration. All rights
// in the program are reserved by Triad National Security, LLC, and the U.S. Department
// of Energy/National Nuclear Security Administration. The Government is granted for
// itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide
// license in this material to reproduce, prepare derivative works, distribute copies to
// the public, perform publicly and display publicly, and to permit others to do so.
//========================================================================================

#include "bvals/boundary_conditions_generic.hpp"
#include "parthenon_manager.hpp"

#include "poisson_driver.hpp"

using namespace parthenon;
using namespace parthenon::BoundaryFunction;
// We need to register FixedFace boundary conditions by hand since they can't
// be chosen in the parameter input file. FixedFace boundary conditions assume
// Dirichlet boundary conditions on the face of the domain and linearly extrapolate
// into the ghosts to ensure the linear reconstruction on the block face obeys the
// chosen boundary condition. Just setting the ghost zones of CC variables to a fixed
// value results in poor MG convergence because the effective BC at the face
// changes with MG level.
template <CoordinateDirection DIR, BCSide SIDE>
auto GetBoundaryCondition() {
  return [](std::shared_ptr<MeshBlockData<Real>> &rc, bool coarse) -> void {
    using namespace parthenon;
    using namespace parthenon::BoundaryFunction;
    GenericBC<DIR, SIDE, BCType::FixedFace, variable_names::any>(rc, coarse, 0.0);
  };
}

int main(int argc, char *argv[]) {
  using parthenon::ParthenonManager;
  using parthenon::ParthenonStatus;
  ParthenonManager pman;

  // Redefine parthenon defaults
  pman.app_input->ProcessPackages = poisson_example::ProcessPackages;
  pman.app_input->MeshProblemGenerator = poisson_example::ProblemGenerator;

  // call ParthenonInit to initialize MPI and Kokkos, parse the input deck, and set up
  auto manager_status = pman.ParthenonInitEnv(argc, argv);
  if (manager_status == ParthenonStatus::complete) {
    pman.ParthenonFinalize();
    return 0;
  }
  if (manager_status == ParthenonStatus::error) {
    pman.ParthenonFinalize();
    return 1;
  }
  // Now that ParthenonInit has been called and setup succeeded, the code can now
  // make use of MPI and Kokkos

  // Set boundary conditions
  pman.app_input->boundary_conditions[parthenon::BoundaryFace::inner_x1] =
      GetBoundaryCondition<X1DIR, BCSide::Inner>();
  pman.app_input->boundary_conditions[parthenon::BoundaryFace::inner_x2] =
      GetBoundaryCondition<X2DIR, BCSide::Inner>();
  pman.app_input->boundary_conditions[parthenon::BoundaryFace::inner_x3] =
      GetBoundaryCondition<X3DIR, BCSide::Inner>();
  pman.app_input->boundary_conditions[parthenon::BoundaryFace::outer_x1] =
      GetBoundaryCondition<X1DIR, BCSide::Outer>();
  pman.app_input->boundary_conditions[parthenon::BoundaryFace::outer_x2] =
      GetBoundaryCondition<X2DIR, BCSide::Outer>();
  pman.app_input->boundary_conditions[parthenon::BoundaryFace::outer_x3] =
      GetBoundaryCondition<X3DIR, BCSide::Outer>();
  pman.ParthenonInitPackagesAndMesh();

  // This needs to be scoped so that the driver object is destructed before Finalize
  bool success = true;
  {
    // Initialize the driver
    poisson_example::PoissonDriver driver(pman.pinput.get(), pman.app_input.get(),
                                          pman.pmesh.get());

    // This line actually runs the simulation
    auto driver_status = driver.Execute();
    if (driver_status != parthenon::DriverStatus::complete ||
        driver.final_rms_residual > 1.e-10 || driver.final_rms_error > 1.e-12)
      success = false;
  }
  // call MPI_Finalize and Kokkos::finalize if necessary
  pman.ParthenonFinalize();
  if (Globals::my_rank == 0) printf("success: %i\n", success);

  // MPI and Kokkos can no longer be used
  return static_cast<int>(!success);
}