Geometric Multigrid #911

Conversation
Thanks for addressing all the comments. I'm happy to see this merged (and documented in the other PR ;) )
Thanks @pgrete, I just enabled auto-merge. Documentation definitely needs to be added about logical location, although I thought there was a good deal of documentation on this PR relative to my baseline ;). It does look like check PR dependencies is failing now, but maybe other PRs are seeing this too? (@bprather, @Yurlungur)

I think the check dependencies machinery is broken.

@Yurlungur, ok thanks, I will just force merge once everything else passes.

The dependencies check has been fixed. I re-ran it and it passes, but it looks like some others are still failing.

Looks like the auto-merge didn't take, so I merged develop and tried it again. Speak now etc. etc. EDIT: Oops, saw the note on the fix. Thanks @lroberts36

I took off the auto-merge because #966 needs to be merged into this first.

…id-mpi Multigrid MPI Bug Fix

Thanks @bprather, I had forgotten to finish this up. I just merged in the bugfix; I hadn't realized it was approved. I am turning auto-merge back on.
Figure: solution and residual of the Poisson equation after fifteen iterations of MG-BiCGStab, with a step-function, circular right-hand side centered at zero.
PR Summary

This PR adds the infrastructure required for implementing geometric multigrid (GMG) solvers in Parthenon. Additionally, it adds an example that solves the Poisson equation $\nabla \cdot D \nabla u = f$ (possibly with rapidly spatially varying $D$) using both raw FAS multigrid V-cycles on FAC-like grids and using this same multigrid solver as a preconditioner for the Krylov subspace method BiCGStab. Good convergence is seen on both uniform and non-uniform grids (including flux correction), and the MG-BiCGStab method provides rapid convergence on problems with rapidly varying $D$ (which cannot be efficiently solved by either BiCGStab or MG V-cycles alone). The implementation of a GMG solver task list and its use as a preconditioner is shown in the `poisson_gmg` example, but I have not included generic solver routines within Parthenon itself. Generic solvers probably could be provided, since a user probably only needs to supply a method for applying their matrix to a vector and a method for smoothing (e.g. Jacobi iterations).

In terms of infrastructure, multigrid generally requires a hierarchy of increasingly coarsened meshes, a way to restrict a field from a finer mesh to the next coarser mesh, a way to prolongate data from a coarser mesh to a finer mesh, and a mechanism for intra-level communication between blocks at each GMG level.
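As a sketch of the generic-solver idea above, the minimal user-facing hooks might look something like the following. This is purely illustrative and not part of this PR; the `GMGUserHooks`/`JacobiSweep` names and the plain-array types are hypothetical.

```cpp
// Hypothetical sketch of the minimal interface a generic GMG solver might
// require from a user: apply the system matrix A to a vector, and apply one
// smoothing sweep (here, damped Jacobi). Illustrative only.
#include <cstddef>
#include <functional>
#include <vector>

struct GMGUserHooks {
  // y = A x
  std::function<void(const std::vector<double>& x, std::vector<double>& y)> MatVec;
  // One smoothing sweep of A u = f, updating u in place.
  std::function<void(std::vector<double>& u, const std::vector<double>& f)> Smooth;
};

// Example smoother: damped Jacobi, u <- u + omega * D^{-1} (f - A u),
// where diag holds the diagonal D of the matrix.
inline void JacobiSweep(const GMGUserHooks& hooks, const std::vector<double>& diag,
                        std::vector<double>& u, const std::vector<double>& f,
                        double omega = 0.8) {
  std::vector<double> Au(u.size());
  hooks.MatVec(u, Au);
  for (std::size_t i = 0; i < u.size(); ++i)
    u[i] += omega * (f[i] - Au[i]) / diag[i];
}
```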
To do this in Parthenon, a hierarchy of de-refined block meshes is created from the base mesh during initialization and after each AMR step.[^1] The blocks in these coarsened meshes are assigned a global id (`gid`) by iterating through them in Morton order, but starting from the largest `gid` of the next finest mesh. This gives each block in the GMG hierarchy a unique identifier. Once these de-refined block lists are built, a same-level neighbor-block finding algorithm is performed on each level,[^2] the parent block on the next grid up in the GMG hierarchy is found, and all daughter blocks on the next grid down in the GMG hierarchy are found. All of these relationships are stored as arrays of `NeighborBlock`s in `MeshBlock`. The coarsened levels can go above the root grid of the simulation, and the blocks change in size at these coarser levels where required to cover only the root grid. At least for cubic grids and other grids that have a number of root blocks and zones that is a large enough power of two, the coarsening will continue to negative grid levels and possibly to a block with a single zone.

With these GMG levels, neighbor relationships, and unique ids in hand, the pre-existing same-level boundary communication routines can automatically handle intra-level communication on each level of the GMG hierarchy. With a small addition to the `ForEachBoundary` template[^3] and a different choice of prolongation and restriction regions, the pre-existing communication infrastructure can also be repurposed for prolongation and restriction from one GMG grid to another.[^4] GMG prolongation and restriction only occur for a variable if it has `Metadata::GMGProlongate` or `Metadata::GMGRestrict` set, respectively.

There is a zoo of different ways to implement GMG (different smoothers, different types of cycles, using it as a preconditioner, etc.), and I have only tested a reasonably small subset. Getting GMG to work on a uniform grid was fairly straightforward, although it did require being careful about how the boundary conditions are defined (mainly that the boundary condition is defined at the surface of the simulation domain and not just imposed at the cell center of the first ghost zone, since that position moves as you go to coarser levels). I eventually focused only on cell-centered fields, but all of the infrastructure should work for other topological types; they would just require different ways of dealing with physical boundary conditions. Here, I also experimented with MG-preconditioned CG and found that it worked fairly well. For refined meshes, I abandoned CG since it is very difficult to formulate partially refined problems as a symmetric matrix (I think this would rely on some very specialized prolongation and restriction operators).
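For reference, opting a field into the GMG transfers mentioned above is a matter of adding the new metadata flags. A minimal sketch, assuming the usual Parthenon `StateDescriptor`/`AddField` package pattern; the package and field names ("poisson_package", "u") are placeholders:

```cpp
// Sketch: mark a cell-centered field for GMG inter-level transfers using the
// Metadata flags added in this PR. Names are placeholders.
#include <memory>
#include "interface/metadata.hpp"
#include "interface/state_descriptor.hpp"

std::shared_ptr<parthenon::StateDescriptor> Initialize() {
  using parthenon::Metadata;
  auto pkg = std::make_shared<parthenon::StateDescriptor>("poisson_package");
  Metadata m({Metadata::Cell, Metadata::Independent, Metadata::FillGhost,
              Metadata::GMGProlongate, Metadata::GMGRestrict});
  pkg->AddField("u", m);
  return pkg;
}
```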
For refined meshes in my implementation, each GMG level only includes blocks that are at the current level, which implies that at fine levels the blocks may not cover the entire mesh. Accurately setting the boundary values of these blocks is a little tricky, and I found that convergence is poor or non-existent if the fine-coarse boundaries of a given level are not self-consistently updated (since the boundary prolongation from the coarse grid to the fine grid also depends on interior values of the fine grid that are being updated by a smoothing operation). This means that on each smoothing step, boundary communication must occur between all blocks corresponding to all internal and leaf nodes at a given level in the tree, and with all leaf nodes at the next coarser level which abut blocks at the current level. Therefore, the GMG block lists contain blocks at two levels, but smoothing operations etc. only occur on the subset of those blocks that are at the fine level. This is effected by using the new functionality in `SparsePack` that allows choosing a subset of blocks in a `MeshData` to include in a `SparsePack` (see the sketch below). I think this general structure, with blocks only on a single level, is how most codes implement the refined grids, although I think the Athena++ implementation differs. It may be that in the future we determine there is some better choice of GMG grids, which would require changes to the underlying infrastructure as written.

This PR will close #858.
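To illustrate the two-level block lists described above, the sketch below filters a GMG level's blocks down to the fine-level subset before packing. This is illustrative only: the real mechanism is the new `SparsePack` block-subset functionality, and the accessors used here stand in for the actual API.

```cpp
// Illustrative sketch: select the subset of blocks in a two-level GMG block
// list that live on the fine level, so smoothing touches only those blocks
// while boundary communication still involves the coarser neighbors.
#include <vector>

template <typename MD>  // stand-in for MeshData; accessors are hypothetical
std::vector<int> FineLevelBlocks(const MD& md, int gmg_fine_level) {
  std::vector<int> fine;
  for (int b = 0; b < md.NumBlocks(); ++b) {
    // Keep only blocks whose logical level matches the fine level of this
    // GMG block list; coarser abutting blocks are excluded from smoothing.
    if (md.GetBlockData(b)->GetBlockPointer()->loc.level() == gmg_fine_level)
      fine.push_back(b);
  }
  return fine;
}
```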
Description of changes:
- `NeighborBlock`: Added a `NeighborBlock` constructor (to be used in place of `NeighborBlock::SetNeighbor`, although the old method still gets called in some places in the code). Also added a `RegionSize` member variable to hold information about the size of the neighbor block, since in the GMG grid hierarchy not all blocks are the same size. `NeighborBlock` instances can refer to intra-level neighbors (i.e. `MeshBlock::neighbors`, `MeshBlock::gmg_same_neighbors`, and `MeshBlock::gmg_composite_finer_neighbors`) or to inter-level neighbors (i.e. parents or daughters on other GMG composite grids, `MeshBlock::gmg_coarser_neighbors` or `MeshBlock::gmg_finer_neighbors`).
- `CalcIndices` (which is used by `BndInfo` and `ProResInfo` for calculating buffer packing indices): Refactored a little to clarify the different types of index ranges it can provide, which are determined by the new enumerator `IndexRangeType` that is passed to `CalcIndices`. Additionally, `CalcIndices` is extended to work for prolongation and restriction of the interiors of blocks and to handle inter-grid transfers that involve different block sizes. For inter-grid prolongation, the index range of the entire coarse buffer of the receiving zone is filled so that prolongation on the receiver can occur without a communication step. For this to work correctly, the ghosts of the coarse sending block must be filled before the inter-grid communication is called.
- `ProResInfo::Get*` and `BndInfo::Get*`: Minor refactoring to clarify index range types.
- `BoundaryType`: Added `gmg_same`, which encapsulates intra-grid communication on a GMG composite grid; `gmg_restrict_send` and `gmg_restrict_recv`, which are responsible for inter-GMG-grid restriction; and `gmg_prolongate_send` and `gmg_prolongate_recv`, which are responsible for inter-GMG-grid prolongation. The `ForEachBoundary` template loop has been updated to loop over these inter-grid boundaries correctly, so communication routines for these new boundary types can be called; for example, `SendBoundBufs<gmg_restrict_send>` can be called on a finer GMG level and `ReceiveBoundBufs<gmg_restrict_recv>` can be called on `MeshData` for a coarser level to restrict from fine to coarse (see the sketch after this list item). Small changes were required for choosing which version of `RebuildBufferCache` to call so that the correct prolongation range is used for inter-GMG communication. Small changes to `BuildBoundaryBuffers` were also required to make it work for the new `BoundaryType`s. The `TagMap` object was also updated to be able to include more than one type of boundary (since we need MPI tags for all of the different types of boundary communicators).
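A sketch of how a fine-to-coarse restriction step might be scheduled with these boundary types, assuming the usual `TaskList::AddTask` pattern (a `TaskList tl`, a dependency `dep`, and `MeshData` pointers `md_fine`/`md_coarse` for adjacent GMG levels are assumed to be in scope; exact signatures should be checked against the current headers):

```cpp
// Sketch: restrict a field from GMG level l (md_fine) to level l-1 (md_coarse)
// using the new BoundaryType values added in this PR.
using namespace parthenon;
auto send = tl.AddTask(dep, SendBoundBufs<BoundaryType::gmg_restrict_send>, md_fine);
auto recv = tl.AddTask(send, ReceiveBoundBufs<BoundaryType::gmg_restrict_recv>, md_coarse);
auto set  = tl.AddTask(recv, SetBounds<BoundaryType::gmg_restrict_recv>, md_coarse);
```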
- `DataCollection`: Added overloads so that the `MeshData` corresponding to the blocks in a particular GMG level can be requested and have its grid type identified.
- `MeshData`: Added a member object that stores the type of grid that the blocks in a `MeshData` are associated with.
- `Metadata`: Added flags `GMGRestrict` and `GMGProlongate` to denote fields which should be prolongated or restricted during GMG inter-level communication.
- `SparsePack`: Added a method to get the `IndexShape` of a variable on device from a `ParArray3D`, since it is now possible that the blocks in a `MeshData` object, and hence a `SparsePack`, may not all be the same size. This feature doesn't really end up being used currently, though (it was used during testing).
- `LogicalLocation`: Extended to deal with "negative" levels. A lot of functionality moved to a cpp file instead of a header. A couple of bugs relating to periodic grids were fixed (although I honestly don't remember the details). There may be more efficient ways to perform some of the tasks done by `LogicalLocation`; if there are performance issues related to `LogicalLocation`, there is likely room to speed things up.
- `Mesh::SetSameLevelNeighbors`: Provides the same functionality as `SearchAndSetNeighbors` but uses maps of `LogicalLocation` instead of the old `MeshBlockTree` object (I have tested fairly extensively that it gives the same results as the old code). It also extends `SearchAndSetNeighbors` to deal with two-level GMG composite grids where only communication into the boundaries of blocks on the finer of the two levels is required. Currently, the code finds neighbors using both `SearchAndSetNeighbors` and `SetSameLevelNeighbors` for the main grid. `SearchAndSetNeighbors` sets boundary information contained in `MeshBlock::pbval`, while `SetSameLevelNeighbors` sets the neighbors contained in `MeshBlock::neighbors`; the latter is what is now used by all of the boundary communication routines for fields, but I think swarms still rely on `pbval`. Nevertheless, things are pretty close to the point where we can remove `pbval` and a lot of the code in `bvals_base.cpp`.
- `Mesh::BuildGMGHierarchy`, `Mesh::gmg_grid_locs`, and `Mesh::gmg_block_lists`: Build and store the logical locations and block lists associated with each GMG composite grid. `Mesh::BuildGMGHierarchy` also finds and sets the inter-grid neighbors for each block.
- `Mesh::multigrid`: Added a flag for turning multigrid on and off (`parthenon/mesh/multigrid = true/false`; see the input snippet below). If false, none of the internal nodes of the tree will be populated with blocks, and there should be no change in how the code behaves relative to previous versions (although there was refactoring of internals, e.g. in `Mesh` and using `SetSameLevelNeighbors`).
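In an input file, this corresponds to something like:

```
<parthenon/mesh>
multigrid = true
```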
- `MeshGenerator`: Removed unused code.
- `Mesh` constructor, `Mesh::SetBlockSizeAndBoundaries`, and `Mesh::GetBlockSize`: Updated to deal with negative levels, where the block size changes.
- `MeshBlock`: Updated the index shape to correctly include ghosts for non-symmetry directions when the block size has gone to one (i.e. the coarsest GMG grid). Added vectors of neighbors that are used in communication (see `ForEachBoundary`).

PR Checklist
Footnotes

[^1]: Rather than storing these GMG meshes in their own octree like the base mesh, they are stored in a `std::map` from `LogicalLocation` to a `(gid, rank)` pair in the field `Mesh::gmg_grid_locs`. The block lists for the different levels are stored in `Mesh::gmg_block_lists`, and the `MeshData` associated with each level of the GMG hierarchy is stored in `Mesh::gmg_mesh_data`. We may want to think about whether we want a separate data collection for each grid, or whether to store them all in the already existing `Mesh::mesh_data`. The `map` is used because it is ordered by Morton number by default, so it is easy to iterate through and assign `gid`s. That being said, it may be more performant to switch to a `std::unordered_map`, put the logical locations into a vector when `gid`s need to be assigned, sort the vector, and then assign `gid`s in the `std::unordered_map`.
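A minimal sketch of the gid-assignment pattern this footnote describes; the `Loc` struct and `LocLess` comparator are stand-ins for the real `LogicalLocation` and its Morton ordering:

```cpp
// Sketch: assign contiguous gids by walking a Morton-ordered std::map, as
// described above. Loc/LocLess stand in for LogicalLocation's ordering.
#include <cstdint>
#include <map>
#include <utility>

struct Loc { std::uint64_t morton; };  // stand-in for LogicalLocation
struct LocLess {
  bool operator()(const Loc& a, const Loc& b) const { return a.morton < b.morton; }
};

void AssignGids(std::map<Loc, std::pair<int, int>, LocLess>& level_locs,
                int first_gid) {
  // std::map iteration is in key (Morton) order, so gids come out contiguous
  // and Morton-ordered, starting from the largest gid of the next finest mesh.
  for (auto& [loc, gid_rank] : level_locs) gid_rank.first = first_gid++;
}
```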
[^2]: Neighbor finding is currently provided by `BoundaryBase::SearchAndSetNeighbors`, but a refactored version of the algorithm using the new `LogicalLocation` infrastructure is also introduced (`SetSameLevelNeighbors`, which might be a bad name since the level here refers to the GMG level). A lot of the trickiness in this PR was updating the `LogicalLocation` functionality to return things like block offsets when periodic boundary conditions are present.
[^3]: The current `ForEachBoundary` just iterates over every block in a `MeshData` object, then every variable present on that block, then every neighbor block of that block (skipping some of these depending on the template parameters provided). Now, if GMG boundary template parameters are passed to `ForEachBoundary`, it does the same set of iterations but replaces the list of same-grid neighbor blocks with either the list of parent or daughter blocks in the GMG hierarchy.

[^4]: Communication buffers are stored in a map with a key made up of a sending gid, a receiving gid, and an array encoding the relative offset of the blocks. This naturally extends to storing multiple grids, but does mean that new communication buffers are created for each level. In particular, the GMG prolongation and restriction buffers are the same size as the coarse buffer for each block, which feels a little redundant and may have an impact on the memory footprint of simulations.
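For concreteness, the buffer-map key described in footnote 4 can be pictured roughly as follows; this is a sketch, and the actual key type in Parthenon may carry additional information:

```cpp
// Sketch of the communication-buffer key described above: sending gid,
// receiving gid, and the relative logical offset between the two blocks.
#include <array>
#include <tuple>

struct BufferKey {
  int send_gid;
  int recv_gid;
  std::array<int, 3> offset;  // relative logical offset of receiver from sender
  bool operator<(const BufferKey& o) const {
    return std::tie(send_gid, recv_gid, offset) <
           std::tie(o.send_gid, o.recv_gid, o.offset);
  }
};
```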
it does the same set of iterations but replacing the list of same grid neighbor blocks with either the list of parent or daughter blocks in the GMG hierarchy. ↩Communication buffers are stored in a map with a key made up of a sending gid, a receiving gid, and an array encoding the relative offset of the blocks. This naturally extends to storing multiple grids, but does mean that new communication buffers are created for each level. In particular, the GMG prolongation and restriction buffers are the same size as the coarse buffer for each block, which feels a little redundant and maybe has an impact on the memory footprint of simulations. ↩