Merge pull request #2491 from devitocodes/subdomains_vs_hardcoded_bcs
docs: Elaborate on SubDomains for BCs w MPI
mloubout authored Nov 25, 2024
2 parents b95b323 + 9656baf commit f54378d
Showing 5 changed files with 53 additions and 11 deletions.
35 changes: 33 additions & 2 deletions FAQ.md
@@ -26,6 +26,7 @@
- [Can I control the MPI domain decomposition](#can-i-control-the-mpi-domain-decomposition)
- [How should I use MPI on multi-socket machines](#how-should-I-use-MPI-on-multi-socket-machines)
- [How do I make sure my code is "MPI safe"](#how-do-i-make-sure-my-code-is-MPI-safe)
- [My code is generating different results when running with MPI](#my-code-is-generating-different-results-when-running-with-MPI)
- [Why does my Operator kernel die suddenly](#why-does-my-operator-kernel-die-suddenly)
- [Can I manually modify the C code generated by Devito and test these modifications](#can-i-manually-modify-the-c-code-generated-by-devito-and-test-these-modifications)
- [How do I find the source of my bug quickly and get support](#how-do-i-find-the-source-of-my-bug-quickly-and-get-support)
@@ -127,7 +128,6 @@ Yes, just set the environment variable `TMPDIR` to your favorite location.

[top](#Frequently-Asked-Questions)


## I create an Operator, look at the generated code, and the equations appear in a different order than I expected.

The Devito compiler computes a topological ordering of the input equations based on data-dependency analysis. Heuristically, some equations might be moved around to improve performance (e.g., data locality). Therefore, the order of the equations in the generated code might differ from the order in which they were supplied to the Operator.
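
As a minimal illustration (not part of this PR), consider two equations passed in "the wrong" order: the first reads `u.forward`, which the second writes, so the compiler schedules the `u` update first.

```python
from devito import Grid, TimeFunction, Eq, Operator

grid = Grid(shape=(4, 4))
u = TimeFunction(name='u', grid=grid)
v = TimeFunction(name='v', grid=grid)

# Data-dependency analysis moves the `u` update ahead of the `v` update
op = Operator([Eq(v.forward, u.forward + 1),
               Eq(u.forward, u + 1)])
print(op)  # the generated C computes u before v
```
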
@@ -677,8 +677,39 @@ NUMA node1 CPU(s): 32-63

There are a few things you may want to check:

* To refer to the actual ("global") shape of the domain, you should always use `grid.shape` (or, analogously through a `Function`, `f.grid.shape`). Unless you know exactly what you are doing, you should never use the function shape, namely `f.shape` or `f.data.shape`, as that returns the "local" domain shape, that is, the data shape after domain decomposition, which may differ across MPI ranks, as the sketch below illustrates.
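
As a minimal sketch of the difference (run with, e.g., `DEVITO_MPI=1 mpirun -np 2 python script.py`; the exact local shapes depend on the number of ranks and on the decomposition):

```python
from devito import Grid, Function

grid = Grid(shape=(10, 10))
f = Function(name='f', grid=grid)

print(grid.shape)    # (10, 10) on every rank: the global domain shape
print(f.data.shape)  # e.g., (5, 10) on each of two ranks: the local shape
```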


[top](#Frequently-Asked-Questions)

## My code is generating different results when running with MPI

There are a few things you may want to check:

* Users are expected to launch their application through an MPI launcher such as `mpirun`, `mpiexec`, or `srun`, specifying the desired number of ranks and any other arguments (e.g., `mpirun -np <num processes> [options] <executable> [arguments]`), and should get exactly the same results as a sequential run. Note, however, that some pre- and post-processing may be rank-specific (e.g., plotting on a given MPI rank only).

* If you used hardcoded boundary conditions to model your problem, consider switching to `SubDomain`s. Devito's support for distributed NumPy arrays enables domain decomposition without requiring changes to user code. The data is physically distributed but, from the user's perspective, remains a logically centralized entity: you can interact with it using familiar indexing schemes (e.g., slicing) without concern for the underlying layout. You can find related tutorials [here](https://github.com/devitocodes/devito/tree/master/examples/userapi). A sketch of a custom `SubDomain` follows the example below.

For example, instead of
```python
from devito import Grid, TimeFunction, Eq, Operator

# A tiny 3x3 grid for illustration, matching the hardcoded indices below
grid = Grid(shape=(3, 3))
u = TimeFunction(name='u', grid=grid)

t = grid.stepping_dim
x, y = grid.dimensions
op = Operator([Eq(u.forward, u + 1),
               Eq(u[t+1, x, 0], 0),
               Eq(u[t+1, x, 2], 0),
               Eq(u[t+1, 0, y], 0),
               Eq(u[t+1, 2, y], 0)])
```

one can express a semantically equivalent computation by exploiting `SubDomain`s:

```python
# Initialize the data; the update below touches only the interior points,
# so the boundary values remain zero
u.data[:] = 0
op = Operator(Eq(u.forward, u + 1, subdomain=grid.interior))
```
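
Boundary regions other than the interior can be expressed with custom `SubDomain`s. Below is a minimal sketch, assuming a 3x3 grid as above; the `Top` class and its `'top'` name are illustrative, not part of Devito:

```python
from devito import Grid, TimeFunction, Eq, Operator, SubDomain

class Top(SubDomain):
    """Illustrative SubDomain covering the first line of points in y."""
    name = 'top'

    def define(self, dimensions):
        x, y = dimensions
        # Full extent in x; only the first point in y
        return {x: x, y: ('left', 1)}

top = Top()
grid = Grid(shape=(3, 3), subdomains=(top,))
u = TimeFunction(name='u', grid=grid)
op = Operator([Eq(u.forward, u + 1),
               Eq(u.forward, 0, subdomain=grid.subdomains['top'])])
```

Because `SubDomain`s are decomposed together with the `Grid`, the same user code runs unchanged on any number of MPI ranks.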

[top](#Frequently-Asked-Questions)

1 change: 1 addition & 0 deletions devito/operator/operator.py
@@ -93,6 +93,7 @@ class Operator(Callable):
... Eq(u[t+1, 2, y], 0)])
A semantically equivalent computation can be expressed exploiting SubDomains.
This is the only way to express BCs when running with MPI.
>>> u.data[:] = 0
>>> op = Operator(Eq(u.forward, u + 1, subdomain=grid.interior))
9 changes: 6 additions & 3 deletions devito/types/dense.py
@@ -1372,9 +1372,12 @@ def __init_finalize__(self, *args, **kwargs):

        # Check we won't allocate too much memory for the system
        available_mem = virtual_memory().available
        if np.dtype(self.dtype).itemsize * self.size > available_mem:
            warning("Trying to allocate more memory for symbol %s " % self.name +
                    "than available on physical device, this will start swapping")
        required_mem = np.dtype(self.dtype).itemsize * self.size
        if required_mem > available_mem:
            warning("Trying to allocate more memory (%s) "
                    % humanbytes(required_mem) + "for symbol %s " % self.name +
                    "than available (%s) " % humanbytes(available_mem) +
                    "on physical device, this will start swapping")
        if not isinstance(self.time_order, int):
            raise TypeError("`time_order` must be int")
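
For reference, a minimal sketch of the estimate performed by this check; the `humanbytes` helper below is a simplified stand-in for the formatter used in the PR:

```python
import numpy as np

def humanbytes(n):
    """Simplified stand-in byte formatter (binary units)."""
    for unit in ('B', 'KB', 'MB', 'GB', 'TB'):
        if n < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} PB"

# E.g., two float32 time buffers on a 1024**3 grid
size = 2 * 1024**3
required_mem = np.dtype(np.float32).itemsize * size
print(humanbytes(required_mem))  # 8.0 GB
```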

5 changes: 5 additions & 0 deletions devito/types/grid.py
@@ -496,6 +496,11 @@ class SubDomain(AbstractSubDomain):
    --------
    Domain : An example of preset SubDomain.
    Interior : An example of preset SubDomain.

    Notes
    -----
    SubDomains are the only way to harness the benefits of domain decomposition,
    especially when defining BCs.
    """

    def __subdomain_finalize__(self, grid, **kwargs):
14 changes: 8 additions & 6 deletions examples/userapi/04_boundary_conditions.ipynb
@@ -11,7 +11,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This tutorial aims to demonstrate how users can implement various boundary conditions in Devito, building on concepts introduced in previous tutorials. Over the course of this notebook we will go over the implementation of both free surface boundary conditions and perfectly-matched layers (PMLs) in the context of the first-order acoustic wave equation. This tutorial is based on a simplified version of the method outlined in Liu and Tao's 1997 paper (https://doi.org/10.1121/1.419657).\n",
"This tutorial aims to demonstrate how users can implement various boundary conditions in Devito, building on concepts introduced in previous tutorials. More specifically, this tutorial will use `SubDomain`s to model boundary conditions. `SubDomain`s are the recommended way to model BCs in order to run with MPI and distributed memory parallelism, as they align with Devito's support for distributed NumPy arrays with zero changes needed to the user code.\n",
"\n",
"Over the course of this notebook we will go over the implementation of both free surface boundary conditions and perfectly-matched layers (PMLs) in the context of the first-order acoustic wave equation. This tutorial is based on a simplified version of the method outlined in Liu and Tao's 1997 paper (https://doi.org/10.1121/1.419657).\n",
"\n",
"We will set up our domain with PMLs along the left, right, and bottom edges, and free surface boundaries at the top as shown below.\n",
"\n",
@@ -85,7 +87,7 @@
" def define(self, dimensions):\n",
" x, y = dimensions\n",
" return {x: ('right', self.PMLS), y: y}\n",
" \n",
"\n",
"class Base(SubDomain): # Base PML region\n",
" name = 'base'\n",
" def __init__(self, PMLS):\n",
@@ -102,7 +104,7 @@
" super().__init__()\n",
" self.PMLS = PMLS\n",
" self.S_O = S_O\n",
" \n",
"\n",
" def define(self, dimensions):\n",
" x, y = dimensions\n",
" return {x: ('middle', self.PMLS, self.PMLS), y: ('left', self.S_O//2)}\n",
@@ -394,7 +396,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We next add our free surface boundary conditions. Note that this implementation is based off that found in `examples/seismic/acoustic/operators.py`."
"Next, we will add our free surface boundary conditions. Note that this implementation is based on that found in `examples/seismic/acoustic/operators.py`."
]
},
{
@@ -536,9 +538,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It is important to remember that the ordering of equations when an `Operator` is created dictates the order of loops within the generated c code. As such, the `v` equations all need to be placed before the `p` ones otherwise the operator will attempt to use the updated `v` fields before they have been updated.\n",
"It is important to remember that the ordering of equations when an `Operator` is created dictates the order of loops within the generated c code. As such, the `v` equations need to be placed before the `p` ones; otherwise, the operator will attempt to use the updated `v` fields before they have been updated.\n",
"\n",
"Now let's plot the wavefield."
"Now, let's plot the wavefield."
]
},
{
