
Useful code background


Data structure

The data for all the variable values at a given timestep is stored in a single state vector. The state contains all the data for the first variable (u) from the inner ghost cell to the outer ghost cell, then all the data for the second variable (v), and so on.

The ordering of the variables is specified in uservariables. Note that spherical symmetry implies that the only non-zero components of the rank 2 tensors are the diagonal elements, and for vectors only the r component.

As a simple example, if our grid had only N=4 points with spacing dr, located at [-dr/2, dr/2, 3dr/2, 5dr/2], then the state would be a long vector like:

$$[u(-dr/2), u(dr/2), u(3dr/2), u(5dr/2), v(-dr/2), v(dr/2), v(3dr/2), v(5dr/2), \phi(-dr/2) .... \alpha(3dr/2), \alpha(5dr/2)]$$

We will pack and unpack this state vector to assist in making the code more readable, but we can always just index into the variable we want at some position.

When we obtain a solution, it contains a series of states, one for each output time that we requested when setting up the example. The diagnostic for the Hamiltonian constraint is a good example of how to index into this solution.
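As a rough sketch of how this indexing works (the sizes and names below are placeholders, not the actual ones used in the code):

```python
import numpy as np

# Hypothetical sizes: NUM_VARS evolved variables, N grid points (including ghosts)
NUM_VARS = 4
N = 8
num_times = 10

# the solution is a set of states, one row per output time
solution = np.zeros((num_times, NUM_VARS * N))   # placeholder data

# unpack the flat state at one output time into a (NUM_VARS, N) array,
# so that state_2d[i_var, i_point] is variable i_var at grid point i_point
state = solution[5, :]
state_2d = state.reshape(NUM_VARS, N)

# or index directly into the flat state vector
i_var, i_point = 2, 3
value = state[i_var * N + i_point]
assert value == state_2d[i_var, i_point]
```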

Grid structure

Grid points and ghost cells

To set up a Grid object we need 3 inputs:

test_grid = Grid(max_r, num_points, log_factor)

The first, max_r, tells it how big the grid should be, num_points is the total number of points (including ghost cells), and log_factor controls the spacing of the grid, as detailed below. To select a grid with fixed spacing, log_factor can simply be set to 1 (or omitted, in which case it defaults to 1).

There are 3 ghost cells at each end of the grid, which are not evolved but are filled according to some user-specified rules. For the points at the inner boundary the values can be directly copied from the corresponding evolved points in the positive $r>0$ part of the grid (but taking into account their parity change under $r \to -r$, as specified in uservariables).
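A minimal sketch of this inner boundary filling, assuming a fixed-spacing grid so that the ghost points are mirror images of the first evolved points (the helper below is hypothetical, not the code's actual function):

```python
import numpy as np

NUM_GHOSTS = 3

def fill_inner_boundary(var_values, parity):
    # copy the first NUM_GHOSTS evolved values across r = 0, reversed in
    # order, and multiplied by the variable's parity (+1 for e.g. a scalar,
    # -1 for e.g. the r component of a vector)
    var_values[:NUM_GHOSTS] = parity * var_values[NUM_GHOSTS:2 * NUM_GHOSTS][::-1]
    return var_values

phi = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # 3 ghosts, then evolved points
fill_inner_boundary(phi, parity=+1)   # the ghosts become [3.0, 2.0, 1.0]
```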

The outer boundary cells are filled by assuming some predefined fall off in the values with $r$, again specified in uservariables.

If we set log_factor to a value that is not 1, the grid points have a spacing that varies with position, with $dr$ being the spacing at the lowest true (evolved) point in the grid, which is placed at $dr/2$.

Each successive spacing is then a factor $c$ larger than the previous one, with $c$ the specified log_factor, so subsequent points are located at $dr/2 + c \, dr$, $dr/2 + c \, dr + c^2 \, dr$ and so on. For consistency, the inner ghost cells then fall at $-dr/2$, $-dr/2 - dr/c$ and $-dr/2 - dr/c - dr/c^2$.
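The sketch below (not the actual Grid implementation) constructs grid point positions following this pattern, with each successive spacing a factor of log_factor larger than the previous one; setting log_factor to 1 recovers the fixed spacing case:

```python
import numpy as np

def log_grid_positions(dr, num_points, log_factor, num_ghosts=3):
    c = float(log_factor)
    # spacings between successive points, starting from the innermost ghost:
    # ..., dr/c^2, dr/c, dr (the interval straddling r = 0), c*dr, c^2*dr, ...
    exponents = np.arange(num_points - 1) - num_ghosts + 1
    spacings = dr * c**exponents
    # innermost ghost position, then cumulative sums of the spacings
    r0 = -dr / 2 - sum(dr / c**k for k in range(1, num_ghosts))
    return r0 + np.concatenate(([0.0], np.cumsum(spacings)))

# log_factor = 1 gives uniform spacing: [-5dr/2, -3dr/2, -dr/2, dr/2, 3dr/2, ...]
print(log_grid_positions(dr=1.0, num_points=8, log_factor=1.0))
print(log_grid_positions(dr=1.0, num_points=8, log_factor=2.0))
```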

Spatial derivatives - finite differencing

We calculate derivatives on the grid using finite difference methods. This simply means that for the first derivative with respect to r, we do something like:

$$\frac{\partial \phi}{\partial r} \approx \frac{(\phi(r+dr) - \phi(r-dr))}{2dr} $$

For this the "stencil" is $[-1/(2dr), ~ 0, ~ 1/(2dr)]$, where the central value is the weighting of the current grid point, and the other two are the weightings applied to the values of the variable on either side.

By including more surrounding points (by increasing the size of the "stencil") we can get higher accuracy, at the cost of greater complexity and a need for more ghost cells. The coefficients of the stencil can be obtained by fitting a Lagrange polynomial to the points and evaluating its derivative in terms of the function values and their spacing.
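As an illustration of that procedure (a sketch, not the code's actual implementation), the stencil weights for a centred first derivative can be recovered by requiring the stencil to be exact for low order polynomials, which is equivalent to differentiating the Lagrange interpolating polynomial:

```python
import numpy as np

def first_derivative_stencil(offsets, dr):
    # offsets are the grid point positions relative to the evaluation point,
    # in units of dr (e.g. [-1, 0, 1] for the 3 point stencil)
    x = np.asarray(offsets, dtype=float) * dr
    n = len(x)
    A = np.vander(x, n, increasing=True).T   # row k contains x**k
    b = np.zeros(n)
    b[1] = 1.0   # exactness conditions: d/dx(x^k) at x = 0 is 1 for k = 1, else 0
    return np.linalg.solve(A, b)

print(first_derivative_stencil([-1, 0, 1], dr=1.0))         # [-1/2, 0, 1/2]
print(first_derivative_stencil([-2, -1, 0, 1, 2], dr=1.0))  # [1/12, -2/3, 0, 2/3, -1/12]
```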

These stencils can be packaged into a matrix that is applied to the whole vector of values for a given variable at a certain time, to return the values of the derivatives at each point.

See the Derivatives class for examples.
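As a rough sketch of this idea for a uniform grid and the second order stencil above (not the actual Derivatives class):

```python
import numpy as np

N = 16
dr = 0.1
r = (np.arange(N) - 2.5) * dr    # hypothetical uniform grid with 3 inner ghosts

# matrix of centred stencils [-1/(2dr), 0, 1/(2dr)] for the first derivative
D1 = np.zeros((N, N))
for i in range(1, N - 1):
    D1[i, i - 1] = -1.0 / (2.0 * dr)
    D1[i, i + 1] = +1.0 / (2.0 * dr)
# (the first and last rows would be handled via the ghost cells)

phi = r**2                        # test function phi = r^2
dphi_dr = D1 @ phi                # approximates 2r away from the end points
```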

Time integration of "RHS" calculation

We make use of the highly optimised SciPy routine solve_ivp to perform the time integration of our solution. This essentially just takes the initial state and, given a function which computes the right hand side (the "rhs", which is $\partial_t \phi$), does something like:

$$\phi(t + dt) = \phi(t) + \partial_t \phi ~ dt$$

Again, in practice it uses a higher-order adaptive method (RK45, a 5th-order Runge-Kutta formula with an embedded 4th-order error estimate used to check and adjust the timestep), but you probably don't need to worry too much about the detail of that. The time stepping is adaptive, so sometimes you may see the time increase and then decrease again as the method aims to maintain a certain error. Once the solution is obtained it is interpolated onto the times that you asked for when setting up the time vector (note therefore that the vector of times t that you ask for does not determine the time stepping frequency, only the sampling of the outputs).
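A minimal sketch of this step, with a toy right hand side standing in for the code's actual rhs function:

```python
import numpy as np
from scipy.integrate import solve_ivp

def get_rhs(t, state):
    # toy rhs: exponential decay of every variable in the state
    return -0.1 * state

initial_state = np.ones(20)            # placeholder initial data
t_out = np.linspace(0.0, 10.0, 11)     # output sampling only, not the internal step size

solution = solve_ivp(get_rhs, [t_out[0], t_out[-1]], initial_state,
                     t_eval=t_out, method="RK45", rtol=1e-8, atol=1e-8)

# solution.y has shape (len(initial_state), len(t_out));
# solution.y[:, i] is the state at time t_out[i]
```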