# Running KHARMA
To run a particular problem with KHARMA, you can usually simply invoke `kharma.host` for CPU compiles, or `kharma.cuda` for Nvidia GPUs:

```bash
$ ./kharma.host -i pars/orszag_tang.par
```
KHARMA benefits from certain runtime environment variables controlling CPU pinning and occupancy, which I've attempted to include in the short wrapper `run.sh`. The interpretation of these variables can differ between compilers and machines, so your mileage may vary.
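For reference, here is a minimal sketch of the kind of pinning and occupancy variables such a wrapper might set. These are standard OpenMP controls, but the exact names and values in `run.sh` may differ:

```bash
# Standard OpenMP thread-pinning controls (a sketch; consult run.sh for
# the settings KHARMA actually uses).
export OMP_PROC_BIND=spread   # spread threads evenly across cores
export OMP_PLACES=threads     # pin each OpenMP thread to a hardware thread
export OMP_NUM_THREADS=28     # thread count is machine-dependent; 28 is illustrative

./kharma.host -i pars/orszag_tang.par
```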
Note that some MPI libraries may require that `kharma.{host,cuda}` be run under `mpirun` even when invoking a single process, e.g. `mpirun -n 1 ./kharma.host`. NVHPC in particular simply hangs if invoked without an external MPI environment.
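For example (the multi-rank case assumes the usual one-rank-per-GPU convention, and the rank count is illustrative):

```bash
# Single process, launched under MPI to satisfy libraries like NVHPC:
$ mpirun -n 1 ./kharma.host -i pars/orszag_tang.par

# Multi-GPU run, typically one MPI rank per device:
$ mpirun -n 4 ./kharma.cuda -i pars/orszag_tang.par
```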
KHARMA takes no compile-time options, so all the parameters for a simulation are provided by a single input "deck." Sample input files corresponding to standard tests and astrophysical systems are included in `pars/`. Note that the convention is to end a parameter filename with `.par`.
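For orientation, a hypothetical deck fragment is sketched below, following Parthenon's block/key input layout. The specific blocks, names, and values here are illustrative; consult the files in `pars/` for real, complete examples:

```
# Hypothetical fragment, for illustration only; see pars/ for real decks.
<parthenon/mesh>
nx1 = 128        # illustrative resolution
nx2 = 128
nx3 = 1

<GRMHD>
cfl = 0.9        # illustrative solver settings
gamma = 1.666667
```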
KHARMA will attempt to guess many parameters if they are not specified, e.g. boundary conditions and coordinate sizes for simulations in spherical polar coordinates, or interior boundary locations for black hole simulations based on keeping 5 zones inside the event horizon. Most of the core inferences are done in the function `FixParameters` in `kharma.cpp`, and most default values are specified in the `Initialize` functions of their respective packages, e.g. in `grmhd/grmhd.cpp`.
Any parameter can be overridden from the command line, which is useful for scaling tests and automation. Simply pass the full parameter path and value, e.g. `GRMHD/my_par=new_val` or `parthenon/output0/dt=` followed by the desired cadence.
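A hypothetical invocation might look like this (the override value is illustrative, not taken from any real deck):

```bash
# Override the dump cadence at launch; the value 5.0 is purely illustrative.
$ ./kharma.host -i pars/orszag_tang.par parthenon/output0/dt=5.0
```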
Nearly all problems will periodically write out the state of the simulation, for plotting or analysis after the run is complete. The cadence and contents of these "dump" files are specified in the parameter file; all outputs are usually listed together at the end. The fluid state is generally output to HDF5 files named `problem_name.out0.NNNNN.phdf`, with the sequence number `NNNNN` counting up incrementally from `00000`. Additionally, some problems compute reductions (total energy, accretion rate, etc.) and output these to a text file, `problem_name.hst`.
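Since the `.hst` file is plain text, it can be read directly. A minimal sketch, assuming the usual layout of `#` comment headers above whitespace-separated columns:

```python
import numpy as np

# The '#' header lines in the file itself describe what each column holds.
hist = np.loadtxt("problem_name.hst")
print(hist.shape)  # (n_outputs, n_columns)
```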
Output files are split first by mesh block ID, then by vector index if present, then by cell in `k`, `j`, `i` order. Since data is split by block, the global mesh must be reassembled by the code reading the file, mapping each mesh block's data to its place in the whole. Since reimplementing this is a pain, it's recommended to copy or use one of the existing implementations:
- Parthenon provides a small Python package, `parthenon_tools`, designed to read these output files and produce a few basic plots.
- For reading KHARMA output specifically, the `pyharm` package provides reading, calculation, and plotting tools; see the sketch below.
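A minimal `pyharm` session might look like the following. This assumes `pyharm`'s `load_dump` entry point and a variable name like `'rho'`; check the package's own documentation for the exact interface:

```python
import pyharm

# Load one fluid-state dump; pyharm reassembles the blocks into the global mesh.
# The filename follows the naming scheme above and is illustrative.
dump = pyharm.load_dump("problem_name.out0.00000.phdf")

# Access a named variable on the reassembled grid (variable name assumed).
rho = dump['rho']
print(rho.shape)
```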
In addition to the analysis outputs, most long-running problems will output "restart" or "checkpoint" files. These are named similarly to the science outputs, with the pattern `problem_name.out1.NNNNN.rhdf` (note the changed file extension, as well as the output number).
A simulation can be resumed very simply from such a file, e.g. to restart a torus simulation from the 200th restart (by default corresponding to 20,000M of simulation time):

```bash
$ ./kharma.cuda -r torus.out1.00200.rhdf
```
Note that `-r` is incompatible with `-i`. All of the original problem parameters are saved to `.rhdf` files, and Parthenon will read them automatically when restarting a simulation, so no parameter file is necessary.
Parameters can be overridden when restarting, using exactly the same syntax as when running a new problem. Needless to say, be very careful with this: many potential changes (e.g., mesh size) will at best produce gibberish, and at worst segmentation faults.
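As a hypothetical illustration, extending the end time of the resumed torus run might look like this. `parthenon/time/tlim` is Parthenon's end-time parameter, and the value here is illustrative:

```bash
# Resume from a checkpoint, overriding the simulation end time (value illustrative).
$ ./kharma.cuda -r torus.out1.00200.rhdf parthenon/time/tlim=30000
```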