HDF5 Format
Athena++ uses its own HDF5 format when writing `.athdf` files. Below are the full specifications of the file format within the HDF5 framework. Both 32-bit and 64-bit numeric datatypes (ints and floats) are used, and they are always stored in big-endian order. "Strings" here refer to variable-length char arrays with at most 20 characters (excluding the terminating null).

Each file has the following attributes attached to its root group:
- `NumCycles`
  - scalar 32-bit int
  - cycle number at which file is written
- `Time`
  - scalar 64-bit float
  - simulation time at which file is written
- `Coordinates`
  - scalar string
  - name of coordinate system (matches configure option and `.cpp` file)
- `RootGridX1`/`RootGridX2`/`RootGridX3`
  - triples of 64-bit floats
  - minimum, maximum, and geometric ratio in x1/x2/x3-direction
- `RootGridSize`
  - triple of 32-bit ints
  - numbers of cells at root level in x1-, x2-, and x3-directions
- `NumMeshBlocks`
  - scalar 32-bit int
  - total number of MeshBlocks in the simulation
- `MeshBlockSize`
  - triple of 32-bit ints
  - numbers of cells in each MeshBlock in x1-, x2-, and x3-directions
- `MaxLevel`
  - scalar 32-bit int
  - highest level of mesh refinement present, with root level being 0
- `NumVariables`
  - array of 32-bit ints
  - length is number of cell-centered datasets in file
  - each entry is number of scalar variables in corresponding dataset
- `DatasetNames`
  - array of strings
  - length matches that of `NumVariables`
  - each entry is name of a cell-centered dataset
  - order matches that of `NumVariables`
- `VariableNames`
  - array of strings
  - length equals sum of entries in `NumVariables`
  - each entry is name of a cell-centered variable
  - order matches that of `DatasetNames`, with variables ordered by their index within the dataset
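For illustration, these attributes can be inspected with h5py; the following is a minimal sketch assuming a hypothetical filename (h5py handles the big-endian storage transparently):

```python
import h5py

# Minimal sketch: inspect the root-level attributes of an .athdf file.
# "sample.out1.00010.athdf" is a hypothetical filename.
with h5py.File("sample.out1.00010.athdf", "r") as f:
    print(int(f.attrs["NumCycles"]))     # cycle number, scalar 32-bit int
    print(float(f.attrs["Time"]))        # simulation time, scalar 64-bit float
    print(f.attrs["Coordinates"])        # coordinate system name, e.g. b'cartesian'
    print(f.attrs["RootGridSize"])       # cells at root level, e.g. [64 64 64]
    print(f.attrs["NumMeshBlocks"])      # total number of MeshBlocks
    print(f.attrs["MeshBlockSize"])      # cells per MeshBlock, e.g. [16 16 16]
    print(f.attrs["NumVariables"])       # e.g. [5 3] when prim and B are written
    print(f.attrs["DatasetNames"])       # e.g. [b'prim' b'B']
    print(f.attrs["VariableNames"])      # e.g. [b'rho' b'press' ... b'Bcc3']
```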
Let "NBlocks" be the value of the NumMeshBlocks
attribute. Let "nx1," "nx2," and "nx3" be the values in the MeshBlockSize
attribute.
- `Levels`
  - (NBlocks) array of 32-bit ints
  - refinement levels of MeshBlocks, with root level being 0
- `LogicalLocations`
  - (NBlocks)×(3) array of 64-bit ints
  - for each MeshBlock, the offsets in the x1-, x2-, and x3-directions from the minimum edge of the grid
  - counting is done as though the entire grid were at the refinement level of the MeshBlock in question
- `x1f`/`x2f`/`x3f`
  - (NBlocks)×(nx1/nx2/nx3+1) arrays of 32-bit floats
  - values of interface locations along the x1/x2/x3-direction
- `x1v`/`x2v`/`x3v`
  - (NBlocks)×(nx1/nx2/nx3) arrays of 32-bit floats
  - values of cell centers along the x1/x2/x3-direction
- cell-centered datasets
  - one for each entry in `DatasetNames`
  - each one is an (NVars)×(NBlocks)×(nx3)×(nx2)×(nx1) array of 32-bit floats
  - NVars varies between datasets and is given by the corresponding entry in `NumVariables`
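As an example of how `NumVariables`, `DatasetNames`, and `VariableNames` fit together, here is a hedged h5py sketch that locates a named variable and reads it from one MeshBlock (the filename is hypothetical, and the presence of a variable named "rho" is an assumption):

```python
import h5py
import numpy as np

# Sketch: read one named variable from MeshBlock 0 of a (hypothetical) .athdf file.
with h5py.File("sample.out1.00010.athdf", "r") as f:
    dataset_names = [n.decode() for n in f.attrs["DatasetNames"]]
    variable_names = [n.decode() for n in f.attrs["VariableNames"]]
    num_variables = f.attrs["NumVariables"]

    # Locate "rho" (assumed present): VariableNames is ordered by dataset,
    # so cumulative sums of NumVariables delimit each dataset's variables.
    ivar = variable_names.index("rho")
    offsets = np.concatenate(([0], np.cumsum(num_variables)))
    idataset = int(np.searchsorted(offsets, ivar, side="right")) - 1
    index_in_dataset = ivar - offsets[idataset]

    # Cell-centered datasets are (NVars) x (NBlocks) x (nx3) x (nx2) x (nx1).
    rho_block0 = f[dataset_names[idataset]][index_in_dataset, 0, :, :, :]
    x1v_block0 = f["x1v"][0, :]  # matching cell centers along x1
```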
The following names of datasets and variables may be output, depending on what is requested via the `variable` argument in the `<output>` block. (Variables and datasets are not included if the selected physics excludes them from the simulation.)
| Output Variable | Dataset Names | Variable Names |
| --- | --- | --- |
| prim | prim | rho, press, vel1, vel2, vel3 |
|      | B | Bcc1, Bcc2, Bcc3 |
| cons | cons | dens, Etot, mom1, mom2, mom3 |
|      | B | Bcc1, Bcc2, Bcc3 |
| d | hydro | rho |
| p | hydro | press |
| v | hydro | vel1, vel2, vel3 |
| D | hydro | dens |
| E | hydro | Etot |
| m | hydro | mom1, mom2, mom3 |
| bcc | B | Bcc1, Bcc2, Bcc3 |
| uov | uov | user_out_var0, user_out_var1, ... |
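Putting the layout together, the sketch below reassembles one variable onto the full root grid using `LogicalLocations`, under the simplifying assumptions that the mesh is unrefined (`MaxLevel` is 0) and that the file was written with `variable = prim` (filename again hypothetical):

```python
import h5py
import numpy as np

# Sketch: stitch MeshBlocks back into one (nx3, nx2, nx1) root-grid array.
# Assumes no refinement (MaxLevel == 0); the filename is hypothetical.
with h5py.File("sample.out1.00010.athdf", "r") as f:
    assert f.attrs["MaxLevel"] == 0, "this sketch handles unrefined meshes only"
    root_size = f.attrs["RootGridSize"]    # (nx1, nx2, nx3) at root level
    block_size = f.attrs["MeshBlockSize"]  # (nx1, nx2, nx3) per MeshBlock
    locations = f["LogicalLocations"][:]   # (NBlocks, 3) offsets in MeshBlock units
    data = f["prim"][0, ...]               # variable 0 of "prim" (rho), all blocks

    full = np.empty(root_size[::-1], dtype=data.dtype)  # note (nx3, nx2, nx1) order
    for b in range(data.shape[0]):
        # Starting cell of this block: logical location times block size in cells.
        i1, i2, i3 = locations[b] * block_size
        full[i3:i3 + block_size[2],
             i2:i2 + block_size[1],
             i1:i1 + block_size[0]] = data[b]
```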
When configuring Athena++ with both the `-mpi` and `-hdf5` flags, the HDF5 library must have been built with MPI support; the solver will not compile if only the serial HDF5 routines are available.
If you are building HDF5 from source, note that the HDF5 library source code contains both serial HDF5 and Parallel HDF5 (PHDF5). Simply specify the MPI compiler wrapper and pass the `--enable-parallel` flag when configuring the installer:

```
CC=mpicc ./configure --enable-parallel ...
```
Parallel HDF5 requires a POSIX-compliant file system and an MPI library with MPI-I/O; a parallel file system is necessary for good performance. To check whether an installed HDF5 library was built with MPI support, examine the output of `h5cc -showconfig`.
On the Princeton University Research Computing (RC) clusters, the following compilers and libraries are currently recommended for compiling Athena++ with MPI and Parallel HDF5:

```
intel/18.0/64/18.0.2.199
intel-mpi/intel/2018.2/64
hdf5/intel-17.0/intel-mpi/1.10.0
```

They are loaded using the Environment Modules package, e.g. via `module load intel-mpi/intel/2018.2/64`.
Note that there is no need to load an HDF5 module for serial HDF5 on the RC clusters; the library is already present in the compiler and linker search paths.