A package to run the full comparison between data and model output to assess model skill.
You can run the analysis as a Python package or with a command-line interface.
There are three steps to follow for a model-data validation run, each of which covers one variable (sketched below):
- Make a catalog for your model output.
- Make a catalog for your data.
- Run the comparison.
These steps will save files into a user application directory cache. See the demos for more details.
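For a quick feel for the workflow, here is a minimal sketch of the three steps using the Python interface. The argument values and file names here are assumptions for illustration, not the definitive API; see the demos for exact signatures and working examples.

import ocean_model_skill_assessor as omsa

# Step 1: make a catalog for your model output
# ("model_output.nc" is a hypothetical local file).
omsa.make_catalog(
    project_name="demo",
    catalog_type="local",
    catalog_name="model_cat",
    kwargs={"filenames": "model_output.nc"},
)

# Step 2: make a catalog for your data
# ("observations.csv" is a hypothetical local file).
omsa.make_catalog(
    project_name="demo",
    catalog_type="local",
    catalog_name="data_cat",
    kwargs={"filenames": "observations.csv"},
)

# Step 3: run the comparison for one variable ("temp" here);
# outputs are saved to the user application directory cache for "demo".
omsa.run(
    project_name="demo",
    catalogs="data_cat",
    model_name="model_cat",
    vocabs="general",
    key_variable="temp",
)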
Project based on the cookiecutter science project template.
NOTE: Make sure you have Anaconda or Miniconda installed.
Create a conda environment called "omsa" that includes the package ocean-model-skill-assessor:
$ conda create -n omsa -c conda-forge ocean-model-skill-assessor
Note that installing the packages is faster if you first install mamba into your base Python environment and then use "mamba" in place of all instances of "conda".
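For example, assuming you want mamba from conda-forge in your base environment:

$ conda install -n base -c conda-forge mamba
$ mamba create -n omsa -c conda-forge ocean-model-skill-assessor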
Activate your new Python environment to use it with:
$ conda activate omsa
Also install cartopy to be able to plot maps:
$ conda install -c conda-forge cartopy
Alternatively, to install into an existing environment from conda-forge:
$ conda install -c conda-forge ocean-model-skill-assessor
From PyPI:
$ pip install ocean-model-skill-assessor
To plot a map of the model domain with data locations, you'll need to additionally install cartopy. If you used conda above:
$ conda install -c conda-forge cartopy
If you installed from PyPI, see the cartopy documentation for installation instructions.
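Recent cartopy releases publish binary wheels on PyPI, so in many setups installing it may be as simple as:

$ pip install cartopy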
To also develop this package, install additional packages with:
$ conda install --file requirements-dev.txt
To check code before committing and pushing it to GitHub, run locally:
$ pre-commit run --all-files