The goal of this repository is to provide a central codebase in which agreed-upon metrics are applied to different global ocean forecast models, enabling a fair comparison between them.
Model | μ-score (0d) | μ-score (3d) | μ-score (5d) | % Correct Mag (0d) | % Correct Mag (3d) | % Correct Mag (5d)
---|---|---|---|---|---|---
GLO12 SSH | 0.818 | 0.816 | 0.814 | 71.77 | 70.96 | 70.58 |
GLO12 SLA | 0.912 | 0.906 | 0.902 | 72.72 | 72.09 | 71.78 |
DUACS | 0.939 | 0.939 | 0.939 | 76.51 | 76.29 | 76.20 |
4DVarNet | 0.936 | 0.931 | 0.924 | 72.96 | 72.53 | 69.63 |
U-Net-17M | 0.932 | 0.927 | 0.924 | 72.86 | 70.08 | 67.89 |
U-Net-70M | 0.931 | 0.924 | 0.920 | 71.85 | 69.45 | 67.43 |
XiHE SSH | 0.818 | 0.780 | 0.779 | 71.77 | 64.67 | 63.95 |
XiHE SLA | 0.912 | 0.843 | 0.842 | 72.72 | 67.15 | 66.53 |
GloNet SSH | 0.821 | 0.825 | 0.823 | 74.96 | 74.98 | 74.60 |
GloNet SLA | 0.906 | 0.913 | 0.911 | 75.82 | 75.91 | 75.30 |
conda create -n <your_env> python=3.12
conda activate <your_env>
pip install -r requirements.txt
This repository is driven by metrics configuration files located in `config/metrics/`. You can see how to create your own configuration file here.
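As a rough illustration, a metrics configuration file might look like the sketch below. All key names here (`reference_data`, `model_type`, `metrics`) are assumptions for illustration only; consult `config/metrics/metrics_config_template.yaml` for the actual schema.

```yaml
# Hypothetical metrics configuration -- key names are illustrative,
# not the template's real schema.
reference_data: duacs_alongtrack   # reference dataset to download
model_type: ssh_gridded            # controls how your model output is pre-processed
metrics:                           # which metrics to compute
  - rmse
  - mu_score
```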
The repository is then used like so:
# make sure you execute code from inside the repo
cd MultiModel-OceanGobalEval
python main.py metrics=metrics_config_template
This code will:
- download the reference data specified in `metrics_config_template.yaml`
- pre-process your model according to the `model_type` specified in `metrics_config_template.yaml`
- compute the metrics specified in `metrics_config_template.yaml`
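The three steps above can be sketched as a simple pipeline. Everything below is illustrative: the function names (`download_reference`, `preprocess`, `compute_metrics`) and the plain dict standing in for the YAML config are assumptions, not the repository's actual API.

```python
# Hypothetical sketch of the config-driven pipeline -- names and config
# keys are illustrative, not the repository's real interface.

config = {
    "reference_data": "duacs_alongtrack",  # reference dataset to download
    "model_type": "ssh_gridded",           # controls pre-processing
    "metrics": ["rmse", "mu_score"],       # metrics to compute
}

def download_reference(name):
    # Placeholder: would fetch the reference dataset named in the config.
    return f"reference:{name}"

def preprocess(model_type):
    # Placeholder: would regrid/align model output per model_type.
    return f"preprocessed:{model_type}"

def compute_metrics(metrics, reference, model):
    # Placeholder: would evaluate each requested metric
    # on the (model, reference) pair.
    return {m: f"{m}({model} vs {reference})" for m in metrics}

reference = download_reference(config["reference_data"])
model = preprocess(config["model_type"])
results = compute_metrics(config["metrics"], reference, model)
print(sorted(results))  # the metric names that were computed
```

The point is only the shape of the flow: one config file declares the reference data, the pre-processing mode, and the metric list, and `main.py` executes those three stages in order.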
The initial metrics codebase comprises code from the ocean data challenges GitHub repo.