Developing model evaluation notebooks that allow users to compare model estimates to SnowEx data and other datasets.
| Name | Personal Goals / Can help with | Role |
| --- | --- | --- |
| Valerie Bevan | | |
| Cassia Cai | | |
| Evi Ofekeze | | |
| Steven Pestana | | Project Facilitator/Helper |
| Justin Pflug | | |
| Engela Sthapit | | |
| Sveta Stuefer | | |
| Melissa Wrzesien | | Project Lead |
Model evaluation can be a time-consuming step: it involves downloading multiple validation datasets and post-processing them to match your model's resolution, spatial extent, and time period. This notebook allows users to compare the provided model output (from SnowModel) to SnowEx field observations.
This toolbox demos how to do raster-to-point and raster-to-raster comparisons for SWE and snow depth.
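Before any raster-to-raster comparison, the evaluation data must be on the model grid. A real workflow would use a geospatial library (e.g. `rioxarray`) to reproject and resample; the minimal NumPy sketch below shows the core idea of aggregating a fine-resolution grid to a coarser model grid by block averaging. The grid sizes and the 3 m / 30 m resolutions are hypothetical, and the sketch assumes the coarse cell size is an exact integer multiple of the fine one.

```python
import numpy as np

def block_average(fine, factor):
    """Aggregate a fine-resolution grid to a coarser one by averaging
    non-overlapping factor x factor blocks (assumes exact divisibility)."""
    ny, nx = fine.shape
    return fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

# Hypothetical 3 m lidar snow depth aggregated to a 30 m model grid (factor of 10)
lidar_depth = np.random.default_rng(0).uniform(0.0, 2.0, size=(100, 100))
coarse = block_average(lidar_depth, 10)
print(coarse.shape)  # matches the coarser model grid
```

Averaging (rather than nearest-neighbor picking) is the usual choice when coarsening snow depth, since each model cell represents a spatial mean.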
Model data is from a NASA Land Information System simulation using Glen Liston's SnowModel. Model output is available for 4 water years from the NASA AWS bucket, though we demonstrate a subsample of the full output. Evaluation data is from the snowexsql database. Data used from the SnowEx database include:
- GPR SWE
- Magnaprobe depth
- Snow pit depth and SWE
- Lidar depth and SWE
Tasks to accomplish:
- Grid-to-point comparison: compare the model output (in a gridded raster) to a point dataset like snow pit snow depths
- Grid-to-grid comparison: compare model output to another gridded product, like ASO lidar
- Determine which evaluation metric is most appropriate for each comparison
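The grid-to-point task above can be sketched in a few lines: look up the model cell containing each observation point, then score the pairs with simple metrics like bias and RMSE. This is a simplified stand-in for sampling a real raster (e.g. with `rasterio`); the SWE values, pit coordinates, and 100 m cell size are made up for illustration.

```python
import numpy as np

def sample_at_points(grid, x0, y0, dx, dy, xs, ys):
    """Nearest-neighbor sample a raster with origin (x0, y0) and pixel
    size (dx, dy) at the given point coordinates."""
    cols = np.round((np.asarray(xs) - x0) / dx).astype(int)
    rows = np.round((np.asarray(ys) - y0) / dy).astype(int)
    return grid[rows, cols]

def bias(model, obs):
    """Mean model-minus-observation error (positive = overestimate)."""
    return float(np.mean(np.asarray(model) - np.asarray(obs)))

def rmse(model, obs):
    """Root-mean-square error between model and observations."""
    return float(np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2)))

# Toy 2x2 model SWE raster (m) and two hypothetical snow-pit observations
swe = np.array([[0.30, 0.35],
                [0.40, 0.45]])
pits_x, pits_y = [0.0, 100.0], [0.0, 100.0]   # metres from the grid origin
pit_swe = np.array([0.28, 0.50])

modeled = sample_at_points(swe, 0.0, 0.0, 100.0, 100.0, pits_x, pits_y)
print("bias:", bias(modeled, pit_swe), "rmse:", rmse(modeled, pit_swe))
```

Bias answers the over/underestimation question directly, while RMSE summarizes overall error magnitude; for grid-to-grid comparisons the same metrics apply over all overlapping cells.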
Such a notebook allows users to ask and answer questions such as:
- Does the model over/underestimate compared to field observations?
- Does the model uncertainty correlate with terrain or land cover features (elevation, vegetation type)?
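The second question above can be explored by binning the per-pixel model error by a terrain attribute such as elevation. The sketch below uses synthetic depth and elevation fields (with a deliberately built-in elevation-dependent bias) purely to demonstrate the binning pattern; with the real data, `model`, `obs`, and `elev` would be flattened rasters on a common grid.

```python
import numpy as np

def bias_by_band(model, obs, elev, edges):
    """Mean model-minus-observation error within elevation bands defined
    by `edges`; NaN for bands with no pixels."""
    err = np.asarray(model) - np.asarray(obs)
    idx = np.digitize(elev, edges)
    return [float(np.mean(err[idx == i])) if np.any(idx == i) else float("nan")
            for i in range(1, len(edges))]

# Synthetic example: 500 pixels with an elevation-dependent model bias
rng = np.random.default_rng(1)
elev = rng.uniform(1000.0, 3000.0, 500)
obs = 0.001 * elev + rng.normal(0.0, 0.05, 500)
model = obs + 0.0001 * (elev - 2000.0)   # bias grows with elevation by design

print(bias_by_band(model, obs, elev, edges=np.arange(1000, 3001, 500)))
```

A monotonic trend in the band-averaged bias would suggest an elevation-dependent model error; the same pattern works for categorical attributes like vegetation type by grouping on class labels instead of bins.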
Existing model evaluation methods traditionally require a user to download data from NSIDC or another DAAC. For the SnowEx snow pits, as an example, a user would have to search for them on NSIDC, download the data, and select the model grid cells that contain each snow pit. Only once all of these preprocessing steps are complete could the user start to evaluate their model. This notebook provides the tools to make this an easier process.
Future work could make this notebook more generalizable so that users can upload their own model output for evaluation. Future developments could also include more datasets for comparison, whether from the SnowEx database or other products (snow reanalysis, meteorological data, SNOTEL, etc.).