This tool applies the Climate Risk and Vulnerability Assessment (CRVA) method developed in the Non-paper Guidelines for Project Managers: Making vulnerable investments climate resilient.
The tool was used in a master's thesis, which is publicly available.
- Functions
- Download and format data
- Bias correction
- Application for the CRVA tool
- StudyCase-Gorongosa_Mozambique
- What to still implement
- Commands and explanations
This folder contains all the functions used during the process.
To study future risk, climate projections are used. Here, the NEX-GDDP-CMIP6 dataset produced by NASA is used because it is already bias corrected and downscaled to a 0.25-degree resolution; it was obtained by bias correcting and downscaling CMIP6 data. The dataset contains 9 climate variables for several experiments and models (detailed paper about the dataset).
For this tool, NEX-GDDP-CMIP6 data were downloaded with Download_NEX-GDDP-CMIP6.py, using the csv file made available by NASA (here, page 16).
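For orientation, the sketch below shows one way to download files listed in such an index csv; the csv filename, its column name and the output folder are assumptions for illustration, not the names used in Download_NEX-GDDP-CMIP6.py.

```python
# Minimal sketch of downloading NEX-GDDP-CMIP6 files from an index csv of URLs.
# "gddp-cmip6-files.csv", the "fileURL" column and the output folder are
# hypothetical names, not those of Download_NEX-GDDP-CMIP6.py.
import os
import pandas as pd
import requests

index = pd.read_csv("gddp-cmip6-files.csv")            # hypothetical index file
urls = index["fileURL"]                                 # hypothetical column name

# keep only daily precipitation for one model and one scenario (example filter)
urls = [u for u in urls if "pr_day_ACCESS-CM2_ssp585" in u]

os.makedirs("NEX-GDDP-CMIP6", exist_ok=True)
for url in urls:
    target = os.path.join("NEX-GDDP-CMIP6", os.path.basename(url))
    if os.path.exists(target):
        continue                                        # skip files already downloaded
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(target, "wb") as f:
            for chunk in r.iter_content(chunk_size=2**20):
                f.write(chunk)
```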
Once the data are downloaded, they need to be reformatted from nc files (one file per SSP and model) into csv files gathering every model and SSP for each point, with CSV_NEX-GDDP-CMIP6_one_lat_lon.
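The reformatting step can be pictured as follows; this is a minimal sketch assuming daily per-model/per-SSP files in one folder, not the exact logic of CSV_NEX-GDDP-CMIP6_one_lat_lon.

```python
# Minimal sketch: extract one grid point from per-model/per-SSP NetCDF files and
# gather everything into a single csv. The file pattern, coordinates and column
# names are illustrative assumptions.
import glob
import os
import pandas as pd
import xarray as xr

LAT, LON = -18.68, 34.07   # illustrative point near Gorongosa

frames = []
for path in glob.glob("NEX-GDDP-CMIP6/pr_day_*_ssp*.nc"):
    ds = xr.open_dataset(path)
    point = ds.sel(lat=LAT, lon=LON, method="nearest")      # nearest grid cell
    df = point["pr"].to_dataframe().reset_index()
    # model and scenario are recoverable from the file name: pr_day_<model>_<ssp>_..._<year>.nc
    parts = os.path.basename(path).split("_")
    df["model"], df["ssp"] = parts[2], parts[3]
    frames.append(df[["time", "pr", "model", "ssp"]])

pd.concat(frames).to_csv("pr_all_models_ssps_one_point.csv", index=False)
```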
Climate projections often have biases. To deal with them, it is possible to perform bias correction. In this tool, the bias correction applied is the BCSD method (Wood 2004), implemented with the scikit-downscale package; in the end, only the quantile mapping step is performed, not the downscaling. BC_NEX-GDDP-CMIP6 (folder 2-BiasCorrection) applies the functions BCSD_Temperature_return_anoms_to_apply and BCSD_Precipitation_return_anoms_to_apply from Bias_correction_function (folder 0-Functions) to the temperature and precipitation NEX-GDDP-CMIP6 data, respectively.
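The scikit-downscale pointwise models follow a scikit-learn fit/predict pattern; the sketch below shows the general shape of the quantile-mapping step, with file names and data sources as assumptions (the actual logic lives in BCSD_Temperature_return_anoms_to_apply and BCSD_Precipitation_return_anoms_to_apply).

```python
# Minimal sketch of the quantile-mapping step with scikit-downscale's pointwise
# models. File names and the return_anoms choice are illustrative; the real
# processing is done in Bias_correction_function (folder 0-Functions).
import pandas as pd
from skdownscale.pointwise_models import BcsdTemperature

# Expected inputs: single-column DataFrames with a daily DatetimeIndex
# hist_model : NEX-GDDP-CMIP6 temperature over the historical period
# obs        : observed temperature over the same period
# future     : NEX-GDDP-CMIP6 temperature over the projection period
hist_model = pd.read_csv("tas_model_hist.csv", index_col=0, parse_dates=True)
obs = pd.read_csv("tas_obs.csv", index_col=0, parse_dates=True)
future = pd.read_csv("tas_model_future.csv", index_col=0, parse_dates=True)

bcsd = BcsdTemperature(return_anoms=False)   # return corrected values, not anomalies
bcsd.fit(hist_model, obs)                    # learn the quantile mapping
corrected = bcsd.predict(future)             # apply it to the projections
```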
In folder Archives-BiasCorrectionTests, other bias correction tests are gathered.
The application of the method is gathered in CRVA_data_analyst. The tool was applied with data from the Gorongosa study.
Before computing any indicator, data are imported with the functions in Functions_ImportData (folder 0-Functions).
In folder 0-Functions, Functions_Indicators contains the indicators that can be used and that are explained in the table below.
Indicators available in Functions_Indicators (folder 0-Functions)
Evaluating the evolution of net precipitation would be useful for some infrastructure. This work was started and is in the folder InProcess-NetPrecipitation.
Sensitivity is often based on expert judgement. It is evaluated outside of the tool. It should be summarized in a matrix format (as in the example below) and added to the tool with the function sensitivity, defined in Functions_ImportData (folder 0-Functions).
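As a purely illustrative example of the expected matrix format (the infrastructure components and levels below are hypothetical, not those of the Gorongosa study), such a matrix could look like this:

```python
# Hypothetical sensitivity matrix: rows are infrastructure components, columns
# are climate variables, values are expert-judged sensitivity levels. All names
# and levels here are illustrative assumptions.
import pandas as pd

sensitivity_matrix = pd.DataFrame(
    {
        "temperature":   ["low",  "medium", "high"],
        "precipitation": ["high", "high",   "medium"],
    },
    index=["access road", "drainage", "power supply"],
)
print(sensitivity_matrix)
```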
In this step, two sets of data are compared: one from the past and one from the future. Every function in this step is in Functions_Indicators (folder 0-Functions); a schematic sketch of the three steps is given after the list below.
- Calculate the statistics of each period with the function df_stat_distr
- Calculate the change between past and future statistics with the function changes_in_indicators
- Based on the change between past and future, categorize the exposure as low, medium, or high with the function level_exposure
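The sketch below illustrates these three steps in a schematic way; the statistics and threshold values are assumptions for illustration, not the ones coded in df_stat_distr, changes_in_indicators and level_exposure.

```python
# Schematic sketch of the exposure step: compare a past and a future period and
# categorize the change. The statistic (mean) and thresholds are illustrative
# assumptions.
import pandas as pd

def change_in_mean(past: pd.Series, future: pd.Series) -> float:
    """Relative change (%) of the mean between the two periods."""
    return 100.0 * (future.mean() - past.mean()) / past.mean()

def exposure_level(change_pct: float) -> str:
    """Categorize the change into low / medium / high exposure (hypothetical thresholds)."""
    if abs(change_pct) < 5:
        return "low"
    if abs(change_pct) < 15:
        return "medium"
    return "high"

# past_pr and future_pr would be daily precipitation Series for the two periods:
# exposure = exposure_level(change_in_mean(past_pr, future_pr))
```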
By crossing the information from sensitivity and exposure, the final vulnerability to climate variables is obtained with the function vulnerability from Functions_Indicators (folder 0-Functions).
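Conceptually, the crossing can be pictured as a lookup combining the two levels; the combination rule below is a hypothetical example, not the rule implemented in the vulnerability function.

```python
# Hypothetical crossing of sensitivity and exposure levels into a vulnerability
# level. The actual rule used by the vulnerability function may differ.
RANK = {"low": 0, "medium": 1, "high": 2}
LEVELS = ["low", "medium", "high"]

def cross_vulnerability(sensitivity: str, exposure: str) -> str:
    # take the rounded average of the two ranks as the vulnerability level
    return LEVELS[round((RANK[sensitivity] + RANK[exposure]) / 2)]

print(cross_vulnerability("high", "medium"))  # -> "high"
```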
The severity, like the sensitivity, is often based on expert judgement. It is not computed in the tool.
The likelihood is computed in this tool with the function likelihood_accross_models_and_ssps from Functions_likelihood (folder 0-Functions).
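As an illustration of the idea of a likelihood across models and SSPs (the actual implementation is in Functions_likelihood), it can be approximated as the share of model/SSP combinations that agree on an outcome; the column names and threshold below are assumptions.

```python
# Illustrative likelihood across models and SSPs: fraction of model/SSP
# combinations whose indicator exceeds a threshold. Column names and the
# threshold are assumptions, not those of likelihood_accross_models_and_ssps.
import pandas as pd

def likelihood_of_exceedance(indicator: pd.DataFrame, threshold: float) -> float:
    """indicator: one row per (model, ssp, value); returns the share of
    model/SSP combinations whose mean value exceeds the threshold."""
    exceeds = indicator.groupby(["model", "ssp"])["value"].mean() > threshold
    return exceeds.mean()

# Example: likelihood that annual maximum daily precipitation exceeds 100 mm
# likelihood = likelihood_of_exceedance(annual_max_pr, threshold=100.0)
```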
Crossing information from severity and likelihood was not done in this tool.
For the study case in Gorongosa, the tool uses observation data from NOAA. Some of the values were clearly erroneous and were therefore treated in Treat Data tas NOAA Station and pr meteorological station Gorongosa.
To confirm that the NEX-GDDP-CMIP6 dataset is representative of the location of interest, modelled temperature and precipitation were compared to observation data in Compare NOAA station and NEXGDDP CMIP6 data in Chimoio.
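Such a comparison can be summarized, for instance, with simple error metrics and monthly climatologies; the sketch below assumes two daily temperature series (station and nearest grid cell) stored in csv files with hypothetical names, and is not the exact content of the comparison notebook.

```python
# Minimal sketch of comparing a NOAA station with the nearest NEX-GDDP-CMIP6
# grid cell over a common historical period. File and column names are
# illustrative assumptions.
import pandas as pd

obs = pd.read_csv("noaa_chimoio_tas.csv", index_col=0, parse_dates=True)["tas"]
model = pd.read_csv("nexgddp_chimoio_tas.csv", index_col=0, parse_dates=True)["tas"]

obs, model = obs.align(model, join="inner")        # keep the common period only
print("mean bias:", (model - obs).mean())
print("correlation:", obs.corr(model))
# difference of monthly climatologies, for a seasonal view of the bias
print(model.groupby(model.index.month).mean() - obs.groupby(obs.index.month).mean())
```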
BCSD is a bias correction method (Wood 2004) that first performs quantile mapping and then downscaling. It was applied to NEX-GDDP-CMIP6 at the Gorongosa location with the scikit-downscale package, using its BcsdPrecipitation and BcsdTemperature models. Results were compared with observation data from the NOAA website in Compare NOAA station and BC NEXGDDP CMIP6 data.
- Implement the tool for other climate variables (for example, sea level rise)
- Use other sources of modeled data: CMIP6 or CORDEX
- Use datasets combining observations (satellite and in-situ) and model data as the observation data (such as CRU or CHIRPS products)
- Select better-performing models with an advanced envelope-based selection approach
- Concerning bias correction, the stationarity of the biases needs to be checked if quantile mapping is applied (Cannon et al. 2015, Nahar 2017). However, [quantile delta mapping](https://journals.ametsoc.org/view/journals/clim/28/17/jcli-d-14-00754.1.xml) or scaled delta mapping looks more suited to bias correcting precipitation data. Also for precipitation data, a parametric distribution should be used instead of a non-parametric one (Bum Kim et al. 2014, [Heo et al. 2019](https://www.mdpi.com/2073-4441/11/7/1475), Cannon et al. 2015)
- Other indicators could be implemented. The most relevant ones could be those looking into consecutive days of hot temperatures, leading to heatwaves (Zacharias et al. 2015)
- The indicator '100-year event' could look for a distribution that better fits the dataset, instead of directly taking a right-skewed Gumbel distribution, which is often used for precipitation data (Moccia et al. 2020); a minimal Gumbel-fit sketch is given after this list
- In the exposure step, adapt the thresholds used to categorize the level of exposure
- Calculate the yearly probability of an event together with the likelihood
- Looking into the risk of two hazards occurring at the same moment, or into the risk of consecutive events (Marleen 2020), could also be a next step
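For reference, the current '100-year event' approach mentioned above can be sketched as a Gumbel fit on annual maxima; the data values below are purely illustrative, and checking the fit against other distributions would be the suggested improvement.

```python
# Sketch of estimating a 100-year return level by fitting a right-skewed Gumbel
# distribution to annual maxima. The annual_max values are purely illustrative.
import numpy as np
from scipy import stats

annual_max = np.array([62.0, 75.5, 80.1, 55.3, 90.7, 120.4, 68.9, 84.2, 101.3, 73.0])

loc, scale = stats.gumbel_r.fit(annual_max)                 # fit the Gumbel distribution
return_level_100y = stats.gumbel_r.ppf(1 - 1 / 100, loc=loc, scale=scale)
print(f"100-year event estimate: {return_level_100y:.1f}")
```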
This paragraph lists all the packages that were installed during the project. They are cited in the chronological order in which they were installed.
- Environment initially created as follows: conda create -n geodata -c conda-forge python=3.10.6 geopandas=0.12.1 pandas=1.5. pysheds=0.3.3 rasterstats=0.17.0 rasterio=1.3.3 numpy=1.23.4 seaborn matplotlib netcdf4=1.6.1
- pip install jupyter notebook: to use this environment in jupyter notebook
- pip install numpy matplotlib: to use those two packages in the environment, for many different purposes (numbers and plots)
- pip3 install -U matplotlib: upgrade of matplotlib to have access to more matplotlib colors
- python -m pip install rioxarray: to read and write geospatial raster data with xarray
- conda install cdsapi: to download copernicus data
- conda install basemap, then pip install basemap-data: to map nc files (more info on stackoverflow 1 and stackoverflow 2)
- pip install bias-correction: to perform bias correction, info on the module here
- pip install python-cmethods: other package to perform Bias correction (python-cmethods Github here)
- conda install --channel conda-forge pysal: to map projects on a map (Pysal library and Installation Pysal)
- conda install -c conda-forge r-nasaaccess: to have access to climate and earth observation data, but not used in the end (Github scikit downscale)
- Attempt to install scikit-learn and scikit-downscale: all the following dependencies were installed, but the scikit-downscale package was still not working: scipy, scikit-learn, dask, docopt, zarr, ipython, sphinx, numpydoc, sphinx_rtd_theme, pangeo-notebook, mpl-probscale, pydap, gcsfs, pwlf, sphinx-gallery, mlinsights. In the end, cloning the scikit-downscale repository from the website of the package worked
- pip install h5netcdf: to manage those types of files (GitHub h5netcdf)
- pip install geopy: to calculate the distance between two geographical points (stackoverflow explanation of geopy and python library geopy)
- pip install Nio: to increase the speed of reading NetCDF files (stackoverflow explanation)
Some errors occurred during the project. Here are the commands performed to deal with those errors.
- Data attribute error: Upgrade pandas to deal with it (stackoverflow explanations)
- Problem ValueError: did not find a match in any of xarray's currently installed IO backends. To resolve it, run: python -m pip install xarray, then python -m pip install "xarray[io]", then python -m pip install git+https://github.com/pydata/xarray.git
- Error RuntimeWarning: Engine 'cfgrib' loading failed. First tried conda install -c conda-forge python-eccodes, which did not work (apparently conflicting packages). Then tried pip install ecmwflibs, with import ecmwflibs and import eccodes in the script: this worked