-%
-% .. only:: html
-%
-% .. figure:: /auto_examples/images/thumb/sphx_glr_plot_dem_subtraction_thumb.png
-% :alt: DEM subtraction
-%
-% :ref:`sphx_glr_auto_examples_plot_dem_subtraction.py`
diff --git a/doc/source/spatialstats.md b/doc/source/spatialstats.md
deleted file mode 100644
index 497d4c39..00000000
--- a/doc/source/spatialstats.md
+++ /dev/null
@@ -1,379 +0,0 @@
----
-file_format: mystnb
-jupytext:
- formats: md:myst
- text_representation:
- extension: .md
- format_name: myst
-kernelspec:
- display_name: xdem-env
- language: python
- name: xdem
----
-(spatialstats)=
-
-# Spatial statistics
-
-Spatial statistics, also referred to as [geostatistics](https://en.wikipedia.org/wiki/Geostatistics), are essential
-for the analysis of observations distributed in space.
-To analyze DEMs, xDEM integrates spatial statistics tools specific to DEMs described in recent literature,
-in particular in [Hugonnet et al. (2022)](https://doi.org/10.1109/jstars.2022.3188922) and
-[Rolstad et al. (2009)](https://doi.org/10.3189/002214309789470950). The implementation of these methods relies
-partially on the package [scikit-gstat](https://mmaelicke.github.io/scikit-gstat/index.html).
-
-The spatial statistics tools can be used to assess the precision of DEMs (see the definition of precision in {ref}`intro`).
-In particular, these tools help to:
-
-> - account for elevation heteroscedasticity (e.g., varying precision with terrain slope),
-> - quantify the spatial correlation of errors in DEMs (e.g., native spatial resolution, instrument noise),
-> - estimate robust errors for observations analyzed in space (e.g., average or sum of elevation, or of elevation changes),
-> - propagate errors between spatial ensembles at different scales (e.g., sum of glacier volume changes).
-
-(spatialstats-intro)=
-
-## Spatial statistics for DEM precision estimation
-
-### Assumptions for statistical inference in spatial statistics
-
-Spatial statistics are valid if the variable of interest verifies [the assumption of second-order stationarity](https://www.aspexit.com/en/fundamental-assumptions-of-the-variogram-second-order-stationarity-intrinsic-stationarity-what-is-this-all-about/).
-That is, if the three following assumptions are verified:
-
-> 1. The mean of the variable of interest is stationary in space, i.e. constant over sufficiently large areas,
-> 2. The variance of the variable of interest is stationary in space, i.e. constant over sufficiently large areas.
-> 3. The covariance between two observations only depends on the spatial distance between them, i.e. no other factor than this distance plays a role in the spatial correlation of measurement errors.
-
-```{eval-rst}
-.. plot:: code/spatialstats_stationarity_assumption.py
- :width: 90%
-```
-
-In other words, for a reliable analysis, the DEM should:
-
-> 1. Not contain systematic biases that do not average out over sufficiently large distances (e.g., shifts, tilts), but can contain pseudo-periodic biases (e.g., along-track undulations),
-> 2. Not contain measurement errors that vary significantly across space.
-> 3. Not contain factors that affect the spatial distribution of measurement errors, except for the distance between observations.
-
-### Quantifying the precision of a single DEM, or of a difference of DEMs
-
-To statistically infer the precision of a DEM, it is compared against independent elevation observations.
-
-Significant measurement errors can originate from both sets of elevation observations, and the analysis of differences will represent the mixed precision of the two.
-As there is no reason for a dependency between the elevation data sets, the analysis of elevation differences yields:
-
-$$
-\sigma_{dh} = \sigma_{h_{\textrm{precision1}} - h_{\textrm{precision2}}} = \sqrt{\sigma_{h_{\textrm{precision1}}}^{2} + \sigma_{h_{\textrm{precision2}}}^{2}}
-$$
-
-If the other elevation data is known to be of higher precision, one can assume that the analysis of differences will represent only the precision of the lower-precision DEM.
-
-$$
-\sigma_{dh} = \sigma_{h_{\textrm{higher precision}} - h_{\textrm{lower precision}}} \approx \sigma_{h_{\textrm{lower precision}}}
-$$
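-
-As a minimal numerical illustration of this quadratic error combination (arbitrary toy values, not tied to any dataset):
-
-```python
-import numpy as np
-
-# Toy pixel-wise errors (1-sigma, in meters) of two elevation datasets
-sigma_h1 = 2.0
-sigma_h2 = 1.5
-
-# Independent errors combine in quadrature
-sigma_dh = np.sqrt(sigma_h1**2 + sigma_h2**2)  # 2.5 m
-```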
-
-### Using stable terrain as a proxy
-
-Stable terrain is the terrain that has supposedly not been subject to any elevation change. It often refers to bare-rock,
-and is generally computed by simply excluding glaciers, snow and forests.
-
-Due to the sparsity of synchronous acquisitions, elevation data cannot be easily compared for simultaneous acquisition
-times. Thus, stable terrain is used as a proxy to assess the precision of a DEM over all its terrain,
-including moving terrain that is generally of greater interest for analysis.
-
-As shown in [Hugonnet et al. (2022)](https://doi.org/10.1109/jstars.2022.3188922), accounting for {ref}`spatialstats-heterosc` is needed to reliably
-use stable terrain as a proxy for other types of terrain.
-
-(spatialstats-metrics)=
-
-## Metrics for DEM precision
-
-Historically, the precision of DEMs has been reported as a single value indicating the random error at the scale of a
-single pixel, for example $\pm 2$ meters at the 1$\sigma$ [confidence level](https://en.wikipedia.org/wiki/Confidence_interval).
-
-However, there are some limitations to this simple metric:
-
-> - the variability of the pixel-wise precision is not reported. The pixel-wise precision can vary depending on terrain- or instrument-related factors, such as the terrain slope. In rare cases, part of this variability has been accounted for in recent DEM products, such as the TanDEM-X global DEM that partitions the precision between flat and steep slopes ([Rizzoli et al. (2017)](https://doi.org/10.1016/j.isprsjprs.2017.08.008)),
-> - the area-wise precision of a DEM is generally not reported. Depending on the inherent resolution of the DEM, and patterns of noise that might plague the observations, the precision of a DEM over a surface area can vary significantly.
-
-### Pixel-wise elevation measurement error
-
-The pixel-wise measurement error corresponds directly to the dispersion $\sigma_{dh}$ of the sample $dh$.
-
-To estimate the pixel-wise measurement error for elevation data, two issues arise:
-
-> 1. The dispersion $\sigma_{dh}$ cannot be estimated directly on changing terrain,
-> 2. The dispersion $\sigma_{dh}$ can show important non-stationarities.
-
-The section {ref}`spatialstats-heterosc` describes how to quantify the measurement error as a function of
-several explanatory variables by using stable terrain as a proxy.
-
-### Spatially-integrated elevation measurement error
-
-The [standard error](https://en.wikipedia.org/wiki/Standard_error) of a statistic is the dispersion of the
-distribution of this statistic. For spatially distributed samples, the standard error of the mean corresponds to the
-error of a mean (or sum) of samples in space.
-
-The standard error $\sigma_{\overline{dh}}$ of the mean $\overline{dh}$ of the elevation changes
-samples $dh$ can be written as:
-
-$$
-\sigma_{\overline{dh}} = \frac{\sigma_{dh}}{\sqrt{N}},
-$$
-
-where $\sigma_{dh}$ is the dispersion of the samples, and $N$ is the number of **independent** observations.
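-
-As a minimal illustration with toy numbers (deliberately ignoring, for now, the two issues described next):
-
-```python
-import numpy as np
-
-sigma_dh = 3.0       # toy pixel-wise dispersion (m)
-n_independent = 1e4  # toy number of truly independent observations
-
-sigma_dh_mean = sigma_dh / np.sqrt(n_independent)  # 0.03 m
-```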
-
-To estimate the standard error of the mean for elevation data, two issues arise:
-
-> 1. The dispersion of elevation differences $\sigma_{dh}$ is not stationary, a necessary assumption for spatial statistics.
-> 2. The number of pixels in the DEM $N$ does not equal the number of independent observations in the DEMs, because of spatial correlations.
-
-The sections {ref}`spatialstats-corr` and {ref}`spatialstats-errorpropag` describe how to account for spatial correlations
-and use those to integrate and propagate measurement errors in space.
-
-## Workflow for DEM precision estimation
-
-(spatialstats-heterosc)=
-
-### Elevation heteroscedasticity
-
-Elevation data contains significant variability in measurement errors.
-
-xDEM provides tools to **quantify** this variability using explanatory variables, **model** those numerically to
-estimate a function predicting elevation error, and **standardize** data for further analysis.
-
-#### Quantify and model heteroscedasticity
-
-Elevation [heteroscedasticity](https://en.wikipedia.org/wiki/Heteroscedasticity) corresponds to a variability in
-the precision of elevation observations that is linked to terrain or instrument variables.
-
-$$
-\sigma_{dh} = \sigma_{dh}(\textrm{var}_{1},\textrm{var}_{2}, \textrm{...}) \neq \textrm{constant}
-$$
-
-Owing to the large number of samples of elevation data, we can easily estimate this variability by [binning](https://en.wikipedia.org/wiki/Data_binning) the data and estimating the statistical dispersion (see
-{ref}`robuststats-meanstd`) across several explanatory variables using {func}`xdem.spatialstats.nd_binning`.
-
-
-```{code-cell} ipython3
-:tags: [hide-input, hide-output]
-import geoutils as gu
-import numpy as np
-
-import xdem
-
-# Load data
-dh = gu.Raster(xdem.examples.get_path("longyearbyen_ddem"))
-ref_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-glacier_mask = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-mask = glacier_mask.create_mask(dh)
-
-slope = xdem.terrain.get_terrain_attribute(ref_dem, attribute=["slope"])
-
-# Keep only stable terrain data
-dh.load()
-dh.set_mask(mask)
-dh_arr = gu.raster.get_array_and_mask(dh)[0]
-slope_arr = gu.raster.get_array_and_mask(slope)[0]
-
-# Subsample to run the code snippet faster
-indices = gu.raster.subsample_array(dh_arr, subsample=10000, return_indices=True, random_state=42)
-dh_arr = dh_arr[indices]
-slope_arr = slope_arr[indices]
-```
-
-```{code-cell} ipython3
-# Estimate the measurement error by bin of slope, using the NMAD as robust estimator
-df_ns = xdem.spatialstats.nd_binning(
- dh_arr, list_var=[slope_arr], list_var_names=["slope"], statistics=["count", xdem.spatialstats.nmad]
-)
-```
-
-```{eval-rst}
-.. plot:: code/spatialstats_heterosc_slope.py
- :width: 90%
-```
-
-The most common explanatory variables are:
-
-> - the terrain slope and terrain curvature (see {ref}`terrain-attributes`) that can explain a large part of the terrain-related variability in measurement error,
-> - the quality of stereo-correlation that can explain a large part of the measurement error of DEMs generated by stereophotogrammetry,
-> - the interferometric coherence that can explain a large part of the measurement error of DEMs generated by [InSAR](https://en.wikipedia.org/wiki/Interferometric_synthetic-aperture_radar).
-
-Once quantified, elevation heteroscedasticity can be modelled numerically by linear interpolation across several
-variables using {func}`xdem.spatialstats.interp_nd_binning`.
-
-```{code-cell} ipython3
-# Derive a numerical function of the measurement error
-err_dh = xdem.spatialstats.interp_nd_binning(df_ns, list_var_names=["slope"])
-```
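-
-For instance, the interpolant can then be evaluated at a few slope values (using the same one-argument call convention as in the standardization step below):
-
-```{code-cell} ipython3
-# Predicted elevation error (1-sigma) for slopes of 5, 20 and 40 degrees
-err_dh(np.array([5.0, 20.0, 40.0]))
-```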
-
-#### Standardize elevation differences for further analysis
-
-In order to verify the assumptions of spatial statistics and be able to use stable terrain as a reliable proxy in
-further analysis (see {ref}`spatialstats-intro`), [standardization](https://en.wikipedia.org/wiki/Standard_score)
-of the elevation differences is required to reach a stationary variance.
-
-```{eval-rst}
-.. plot:: code/spatialstats_standardizing.py
- :width: 90%
-```
-
-For application to DEM precision estimation, the mean is already centered on zero and the variance is non-stationary,
-which yields:
-
-$$
-z_{dh} = \frac{dh(\textrm{var}_{1}, \textrm{var}_{2}, \textrm{...})}{\sigma_{dh}(\textrm{var}_{1}, \textrm{var}_{2}, \textrm{...})}
-$$
-
-where $z_{dh}$ is the standardized elevation difference sample.
-
-Code-wise, standardization is as simple as a division of the elevation differences `dh` using the estimated measurement
-error:
-
-```{code-cell} ipython3
-# Standardize the data
-z_dh = dh_arr / err_dh(slope_arr)
-```
-
-To later de-standardize estimations of the dispersion of a given subsample of elevation differences,
-possibly after further analysis of {ref}`spatialstats-corr` and {ref}`spatialstats-errorpropag`,
-one simply needs to apply the opposite operation.
-
-For a single pixel $\textrm{P}$, the dispersion is directly the elevation measurement error evaluated for the
-explanatory variable of this pixel as, per construction, $\sigma_{z_{dh}} = 1$:
-
-$$
-\sigma_{dh}(\textrm{P}) = 1 \cdot \sigma_{dh}(\textrm{var}_{1}(\textrm{P}), \textrm{var}_{2}(\textrm{P}), \textrm{...})
-$$
-
-For a mean of pixels $\overline{dh}\vert_{\mathbb{S}}$ in the subsample $\mathbb{S}$, the standard error of the mean
-of the standardized data $\overline{\sigma_{z_{dh}}}\vert_{\mathbb{S}}$ can be de-standardized by multiplying by the
-average measurement error of the pixels in the subsample, evaluated through the explanatory variables of each pixel:
-
-$$
-\sigma_{\overline{dh}}\vert_{\mathbb{S}} = \sigma_{\overline{z_{dh}}}\vert_{\mathbb{S}} \cdot \overline{\sigma_{dh}(\textrm{var}_{1}, \textrm{var}_{2}, \textrm{...})}\vert_{\mathbb{S}}
-$$
-
-Estimating the standard error of the mean of the standardized data $\sigma_{\overline{z_{dh}}}\vert_{\mathbb{S}}$
-requires an analysis of spatial correlation and a spatial integration of this correlation, described in the next sections.
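-
-As a minimal sketch of this de-standardization, where `sigma_zdh_mean` is a placeholder value standing in for the standardized standard error obtained from the spatial analysis of the next sections:
-
-```python
-import numpy as np
-
-# Placeholder standardized standard error of the mean over a subsample S
-# (in practice, derived from the spatial correlation analysis described below)
-sigma_zdh_mean = 0.05
-
-# De-standardize by the mean predicted pixel-wise error over the subsample
-sigma_dh_mean = sigma_zdh_mean * np.nanmean(err_dh(slope_arr))
-```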
-
-```{eval-rst}
-.. minigallery:: xdem.spatialstats.infer_heteroscedasticity_from_stable xdem.spatialstats.nd_binning
- :add-heading: Examples that deal with elevation heteroscedasticity
- :heading-level: "
-```
-
-(spatialstats-corr)=
-
-### Spatial correlation of elevation measurement errors
-
-Spatial correlation of elevation measurement errors corresponds to a dependency between measurement errors of spatially
-close pixels in elevation data. Those can be related to the resolution of the data (short-range correlation), or to
-instrument noise and deformations (mid- to long-range correlations).
-
-xDEM provides tools to **quantify** these spatial correlations with pairwise sampling optimized for grid data and to
-**model** correlations simultaneously at multiple ranges.
-
-#### Quantify spatial correlations
-
-[Variograms](https://en.wikipedia.org/wiki/Variogram) are functions that describe the spatial correlation of a sample.
-The variogram $2\gamma(l)$ is a function of the distance between two points, referred to as the spatial lag $l$
-(usually noted $h$, which is avoided here to prevent confusion with the elevation and elevation differences).
-The output of a variogram is the correlated variance of the sample.
-
-$$
-2\gamma(l) = \textrm{var}\left(Z(\textrm{s}_{1}) - Z(\textrm{s}_{2})\right)
-$$
-
-where $Z(\textrm{s}_{i})$ is the value taken by the sample at location $\textrm{s}_{i}$, and sample positions
-$\textrm{s}_{1}$ and $\textrm{s}_{2}$ are separated by a distance $l$.
-
-For elevation differences $dh$, this translates into:
-
-$$
-2\gamma_{dh}(l) = \textrm{var}\left(dh(\textrm{s}_{1}) - dh(\textrm{s}_{2})\right)
-$$
-
-The variogram essentially describes the spatial covariance $C$ in relation to the variance of the entire sample
-$\sigma_{dh}^{2}$:
-
-$$
-\gamma_{dh}(l) = \sigma_{dh}^{2} - C_{dh}(l)
-$$
-
-```{eval-rst}
-.. plot:: code/spatialstats_variogram_covariance.py
- :width: 90%
-```
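-
-As a toy illustration of this definition (a synthetic 1D transect and a single lag, purely for intuition; the estimator presented below handles 2D grids and subsampling):
-
-```python
-import numpy as np
-
-rng = np.random.default_rng(42)
-dh_transect = rng.normal(0, 2, size=1000)  # toy, spatially uncorrelated differences
-
-# Empirical semivariance at a lag of one sample:
-# half the mean of squared pairwise differences at that lag
-lag = 1
-gamma_lag = 0.5 * np.mean((dh_transect[lag:] - dh_transect[:-lag]) ** 2)
-
-# For uncorrelated data, this approaches the sample variance (the sill)
-print(gamma_lag, np.var(dh_transect))
-```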
-
-Empirical variograms are variograms estimated directly by [binned](https://en.wikipedia.org/wiki/Data_binning) analysis
-of variance of the data. Historically, empirical variograms were estimated for point data by calculating all possible
-pairwise differences in the samples. This amounts to $N^2$ pairwise calculations for $N$ samples, which is
-not well-suited to grid data that contains many millions of points and would be impossible to compute. Thus, in order
-to estimate a variogram for large grid data, subsampling is necessary.
-
-Random subsampling of the grid samples used is a solution, but often unsatisfactory as it creates a clustering
-of pairwise samples that unevenly represents lag classes (most pairwise differences are found at mid distances, but too
-few at short distances and long distances).
-
-To remedy this issue, xDEM provides {func}`xdem.spatialstats.sample_empirical_variogram`, an empirical variogram estimation tool
-that encapsulates a pairwise subsampling method described in `skgstat.MetricSpace.RasterEquidistantMetricSpace`.
-This method compares pairwise distances between a center subset and equidistant subsets iteratively across a grid, based on
-[sparse matrices](https://en.wikipedia.org/wiki/Sparse_matrix) routines computing pairwise distances of two separate
-subsets, as in [scipy.cdist](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html)
-(instead of using pairwise distances within the same subset, as implemented in most spatial statistics packages).
-The resulting pairwise differences are evenly distributed across the grid and across lag classes (in 2 dimensions, this
-means that lag classes separated by a factor of $\sqrt{2}$ have an equal number of pairwise differences computed).
-
-```{code-cell} ipython3
-# Sample empirical variogram
-df_vgm = xdem.spatialstats.sample_empirical_variogram(values=dh, subsample=10, random_state=42)
-```
-
-The variogram is returned as a {class}`~pandas.DataFrame` object.
-
-With all spatial lags sampled evenly, estimating a variogram requires significantly fewer samples, increasing the
-robustness of the spatial correlation estimation and decreasing computing time!
-
-#### Model spatial correlations
-
-Once an empirical variogram is estimated, fitting a model function simplifies later analysis by directly
-providing a functional form (e.g., for kriging equations, or uncertainty analysis - see {ref}`spatialstats-errorpropag`),
-which would otherwise have to be numerically modelled.
-
-Generally, in spatial statistics, a single model is used to describe the correlation in the data.
-In elevation data, however, spatial correlations are observed at different scales, which requires fitting a sum of models at
-multiple ranges (introduced in [Rolstad et al. (2009)](https://doi.org/10.3189/002214309789470950) for glaciology
-applications).
-
-This can be performed through the function {func}`xdem.spatialstats.fit_sum_model_variogram`, which expects as input a
-`pd.DataFrame` variogram.
-
-```{code-cell} ipython3
-# Fit a sum of two models (Gaussian and Spherical) to the empirical variogram
-func_sum_vgm, params_variogram_model = xdem.spatialstats.fit_sum_model_variogram(
- list_models=["Gaussian", "Spherical"], empirical_variogram=df_vgm
-)
-```
-
-```{eval-rst}
-.. minigallery:: xdem.spatialstats.infer_spatial_correlation_from_stable xdem.spatialstats.sample_empirical_variogram
- :add-heading: Examples that deal with spatial correlations
- :heading-level: "
-```
-
-(spatialstats-errorpropag)=
-
-### Spatially integrated measurement errors
-
-After quantifying and modelling spatial correlations, those can be used to derive an effective sample size and, from it, a spatially integrated elevation measurement error:
-
-```{code-cell} ipython3
-# Calculate the area-averaged uncertainty with these models
-neff = xdem.spatialstats.number_effective_samples(area=1000, params_variogram_model=params_variogram_model)
-```
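-
-Pending the full section below, a minimal sketch of how the dispersion of stable-terrain differences and the effective sample size can combine into a spatially integrated error, following the standard-error formula introduced earlier:
-
-```python
-import numpy as np
-
-# Sketch only: dispersion of the differences divided by the square root
-# of the effective number of samples over the integration area
-sig_dh = xdem.spatialstats.nmad(dh_arr)
-sig_dh_mean = sig_dh / np.sqrt(neff)
-```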
-
-TODO: Add this section based on Rolstad et al. (2009), Hugonnet et al. (in prep)
-
-### Propagation of correlated errors
-
-TODO: Add this section based on Krige's relation (Webster & Oliver, 2007), Hugonnet et al. (in prep)
diff --git a/doc/source/terrain.md b/doc/source/terrain.md
deleted file mode 100644
index 50391759..00000000
--- a/doc/source/terrain.md
+++ /dev/null
@@ -1,229 +0,0 @@
-(terrain-attributes)=
-
-# Terrain attributes
-
-For analytic and visual purposes, deriving certain attributes of a DEM may be required.
-Some are useful for direct analysis, such as a slope map to differentiate features of different angles, while others, like the hillshade, are great tools for visualizing a DEM.
-
-## Slope
-
-{func}`xdem.terrain.slope`
-
-The slope of a DEM describes the tilt, or gradient, of each pixel in relation to its neighbours.
-It is most often described in degrees, where a flat surface is 0° and a vertical cliff is 90°.
-No tilt direction is stored in the slope map; a 45° tilt westward is identical to a 45° tilt eastward.
-
-The slope can be computed either by the method of [Horn (1981)](http://dx.doi.org/10.1109/PROC.1981.11918) (default)
-based on a refined gradient formulation on a 3x3 pixel window, or by the method of [Zevenbergen and Thorne (1987)](http://dx.doi.org/10.1002/esp.3290120107) based on a plane fit on a 3x3 pixel window.
-
-The differences between methods are illustrated in the {ref}`sphx_glr_basic_examples_plot_terrain_attributes.py`
-example.
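-
-For instance, the gradient method can be selected through the `method` argument (a minimal sketch; the keyword value for the second method is an assumption to check against the function documentation):
-
-```python
-import xdem
-
-dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-
-# Slope with the default Horn (1981) method
-slope_horn = xdem.terrain.slope(dem)
-# Slope with the Zevenbergen and Thorne (1987) method (keyword value assumed)
-slope_zt = xdem.terrain.slope(dem, method="ZevenbergThorne")
-```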
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_001.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.slope
-```
-
-## Aspect
-
-{func}`xdem.terrain.aspect`
-
-The aspect describes the orientation of the steepest slope, i.e. the direction the terrain faces.
-It is often reported in degrees, where a slope tilting straight north corresponds to an aspect of 0°, and an eastern
-aspect is 90°, south is 180° and west is 270°. By default, a flat slope is given an arbitrary aspect of 180°.
-
-As the aspect is directly based on the slope, it varies between the method of [Horn (1981)](http://dx.doi.org/10.1109/PROC.1981.11918) (default) and that of [Zevenbergen and Thorne (1987)](http://dx.doi.org/10.1002/esp.3290120107).
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_002.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.aspect
- :add-heading:
-```
-
-## Hillshade
-
-{func}`xdem.terrain.hillshade`
-
-The hillshade is a slope map, shaded by the aspect of the slope.
-The slope map is a good tool to visualize terrain, but it does not distinguish between a mountain and a valley.
-It may therefore be slightly difficult to interpret in mountainous terrain.
-Hillshades are therefore often preferable for visualizing DEMs.
-With a westerly azimuth (a simulated sun coming from the west), all eastern slopes are slightly darker.
-This mode of shading the slopes often generates a map that is much more easily interpreted than the slope map.
-
-As the hillshade is directly based on the slope and aspect, it varies between the method of [Horn (1981)](http://dx.doi.org/10.1109/PROC.1981.11918) (default) and that of [Zevenbergen and Thorne (1987)](http://dx.doi.org/10.1002/esp.3290120107).
-
-Note, however, that the hillshade is not a shadow map; no occlusion is taken into account so it does not represent "true" shading.
-It therefore has little analytic purpose, but it still constitutes a great visualization tool.
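-
-A short sketch of changing the simulated sun position (the `azimuth` and `altitude` parameter names are assumptions here; see the function documentation):
-
-```python
-import xdem
-
-dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-
-# Hillshade with a westerly sun azimuth (270°) and a 45° sun altitude
-hs_west = xdem.terrain.hillshade(dem, azimuth=270.0, altitude=45.0)
-```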
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_003.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.hillshade
- :add-heading:
-```
-
-## Curvature
-
-{func}`xdem.terrain.curvature`
-
-The curvature map is the second derivative of elevation, which highlights the convexity or concavity of the terrain.
-If a surface is convex (like a mountain peak), it will have positive curvature.
-If a surface is concave (like a trough or a valley bottom), it will have negative curvature.
-The curvature values in units of m{sup}`-1` are quite small, so they are by convention multiplied by 100.
-
-The curvature is based on the method of [Zevenbergen and Thorne (1987)](http://dx.doi.org/10.1002/esp.3290120107).
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_004.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.curvature
- :add-heading:
-```
-
-## Planform curvature
-
-{func}`xdem.terrain.planform_curvature`
-
-The planform curvature is the curvature perpendicular to the direction of slope, reported in 100 m{sup}`-1`.
-
-It is based on the method of [Zevenbergen and Thorne (1987)](http://dx.doi.org/10.1002/esp.3290120107).
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_005.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.planform_curvature
- :add-heading:
-```
-
-## Profile curvature
-
-{func}`xdem.terrain.profile_curvature`
-
-The profile curvature is the curvature parallel to the direction of slope, reported in 100 m{sup}`-1`.
-
-It is based on the method of [Zevenbergen and Thorne (1987)](http://dx.doi.org/10.1002/esp.3290120107).
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_006.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.profile_curvature
- :add-heading:
-```
-
-## Topographic Position Index
-
-{func}`xdem.terrain.topographic_position_index`
-
-The Topographic Position Index (TPI) is a metric of slope position, based on the method of [Weiss (2001)](http://www.jennessent.com/downloads/TPI-poster-TNC_18x22.pdf), that corresponds to the difference between the elevation of a central
-pixel and the average elevation of its neighbouring pixels. Its unit is that of the DEM (typically meters) and it can be
-computed for any window size (default 3x3 pixels).
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_007.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.topographic_position_index
- :add-heading:
-```
-
-## Terrain Ruggedness Index
-
-{func}`xdem.terrain.terrain_ruggedness_index`
-
-The Terrain Ruggedness Index (TRI) is a metric of terrain ruggedness, based on cumulated differences in elevation between
-a central pixel and its surroundings. Its unit is that of the DEM (typically meters) and it can be computed for any
-window size (default 3x3 pixels).
-
-For topography (default), the method of [Riley et al. (1999)](http://download.osgeo.org/qgis/doc/reference-docs/Terrain_Ruggedness_Index.pdf) is generally used, where the TRI is computed as the square root of the sum of squared differences with
-neighbouring pixels.
-
-For bathymetry, the method of [Wilson et al. (2007)](http://dx.doi.org/10.1080/01490410701295962) is generally used,
-where the TRI is defined as the mean absolute difference with neighbouring pixels.
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_008.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.terrain_ruggedness_index
- :add-heading:
-```
-
-## Roughness
-
-{func}`xdem.terrain.roughness`
-
-The roughness is a metric of terrain ruggedness, based on the maximum difference in elevation in the surroundings.
-The roughness is based on the method of [Dartnell (2000)](http://dx.doi.org/10.14358/PERS.70.9.1081). Its unit is that of the DEM (typically meters) and it can be computed for any window size (default 3x3 pixels).
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_009.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.roughness
- :add-heading:
-```
-
-## Rugosity
-
-{func}`xdem.terrain.rugosity`
-
-The rugosity is a metric of terrain ruggedness, based on the ratio between planimetric and real surface area. The
-rugosity is based on the method of Jenness (2004).
-It is unitless, and is only supported for a 3x3 window size.
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_010.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.rugosity
- :add-heading:
-```
-
-## Fractal roughness
-
-{func}`xdem.terrain.fractal_roughness`
-
-The fractal roughness is a metric of terrain ruggedness based on the local fractal dimension estimated by the volume
-box-counting method of [Taud and Parrot (2005)](https://doi.org/10.4000/geomorphologie.622).
-The fractal roughness is computed by estimating the fractal dimension in 3D space, for a local window centered on the
-DEM pixels. Its unit is that of a dimension, and is always between 1 (dimension of a line in 3D space) and 3
-(dimension of a cube in 3D space). It can only be computed on window sizes larger than 5x5 pixels, and defaults to 13x13.
-
-```{image} basic_examples/images/sphx_glr_plot_terrain_attributes_011.png
-:width: 600
-```
-
-```{eval-rst}
-.. minigallery:: xdem.terrain.fractal_roughness
- :add-heading:
-```
-
-## Generating multiple attributes at once
-
-Often, more than one terrain attribute is needed, e.g. both the slope and the aspect.
-Since both are dependent on the gradient of the DEM, calculating them separately is computationally repetitive.
-Multiple terrain attributes can be calculated from the same gradient using the {func}`xdem.terrain.get_terrain_attribute` function.
-
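-For example, mirroring the usage elsewhere in this documentation:
-
-```python
-import xdem
-
-dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-
-# Compute several attributes from a single gradient computation
-slope, aspect, hillshade = xdem.terrain.get_terrain_attribute(
-    dem, attribute=["slope", "aspect", "hillshade"]
-)
-```
-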
-```{eval-rst}
-.. minigallery:: xdem.terrain.get_terrain_attribute
- :add-heading:
-```
diff --git a/doc/source/vertical_ref.md b/doc/source/vertical_ref.md
deleted file mode 100644
index 96f965f8..00000000
--- a/doc/source/vertical_ref.md
+++ /dev/null
@@ -1,240 +0,0 @@
----
-file_format: mystnb
-jupytext:
- formats: md:myst
- text_representation:
- extension: .md
- format_name: myst
-kernelspec:
- display_name: xdem-env
- language: python
- name: xdem
----
-(vertical-ref)=
-
-# Vertical referencing
-
-xDEM supports the use of **vertical coordinate reference systems (vertical CRSs)** and vertical transformations for DEMs
-by conveniently wrapping PROJ pipelines through [Pyproj](https://pyproj4.github.io/pyproj/stable/) in the {class}`~xdem.DEM` class.
-
-```{important}
-**A {class}`~xdem.DEM` already possesses a {class}`~xdem.DEM.crs` attribute that defines its 2- or 3D CRS**, inherited from
-{class}`~geoutils.Raster`. Unfortunately, most DEM products do not yet come with a 3D CRS in their raster metadata, and
-vertical CRSs often have to be set by the user. See {ref}`vref-setting` below.
-```
-
-## What is a vertical CRS?
-
-A vertical CRS is a **1D, often gravity-related, coordinate reference system of surface elevation** (or height), used to expand a [2D CRS](https://en.wikipedia.org/wiki/Spatial_reference_system) to a 3D CRS.
-
-There are two types of 3D CRSs, related to two types of definition of the vertical axis:
-- **Ellipsoidal heights** CRSs, that are simply 2D CRS promoted to 3D by explicitly adding an elevation axis to the ellipsoid used by the 2D CRS,
-- **Geoid heights** CRSs, that are a compound of a 2D CRS and a vertical CRS (2D + 1D), where the vertical CRS of the geoid is added relative to the ellipsoid.
-
-In xDEM, we merge these into a single vertical CRS attribute {class}`DEM.vcrs` that takes two types of values:
-- the string `"Ellipsoid"` for any ellipsoidal CRS promoted to 3D (e.g., the WGS84 ellipsoid), or
-- a {class}`pyproj.CRS` with only a vertical axis for a CRS based on geoid heights (e.g., the EGM96 geoid).
-
-In practice, a {class}`pyproj.CRS` with only a vertical axis is either a {class}`~pyproj.crs.BoundCRS` (when created from a grid) or a {class}`~pyproj.crs.VerticalCRS` (when created in any other manner).
-
-## Methods to manipulate vertical CRSs
-
-The parsing, setting and transformation of vertical CRSs revolve around **three methods**, all described in detail further below:
-- The **instantiation** of {class}`~xdem.DEM` that implicitly tries to set the vertical CRS from the metadata (or explicitly through the `vcrs` argument),
-- The **setting** method {func}`~xdem.DEM.set_vcrs` to explicitly set the vertical CRS of a {class}`~xdem.DEM`,
-- The **transformation** method {func}`~xdem.DEM.to_vcrs` to explicitly transform the vertical CRS of a {class}`~xdem.DEM`.
-
-```{caution}
-As of now, **[Rasterio](https://rasterio.readthedocs.io/en/stable/) does not support vertical transformations during CRS reprojection** (even if the CRS
-provided contains a vertical axis).
-We therefore advise performing the horizontal and vertical transformations independently using {func}`DEM.reproject` and {func}`DEM.to_vcrs`, respectively.
-```
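-
-A hedged sketch of this two-step workflow (the reprojection argument name below is an assumption; check the {func}`DEM.reproject` documentation for the exact signature):
-
-```python
-import xdem
-
-dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"), vcrs="Ellipsoid")
-
-# 1. Horizontal transformation (2D CRS reprojection); argument name assumed
-dem_utm = dem.reproject(dst_crs="EPSG:32633")
-
-# 2. Vertical transformation, performed separately
-dem_utm.to_vcrs("EGM96")
-```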
-
-(vref-setting)=
-## Automated vertical CRS detection
-
-During instantiation of a {class}`~xdem.DEM`, the vertical CRS {attr}`~xdem.DEM.vcrs` is tentatively set with the following priority order:
-
-1. **From the {attr}`~xdem.DEM.crs` of the DEM**, if 3-dimensional,
-
-```{code-cell} ipython3
-:tags: [remove-cell]
-
-import xdem
-
-# Replace this with a new DEM in xdem-data
-import numpy as np
-import pyproj
-import rasterio as rio
-dem = xdem.DEM.from_array(data=np.ones((2,2)),
- transform=rio.transform.from_bounds(0, 0, 1, 1, 2, 2),
- crs=pyproj.CRS("EPSG:4326+5773"),
- nodata=None)
-dem.save("mydem_with3dcrs.tif")
-```
-
-```{code-cell} ipython3
-import pyproj
-import xdem
-
-# Open a DEM with a 3D CRS
-dem = xdem.DEM("mydem_with3dcrs.tif")
-# First, let's look at the 3D CRS
-pyproj.CRS(dem.crs)
-```
-
-```{code-cell} ipython3
-# The vertical CRS is extracted automatically
-dem.vcrs
-```
-
-```{code-cell} ipython3
-:tags: [remove-cell]
-
-import os
-os.remove("mydem_with3dcrs.tif")
-```
-
-2. **From the {attr}`~xdem.DEM.product` name of the DEM**, if parsed from the filename by {class}`geoutils.SatelliteImage`.
-
-
-```{seealso}
-The {class}`~geoutils.SatelliteImage` parent class that parses the product metadata is described in [a dedicated section of GeoUtils' documentation](https://geoutils.readthedocs.io/en/latest/satimg_class.html).
-```
-
-```{code-cell} ipython3
-:tags: [remove-cell]
-
-# Replace this with a new DEM in xdem-data
-import rasterio as rio
-dem = xdem.DEM.from_array(data=np.ones((2,2)),
- transform=rio.transform.from_bounds(0, 0, 1, 1, 2, 2),
- crs=pyproj.CRS("EPSG:4326"),
- nodata=None)
-# Save with the name of an ArcticDEM strip file
-dem.save("SETSM_WV03_20151101_104001001327F500_104001001312DE00_seg2_2m_v3.0_dem.tif")
-```
-
-```{code-cell} ipython3
-# Open an ArcticDEM strip file, recognized as an ArcticDEM product by SatelliteImage
-dem = xdem.DEM("SETSM_WV03_20151101_104001001327F500_104001001312DE00_seg2_2m_v3.0_dem.tif")
-# The vertical CRS is set as "Ellipsoid" from knowing that it is an ArcticDEM product
-dem.vcrs
-```
-
-```{code-cell} ipython3
-:tags: [remove-cell]
-
-os.remove("SETSM_WV03_20151101_104001001327F500_104001001312DE00_seg2_2m_v3.0_dem.tif")
-```
-
-**Currently recognized DEM products**:
-
-```{list-table}
- :widths: 1 1
- :header-rows: 1
-
- * - **DEM**
- - **Vertical CRS**
- * - [ArcticDEM](https://www.pgc.umn.edu/data/arcticdem/)
- - Ellipsoid
- * - [REMA](https://www.pgc.umn.edu/data/rema/)
- - Ellipsoid
- * - [EarthDEM](https://www.pgc.umn.edu/data/earthdem/)
- - Ellipsoid
- * - [TanDEM-X global DEM](https://geoservice.dlr.de/web/dataguide/tdm90/)
- - Ellipsoid
- * - [NASADEM-HGTS](https://lpdaac.usgs.gov/documents/592/NASADEM_User_Guide_V1.pdf)
- - Ellipsoid
- * - [NASADEM-HGT](https://lpdaac.usgs.gov/documents/592/NASADEM_User_Guide_V1.pdf)
- - EGM96
- * - [ALOS World 3D](https://www.eorc.jaxa.jp/ALOS/en/aw3d30/aw3d30v11_format_e.pdf)
- - EGM96
- * - [SRTM v4.1](http://www.cgiar-csi.org/data/srtm-90m-digital-elevation-database-v4-1)
- - EGM96
- * - [ASTER GDEM v2-3](https://lpdaac.usgs.gov/documents/434/ASTGTM_User_Guide_V3.pdf)
- - EGM96
- * - [Copernicus DEM](https://spacedata.copernicus.eu/web/cscda/dataset-details?articleId=394198)
- - EGM08
-```
-
-If your DEM does not have a `.vcrs` after instantiation, none of those steps worked. You can define the vertical CRS
-explicitly during {class}`~xdem.DEM` instantiation with the `vcrs` argument or with {func}`~xdem.DEM.set_vcrs`,
-with user inputs described below.
-
-## Setting a vertical CRS with convenient user inputs
-
-The vertical CRS of a {class}`~xdem.DEM` can be set or re-set manually at any point using {func}`~xdem.DEM.set_vcrs`.
-
-The `vcrs` argument, consistent across the three functions {class}`~xdem.DEM`, {func}`~xdem.DEM.set_vcrs` and {func}`~xdem.DEM.to_vcrs`,
-accepts a **wide variety of user inputs**:
-
-- **Simple strings for the three most common references: `"Ellipsoid"`, `"EGM08"` or `"EGM96"`**,
-
-```{code-cell} ipython3
-# Set a geoid vertical CRS based on a string
-dem.set_vcrs("EGM96")
-dem.vcrs
-```
-
-```{code-cell} ipython3
-# Set a vertical CRS extended from the ellipsoid of the DEM's CRS
-dem.set_vcrs("Ellipsoid")
-dem.vcrs
-```
-
-- **Any PROJ grid name available at [https://cdn.proj.org/](https://cdn.proj.org/)**,
-
-```{tip}
-**No need to download the grid!** This is done automatically during the setting operation, if the grid does not already exist locally.
-```
-
-```{code-cell} ipython3
-# Set a geoid vertical CRS based on a grid
-dem.set_vcrs("us_noaa_geoid06_ak.tif")
-dem.vcrs
-```
-
-- **Any EPSG code as {class}`int`**,
-
-```{code-cell} ipython3
-# Set a geoid vertical CRS based on an EPSG code
-dem.set_vcrs(5773)
-dem.vcrs
-```
-
-- **Any {class}`~pyproj.crs.CRS` that possesses a vertical axis**.
-
-```{code-cell} ipython3
-# Set a vertical CRS based on a pyproj.CRS
-import pyproj
-dem.set_vcrs(pyproj.CRS("EPSG:3855"))
-dem.vcrs
-```
-
-## Transforming to another vertical CRS
-
-To transform a {class}`~xdem.DEM` to a different vertical CRS, {func}`~xdem.DEM.to_vcrs` is used.
-
-```{note}
-If your transformation requires a grid that is not available locally, it will be **downloaded automatically**.
-xDEM uses only the best available (i.e. best accuracy) transformation returned by {class}`pyproj.transformer.TransformerGroup`, considering the area-of-interest as the DEM extent {class}`~xdem.DEM.bounds`.
-```
-
-```{code-cell} ipython3
-# Open a DEM and set its CRS
-filename_dem = xdem.examples.get_path("longyearbyen_ref_dem")
-dem = xdem.DEM(filename_dem, vcrs="Ellipsoid")
-dem.to_vcrs("EGM96")
-dem.vcrs
-```
-
-The operation updates the DEM array **in-place**, shifting each pixel by the transformation at its coordinates:
-
-```{code-cell} ipython3
-import numpy as np
-
-# Mean difference after transformation (about 30 m in Svalbard)
-dem_orig = xdem.DEM(filename_dem)
-diff = dem_orig - dem
-np.nanmean(diff)
-```
diff --git a/examples/advanced/README.rst b/examples/advanced/README.rst
deleted file mode 100644
index 16e8d5a9..00000000
--- a/examples/advanced/README.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-Advanced
-========
diff --git a/examples/advanced/plot_blockwise_coreg.py b/examples/advanced/plot_blockwise_coreg.py
deleted file mode 100644
index 47dfc65c..00000000
--- a/examples/advanced/plot_blockwise_coreg.py
+++ /dev/null
@@ -1,102 +0,0 @@
-"""
-Blockwise coregistration
-========================
-
-Often, biases are spatially variable, and a "global" shift may not be enough to coregister a DEM properly.
-In the :ref:`sphx_glr_basic_examples_plot_nuth_kaab.py` example, we saw that the method improved the alignment significantly, but there were still possibly nonlinear artefacts in the result.
-Clearly, nonlinear coregistration approaches are needed.
-One solution is :class:`xdem.coreg.BlockwiseCoreg`, a helper to run any ``Coreg`` class over an arbitrarily small grid, and then "puppet warp" the DEM to fit the reference best.
-
-The ``BlockwiseCoreg`` class runs in five steps:
-
-1. Generate a subdivision grid to divide the DEM in N blocks.
-2. Run the requested coregistration approach in each block.
-3. Extract each result as a source and destination X/Y/Z point.
-4. Interpolate the X/Y/Z point-shifts into three shift-rasters.
-5. Warp the DEM to apply the X/Y/Z shifts.
-
-"""
-import geoutils as gu
-
-# sphinx_gallery_thumbnail_number = 2
-import matplotlib.pyplot as plt
-import numpy as np
-
-import xdem
-
-# %%
-# **Example files**
-
-reference_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-dem_to_be_aligned = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-
-# Create a stable ground mask (not glacierized) to mark "inlier data"
-inlier_mask = ~glacier_outlines.create_mask(reference_dem)
-
-plt_extent = [
- reference_dem.bounds.left,
- reference_dem.bounds.right,
- reference_dem.bounds.bottom,
- reference_dem.bounds.top,
-]
-
-# %%
-# The DEM to be aligned (a 1990 photogrammetry-derived DEM) has some vertical and horizontal biases that we want to avoid, as well as possible nonlinear distortions.
-# The product is a mosaic of multiple DEMs, so "seams" may exist in the data.
-# These can be visualized by plotting a change map:
-
-diff_before = reference_dem - dem_to_be_aligned
-
-diff_before.show(cmap="coolwarm_r", vmin=-10, vmax=10)
-plt.show()
-
-# %%
-# Horizontal and vertical shifts can be estimated using :class:`xdem.coreg.NuthKaab`.
-# Let's prepare a coregistration class that calculates 64 offsets, evenly spread over the DEM.
-
-blockwise = xdem.coreg.BlockwiseCoreg(xdem.coreg.NuthKaab(), subdivision=64)
-
-
-# %%
-# The grid that will be used can be visualized with a helper function.
-# Coregistration will be performed in each block separately.
-
-plt.title("Subdivision grid")
-plt.imshow(blockwise.subdivide_array(dem_to_be_aligned.shape), cmap="gist_ncar")
-plt.show()
-
-# %%
-# Coregistration is performed with the ``.fit()`` method.
-# This runs in multiple threads by default, so more CPU cores are preferable here.
-
-blockwise.fit(reference_dem, dem_to_be_aligned, inlier_mask=inlier_mask)
-
-aligned_dem = blockwise.apply(dem_to_be_aligned)
-
-# %%
-# The estimated shifts can be visualized by applying the coregistration to a completely flat surface.
-# This shows the estimated shifts that would be applied in elevation; additional horizontal shifts will also be applied if the method supports it.
-# The :func:`xdem.coreg.BlockwiseCoreg.stats` method can be used to annotate each block with its associated Z shift.
-
-z_correction = blockwise.apply(
- np.zeros_like(dem_to_be_aligned.data), transform=dem_to_be_aligned.transform, crs=dem_to_be_aligned.crs
-)[0]
-plt.title("Vertical correction")
-plt.imshow(z_correction, cmap="coolwarm_r", vmin=-10, vmax=10, extent=plt_extent)
-for _, row in blockwise.stats().iterrows():
- plt.annotate(round(row["z_off"], 1), (row["center_x"], row["center_y"]), ha="center")
-
-# %%
-# Then, the new difference can be plotted to validate that it improved.
-
-diff_after = reference_dem - aligned_dem
-
-diff_after.show(cmap="coolwarm_r", vmin=-10, vmax=10)
-plt.show()
-
-# %%
-# We can compare the NMAD to validate numerically that there was an improvement:
-
-print(f"Error before: {xdem.spatialstats.nmad(diff_before):.2f} m")
-print(f"Error after: {xdem.spatialstats.nmad(diff_after):.2f} m")
diff --git a/examples/advanced/plot_demcollection.py b/examples/advanced/plot_demcollection.py
deleted file mode 100644
index 5d57ca53..00000000
--- a/examples/advanced/plot_demcollection.py
+++ /dev/null
@@ -1,105 +0,0 @@
-"""
-Working with a collection of DEMs
-=================================
-
-Oftentimes, more than two timestamps (DEMs) are analyzed simultaneously.
-One single dDEM only captures one interval, so multiple dDEMs have to be created.
-In addition, if multiple masking polygons exist (e.g. glacier outlines from multiple years), these should be accounted for properly.
-The :class:`xdem.DEMCollection` is a tool to properly work with multiple timestamps at the same time, and makes calculations of elevation/volume change over multiple years easy.
-"""
-
-from datetime import datetime
-
-import geoutils as gu
-import matplotlib.pyplot as plt
-
-import xdem
-
-# %%
-# **Example data**.
-#
-# We can load the DEMs as usual, but with the addition that the ``datetime`` argument should be filled.
-# Since multiple DEMs are in question, the "time dimension" is what keeps them apart.
-
-dem_2009 = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"), datetime=datetime(2009, 8, 1))
-dem_1990 = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem"), datetime=datetime(1990, 8, 1))
-
-
-# %%
-# For glacier research (and many other fields), only a subset of the DEMs is usually of interest.
-# These parts can be delineated with masks or polygons.
-# Here, we have glacier outlines from 1990 and 2009.
-outlines = {
- datetime(1990, 8, 1): gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines")),
- datetime(2009, 8, 1): gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines_2010")),
-}
-
-# %%
-# To experiment with a longer time-series, we can also fake a 2060 DEM, by simply exaggerating the 1990-2009 change.
-
-# Fake a 2060 DEM by assuming three times the 1990-2009 change between 2009 and 2060
-dem_2060 = dem_2009 + (dem_2009 - dem_1990).data * 3
-dem_2060.datetime = datetime(2060, 8, 1)
-
-
-# %%
-# Now, all data are ready to be collected in an :class:`xdem.DEMCollection` object.
-# What we have are:
-# 1. Three DEMs from 1990, 2009, and 2060 (the last is artificial)
-# 2. Two glacier outline timestamps from 1990 and 2009
-#
-
-demcollection = xdem.DEMCollection(dems=[dem_1990, dem_2009, dem_2060], outlines=outlines, reference_dem=1)
-
-
-# %%
-# We can generate :class:`xdem.dDEM` objects using :func:`xdem.DEMCollection.subtract_dems`.
-# In this case, it will generate three dDEMs:
-#
-# * 1990-2009
-# * 2009-2009 (to maintain the ``dems`` and ``ddems`` list length and order)
-# * 2060-2009 (note the inverted order; negative change will be positive)
-
-_ = demcollection.subtract_dems()
-
-# %%
-# These are saved internally, but are also returned as a list.
-#
-# An elevation or volume change series can automatically be generated from the ``DEMCollection``.
-# In this case, we should specify *which* glacier we want the change for, as a regional value may not always be required.
-# We can look at the glacier called "Scott Turnerbreen", specified in the "NAME" column of the outline data.
-# See the documentation of the ``outlines_filter`` argument for the filtering syntax.
-
-demcollection.get_cumulative_series(kind="dh", outlines_filter="NAME == 'Scott Turnerbreen'")
-
-# %%
-# And there we have a cumulative dH series of the glacier Scott Turnerbreen on Svalbard!
-# The dDEMs can be visualized to give further context.
-
-extent = [
- demcollection.dems[0].bounds.left,
- demcollection.dems[0].bounds.right,
- demcollection.dems[0].bounds.bottom,
- demcollection.dems[0].bounds.top,
-]
-
-scott_extent = [518600, 523800, 8666600, 8672300]
-
-plt.figure(figsize=(8, 5))
-
-for i in range(2):
- plt.subplot(1, 2, i + 1)
-
- if i == 0:
- title = "1990 - 2009"
- ddem_2060 = demcollection.ddems[0].data.squeeze()
- else:
- title = "2009 - 2060"
- # The 2009 - 2060 DEM is inverted since the reference year is 2009
- ddem_2060 = -demcollection.ddems[2].data.squeeze()
-
- plt.imshow(ddem_2060, cmap="coolwarm_r", vmin=-50, vmax=50, extent=extent)
- plt.xlim(scott_extent[:2])
- plt.ylim(scott_extent[2:])
-
-plt.show()
diff --git a/examples/advanced/plot_deramp.py b/examples/advanced/plot_deramp.py
deleted file mode 100644
index 218c737b..00000000
--- a/examples/advanced/plot_deramp.py
+++ /dev/null
@@ -1,56 +0,0 @@
-"""
-Bias correction with deramping
-==============================
-
-(On latest only) Update will follow soon with more consistent bias correction examples.
-In ``xdem``, this approach is implemented through the :class:`xdem.biascorr.Deramp` class.
-
-For more information about the approach, see :ref:`biascorr-deramp`.
-"""
-import geoutils as gu
-import numpy as np
-
-import xdem
-
-# %%
-# **Example files**
-reference_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-dem_to_be_aligned = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-
-# Create a stable ground mask (not glacierized) to mark "inlier data"
-inlier_mask = ~glacier_outlines.create_mask(reference_dem)
-
-# %%
-# The DEM to be aligned (a 1990 photogrammetry-derived DEM) has some vertical and horizontal biases that we want to avoid.
-# These can be visualized by plotting a change map:
-
-diff_before = reference_dem - dem_to_be_aligned
-diff_before.show(cmap="coolwarm_r", vmin=-10, vmax=10, cbar_title="Elevation change (m)")
-
-
-# %%
-# A 2-D 2nd order polynomial is estimated, and applied to the data:
-
-deramp = xdem.coreg.Deramp(poly_order=2)
-
-deramp.fit(reference_dem, dem_to_be_aligned, inlier_mask=inlier_mask)
-corrected_dem = deramp.apply(dem_to_be_aligned)
-
-# %%
-# Then, the new difference can be plotted.
-
-diff_after = reference_dem - corrected_dem
-diff_after.show(cmap="coolwarm_r", vmin=-10, vmax=10, cbar_title="Elevation change (m)")
-
-
-# %%
-# We compare the median and NMAD to validate numerically that there was an improvement (see :ref:`robuststats-meanstd`):
-inliers_before = diff_before[inlier_mask]
-med_before, nmad_before = np.median(inliers_before), xdem.spatialstats.nmad(inliers_before)
-
-inliers_after = diff_after[inlier_mask]
-med_after, nmad_after = np.median(inliers_after), xdem.spatialstats.nmad(inliers_after)
-
-print(f"Error before: median = {med_before:.2f} - NMAD = {nmad_before:.2f} m")
-print(f"Error after: median = {med_after:.2f} - NMAD = {nmad_after:.2f} m")
diff --git a/examples/advanced/plot_heterosc_estimation_modelling.py b/examples/advanced/plot_heterosc_estimation_modelling.py
deleted file mode 100644
index ede8a95c..00000000
--- a/examples/advanced/plot_heterosc_estimation_modelling.py
+++ /dev/null
@@ -1,272 +0,0 @@
-"""
-Estimation and modelling of heteroscedasticity
-==============================================
-
-Digital elevation models have a precision that can vary with terrain and instrument-related variables. This variability
-in variance is called `heteroscedasticity <https://en.wikipedia.org/wiki/Heteroscedasticity>`_,
-and is rarely accounted for in DEM studies (see :ref:`intro`). Quantifying elevation heteroscedasticity is essential to
-use stable terrain as an error proxy for moving terrain, and standardize data towards a stationary variance, necessary
-to apply spatial statistics (see :ref:`spatialstats`).
-
-Here, we show an advanced example in which we look for terrain-dependent explanatory variables to explain the
-heteroscedasticity for a DEM difference at Longyearbyen. We use `data binning <https://en.wikipedia.org/wiki/Data_binning>`_
-and robust statistics in N-dimension with :func:`xdem.spatialstats.nd_binning`, apply a N-dimensional interpolation with
-:func:`xdem.spatialstats.interp_nd_binning`, and scale our interpolant function with a two-step standardization
-:func:`xdem.spatialstats.two_step_standardization` to produce the final elevation error function.
-
-**References**: `Hugonnet et al. (2021) <https://doi.org/10.1038/s41586-021-03436-z>`_, Equation 1, Extended Data Fig.
-3a and `Hugonnet et al. (2022) <https://doi.org/10.1109/jstars.2022.3188922>`_, Figs. 4 and S6–S9. Equations 7 or 8 can
-be used to convert elevation change errors into elevation errors.
-"""
-import geoutils as gu
-
-# sphinx_gallery_thumbnail_number = 8
-import matplotlib.pyplot as plt
-import numpy as np
-
-import xdem
-
-# %%
-# Here, we detail the steps used by ``xdem.spatialstats.infer_heteroscedasticity_from_stable`` exemplified in
-# :ref:`sphx_glr_basic_examples_plot_infer_heterosc.py`. First, we load example files and create a glacier mask.
-
-ref_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-dh = xdem.DEM(xdem.examples.get_path("longyearbyen_ddem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-mask_glacier = glacier_outlines.create_mask(dh)
-
-# %%
-# We derive terrain attributes from the reference DEM (see :ref:`sphx_glr_basic_examples_plot_terrain_attributes.py`),
-# which we will use to explore the variability in elevation error.
-slope, aspect, planc, profc = xdem.terrain.get_terrain_attribute(
- dem=ref_dem, attribute=["slope", "aspect", "planform_curvature", "profile_curvature"]
-)
-
-# %%
-# We convert to arrays and keep only stable terrain for the analysis of variability
-dh_arr = dh[~mask_glacier].filled(np.nan)
-slope_arr = slope[~mask_glacier].filled(np.nan)
-aspect_arr = aspect[~mask_glacier].filled(np.nan)
-planc_arr = planc[~mask_glacier].filled(np.nan)
-profc_arr = profc[~mask_glacier].filled(np.nan)
-
-# %%
-# We use :func:`xdem.spatialstats.nd_binning` to perform N-dimensional binning on all those terrain variables, with uniform
-# bin length divided by 30. We use the NMAD as a robust measure of `statistical dispersion <https://en.wikipedia.org/wiki/Statistical_dispersion>`_
-# (see :ref:`robuststats-meanstd`).
-
-df = xdem.spatialstats.nd_binning(
- values=dh_arr,
- list_var=[slope_arr, aspect_arr, planc_arr, profc_arr],
- list_var_names=["slope", "aspect", "planc", "profc"],
- statistics=["count", xdem.spatialstats.nmad],
- list_var_bins=30,
-)
-
-# %%
-# We obtain a dataframe with the 1D binning results for each variable, the 2D binning results for all combinations of
-# variables and the N-D (here 4D) binning with all variables.
-# Overview of the dataframe structure for the 1D binning:
-df[df.nd == 1]
-
-# %%
-# And for the 4D binning:
-df[df.nd == 4]
-
-# %%
-# We can now visualize the results of the 1D binning of the computed NMAD of elevation differences with each variable
-# using :func:`xdem.spatialstats.plot_1d_binning`.
-# We can start with the slope that has been long known to be related to the elevation measurement error (e.g.,
-# Toutin, 2002).
-xdem.spatialstats.plot_1d_binning(
- df, var_name="slope", statistic_name="nmad", label_var="Slope (degrees)", label_statistic="NMAD of dh (m)"
-)
-
-# %%
-# We identify a clear variability, with the dispersion estimated from the NMAD increasing from ~2 meters for nearly flat
-# slopes to above 12 meters for slopes steeper than 50°.
-#
-# What about the aspect?
-
-xdem.spatialstats.plot_1d_binning(df, "aspect", "nmad", "Aspect (degrees)", "NMAD of dh (m)")
-
-# %%
-# There is no clear variability with the aspect: the dispersion averages 2-3 meters, i.e. that of the complete sample.
-#
-# What about the plan curvature?
-
-xdem.spatialstats.plot_1d_binning(df, "planc", "nmad", "Planform curvature (100 m$^{-1}$)", "NMAD of dh (m)")
-
-# %%
-# The relation with the plan curvature remains ambiguous.
-# We should better define our bins to avoid sampling bins with too many or too few samples. For this, we can partition
-# the data in quantiles in :func:`xdem.spatialstats.nd_binning`.
-# *Note: we need a higher number of bins to work with quantiles and still resolve the edges of the distribution. As
-# with many dimensions the ND bin size increases exponentially, we avoid binning all variables at the same
-# time and instead bin one at a time.*
-# We define 1000 quantile bins of size 0.001 (equivalent to 0.1% percentile bins) for the profile curvature:
-
-df = xdem.spatialstats.nd_binning(
- values=dh_arr,
- list_var=[profc_arr],
- list_var_names=["profc"],
- statistics=["count", np.nanmedian, xdem.spatialstats.nmad],
- list_var_bins=[np.nanquantile(profc_arr, np.linspace(0, 1, 1000))],
-)
-xdem.spatialstats.plot_1d_binning(df, "profc", "nmad", "Profile curvature (100 m$^{-1}$)", "NMAD of dh (m)")
-
-# %%
-# We clearly identify a variability with the profile curvature, from 2 meters for low curvatures to above 4 meters
-# for higher positive or negative curvature.
-#
-# What about the role of the plan curvature?
-
-df = xdem.spatialstats.nd_binning(
- values=dh_arr,
- list_var=[planc_arr],
- list_var_names=["planc"],
- statistics=["count", np.nanmedian, xdem.spatialstats.nmad],
- list_var_bins=[np.nanquantile(planc_arr, np.linspace(0, 1, 1000))],
-)
-xdem.spatialstats.plot_1d_binning(df, "planc", "nmad", "Planform curvature (100 m$^{-1}$)", "NMAD of dh (m)")
-
-# %%
-# The plan curvature shows a similar relation. Those are symmetrical with 0, and almost equal for both types of curvature.
-# To simplify the analysis, we here combine those curvatures into the maximum absolute curvature:
-
-maxc_arr = np.maximum(np.abs(planc_arr), np.abs(profc_arr))
-df = xdem.spatialstats.nd_binning(
- values=dh_arr,
- list_var=[maxc_arr],
- list_var_names=["maxc"],
- statistics=["count", np.nanmedian, xdem.spatialstats.nmad],
- list_var_bins=[np.nanquantile(maxc_arr, np.linspace(0, 1, 1000))],
-)
-xdem.spatialstats.plot_1d_binning(df, "maxc", "nmad", "Maximum absolute curvature (100 m$^{-1}$)", "NMAD of dh (m)")
-
-# %%
-# Here's our simplified relation! We now have both slope and maximum absolute curvature with clear variability of
-# the elevation error.
-#
-# **But, one might wonder: high curvatures might occur more often around steep slopes than flat slopes,
-# so what if those two dependencies are actually one and the same?**
-#
-# We need to explore the variability with both slope and curvature at the same time:
-
-df = xdem.spatialstats.nd_binning(
- values=dh_arr,
- list_var=[slope_arr, maxc_arr],
- list_var_names=["slope", "maxc"],
- statistics=["count", np.nanmedian, xdem.spatialstats.nmad],
- list_var_bins=30,
-)
-
-xdem.spatialstats.plot_2d_binning(
- df,
- var_name_1="slope",
- var_name_2="maxc",
- statistic_name="nmad",
- label_var_name_1="Slope (degrees)",
- label_var_name_2="Maximum absolute curvature (100 m$^{-1}$)",
- label_statistic="NMAD of dh (m)",
-)
-
-# %%
-# We can see that part of the variability seems to be independent, but with the uniform bins it is hard to tell much
-# more.
-#
-# If we use custom quantiles for both binning variables, and adjust the plot scale:
-
-custom_bin_slope = np.unique(
- np.concatenate(
- [
- np.nanquantile(slope_arr, np.linspace(0, 0.95, 20)),
- np.nanquantile(slope_arr, np.linspace(0.96, 0.99, 5)),
- np.nanquantile(slope_arr, np.linspace(0.991, 1, 10)),
- ]
- )
-)
-
-custom_bin_curvature = np.unique(
- np.concatenate(
- [
- np.nanquantile(maxc_arr, np.linspace(0, 0.95, 20)),
- np.nanquantile(maxc_arr, np.linspace(0.96, 0.99, 5)),
- np.nanquantile(maxc_arr, np.linspace(0.991, 1, 10)),
- ]
- )
-)
-
-df = xdem.spatialstats.nd_binning(
- values=dh_arr,
- list_var=[slope_arr, maxc_arr],
- list_var_names=["slope", "maxc"],
- statistics=["count", np.nanmedian, xdem.spatialstats.nmad],
- list_var_bins=[custom_bin_slope, custom_bin_curvature],
-)
-xdem.spatialstats.plot_2d_binning(
- df,
- "slope",
- "maxc",
- "nmad",
- "Slope (degrees)",
- "Maximum absolute curvature (100 m$^{-1}$)",
- "NMAD of dh (m)",
- scale_var_2="log",
- vmin=2,
- vmax=10,
-)
-
-
-# %%
-# We identify clearly that the two variables have an independent effect on the precision, with
-#
-# - *high curvatures and flat slopes* that have larger errors than *low curvatures and flat slopes*
-# - *steep slopes and low curvatures* that have larger errors than *low curvatures and flat slopes* as well
-#
-# We also identify that steep slopes (> 40°) only occur together with high curvatures, while the opposite is not true,
-# hence the importance of mapping the variability in two dimensions.
-#
-# Now we need to account for the heteroscedasticity identified. For this, the simplest approach is a numerical
-# approximation, i.e. a piecewise linear interpolation/extrapolation based on the binning results, available through
-# the function :func:`xdem.spatialstats.interp_nd_binning`. To ensure that only robust statistics are used
-# in the interpolation, we set a ``min_count`` value of 30 samples.
-
-unscaled_dh_err_fun = xdem.spatialstats.interp_nd_binning(
- df, list_var_names=["slope", "maxc"], statistic="nmad", min_count=30
-)
-
-# %%
-# The output is an interpolant function of slope and curvature that predicts the elevation error at any point. However,
-# this predicted error might have a spread slightly off from that of the data.
-#
-# We compare the spread of the elevation differences on stable terrain to the average predicted error:
-dh_err_stable = unscaled_dh_err_fun((slope_arr, maxc_arr))
-
-print(
- "The spread of elevation difference is {:.2f} "
- "compared to a mean predicted elevation error of {:.2f}.".format(
- xdem.spatialstats.nmad(dh_arr), np.nanmean(dh_err_stable)
- )
-)
-
-# %%
-# Thus, we rescale the function to exactly match the spread on stable terrain using the
-# :func:`xdem.spatialstats.two_step_standardization` function, and get our final error function.
-
-zscores, dh_err_fun = xdem.spatialstats.two_step_standardization(
- dh_arr, list_var=[slope_arr, maxc_arr], unscaled_error_fun=unscaled_dh_err_fun
-)
-
-for s, c in [(0.0, 0.1), (50.0, 0.1), (0.0, 20.0), (50.0, 20.0)]:
- print(
- "Elevation measurement error for slope of {:.0f} degrees, "
- "curvature of {:.2f} m-1: {:.1f}".format(s, c / 100, dh_err_fun((s, c))) + " meters."
- )
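-
-# As a quick sanity check (a sketch: we assume here that the first output ``zscores`` holds the standardized
-# elevation differences on stable terrain), their spread should now be close to 1:
-print(f"NMAD of standardized differences: {xdem.spatialstats.nmad(zscores):.2f}")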
-
-# %%
-# This function can be used to estimate the spatial distribution of the elevation error on the extent of our DEMs:
-maxc = np.maximum(np.abs(profc), np.abs(planc))
-errors = dh.copy(new_array=dh_err_fun((slope.data, maxc.data)))
-
-errors.show(cmap="Reds", vmin=2, vmax=8, cbar_title=r"Elevation error ($1\sigma$, m)")
diff --git a/examples/advanced/plot_norm_regional_hypso.py b/examples/advanced/plot_norm_regional_hypso.py
deleted file mode 100644
index bf6ff0b3..00000000
--- a/examples/advanced/plot_norm_regional_hypso.py
+++ /dev/null
@@ -1,117 +0,0 @@
-"""
-Normalized regional hypsometric interpolation
-=============================================
-
-There are many ways of interpolating gaps in a dDEM.
-In the case of glaciers, one very useful fact is that elevation change generally varies with elevation.
-This means that if valid pixels exist in a certain elevation bin, their values can be used to fill other pixels in the same approximate elevation.
-Filling gaps by elevation is the main basis of "hypsometric interpolation approaches", of which there are many variations.
-
-One problem with simple hypsometric approaches is that they may not work for glaciers with different elevation ranges and scales.
-Let's say we have two glaciers: one gigantic reaching from 0-1000 m, and one small from 900-1100 m.
-Typically in the 2000s, glaciers have thinned rapidly at the bottom, while they may be in balance or only thin slightly at the top.
-If we extrapolate the hypsometric signal of the gigantic glacier onto the small one, it may seem like the smaller glacier has almost no change whatsoever.
-This may be right, or it may be catastrophically wrong!
-
-Normalized regional hypsometric interpolation solves the scale and elevation range problems in one go. It:
-
- 1. Calculates a regional signal using the weighted average of each glacier's normalized signal:
-
- a. The glacier's elevation range is scaled from 0-1 to be elevation-independent.
- b. The glacier's elevation change is scaled from 0-1 to be magnitude-independent (steps a-b are sketched in the code block below).
- c. A weight is assigned by the amount of valid pixels (well-covered large glaciers gain a higher weight).
-
- 2. Re-scales that signal to fit each glacier once determined.
-
-The consequence is a much more accurate interpolation approach that can be used in a multitude of glacierized settings.
-
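-As a rough illustration of steps a-b for a single glacier, here is a minimal sketch with toy values
-(assuming a simple min-max normalization; the exact convention used internally by xdem may differ):
-
-.. code-block:: python
-
-    import numpy as np
-
-    # Toy elevation and elevation-change profiles for one glacier
-    elev = np.array([100.0, 300.0, 500.0, 700.0, 900.0])
-    dh = np.array([-8.0, -5.0, -3.0, -1.0, 0.0])
-
-    # Step a: scale the elevation range to 0-1 (simple min-max; xdem's internal convention may differ)
-    norm_elev = (elev - elev.min()) / (elev.max() - elev.min())
-
-    # Step b: scale the elevation change to 0-1 (magnitude-independent)
-    norm_dh = (dh - dh.min()) / (dh.max() - dh.min())
-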
-"""
-import geoutils as gu
-
-# sphinx_gallery_thumbnail_number = 2
-import matplotlib.pyplot as plt
-import numpy as np
-
-import xdem
-import xdem.misc
-
-# %%
-# **Example files**
-
-dem_2009 = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-dem_1990 = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem_coreg"))
-
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-
-# Rasterize the glacier outlines to create an index map.
-# Stable ground is 0, the first glacier is 1, the second is 2, etc.
-glacier_index_map = glacier_outlines.rasterize(dem_2009)
-
-plt_extent = [
- dem_2009.bounds.left,
- dem_2009.bounds.right,
- dem_2009.bounds.bottom,
- dem_2009.bounds.top,
-]
-
-
-# %%
-# To test the method, we can generate a semi-random mask to assign nans to glacierized areas.
-# Let's remove 30% of the data.
-np.random.seed(42)
-random_nans = (xdem.misc.generate_random_field(dem_1990.shape, corr_size=5) > 0.7) & (glacier_index_map > 0)
-
-random_nans.show()
-
-# %%
-# The normalized hypsometric signal shows the tendency for elevation change as a function of elevation.
-# The magnitude may vary between glaciers, but the shape is generally similar.
-# Normalizing by both elevation and elevation change, and then re-scaling the signal to every glacier, ensures that it is as accurate as possible.
-# **NOTE**: The hypsometric signal does not need to be generated separately; it will be created by :func:`xdem.volume.norm_regional_hypsometric_interpolation`.
-# Generating it first, however, allows us to visualize and validate it.
-
-ddem = dem_2009 - dem_1990
-ddem_voided = np.where(random_nans.data, np.nan, ddem.data)
-
-signal = xdem.volume.get_regional_hypsometric_signal(
- ddem=ddem_voided,
- ref_dem=dem_2009.data,
- glacier_index_map=glacier_index_map,
-)
-
-plt.fill_between(signal.index.mid, signal["sigma-1-lower"], signal["sigma-1-upper"], label="Spread (+- 1 sigma)")
-plt.plot(signal.index.mid, signal["w_mean"], color="black", label="Weighted mean")
-plt.ylabel("Normalized elevation change")
-plt.xlabel("Normalized elevation")
-plt.legend()
-plt.show()
-
-# %%
-# The signal can now be used (or simply estimated again if not provided) to interpolate the DEM.
-
-ddem_filled = xdem.volume.norm_regional_hypsometric_interpolation(
- voided_ddem=ddem_voided, ref_dem=dem_2009, glacier_index_map=glacier_index_map, regional_signal=signal
-)
-
-
-plt.figure(figsize=(8, 5))
-plt.imshow(ddem_filled.data, cmap="coolwarm_r", vmin=-10, vmax=10, extent=plt_extent)
-plt.colorbar()
-plt.show()
-
-
-# %%
-# We can plot the difference between the actual and the interpolated values, to validate the method.
-
-difference = (ddem_filled - ddem)[random_nans]
-median = np.nanmedian(difference)
-nmad = xdem.spatialstats.nmad(difference)
-
-plt.title(f"Median: {median:.2f} m, NMAD: {nmad:.2f} m")
-plt.hist(difference.data, bins=np.linspace(-15, 15, 100))
-plt.show()
-
-# %%
-# As we can see, the median is close to zero, while the NMAD shows some remaining spread.
-# This is expected: the regional signal works well for multiple glaciers at once, but it cannot account for difficult local topography and meteorological conditions.
-# The method is therefore best suited for large regions; just don't zoom in too close!
diff --git a/examples/advanced/plot_slope_methods.py b/examples/advanced/plot_slope_methods.py
deleted file mode 100644
index 9eb4063e..00000000
--- a/examples/advanced/plot_slope_methods.py
+++ /dev/null
@@ -1,120 +0,0 @@
-"""
-Slope and aspect methods
-========================
-
-Terrain slope and aspect can be estimated using different methods.
-Here is an example of how to generate the two with each method, and understand their differences.
-
-For more information, see the :ref:`terrain-attributes` chapter and the
-:ref:`sphx_glr_basic_examples_plot_terrain_attributes.py` example.
-"""
-import matplotlib.pyplot as plt
-import numpy as np
-
-import xdem
-
-# %%
-# **Example data**
-
-dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-
-
-def plot_attribute(attribute, cmap, label=None, vlim=None):
- plt.figure(figsize=(8, 5))
-
- if vlim is not None:
- if isinstance(vlim, (int, np.integer, float, np.floating)):
- vlims = {"vmin": -vlim, "vmax": vlim}
- elif len(vlim) == 2:
- vlims = {"vmin": vlim[0], "vmax": vlim[1]}
- else:
- vlims = {}
-
- plt.imshow(
- attribute.squeeze(),
- cmap=cmap,
- extent=[dem.bounds.left, dem.bounds.right, dem.bounds.bottom, dem.bounds.top],
- **vlims,
- )
- if label is not None:
- cbar = plt.colorbar()
- cbar.set_label(label)
-
- plt.xticks([])
- plt.yticks([])
- plt.tight_layout()
-
- plt.show()
-
-
-# %%
-# Slope with method of `Horn (1981) `_ (GDAL default), based on a refined
-# approximation of the gradient (page 18, bottom left, and pages 20-21).
-
-slope_horn = xdem.terrain.slope(dem.data, resolution=dem.res)
-
-plot_attribute(slope_horn, "Reds", "Slope of Horn (1981) (°)")
-
-# %%
-# Slope with method of `Zevenbergen and Thorne (1987) `_, Equation 13.
-
-slope_zevenberg = xdem.terrain.slope(dem.data, resolution=dem.res, method="ZevenbergThorne")
-
-plot_attribute(slope_zevenberg, "Reds", "Slope of Zevenbergen and Thorne (1987) (°)")
-
-# %%
-# We compute the difference between the slopes computed with each method.
-
-diff_slope = slope_horn - slope_zevenberg
-
-plot_attribute(diff_slope, "RdYlBu", "Slope of Horn (1981) minus\n slope of Zevenbergen and Thorne (1987) (°)", vlim=3)
-
-# %%
-# The differences are negative, implying that the method of Horn always provides flatter slopes.
-# Additionally, they seem to occur in places of high curvatures. We verify this by plotting the maximum curvature.
-
-maxc = xdem.terrain.maximum_curvature(dem.data, resolution=dem.res)
-
-plot_attribute(maxc, "RdYlBu", "Maximum curvature (100 m $^{-1}$)", vlim=2)
-
-# %%
-# We quantify the relationship by computing the median of the slope differences in bins of maximum curvature
-# (here, 30 bins), and plot the result.
-
-df_bin = xdem.spatialstats.nd_binning(
- values=diff_slope[:],
- list_var=[maxc[:]],
- list_var_names=["maxc"],
- list_var_bins=30,
- statistics=[np.nanmedian, "count"],
-)
-
-xdem.spatialstats.plot_1d_binning(
- df_bin,
- var_name="maxc",
- statistic_name="nanmedian",
- label_var="Maximum absolute curvature (100 m$^{-1}$)",
- label_statistic="Slope of Horn (1981) minus\n slope of Zevenbergen and Thorne (1987) (°)",
-)
-
-
-# %%
-# We perform the same exercise to analyze the differences in terrain aspect. We compute the difference modulo 360°,
-# to account for the circularity of aspect.
-
-aspect_horn = xdem.terrain.aspect(dem.data)
-aspect_zevenberg = xdem.terrain.aspect(dem.data, method="ZevenbergThorne")
-
-diff_aspect = aspect_horn - aspect_zevenberg
-diff_aspect_mod = np.minimum(np.mod(diff_aspect, 360), 360 - np.mod(diff_aspect, 360))
-
-plot_attribute(
- diff_aspect_mod, "Spectral", "Aspect of Horn (1981) minus\n aspect of Zevenbergen and Thorne (1987) (°)", vlim=[0, 90]
-)
-
-# %%
-# As for slope, differences in aspect seem to coincide with areas of high curvature. We also observe large
-# differences for areas with nearly flat slopes, owing to the high sensitivity of orientation estimation
-# for flat terrain.
-
-# Note: the default aspect for a 0° slope is 180°, as in GDAL.
diff --git a/examples/advanced/plot_standardization.py b/examples/advanced/plot_standardization.py
deleted file mode 100644
index 34fbe26d..00000000
--- a/examples/advanced/plot_standardization.py
+++ /dev/null
@@ -1,264 +0,0 @@
-"""
-Standardization for stable terrain as error proxy
-=================================================
-
-Digital elevation models have both a precision that can vary with terrain or instrument-related variables, and
-a spatial correlation of errors that can be due to effects of resolution, processing or instrument noise.
-Accounting for non-stationarities in elevation errors is essential to use stable terrain as a proxy to infer the
-precision on other types of terrain and reliably use spatial statistics (see :ref:`spatialstats`).
-
-Here, we show an example of standardization of the data based on terrain-dependent explanatory variables
-(see :ref:`sphx_glr_basic_examples_plot_infer_heterosc.py`) and combine it with an analysis of spatial correlation
-(see :ref:`sphx_glr_basic_examples_plot_infer_spatial_correlation.py`).
-
-**Reference**: `Hugonnet et al. (2022) `_, Equation 12.
-"""
-import geoutils as gu
-
-# sphinx_gallery_thumbnail_number = 4
-import matplotlib.pyplot as plt
-import numpy as np
-
-import xdem
-from xdem.spatialstats import nmad
-
-# %%
-# We start by estimating the elevation heteroscedasticity and deriving a terrain-dependent measurement error as a function of both
-# slope and maximum curvature, as shown in the :ref:`sphx_glr_basic_examples_plot_infer_heterosc.py` example.
-
-# Load the data
-ref_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-dh = xdem.DEM(xdem.examples.get_path("longyearbyen_ddem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-mask_glacier = glacier_outlines.create_mask(dh)
-
-# Compute the slope and maximum curvature
-slope, planc, profc = xdem.terrain.get_terrain_attribute(
- dem=ref_dem, attribute=["slope", "planform_curvature", "profile_curvature"]
-)
-
-# Remove values on unstable terrain
-dh_arr = dh[~mask_glacier].filled(np.nan)
-slope_arr = slope[~mask_glacier].filled(np.nan)
-planc_arr = planc[~mask_glacier].filled(np.nan)
-profc_arr = profc[~mask_glacier].filled(np.nan)
-maxc_arr = np.maximum(np.abs(planc_arr), np.abs(profc_arr))
-
-# Remove large outliers
-dh_arr[np.abs(dh_arr) > 4 * xdem.spatialstats.nmad(dh_arr)] = np.nan
-
-# Define bins for 2D binning
-custom_bin_slope = np.unique(
- np.concatenate(
- [
- np.nanquantile(slope_arr, np.linspace(0, 0.95, 20)),
- np.nanquantile(slope_arr, np.linspace(0.96, 0.99, 5)),
- np.nanquantile(slope_arr, np.linspace(0.991, 1, 10)),
- ]
- )
-)
-
-custom_bin_curvature = np.unique(
- np.concatenate(
- [
- np.nanquantile(maxc_arr, np.linspace(0, 0.95, 20)),
- np.nanquantile(maxc_arr, np.linspace(0.96, 0.99, 5)),
- np.nanquantile(maxc_arr, np.linspace(0.991, 1, 10)),
- ]
- )
-)
-
-# Perform 2D binning to estimate the measurement error with slope and maximum curvature
-df = xdem.spatialstats.nd_binning(
- values=dh_arr,
- list_var=[slope_arr, maxc_arr],
- list_var_names=["slope", "maxc"],
- statistics=["count", np.nanmedian, nmad],
- list_var_bins=[custom_bin_slope, custom_bin_curvature],
-)
-
-# Estimate an interpolant of the measurement error with slope and maximum curvature
-slope_curv_to_dh_err = xdem.spatialstats.interp_nd_binning(
- df, list_var_names=["slope", "maxc"], statistic="nmad", min_count=30
-)
-maxc = np.maximum(np.abs(profc), np.abs(planc))
-
-# Estimate a measurement error per pixel
-dh_err = slope_curv_to_dh_err((slope.data, maxc.data))
-
-# %%
-# Using the measurement error estimated for each pixel, we standardize the elevation differences by applying
-# a simple division:
-
-z_dh = dh.data / dh_err
-
-# %%
-# We remove values on glacierized terrain and large outliers.
-z_dh.data[mask_glacier.data] = np.nan
-z_dh.data[np.abs(z_dh.data) > 4] = np.nan
-
-# %%
-# We perform a scale-correction for the standardization, to ensure that the spread of the data is exactly 1.
-# The NMAD is used as a robust measure for the spread (see :ref:`robuststats-nmad`).
-print(f"NMAD before scale-correction: {nmad(z_dh.data):.1f}")
-scale_fac_std = nmad(z_dh.data)
-z_dh = z_dh / scale_fac_std
-print(f"NMAD after scale-correction: {nmad(z_dh.data):.1f}")
-
-plt.figure(figsize=(8, 5))
-plt_extent = [
- ref_dem.bounds.left,
- ref_dem.bounds.right,
- ref_dem.bounds.bottom,
- ref_dem.bounds.top,
-]
-ax = plt.gca()
-glacier_outlines.ds.plot(ax=ax, fc="none", ec="tab:gray")
-ax.plot([], [], color="tab:gray", label="Glacier 1990 outlines")
-plt.imshow(z_dh.squeeze(), cmap="RdYlBu", vmin=-3, vmax=3, extent=plt_extent)
-cbar = plt.colorbar()
-cbar.set_label("Standardized elevation differences (m)")
-plt.legend(loc="lower right")
-plt.show()
-
-# %%
-# Now, we can perform an analysis of spatial correlation as shown in the :ref:`sphx_glr_advanced_examples_plot_variogram_estimation_modelling.py`
-# example, by estimating a variogram and fitting a sum of two models.
-# Dowd's variogram is used for robustness in conjunction with the NMAD (see :ref:`robuststats-corr`).
-df_vgm = xdem.spatialstats.sample_empirical_variogram(
- values=z_dh.data.squeeze(),
- gsd=dh.res[0],
- subsample=300,
- n_variograms=10,
- estimator="dowd",
- random_state=42,
-)
-
-func_sum_vgm, params_vgm = xdem.spatialstats.fit_sum_model_variogram(
- ["Gaussian", "Spherical"], empirical_variogram=df_vgm
-)
-xdem.spatialstats.plot_variogram(
- df_vgm,
- xscale_range_split=[100, 1000, 10000],
- list_fit_fun=[func_sum_vgm],
- list_fit_fun_label=["Standardized double-range variogram"],
-)
-
-# %%
-# With standardized input, the variogram should converge towards one. Because the input data now has a nearly
-# stationary variance, the variogram is more robust, as it is no longer affected by changes in variance due to terrain-
-# or instrument-dependent variability of the measurement error. The variogram should only capture changes in variance
-# due to spatial correlation.
-
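-# %%
-# As a quick check (a simple sketch), we can print the fitted model parameters: with standardized differences,
-# the partial sills of the two fitted models should sum to roughly one.
-print(params_vgm)
-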
-# %%
-# **How to use this standardized spatial analysis to compute final uncertainties?**
-#
-# Let's take the example of two glaciers of similar size: Svendsenbreen and Medalsbreen, which are respectively
-# north- and south-facing. The south-facing Medalsbreen glacier receives more sun exposure, and is thus expected to
-# lie on steeper slopes, with possibly higher curvatures.
-
-svendsen_shp = gu.Vector(glacier_outlines.ds[glacier_outlines.ds["NAME"] == "Svendsenbreen"])
-svendsen_mask = svendsen_shp.create_mask(dh)
-
-medals_shp = gu.Vector(glacier_outlines.ds[glacier_outlines.ds["NAME"] == "Medalsbreen"])
-medals_mask = medals_shp.create_mask(dh)
-
-plt.figure(figsize=(8, 5))
-ax = plt.gca()
-plt_extent = [
- ref_dem.bounds.left,
- ref_dem.bounds.right,
- ref_dem.bounds.bottom,
- ref_dem.bounds.top,
-]
-plt.imshow(slope.data, cmap="Blues", vmin=0, vmax=40, extent=plt_extent)
-cbar = plt.colorbar(ax=ax)
-cbar.set_label("Slope (degrees)")
-svendsen_shp.ds.plot(ax=ax, fc="none", ec="tab:olive", lw=2)
-medals_shp.ds.plot(ax=ax, fc="none", ec="tab:gray", lw=2)
-plt.plot([], [], color="tab:olive", label="Medalsbreen")
-plt.plot([], [], color="tab:gray", label="Svendsenbreen")
-plt.legend(loc="lower left")
-plt.show()
-
-print(f"Average slope of Svendsenbreen glacier: {np.nanmean(slope[svendsen_mask]):.1f}")
-print(f"Average maximum curvature of Svendsenbreen glacier: {np.nanmean(maxc[svendsen_mask]):.3f}")
-
-print(f"Average slope of Medalsbreen glacier: {np.nanmean(slope[medals_mask]):.1f}")
-print(f"Average maximum curvature of Medalsbreen glacier : {np.nanmean(maxc[medals_mask]):.1f}")
-
-# %%
-# We calculate the number of effective samples for each glacier based on the variogram
-svendsen_neff = xdem.spatialstats.neff_circular_approx_numerical(
- area=svendsen_shp.ds.area.values[0], params_variogram_model=params_vgm
-)
-
-medals_neff = xdem.spatialstats.neff_circular_approx_numerical(
- area=medals_shp.ds.area.values[0], params_variogram_model=params_vgm
-)
-
-print(f"Number of effective samples of Svendsenbreen glacier: {svendsen_neff:.1f}")
-print(f"Number of effective samples of Medalsbreen glacier: {medals_neff:.1f}")
-
-# %%
-# Due to the long-range spatial correlations affecting the elevation differences, both glaciers have a similarly low
-# number of effective samples. This translates into a large standardized integrated error.
-
-svendsen_z_err = 1 / np.sqrt(svendsen_neff)
-medals_z_err = 1 / np.sqrt(medals_neff)
-
-print(f"Standardized integrated error of Svendsenbreen glacier: {svendsen_z_err:.1f}")
-print(f"Standardized integrated error of Medalsbreen glacier: {medals_z_err:.1f}")
-
-# %%
-# Finally, we destandardize the spatially integrated errors based on the measurement error dependent on slope and
-# maximum curvature. This yields the uncertainty in the mean elevation change for each glacier.
-
-# Destandardize the uncertainty
-fac_svendsen_dh_err = scale_fac_std * np.nanmean(dh_err[svendsen_mask.data])
-fac_medals_dh_err = scale_fac_std * np.nanmean(dh_err[medals_mask.data])
-svendsen_dh_err = fac_svendsen_dh_err * svendsen_z_err
-medals_dh_err = fac_medals_dh_err * medals_z_err
-
-# Derive mean elevation change
-svendsen_dh = np.nanmean(dh.data[svendsen_mask.data])
-medals_dh = np.nanmean(dh.data[medals_mask.data])
-
-# Plot the result
-plt.figure(figsize=(8, 5))
-ax = plt.gca()
-plt.imshow(dh.data, cmap="RdYlBu", vmin=-50, vmax=50, extent=plt_extent)
-cbar = plt.colorbar(ax=ax)
-cbar.set_label("Elevation differences (m)")
-svendsen_shp.ds.plot(ax=ax, fc="none", ec="tab:olive", lw=2)
-medals_shp.ds.plot(ax=ax, fc="none", ec="tab:gray", lw=2)
-plt.plot([], [], color="tab:olive", label="Svendsenbreen glacier")
-plt.plot([], [], color="tab:gray", label="Medalsbreen glacier")
-ax.text(
- svendsen_shp.ds.centroid.x.values[0],
- svendsen_shp.ds.centroid.y.values[0] - 1500,
- f"{svendsen_dh:.2f} \n$\\pm$ {svendsen_dh_err:.2f}",
- color="tab:olive",
- fontweight="bold",
- va="top",
- ha="center",
- fontsize=12,
-)
-ax.text(
- medals_shp.ds.centroid.x.values[0],
- medals_shp.ds.centroid.y.values[0] + 2000,
- f"{medals_dh:.2f} \n$\\pm$ {medals_dh_err:.2f}",
- color="tab:gray",
- fontweight="bold",
- va="bottom",
- ha="center",
- fontsize=12,
-)
-plt.legend(loc="lower left")
-plt.show()
-
-# %%
-# Because of slightly higher slopes and curvatures, the final uncertainty for Medalsbreen is larger by about 10%.
-# The difference between the mean terrain slope and curvature of stable terrain and those of glaciers is quite limited
-# on Svalbard. In high mountain terrain, such as the Alps or Himalayas, the difference between stable terrain and glaciers,
-# and among glaciers, would be much larger.
diff --git a/examples/advanced/plot_variogram_estimation_modelling.py b/examples/advanced/plot_variogram_estimation_modelling.py
deleted file mode 100644
index 230471ec..00000000
--- a/examples/advanced/plot_variogram_estimation_modelling.py
+++ /dev/null
@@ -1,255 +0,0 @@
-"""
-Estimation and modelling of spatial variograms
-==============================================
-
-Digital elevation models have errors that are often `correlated in space `_.
-While many DEM studies have used solely a short-range `variogram `_ to
-estimate the correlation of elevation measurement errors (e.g., `Howat et al. (2008) `_,
-`Wang and Kääb (2015) `_), recent studies show that variograms with multiple ranges
-provide larger, more reliable estimates of spatial correlation for DEMs.
-
-Here, we show an example in which we estimate the spatial correlation for a DEM difference at Longyearbyen, and its
-impact on the standard error with averaging area. We first estimate an empirical variogram with
-:func:`xdem.spatialstats.sample_empirical_variogram` based on routines of `scikit-gstat
-`_. We then fit the empirical variogram with a sum of variogram
-models using :func:`xdem.spatialstats.fit_sum_model_variogram`. Finally, we perform spatial propagation for a range of
-averaging area using :func:`xdem.spatialstats.number_effective_samples`, and empirically validate the improved
-robustness of our results using :func:`xdem.spatialstats.patches_method`, an intensive Monte-Carlo sampling approach.
-
-**Reference:** `Hugonnet et al. (2022) `_, Figure 5 and Equations 13–16.
-"""
-import geoutils as gu
-
-# sphinx_gallery_thumbnail_number = 6
-import matplotlib.pyplot as plt
-import numpy as np
-
-import xdem
-from xdem.spatialstats import nmad
-
-# %%
-# We load example files.
-
-dh = xdem.DEM(xdem.examples.get_path("longyearbyen_ddem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-mask_glacier = glacier_outlines.create_mask(dh)
-
-# %%
-# We exclude values on glacier terrain in order to isolate stable terrain, our proxy for elevation errors.
-dh.set_mask(mask_glacier)
-
-# %%
-# We estimate the average per-pixel elevation error on stable terrain, using both the standard deviation
-# and normalized median absolute deviation. For this example, we do not account for elevation heteroscedasticity.
-print(f"STD: {np.nanstd(dh.data):.2f} meters.")
-print(f"NMAD: {xdem.spatialstats.nmad(dh.data):.2f} meters.")
-
-# %%
-# The two measures of dispersion are quite similar, showing that, on average, outliers have only a small influence on the
-# elevation differences. The per-pixel precision is about :math:`\pm` 2.5 meters.
-# **Does this mean that every pixel has an independent measurement error of** :math:`\pm` **2.5 meters?**
-# Let's plot the elevation differences to visually check the quality of the data.
-plt.figure(figsize=(8, 5))
-dh.show(ax=plt.gca(), cmap="RdYlBu", vmin=-4, vmax=4, cbar_title="Elevation differences (m)")
-
-# %%
-# We clearly see that the residual elevation differences on stable terrain are not random. The positive and negative
-# differences (blue and red, respectively) appear correlated over large distances. These correlated errors are what
-# we want to estimate and model.
-
-# %%
-# Additionally, we notice that the elevation differences are still polluted by unrealistically large values near
-# glaciers, probably because the glacier inventory is more recent than the data and hence has outlines that are too small.
-# To remedy this, we filter out large elevation differences outside 4 NMAD.
-dh.set_mask(np.abs(dh.data) > 4 * xdem.spatialstats.nmad(dh.data))
-
-# %%
-# We plot the elevation differences after filtering to check that we successfully removed the glacier signals.
-plt.figure(figsize=(8, 5))
-dh.show(ax=plt.gca(), cmap="RdYlBu", vmin=-4, vmax=4, cbar_title="Elevation differences (m)")
-
-# %%
-# To quantify the spatial correlation of the data, we sample an empirical variogram.
-# The empirical variogram describes the variance of the elevation differences between pairs of pixels as a function of
-# the distance separating them. This distance between pairs of pixels is referred to as the spatial lag.
-#
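-# As a toy illustration of this definition (a sketch using only NumPy, independent of the xdem functions used
-# below), the classical semivariance at a given lag is half the mean squared difference between all pairs of
-# values separated by approximately that lag:
-
-rng = np.random.default_rng(42)
-toy_values = rng.normal(size=300)  # uncorrelated toy data with unit variance
-toy_coords = np.arange(toy_values.size, dtype=float)  # 1D positions
-lag, tol = 10.0, 0.5
-ii, jj = np.triu_indices(toy_values.size, k=1)  # indices of all unique pairs of points
-dists = np.abs(toy_coords[ii] - toy_coords[jj])
-at_lag = np.abs(dists - lag) < tol  # keep pairs separated by roughly the chosen lag
-semivariance = 0.5 * np.mean((toy_values[ii[at_lag]] - toy_values[jj[at_lag]]) ** 2)
-print(f"Toy semivariance at a lag of {lag:.0f}: {semivariance:.2f} (expected ~1 for uncorrelated data)")
-
-# %%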
-# To perform this procedure effectively, we use improved methods that provide efficient pairwise sampling methods for
-# large grid data in `scikit-gstat `_, which are encapsulated
-# conveniently by :func:`xdem.spatialstats.sample_empirical_variogram`:
-# Dowd's variogram is used for robustness in conjunction with the NMAD (see :ref:`robuststats-corr`).
-
-df = xdem.spatialstats.sample_empirical_variogram(
- values=dh.data, gsd=dh.res[0], subsample=100, n_variograms=10, estimator="dowd", random_state=42
-)
-
-# %%
-# *Note: in this example, we add a* ``random_state`` *argument to yield a reproducible random sampling of pixels within
-# the grid.*
-
-# %%
-# We plot the empirical variogram:
-xdem.spatialstats.plot_variogram(df)
-
-# %%
-# With this plot, it is hard to conclude anything! Properly visualizing the empirical variogram is one of the most
-# important steps. With grid data, we expect short-range correlations close to the resolution of the grid (~20-200
-# meters), but also possibly longer-range correlations due to instrument noise or alignment issues (~1-50 km).
-#
-# To better visualize the variogram, we can change the X-axis to a logarithmic scale, but this might make it more
-# difficult to compare with variogram models later on. Another solution is to split the variogram plot into subpanels,
-# each with its own linear scale. Both are shown below.
-
-# %%
-# **Log scale:**
-xdem.spatialstats.plot_variogram(df, xscale="log")
-
-# %%
-# **Subpanels with linear scale:**
-xdem.spatialstats.plot_variogram(df, xscale_range_split=[100, 1000, 10000])
-
-# %%
-# We identify:
-#
-# - a short-range correlation (i.e., a short correlation length), likely due to effects of resolution. It has a large partial sill (correlated variance), meaning that the elevation measurement errors are strongly correlated up to a range of ~100 m.
-# - a longer-range correlation, with a smaller partial sill, meaning that a part of the elevation measurement errors remains correlated over longer distances.
-#
-# In order to show the difference between accounting only for the most noticeable, short-range correlation, or adding the
-# long-range correlation, we fit this empirical variogram with two different models: a single spherical model, and
-# the sum of two spherical models (two ranges). For this, we use :func:`xdem.spatialstats.fit_sum_model_variogram`, which
-# is based on `scipy.optimize.curve_fit `_:
-func_sum_vgm1, params_vgm1 = xdem.spatialstats.fit_sum_model_variogram(
- list_models=["Spherical"], empirical_variogram=df
-)
-
-func_sum_vgm2, params_vgm2 = xdem.spatialstats.fit_sum_model_variogram(
- list_models=["Spherical", "Spherical"], empirical_variogram=df
-)
-
-xdem.spatialstats.plot_variogram(
- df,
- list_fit_fun=[func_sum_vgm1, func_sum_vgm2],
- list_fit_fun_label=["Single-range model", "Double-range model"],
- xscale_range_split=[100, 1000, 10000],
-)
-
-# %%
-# The sum of two spherical models fits better, accounting for the small partial sill at longer ranges. Yet this
-# longer-range partial sill (correlated variance) is quite small...
-#
-# **So one could wonder: is it really important to account for this small additional "bump" in the variogram?**
-#
-# To answer this, we compute the precision of the DEM integrated over a certain surface area, based on spatial
-# integration of the variogram models using :func:`xdem.spatialstats.number_effective_samples`, with areas varying
-# from pixel size to grid size. The numerical integration of the variogram models is fast, allowing us to estimate
-# errors for a wide range of areas rapidly.
-
-areas = np.linspace(20, 10000, 50) ** 2
-
-list_stderr_singlerange, list_stderr_doublerange, list_stderr_empirical = ([] for i in range(3))
-for area in areas:
-
- # Number of effective samples integrated over the area for a single-range model
- neff_singlerange = xdem.spatialstats.number_effective_samples(area, params_vgm1)
-
- # For a double-range model
- neff_doublerange = xdem.spatialstats.number_effective_samples(area, params_vgm2)
-
- # Convert into a standard error
- stderr_singlerange = nmad(dh.data) / np.sqrt(neff_singlerange)
- stderr_doublerange = nmad(dh.data) / np.sqrt(neff_doublerange)
- list_stderr_singlerange.append(stderr_singlerange)
- list_stderr_doublerange.append(stderr_doublerange)
-
-# %%
-# We add an empirical error based on intensive Monte-Carlo sampling ("patches" method) to validate our results.
-# This method is implemented in :func:`xdem.spatialstats.patches_method`. Here, we sample fewer areas to keep the
-# processing time of the patches method reasonable, increasing the area exponentially from a few pixels to thousands of pixels.
-
-areas_emp = [4000 * 2 ** (i) for i in range(10)]
-df_patches = xdem.spatialstats.patches_method(dh, gsd=dh.res[0], areas=areas_emp)
-
-
-fig, ax = plt.subplots()
-plt.plot(np.asarray(areas) / 1000000, list_stderr_singlerange, label="Single-range spherical model")
-plt.plot(np.asarray(areas) / 1000000, list_stderr_doublerange, label="Double-range spherical model")
-plt.scatter(
- df_patches.exact_areas.values / 1000000,
- df_patches.nmad.values,
- label="Empirical estimate",
- color="black",
- marker="x",
-)
-plt.xlabel("Averaging area (km²)")
-plt.ylabel("Uncertainty in the mean elevation difference (m)")
-plt.xscale("log")
-plt.yscale("log")
-plt.legend()
-plt.show()
-
-# %%
-# *Note: unlike for the variogram sampling above, we keep the default random sampling of the patches method here,
-# so the exact empirical values may vary slightly between runs.*
-
-# %%
-# Using a single-range variogram highly underestimates the measurement error integrated over an area, by over a factor
-# of ~100 for large surface areas. Using a double-range variogram brings us closer to the empirical error.
-#
-# **But, in this case, the error is still too small. Why?**
-# The small size of the sampling area against the very large range of the noise implies that we might not verify the
-# assumption of second-order stationarity (see :ref:`spatialstats`). Longer range correlations might be omitted by
-# our analysis, due to the limits of the variogram sampling. In other words, a small part of the variance could be
-# fully correlated over a large part of the grid: a vertical bias.
-#
-# As a first guess for this, let's examine the difference between mean and median to gain some insight on the central
-# tendency of our sample:
-
-diff_med_mean = np.nanmean(dh.data.data) - np.nanmedian(dh.data.data)
-print(f"Difference mean/median: {diff_med_mean:.3f} meters.")
-
-# %%
-# If we now express it as a percentage of the dispersion:
-
-print(f"{diff_med_mean/np.nanstd(dh.data)*100:.1f} % of STD.")
-
-# %%
-# There might be a significant bias of central tendency, i.e. an almost fully correlated measurement error across the
-# grid. Let's assume that around 5% of the variance is fully correlated, and re-calculate our elevation measurement
-# errors accordingly.
-
-list_stderr_doublerange_plus_fullycorrelated = []
-for area in areas:
-
- # For a double-range model
- neff_doublerange = xdem.spatialstats.neff_circular_approx_numerical(area=area, params_variogram_model=params_vgm2)
-
- # About 5% of the variance might be fully correlated, the other 95% has the random part that we quantified
- stderr_fullycorr = np.sqrt(0.05 * np.nanvar(dh.data))
- stderr_doublerange = np.sqrt(0.95 * np.nanvar(dh.data)) / np.sqrt(neff_doublerange)
- list_stderr_doublerange_plus_fullycorrelated.append(stderr_fullycorr + stderr_doublerange)
-
-fig, ax = plt.subplots()
-plt.plot(np.asarray(areas) / 1000000, list_stderr_singlerange, label="Single-range spherical model")
-plt.plot(np.asarray(areas) / 1000000, list_stderr_doublerange, label="Double-range spherical model")
-plt.plot(
- np.asarray(areas) / 1000000,
- list_stderr_doublerange_plus_fullycorrelated,
- label="5% fully correlated,\n 95% double-range spherical model",
-)
-plt.scatter(
- df_patches.exact_areas.values / 1000000,
- df_patches.nmad.values,
- label="Empirical estimate",
- color="black",
- marker="x",
-)
-plt.xlabel("Averaging area (km²)")
-plt.ylabel("Uncertainty in the mean elevation difference (m)")
-plt.xscale("log")
-plt.yscale("log")
-plt.legend()
-plt.show()
-
-# %%
-# Our final estimation is now very close to the empirical error estimate.
-#
-# Some take-home points:
-#
-# 1. Long-range correlations are very important to reliably estimate measurement errors integrated in space, even if they have a small partial sill, i.e. correlated variance,
-# 2. Ideally, the grid must only contain correlation patterns significantly smaller than the grid size to verify second-order stationarity. Otherwise, be wary of small biases of central tendency, i.e. fully correlated measurement errors!
diff --git a/examples/basic/README.rst b/examples/basic/README.rst
deleted file mode 100644
index a9d0b02c..00000000
--- a/examples/basic/README.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-Basic
-=====
diff --git a/examples/basic/plot_dem_subtraction.py b/examples/basic/plot_dem_subtraction.py
deleted file mode 100644
index f27e4bdf..00000000
--- a/examples/basic/plot_dem_subtraction.py
+++ /dev/null
@@ -1,80 +0,0 @@
-"""
-DEM subtraction
-===============
-
-Subtracting one DEM from another should be easy!
-This is why ``xdem`` (with functionality from `geoutils `_) allows directly using the ``-`` or ``+`` operators on :class:`xdem.DEM` objects.
-
-Before DEMs can be compared, they need to be reprojected/resampled/cropped to fit the same grid.
-The :func:`xdem.DEM.reproject` method takes care of this.
-
-"""
-import geoutils as gu
-import matplotlib.pyplot as plt
-
-import xdem
-
-# %%
-
-dem_2009 = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-dem_1990 = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem_coreg"))
-
-# %%
-# We can print the information about the DEMs for a "sanity check"
-
-print(dem_2009)
-print(dem_1990)
-
-# %%
-# In this particular case, the two DEMs are already on the same grid (they have the same bounds, resolution and coordinate system).
-# If they were not, we would need to reproject one DEM to fit the other.
-# :func:`xdem.DEM.reproject` is a multi-purpose method that ensures a fit each time:
-
-_ = dem_1990.reproject(dem_2009)
-
-# %%
-# Oops!
-# ``xdem`` just warned us that ``dem_1990`` did not need reprojection, but we asked it to anyway.
-# To hide this warning, add ``.reproject(..., silent=True)``.
-# By default, :func:`xdem.DEM.reproject` uses "bilinear" resampling (assuming resampling is needed).
-# Other options are "nearest" (fast but inaccurate), "cubic_spline", "lanczos" and others.
-# See `geoutils.Raster.reproject() `_ and `rasterio.enums.Resampling `_ for more information about reprojection.
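-#
-# For instance, a nearest-neighbour reprojection could be requested as follows (a sketch, assuming the string-based
-# ``resampling`` keyword of ``geoutils.Raster.reproject``):
-
-_ = dem_1990.reproject(dem_2009, resampling="nearest", silent=True)  # "resampling" keyword assumed from geoutils
-
-# %%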
-#
-# Now, we are ready to generate the dDEM:
-
-ddem = dem_2009 - dem_1990
-
-print(ddem)
-
-# %%
-# It is a new :class:`xdem.DEM` instance, loaded in memory.
-# Let's visualize it:
-
-ddem.show(cmap="coolwarm_r", vmin=-20, vmax=20, cbar_title="Elevation change (m)")
-
-# %%
-# Let's add some glacier outlines
-
-# Load the outlines
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-
-# Need to create a common matplotlib Axes to plot both on the same figure
-# The xlim/ylim commands are necessary only because outlines extend further than the raster extent
-ax = plt.subplot(111)
-ddem.show(ax=ax, cmap="coolwarm_r", vmin=-20, vmax=20, cbar_title="Elevation change (m)")
-glacier_outlines.ds.plot(ax=ax, fc="none", ec="k")
-plt.xlim(ddem.bounds.left, ddem.bounds.right)
-plt.ylim(ddem.bounds.bottom, ddem.bounds.top)
-plt.title("With glacier outlines")
-plt.show()
-
-# %%
-# For missing values, ``xdem`` provides a number of interpolation methods which are shown in the other examples.
-
-# %%
-# Saving the output to a file is also very simple
-
-ddem.save("temp.tif")
-
-# %%
-# ... and that's it!
diff --git a/examples/basic/plot_icp_coregistration.py b/examples/basic/plot_icp_coregistration.py
deleted file mode 100644
index 679a6176..00000000
--- a/examples/basic/plot_icp_coregistration.py
+++ /dev/null
@@ -1,99 +0,0 @@
-"""
-Iterative Closest Point coregistration
-======================================
-Some DEMs may, for one or more reasons, be erroneously rotated in the X, Y or Z directions.
-Established coregistration approaches like :ref:`coregistration-nuthkaab` work great for X, Y and Z *translations*, but rotations are not accounted for at all.
-
-Iterative Closest Point (ICP) is one method that takes both rotation and translation into account.
-It is however not as good as :ref:`coregistration-nuthkaab` when it comes to sub-pixel accuracy.
-Fortunately, ``xdem`` provides the best of both worlds by allowing a combination of the two.
-
-**Reference**: `Besl and McKay (1992) `_.
-"""
-# sphinx_gallery_thumbnail_number = 2
-import matplotlib.pyplot as plt
-import numpy as np
-
-import xdem
-
-# %%
-# Let's load a DEM and crop it to a single mountain on Svalbard, called Battfjellet.
-# Its aspects vary in every direction, and it is therefore a good candidate for coregistration exercises.
-dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-
-subset_extent = [523000, 8660000, 529000, 8665000]
-dem.crop(subset_extent)
-
-# %%
-# Let's plot a hillshade of the mountain for context.
-xdem.terrain.hillshade(dem).show(cmap="gray")
-
-# %%
-# To try the effects of rotation, we can artificially rotate the DEM using a transformation matrix.
-# Here, a rotation of just one degree is attempted.
-# But keep in mind: the window is 6 km wide; 1 degree of rotation at the center corresponds to a vertical difference of about 52 m at the edges!
-
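-# A quick arithmetic check of that figure (a sketch): half the window width times the tangent of 1 degree.
-print(f"Vertical difference at the edge: {3000 * np.tan(np.deg2rad(1)):.1f} m")
-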
-rotation = np.deg2rad(1)
-rotation_matrix = np.array(
- [
- [np.cos(rotation), 0, np.sin(rotation), 0],
- [0, 1, 0, 0],
- [-np.sin(rotation), 0, np.cos(rotation), 0],
- [0, 0, 0, 1],
- ]
-)
-
-# This will apply the matrix along the center of the DEM
-rotated_dem_data = xdem.coreg.apply_matrix(dem.data.squeeze(), transform=dem.transform, matrix=rotation_matrix)
-rotated_dem = xdem.DEM.from_array(rotated_dem_data, transform=dem.transform, crs=dem.crs, nodata=-9999)
-
-# %%
-# We can plot the difference between the original and rotated DEM.
-# It is now artificially tilting from east down to the west.
-diff_before = dem - rotated_dem
-diff_before.show(cmap="coolwarm_r", vmin=-20, vmax=20)
-plt.show()
-
-# %%
-# As previously mentioned, ``NuthKaab`` works well on sub-pixel scale but does not handle rotation.
-# ``ICP`` works with rotation but lacks the sub-pixel accuracy.
-# Luckily, these can be combined!
-# Any :class:`xdem.coreg.Coreg` subclass can be added with another, making a :class:`xdem.coreg.CoregPipeline`.
-# With a pipeline, each step is run sequentially, potentially leading to a better result.
-# Let's try all three approaches: ``ICP``, ``NuthKaab`` and ``ICP + NuthKaab``.
-
-approaches = [
- (xdem.coreg.ICP(), "ICP"),
- (xdem.coreg.NuthKaab(), "NuthKaab"),
- (xdem.coreg.ICP() + xdem.coreg.NuthKaab(), "ICP + NuthKaab"),
-]
-
-
-plt.figure(figsize=(6, 12))
-
-for i, (approach, name) in enumerate(approaches):
- approach.fit(
- reference_dem=dem,
- dem_to_be_aligned=rotated_dem,
- )
-
- corrected_dem = approach.apply(dem=rotated_dem)
-
- diff = dem - corrected_dem
-
- ax = plt.subplot(3, 1, i + 1)
- plt.title(name)
- diff.show(cmap="coolwarm_r", vmin=-20, vmax=20, ax=ax)
-
-plt.tight_layout()
-plt.show()
-
-
-# %%
-# The results show what we expected:
-#
-# * ``ICP`` alone handled the rotational offset, but left a horizontal offset as it is not sub-pixel accurate (in this case, the resolution is 20x20m).
-# * ``NuthKaab`` barely helped at all, since the offset is purely rotational.
-# * ``ICP + NuthKaab`` first handled the rotation, then fit the reference with sub-pixel accuracy.
-#
-# The last result is an almost identical raster that was offset but then corrected back to its original position!
diff --git a/examples/basic/plot_infer_heterosc.py b/examples/basic/plot_infer_heterosc.py
deleted file mode 100644
index 2d1fe528..00000000
--- a/examples/basic/plot_infer_heterosc.py
+++ /dev/null
@@ -1,57 +0,0 @@
-"""
-Elevation error map
-===================
-
-Digital elevation models have a precision that can vary with terrain and instrument-related variables. Here, we
-rely on a non-stationary spatial statistics framework to estimate and model this variability in elevation error,
-using terrain slope and maximum curvature as explanatory variables, with stable terrain as an error proxy for moving
-terrain.
-
-**Reference**: `Hugonnet et al. (2022) `_, Figs. 4 and S6–S9. Equations 7
-or 8 can be used to convert elevation change errors into elevation errors.
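-
-For instance, under the common assumption of independent errors contributing equally from the two DEMs, such a
-conversion reduces to a simple quadratic relation (a minimal sketch; see the reference above for the exact
-formulations):
-
-.. code-block:: python
-
-    import numpy as np
-
-    # Placeholder value; assumes both DEMs contribute equal, independent errors
-    sigma_dh = 3.0  # error of the elevation difference (m)
-    sigma_dem = sigma_dh / np.sqrt(2)  # error of a single DEM (m), ~2.1 m here
-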
-"""
-import geoutils as gu
-
-# sphinx_gallery_thumbnail_number = 1
-import xdem
-
-# %%
-# We load a difference of DEMs at Longyearbyen, already coregistered using :ref:`coregistration-nuthkaab` as shown in
-# the :ref:`sphx_glr_basic_examples_plot_nuth_kaab.py` example. We also load the reference DEM to derive terrain
-# attributes and the glacier outlines here corresponding to moving terrain.
-dh = xdem.DEM(xdem.examples.get_path("longyearbyen_ddem"))
-ref_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-
-# %%
-# We derive the terrain slope and maximum curvature from the reference DEM.
-slope, maximum_curvature = xdem.terrain.get_terrain_attribute(ref_dem, attribute=["slope", "maximum_curvature"])
-
-# %%
-# Then, we run the pipeline for inference of elevation heteroscedasticity from stable terrain:
-errors, df_binning, error_function = xdem.spatialstats.infer_heteroscedasticity_from_stable(
- dvalues=dh, list_var=[slope, maximum_curvature], list_var_names=["slope", "maxc"], unstable_mask=glacier_outlines
-)
-
-# %%
-# The first output corresponds to the error map for the DEM (:math:`\pm` 1\ :math:`\sigma` level):
-errors.show(vmin=2, vmax=7, cmap="Reds", cbar_title=r"Elevation error (1$\sigma$, m)")
-
-# %%
-# The second output is the dataframe of 2D binning with slope and maximum curvature:
-df_binning
-
-# %%
-# The third output is the 2D binning interpolant, i.e. an error function with the slope and maximum curvature
-# (*Note: below we divide the maximum curvature by 100 to convert it to* m\ :sup:`-1`):
-for slope, maxc in [(0, 0), (40, 0), (0, 5), (40, 5)]:
- print(
- "Error for a slope of {:.0f} degrees and"
- " {:.2f} m-1 max. curvature: {:.1f} m".format(slope, maxc / 100, error_function((slope, maxc)))
- )
-
-# %%
-# This pipeline will not always work optimally with default parameters: spread estimates can be affected by skewed
-# distributions, the binning can be affected by an extreme range of values, and some DEMs do not have any error
-# variability with terrain (e.g., terrestrial photogrammetry). **To learn how to tune more parameters and use the
-# subfunctions, see the gallery example:** :ref:`sphx_glr_advanced_examples_plot_heterosc_estimation_modelling.py`!
diff --git a/examples/basic/plot_infer_spatial_correlation.py b/examples/basic/plot_infer_spatial_correlation.py
deleted file mode 100644
index 83fc7785..00000000
--- a/examples/basic/plot_infer_spatial_correlation.py
+++ /dev/null
@@ -1,72 +0,0 @@
-"""
-Spatial correlation of errors
-=============================
-
-Digital elevation models have errors that are spatially correlated due to instrument or processing effects. Here, we
-rely on a non-stationary spatial statistics framework to estimate and model spatial correlations in elevation error.
-We use a sum of variogram forms to model this correlation, with stable terrain as an error proxy for moving terrain.
-
-**Reference**: `Hugonnet et al. (2022) `_, Figure 5 and Equations 13–16.
-"""
-import geoutils as gu
-
-# sphinx_gallery_thumbnail_number = 1
-import xdem
-
-# %%
-# We load a difference of DEMs at Longyearbyen, already coregistered using :ref:`coregistration-nuthkaab` as shown in
-# the :ref:`sphx_glr_basic_examples_plot_nuth_kaab.py` example. We also load the glacier outlines here corresponding to
-# moving terrain.
-dh = xdem.DEM(xdem.examples.get_path("longyearbyen_ddem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-
-# %%
-# Then, we run the pipeline for inference of the spatial correlation of errors from stable terrain (*Note: we pass a*
-# ``random_state`` *argument to ensure a fixed, reproducible random subsampling in this example*). We ask for a fit with
-# a Gaussian model at short range (as it is passed first), and a Spherical model at long range (as it is passed second):
-(
- df_empirical_variogram,
- df_model_params,
- spatial_corr_function,
-) = xdem.spatialstats.infer_spatial_correlation_from_stable(
- dvalues=dh, list_models=["Gaussian", "Spherical"], unstable_mask=glacier_outlines, random_state=42
-)
-
-# %%
-# The first output corresponds to the dataframe of the empirical variogram, by default estimated using Dowd's estimator
-# and the circular sampling scheme of ``skgstat.RasterEquidistantMetricSpace`` (Fig. S13 of Hugonnet et al. (2022)). The
-# ``lags`` column is the upper bound of each spatial lag bin (the lower bound of the first bin being 0), the ``exp`` column is the
-# "experimental" variance value of the variogram in that bin, the ``count`` column is the number of pairwise samples, and
-# ``err_exp`` is the 1-sigma error of the "experimental" variance, estimated if more than one variogram is sampled with the
-# ``n_variograms`` parameter.
-df_empirical_variogram
-
-# %%
-# The second output is the dataframe of optimized model parameters (``range``, ``sill``, and possibly ``smoothness``)
-# for the sum of a Gaussian and a Spherical model:
-df_model_params
-
-# %%
-# The third output is the spatial correlation function with spatial lags, derived from the variogram:
-for spatial_lag in [0, 100, 1000, 10000, 30000]:
- print(
- "Errors are correlated at {:.1f}% for a {:,.0f} m spatial lag".format(
- spatial_corr_function(spatial_lag) * 100, spatial_lag
- )
- )
-
-# %%
-# We can plot the empirical variogram and its model on a non-linear X-axis to identify the multi-scale correlations.
-xdem.spatialstats.plot_variogram(
- df=df_empirical_variogram,
- list_fit_fun=[xdem.spatialstats.get_variogram_model_func(df_model_params)],
- xlabel="Spatial lag (m)",
- ylabel="Variance of\nelevation differences (m)",
- xscale_range_split=[100, 1000],
-)
-
-# %%
-# This pipeline will not always work optimally with default parameters: variogram sampling is more robust with many
-# samples but takes longer to compute, and the fitting might require several attempts with different model forms, and
-# possibly bounds and first guesses, to help the least-squares optimization. **To learn how to tune more parameters and
-# use the subfunctions, see the gallery example:** :ref:`sphx_glr_advanced_examples_plot_variogram_estimation_modelling.py`!
diff --git a/examples/basic/plot_nuth_kaab.py b/examples/basic/plot_nuth_kaab.py
deleted file mode 100644
index 61bf86b8..00000000
--- a/examples/basic/plot_nuth_kaab.py
+++ /dev/null
@@ -1,64 +0,0 @@
-"""
-Nuth and Kääb coregistration
-============================
-
-Nuth and Kääb (`2011 `_) coregistration allows horizontal and vertical shifts to be estimated and corrected.
-In ``xdem``, this approach is implemented through the :class:`xdem.coreg.NuthKaab` class.
-
-For more information about the approach, see :ref:`coregistration-nuthkaab`.
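-
-The method exploits the fact that, for a purely horizontal shift, the elevation differences divided by the tangent
-of the slope vary with terrain aspect following a cosine curve. A minimal sketch of that relationship, with
-hypothetical shift values:
-
-.. code-block:: python
-
-    import numpy as np
-
-    aspect = np.linspace(0, 2 * np.pi, 100)  # terrain aspect (radians)
-    shift_magnitude, shift_direction = 5.0, np.deg2rad(30)  # hypothetical horizontal shift (m, radians)
-    dh_over_tan_slope = shift_magnitude * np.cos(shift_direction - aspect)
-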
-"""
-import geoutils as gu
-import numpy as np
-
-import xdem
-
-# %%
-# **Example files**
-reference_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-dem_to_be_aligned = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-
-# Create a stable ground mask (not glacierized) to mark "inlier data"
-inlier_mask = ~glacier_outlines.create_mask(reference_dem)
-
-
-# %%
-# The DEM to be aligned (a 1990 photogrammetry-derived DEM) has some vertical and horizontal biases that we want to correct.
-# These can be visualized by plotting a change map:
-
-diff_before = reference_dem - dem_to_be_aligned
-diff_before.show(cmap="coolwarm_r", vmin=-10, vmax=10, cbar_title="Elevation change (m)")
-
-
-# %%
-# Horizontal and vertical shifts can be estimated using :class:`xdem.coreg.NuthKaab`.
-# First, the shifts are estimated, and then they can be applied to the data:
-
-nuth_kaab = xdem.coreg.NuthKaab()
-
-nuth_kaab.fit(reference_dem, dem_to_be_aligned, inlier_mask)
-
-aligned_dem = nuth_kaab.apply(dem_to_be_aligned)
-
-# %%
-# Then, the new difference can be plotted to validate that it improved.
-
-diff_after = reference_dem - aligned_dem
-diff_after.show(cmap="coolwarm_r", vmin=-10, vmax=10, cbar_title="Elevation change (m)")
-
-
-# %%
-# We compare the median and NMAD to validate numerically that there was an improvement (see :ref:`robuststats-meanstd`):
-inliers_before = diff_before[inlier_mask]
-med_before, nmad_before = np.median(inliers_before), xdem.spatialstats.nmad(inliers_before)
-
-inliers_after = diff_after[inlier_mask]
-med_after, nmad_after = np.median(inliers_after), xdem.spatialstats.nmad(inliers_after)
-
-print(f"Error before: median = {med_before:.2f} - NMAD = {nmad_before:.2f} m")
-print(f"Error after: median = {med_after:.2f} - NMAD = {nmad_after:.2f} m")
-
-# %%
-# In the plot above, one may notice a positive (blue) tendency toward the east.
-# The 1990 DEM is a mosaic, and likely has a "seam" near there.
-# :ref:`sphx_glr_advanced_examples_plot_blockwise_coreg.py` tackles this issue, using a nonlinear coregistration approach.
diff --git a/examples/basic/plot_spatial_error_propagation.py b/examples/basic/plot_spatial_error_propagation.py
deleted file mode 100644
index 1b126832..00000000
--- a/examples/basic/plot_spatial_error_propagation.py
+++ /dev/null
@@ -1,90 +0,0 @@
-"""
-Spatial propagation of elevation errors
-=======================================
-
-Propagating elevation errors spatially while accounting for heteroscedasticity and spatial correlation is complex. It
-requires computing the pairwise correlations between all points of an area of interest (be it for a sum, mean, or
-other operation), which is computationally intensive. Here, we rely on published formulations to perform
-computationally-efficient spatial propagation for the mean of elevation (or elevation differences) in an area.
-
-**References**: `Hugonnet et al. (2022) `_, Figure S16, Equations 17–19 and
-`Rolstad et al. (2009) `_, Equation 8.
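-
-In essence, these formulations approximate the standard error of the mean over an area as the average per-pixel
-error divided by the square root of a number of effective samples, which is much smaller than the pixel count when
-errors are spatially correlated (a minimal sketch with placeholder values):
-
-.. code-block:: python
-
-    import numpy as np
-
-    mean_pixel_error = 2.5  # average per-pixel elevation error in the area (m), placeholder value
-    n_eff = 100.0  # number of effective samples, placeholder value
-    stderr_mean = mean_pixel_error / np.sqrt(n_eff)  # ~0.25 m
-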
-"""
-import geoutils as gu
-import matplotlib.pyplot as plt
-
-# sphinx_gallery_thumbnail_number = 1
-import numpy as np
-
-import xdem
-
-# %%
-# We load the same data, and perform the same calculations on heteroscedasticity and spatial correlations of errors as
-# in the :ref:`sphx_glr_basic_examples_plot_infer_heterosc.py` and :ref:`sphx_glr_basic_examples_plot_infer_spatial_correlation.py`
-# examples.
-
-dh = xdem.DEM(xdem.examples.get_path("longyearbyen_ddem"))
-ref_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-slope, maximum_curvature = xdem.terrain.get_terrain_attribute(ref_dem, attribute=["slope", "maximum_curvature"])
-errors, df_binning, error_function = xdem.spatialstats.infer_heteroscedasticity_from_stable(
- dvalues=dh, list_var=[slope, maximum_curvature], list_var_names=["slope", "maxc"], unstable_mask=glacier_outlines
-)
-
-# %%
-# We use the error map to standardize the elevation differences before variogram estimation, following Equation 12 of
-# Hugonnet et al. (2022), which is more robust as it removes the variance variability due to heteroscedasticity.
-zscores = dh / errors
-emp_variogram, params_variogram_model, spatial_corr_function = xdem.spatialstats.infer_spatial_correlation_from_stable(
- dvalues=zscores, list_models=["Gaussian", "Spherical"], unstable_mask=glacier_outlines, random_state=42
-)
-
-# %%
-# With our estimated heteroscedasticity and spatial correlation, we can now perform the spatial propagation of errors.
-# We select two glaciers intersecting this elevation change map in Svalbard. The best estimation of their standard error
-# is done by directly providing the shapefile, which relies on Equation 18 of Hugonnet et al. (2022).
-areas = [
- glacier_outlines.ds[glacier_outlines.ds["NAME"] == "Brombreen"],
- glacier_outlines.ds[glacier_outlines.ds["NAME"] == "Medalsbreen"],
-]
-stderr_glaciers = xdem.spatialstats.spatial_error_propagation(
- areas=areas, errors=errors, params_variogram_model=params_variogram_model
-)
-
-for glacier_name, stderr_gla in [("Brombreen", stderr_glaciers[0]), ("Medalsbreen", stderr_glaciers[1])]:
- print(f"The error (1-sigma) in mean elevation change for {glacier_name} is {stderr_gla:.2f} meters.")
-
-# %%
-# When passing a numerical area value, we compute an approximation assuming a disk shape, from Equation 8 of Rolstad et
-# al. (2009). This approximation is practical to visualize how the elevation error changes when averaging over different
-# area sizes, but it is less accurate for estimating the standard error of a specific area shape.
-areas = 10 ** np.linspace(1, 12)
-stderrs = xdem.spatialstats.spatial_error_propagation(
- areas=areas, errors=errors, params_variogram_model=params_variogram_model
-)
-plt.plot(areas / 10**6, stderrs)
-plt.xlabel("Averaging area (km²)")
-plt.ylabel("Standard error (m)")
-plt.vlines(
- x=np.pi * params_variogram_model["range"].values[0] ** 2 / 10**6,
- ymin=np.min(stderrs),
- ymax=np.max(stderrs),
- colors="red",
- linestyles="dashed",
- label="Disk area with radius the\n1st correlation range of {:,.0f} meters".format(
- params_variogram_model["range"].values[0]
- ),
-)
-plt.vlines(
- x=np.pi * params_variogram_model["range"].values[1] ** 2 / 10**6,
- ymin=np.min(stderrs),
- ymax=np.max(stderrs),
- colors="blue",
- linestyles="dashed",
- label="Disk area with radius the\n2nd correlation range of {:,.0f} meters".format(
- params_variogram_model["range"].values[1]
- ),
-)
-plt.xscale("log")
-plt.legend()
-plt.show()
diff --git a/examples/basic/plot_terrain_attributes.py b/examples/basic/plot_terrain_attributes.py
deleted file mode 100644
index 50a132e3..00000000
--- a/examples/basic/plot_terrain_attributes.py
+++ /dev/null
@@ -1,161 +0,0 @@
-"""
-Terrain attributes
-==================
-
-Terrain attributes generated from a DEM have a multitude of uses for analytic and visual purposes.
-Here is an example of how to generate these products.
-
-For more information, see the :ref:`terrain-attributes` chapter and the
-:ref:`sphx_glr_advanced_examples_plot_slope_methods.py` example.
-"""
-# sphinx_gallery_thumbnail_number = 12
-import matplotlib.pyplot as plt
-
-import xdem
-
-# %%
-# **Example data**
-
-dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
-
-
-def plot_attribute(attribute, cmap, label=None, vlim=None):
-
-    add_cbar = label is not None
-
- fig = plt.figure(figsize=(8, 5))
- ax = fig.add_subplot(111)
-
-    # Build the vmin/vmax keyword arguments; default to no limits so that "vlims" is always defined
-    vlims = {}
-    if vlim is not None:
-        if isinstance(vlim, (int, float)):
-            vlims = {"vmin": -vlim, "vmax": vlim}
-        elif len(vlim) == 2:
-            vlims = {"vmin": vlim[0], "vmax": vlim[1]}
-
- attribute.show(ax=ax, cmap=cmap, add_cbar=add_cbar, cbar_title=label, **vlims)
-
- plt.xticks([])
- plt.yticks([])
- plt.tight_layout()
-
- plt.show()
-
-
-# %%
-# Slope
-# -----
-
-slope = xdem.terrain.slope(dem)
-
-plot_attribute(slope, "Reds", "Slope (°)")
-
-# %%
-# Note that all functions also work with NumPy arrays as input, as long as the resolution is specified.
-
-slope = xdem.terrain.slope(dem.data, resolution=dem.res)
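-
-# %%
-# As a rough cross-check (a sketch only; :func:`xdem.terrain.slope` relies on neighborhood-based methods such as
-# Horn (1981), so values will differ slightly at the pixel level), slope can be approximated from the elevation gradient:
-import numpy as np
-
-dem_arr = dem.data.filled(np.nan).squeeze()
-grad_y, grad_x = np.gradient(dem_arr, dem.res[1], dem.res[0])
-slope_approx = np.degrees(np.arctan(np.sqrt(grad_x**2 + grad_y**2)))
-print(f"Maximum approximated slope: {np.nanmax(slope_approx):.1f}°")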
-
-# %%
-# Aspect
-# ------
-
-aspect = xdem.terrain.aspect(dem)
-
-plot_attribute(aspect, "twilight", "Aspect (°)")
-
-# %%
-# Hillshade
-# ---------
-
-hillshade = xdem.terrain.hillshade(dem, azimuth=315.0, altitude=45.0)
-
-plot_attribute(hillshade, "Greys_r")
-
-# %%
-# Curvature
-# ---------
-
-curvature = xdem.terrain.curvature(dem)
-
-plot_attribute(curvature, "RdGy_r", "Curvature (100 / m)", vlim=1)
-
-# %%
-# Planform curvature
-# ------------------
-
-planform_curvature = xdem.terrain.planform_curvature(dem)
-
-plot_attribute(planform_curvature, "RdGy_r", "Planform curvature (100 / m)", vlim=1)
-
-# %%
-# Profile curvature
-# -----------------
-profile_curvature = xdem.terrain.profile_curvature(dem)
-
-plot_attribute(profile_curvature, "RdGy_r", "Profile curvature (100 / m)", vlim=1)
-
-# %%
-# Topographic Position Index
-# --------------------------
-tpi = xdem.terrain.topographic_position_index(dem)
-
-plot_attribute(tpi, "Spectral", "Topographic Position Index", vlim=5)
-
-# %%
-# Terrain Ruggedness Index
-# ------------------------
-tri = xdem.terrain.terrain_ruggedness_index(dem)
-
-plot_attribute(tri, "Purples", "Terrain Ruggedness Index")
-
-# %%
-# Roughness
-# ---------
-roughness = xdem.terrain.roughness(dem)
-
-plot_attribute(roughness, "Oranges", "Roughness")
-
-# %%
-# Rugosity
-# --------
-rugosity = xdem.terrain.rugosity(dem)
-
-plot_attribute(rugosity, "YlOrRd", "Rugosity")
-
-# %%
-# Fractal roughness
-# -----------------
-fractal_roughness = xdem.terrain.fractal_roughness(dem)
-
-plot_attribute(fractal_roughness, "Reds", "Fractal roughness")
-
-# %%
-# Generating multiple attributes at once
-# --------------------------------------
-
-attributes = xdem.terrain.get_terrain_attribute(
- dem.data,
- resolution=dem.res,
- attribute=["hillshade", "slope", "aspect", "curvature", "terrain_ruggedness_index", "rugosity"],
-)
-
-plt.figure(figsize=(8, 6.5))
-
-plt_extent = [dem.bounds.left, dem.bounds.right, dem.bounds.bottom, dem.bounds.top]
-
-cmaps = ["Greys_r", "Reds", "twilight", "RdGy_r", "Purples", "YlOrRd"]
-labels = ["Hillshade", "Slope (°)", "Aspect (°)", "Curvature (100 / m)", "Terrain Ruggedness Index", "Rugosity"]
-vlims = [(None, None) for i in range(6)]
-vlims[3] = [-2, 2]
-
-for i in range(6):
- plt.subplot(3, 2, i + 1)
- plt.imshow(attributes[i].squeeze(), cmap=cmaps[i], extent=plt_extent, vmin=vlims[i][0], vmax=vlims[i][1])
- cbar = plt.colorbar()
- cbar.set_label(labels[i])
- plt.xticks([])
- plt.yticks([])
-
-plt.tight_layout()
-plt.show()
diff --git a/mypy.ini b/mypy.ini
deleted file mode 100644
index 3dc875f9..00000000
--- a/mypy.ini
+++ /dev/null
@@ -1,2 +0,0 @@
-[mypy]
-plugins = numpy.typing.mypy_plugin
diff --git a/requirements.txt b/requirements.txt
deleted file mode 100644
index 0176ab5e..00000000
--- a/requirements.txt
+++ /dev/null
@@ -1,19 +0,0 @@
-# This file is auto-generated from environment.yml, do not modify.
-# See that file for comments about the need/usage of each dependency.
-
-geopandas>=0.10.0
-fiona
-shapely
-numba
-numpy
-matplotlib
-pyproj>=3.4
-rasterio>=1.3
-scipy
-tqdm
-scikit-image
-scikit-gstat>=1.0
-geoutils==0.0.15
-pip
-setuptools>=42
-setuptools_scm[toml]>=6.2
diff --git a/setup.cfg b/setup.cfg
deleted file mode 100644
index 6c2ea90a..00000000
--- a/setup.cfg
+++ /dev/null
@@ -1,79 +0,0 @@
-[metadata]
-author = The GlacioHack Team
-name = xdem
-version = 0.0.17
-description = Analysis of digital elevation models (DEMs)
-keywords = dem, elevation, geoutils, xarray
-long_description = file: README.md
-long_description_content_type = text/markdown
-license = MIT
-license_files = LICENSE
-platform = any
-classifiers =
- Development Status :: 4 - Beta
- Intended Audience :: Developers
- Intended Audience :: Science/Research
- Natural Language :: English
- License :: OSI Approved :: MIT License
- Operating System :: OS Independent
- Topic :: Scientific/Engineering :: GIS
- Topic :: Scientific/Engineering :: Image Processing
- Topic :: Scientific/Engineering :: Information Analysis
- Programming Language :: Python
- Programming Language :: Python :: 3.9
- Programming Language :: Python :: 3.10
- Programming Language :: Python :: 3.11
- Programming Language :: Python :: 3
- Topic :: Software Development :: Libraries :: Python Modules
- Typing :: Typed
-url = https://github.com/GlacioHack/xdem
-download_url = https://pypi.org/project/xdem/
-
-[options]
-packages = find:
-zip_safe = False # https://mypy.readthedocs.io/en/stable/installed_packages.html
-include_package_data = True
-python_requires = >=3.9
-# Avoid pinning dependencies in requirements.txt (which we don't do anyways, and we rely mostly on Conda)
-# (https://caremad.io/posts/2013/07/setup-vs-requirement/, https://github.com/pypa/setuptools/issues/1951)
-install_requires = file: requirements.txt
-
-[options.package_data]
-xdem =
- py.typed
-
-[options.packages.find]
-include =
- xdem
- xdem.coreg
-
-[options.extras_require]
-opt =
- opencv
- openh264
- pytransform3d
- richdem
- noisyopt
-test =
- pytest
- pytest-xdist
- pyyaml
- flake8
- pylint
-doc =
- sphinx
- sphinx-book-theme
- sphinxcontrib-programoutput
- sphinx-design
- sphinx-autodoc-typehints
- sphinx-gallery
- autovizwidget
- graphviz
- myst-nb
- numpydoc
-dev =
- %(opt)s
- %(test)s
- %(doc)s
-all =
- %(dev)s
diff --git a/tests/test_coreg/__init__.py b/tests/test_coreg/__init__.py
deleted file mode 100644
index e69de29b..00000000
diff --git a/tests/test_coreg/test_affine.py b/tests/test_coreg/test_affine.py
deleted file mode 100644
index baf5e54e..00000000
--- a/tests/test_coreg/test_affine.py
+++ /dev/null
@@ -1,302 +0,0 @@
-"""Functions to test the affine coregistrations."""
-from __future__ import annotations
-
-import copy
-import warnings
-
-import numpy as np
-import pytest
-import rasterio as rio
-from geoutils import Raster, Vector
-from geoutils.raster import RasterType
-
-import xdem
-from xdem import coreg, examples
-from xdem.coreg.affine import AffineCoreg, CoregDict
-
-
-def load_examples() -> tuple[RasterType, RasterType, Vector]:
- """Load example files to try coregistration methods with."""
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- reference_raster = Raster(examples.get_path("longyearbyen_ref_dem"))
- to_be_aligned_raster = Raster(examples.get_path("longyearbyen_tba_dem"))
- glacier_mask = Vector(examples.get_path("longyearbyen_glacier_outlines"))
-
- return reference_raster, to_be_aligned_raster, glacier_mask
-
-
-class TestAffineCoreg:
-
- ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and mask.
- inlier_mask = ~outlines.create_mask(ref)
-
- fit_params = dict(
- reference_dem=ref.data,
- dem_to_be_aligned=tba.data,
- inlier_mask=inlier_mask,
- transform=ref.transform,
- crs=ref.crs,
- verbose=False,
- )
- # Create some 3D coordinates with Z coordinates being 0 to try the apply_pts functions.
- points = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [0, 0, 0, 0]], dtype="float64").T
-
- def test_from_classmethods(self) -> None:
- warnings.simplefilter("error")
-
- # Check that the from_matrix function works as expected.
- vshift = 5
- matrix = np.diag(np.ones(4, dtype=float))
- matrix[2, 3] = vshift
- coreg_obj = AffineCoreg.from_matrix(matrix)
- transformed_points = coreg_obj.apply_pts(self.points)
- assert transformed_points[0, 2] == vshift
-
- # Check that the from_translation function works as expected.
- x_offset = 5
- coreg_obj2 = AffineCoreg.from_translation(x_off=x_offset)
- transformed_points2 = coreg_obj2.apply_pts(self.points)
- assert np.array_equal(self.points[:, 0] + x_offset, transformed_points2[:, 0])
-
- # Try to make a Coreg object from a nan translation (should fail).
- try:
- AffineCoreg.from_translation(np.nan)
- except ValueError as exception:
- if "non-finite values" not in str(exception):
- raise exception
-
- def test_vertical_shift(self) -> None:
- warnings.simplefilter("error")
-
- # Create a vertical shift correction instance
- vshiftcorr = coreg.VerticalShift()
- # Fit the vertical shift model to the data
- vshiftcorr.fit(**self.fit_params)
-
- # Check that a vertical shift was found.
- assert vshiftcorr._meta.get("vshift") is not None
- assert vshiftcorr._meta["vshift"] != 0.0
-
- # Copy the vertical shift to see if it changes in the test (it shouldn't)
- vshift = copy.copy(vshiftcorr._meta["vshift"])
-
- # Check that the to_matrix function works as it should
- matrix = vshiftcorr.to_matrix()
- assert matrix[2, 3] == vshift, matrix
-
- # Check that the first z coordinate is now the vertical shift
- assert vshiftcorr.apply_pts(self.points)[0, 2] == vshiftcorr._meta["vshift"]
-
- # Apply the model to correct the DEM
- tba_unshifted, _ = vshiftcorr.apply(self.tba.data, self.ref.transform, self.ref.crs)
-
- # Create a new vertical shift correction model
- vshiftcorr2 = coreg.VerticalShift()
- # Check that this is indeed a new object
- assert vshiftcorr is not vshiftcorr2
- # Fit the corrected DEM to see if the vertical shift will be close to or at zero
- vshiftcorr2.fit(
- reference_dem=self.ref.data,
- dem_to_be_aligned=tba_unshifted,
- transform=self.ref.transform,
- crs=self.ref.crs,
- inlier_mask=self.inlier_mask,
- )
- # Test the vertical shift
- newmeta: CoregDict = vshiftcorr2._meta
- new_vshift = newmeta["vshift"]
- assert np.abs(new_vshift) < 0.01
-
- # Check that the original model's vertical shift has not changed
- # (that the _meta dicts are two different objects)
- assert vshiftcorr._meta["vshift"] == vshift
-
- def test_all_nans(self) -> None:
- """Check that the coregistration approaches fail gracefully when given only nans."""
- dem1 = np.ones((50, 50), dtype=float)
- dem2 = dem1.copy() + np.nan
- affine = rio.transform.from_origin(0, 0, 1, 1)
- crs = rio.crs.CRS.from_epsg(4326)
-
- vshiftcorr = coreg.VerticalShift()
- icp = coreg.ICP()
-
- pytest.raises(ValueError, vshiftcorr.fit, dem1, dem2, transform=affine)
- pytest.raises(ValueError, icp.fit, dem1, dem2, transform=affine)
-
- dem2[[3, 20, 40], [2, 21, 41]] = 1.2
-
- vshiftcorr.fit(dem1, dem2, transform=affine, crs=crs)
-
- pytest.raises(ValueError, icp.fit, dem1, dem2, transform=affine)
-
- def test_coreg_example(self, verbose: bool = False) -> None:
- """
-        Test that the co-registration outputs computed on the example data are always the same. This overlaps with the test in
- test_examples.py, but helps identify from where differences arise.
- """
-
- # Run co-registration
- nuth_kaab = xdem.coreg.NuthKaab()
- nuth_kaab.fit(self.ref, self.tba, inlier_mask=self.inlier_mask, verbose=verbose, random_state=42)
-
- # Check the output metadata is always the same
- shifts = (nuth_kaab._meta["offset_east_px"], nuth_kaab._meta["offset_north_px"], nuth_kaab._meta["vshift"])
- assert shifts == pytest.approx((-0.463, -0.133, -1.9876264671765433))
-
- def test_gradientdescending(self, subsample: int = 10000, inlier_mask: bool = True, verbose: bool = False) -> None:
- """
-        Test that the co-registration outputs computed on the example data are always the same. This overlaps with the test in
- test_examples.py, but helps identify from where differences arise.
-
- It also implicitly tests the z_name kwarg and whether a geometry column can be provided instead of E/N cols.
- """
- if inlier_mask:
- inlier_mask = self.inlier_mask
-
- # Run co-registration
- gds = xdem.coreg.GradientDescending(subsample=subsample)
- gds.fit_pts(
- self.ref.to_points().ds,
- self.tba,
- inlier_mask=inlier_mask,
- verbose=verbose,
- subsample=subsample,
- z_name="b1",
- )
- assert gds._meta["offset_east_px"] == pytest.approx(-0.496000, rel=1e-1, abs=0.1)
- assert gds._meta["offset_north_px"] == pytest.approx(-0.1875, rel=1e-1, abs=0.1)
- assert gds._meta["vshift"] == pytest.approx(-1.8730, rel=1e-1)
-
- @pytest.mark.parametrize("shift_px", [(1, 1), (2, 2)]) # type: ignore
- @pytest.mark.parametrize("coreg_class", [coreg.NuthKaab, coreg.GradientDescending, coreg.ICP]) # type: ignore
- @pytest.mark.parametrize("points_or_raster", ["raster", "points"])
- def test_coreg_example_shift(self, shift_px, coreg_class, points_or_raster, verbose=False, subsample=5000):
- """
- For comparison of coreg algorithms:
-        Shift a ref_dem on purpose, e.g. shift_px = (1, 1), and then apply coreg to shift it back.
- """
- warnings.simplefilter("error")
- res = self.ref.res[0]
-
- # shift DEM by shift_px
- shifted_ref = self.ref.copy()
- shifted_ref.shift(shift_px[0] * res, shift_px[1] * res)
-
- shifted_ref_points = shifted_ref.to_points(as_array=False, subset=subsample, pixel_offset="center").ds
- shifted_ref_points["E"] = shifted_ref_points.geometry.x
- shifted_ref_points["N"] = shifted_ref_points.geometry.y
- shifted_ref_points.rename(columns={"b1": "z"}, inplace=True)
-
- kwargs = {} if coreg_class.__name__ != "GradientDescending" else {"subsample": subsample}
-
- coreg_obj = coreg_class(**kwargs)
-
- best_east_diff = 1e5
- best_north_diff = 1e5
- if points_or_raster == "raster":
- coreg_obj.fit(shifted_ref, self.ref, verbose=verbose, random_state=42)
- elif points_or_raster == "points":
- coreg_obj.fit_pts(shifted_ref_points, self.ref, verbose=verbose, random_state=42)
-
- if coreg_class.__name__ == "ICP":
- matrix = coreg_obj.to_matrix()
- # The ICP fit only creates a matrix and doesn't normally show the alignment in pixels
- # Since the test is formed to validate pixel shifts, these calls extract the approximate pixel shift
- # from the matrix (it's not perfect since rotation/scale can change it).
- coreg_obj._meta["offset_east_px"] = -matrix[0][3] / res
- coreg_obj._meta["offset_north_px"] = -matrix[1][3] / res
-
- # ICP can never be expected to be much better than 1px on structured data, as its implementation often finds a
- # minimum between two grid points. This is clearly warned for in the documentation.
- precision = 1e-2 if coreg_class.__name__ != "ICP" else 1
-
- if coreg_obj._meta["offset_east_px"] == pytest.approx(-shift_px[0], rel=precision) and coreg_obj._meta[
- "offset_north_px"
-        ] == pytest.approx(-shift_px[1], rel=precision):
- return
- best_east_diff = coreg_obj._meta["offset_east_px"] - shift_px[0]
- best_north_diff = coreg_obj._meta["offset_north_px"] - shift_px[1]
-
- raise AssertionError(f"Diffs are too big. east: {best_east_diff:.2f} px, north: {best_north_diff:.2f} px")
-
- def test_nuth_kaab(self) -> None:
- warnings.simplefilter("error")
-
- nuth_kaab = coreg.NuthKaab(max_iterations=10)
-
- # Synthesize a shifted and vertically offset DEM
- pixel_shift = 2
- vshift = 5
- shifted_dem = self.ref.data.squeeze().copy()
- shifted_dem[:, pixel_shift:] = shifted_dem[:, :-pixel_shift]
- shifted_dem[:, :pixel_shift] = np.nan
- shifted_dem += vshift
-
- # Fit the synthesized shifted DEM to the original
- nuth_kaab.fit(
- self.ref.data.squeeze(),
- shifted_dem,
- transform=self.ref.transform,
- crs=self.ref.crs,
- verbose=self.fit_params["verbose"],
- )
-
- # Make sure that the estimated offsets are similar to what was synthesized.
- assert nuth_kaab._meta["offset_east_px"] == pytest.approx(pixel_shift, abs=0.03)
- assert nuth_kaab._meta["offset_north_px"] == pytest.approx(0, abs=0.03)
- assert nuth_kaab._meta["vshift"] == pytest.approx(-vshift, 0.03)
-
- # Apply the estimated shift to "revert the DEM" to its original state.
- unshifted_dem, _ = nuth_kaab.apply(shifted_dem, transform=self.ref.transform, crs=self.ref.crs)
- # Measure the difference (should be more or less zero)
- diff = self.ref.data.squeeze() - unshifted_dem
- diff = diff.compressed() # turn into a 1D array with only unmasked values
-
- # Check that the median is very close to zero
- assert np.abs(np.median(diff)) < 0.01
- # Check that the RMSE is low
- assert np.sqrt(np.mean(np.square(diff))) < 1
-
- # Transform some arbitrary points.
- transformed_points = nuth_kaab.apply_pts(self.points)
-
- # Check that the x shift is close to the pixel_shift * image resolution
- assert abs((transformed_points[0, 0] - self.points[0, 0]) - pixel_shift * self.ref.res[0]) < 0.1
- # Check that the z shift is close to the original vertical shift.
- assert abs((transformed_points[0, 2] - self.points[0, 2]) + vshift) < 0.1
-
- def test_tilt(self) -> None:
- warnings.simplefilter("error")
-
- # Try a 1st degree deramping.
- tilt = coreg.Tilt()
-
- # Fit the data
- tilt.fit(**self.fit_params, random_state=42)
-
- # Apply the deramping to a DEM
- tilted_dem = tilt.apply(self.tba)
-
- # Get the periglacial offset after deramping
- periglacial_offset = (self.ref - tilted_dem)[self.inlier_mask]
- # Get the periglacial offset before deramping
- pre_offset = (self.ref - self.tba)[self.inlier_mask]
-
- # Check that the error improved
- assert np.abs(np.mean(periglacial_offset)) < np.abs(np.mean(pre_offset))
-
- # Check that the mean periglacial offset is low
- assert np.abs(np.mean(periglacial_offset)) < 0.02
-
- def test_icp_opencv(self) -> None:
- warnings.simplefilter("error")
-
- # Do a fast and dirty 3 iteration ICP just to make sure it doesn't error out.
- icp = coreg.ICP(max_iterations=3)
- icp.fit(**self.fit_params)
-
- aligned_dem, _ = icp.apply(self.tba.data, self.ref.transform, self.ref.crs)
-
- assert aligned_dem.shape == self.ref.data.squeeze().shape
diff --git a/tests/test_coreg/test_base.py b/tests/test_coreg/test_base.py
deleted file mode 100644
index d7ff64ef..00000000
--- a/tests/test_coreg/test_base.py
+++ /dev/null
@@ -1,1038 +0,0 @@
-"""Functions to test the coregistration base classes."""
-
-from __future__ import annotations
-
-import inspect
-import re
-import warnings
-from typing import Any, Callable
-
-import geoutils as gu
-import numpy as np
-import pytest
-import rasterio as rio
-from geoutils import Raster, Vector
-from geoutils.raster import RasterType
-
-with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- import xdem
- from xdem import coreg, examples, misc, spatialstats
- from xdem._typing import NDArrayf
- from xdem.coreg.base import Coreg, apply_matrix
-
-
-def load_examples() -> tuple[RasterType, RasterType, Vector]:
- """Load example files to try coregistration methods with."""
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- reference_raster = Raster(examples.get_path("longyearbyen_ref_dem"))
- to_be_aligned_raster = Raster(examples.get_path("longyearbyen_tba_dem"))
- glacier_mask = Vector(examples.get_path("longyearbyen_glacier_outlines"))
-
- return reference_raster, to_be_aligned_raster, glacier_mask
-
-
-class TestCoregClass:
-
- ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and mask.
- inlier_mask = ~outlines.create_mask(ref)
-
- fit_params = dict(
- reference_dem=ref.data,
- dem_to_be_aligned=tba.data,
- inlier_mask=inlier_mask,
- transform=ref.transform,
- crs=ref.crs,
- verbose=False,
- )
- # Create some 3D coordinates with Z coordinates being 0 to try the apply_pts functions.
- points = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [0, 0, 0, 0]], dtype="float64").T
-
- def test_init(self) -> None:
- """Test instantiation of Coreg"""
-
- c = coreg.Coreg()
-
- assert c._fit_called is False
- assert c._is_affine is None
- assert c._needs_vars is False
-
- @pytest.mark.parametrize("coreg_class", [coreg.VerticalShift, coreg.ICP, coreg.NuthKaab]) # type: ignore
- def test_copy(self, coreg_class: Callable[[], Coreg]) -> None:
-        """Test that copying works as expected (that no attributes still share references)."""
- warnings.simplefilter("error")
-
- # Create a coreg instance and copy it.
- corr = coreg_class()
- corr_copy = corr.copy()
-
- # Assign some attributes and metadata after copying, respecting the CoregDict type class
- corr.vshift = 1
- corr._meta["resolution"] = 30
- # Make sure these don't appear in the copy
- assert corr_copy._meta != corr._meta
- assert not hasattr(corr_copy, "vshift")
-
- def test_error_method(self) -> None:
- """Test different error measures."""
- dem1: NDArrayf = np.ones((50, 50)).astype(np.float32)
- # Create a vertically shifted dem
- dem2 = dem1.copy() + 2.0
- affine = rio.transform.from_origin(0, 0, 1, 1)
- crs = rio.crs.CRS.from_epsg(4326)
-
- vshiftcorr = coreg.VerticalShift()
- # Fit the vertical shift
- vshiftcorr.fit(dem1, dem2, transform=affine, crs=crs)
-
- # Check that the vertical shift after coregistration is zero
- assert vshiftcorr.error(dem1, dem2, transform=affine, crs=crs, error_type="median") == 0
-
- # Remove the vertical shift fit and see what happens.
- vshiftcorr._meta["vshift"] = 0
- # Now it should be equal to dem1 - dem2
- assert vshiftcorr.error(dem1, dem2, transform=affine, crs=crs, error_type="median") == -2
-
- # Create random noise and see if the standard deviation is equal (it should)
- dem3 = dem1.copy() + np.random.random(size=dem1.size).reshape(dem1.shape)
- assert abs(vshiftcorr.error(dem1, dem3, transform=affine, crs=crs, error_type="std") - np.std(dem3)) < 1e-6
-
- def test_ij_xy(self, i: int = 10, j: int = 20) -> None:
- """
- Test the reversibility of ij2xy and xy2ij, which is important for point co-registration.
- """
- x, y = self.ref.ij2xy(i, j, offset="ul")
- i, j = self.ref.xy2ij(x, y, shift_area_or_point=False)
- assert i == pytest.approx(10)
- assert j == pytest.approx(20)
-
- @pytest.mark.parametrize("subsample", [10, 10000, 0.5, 1]) # type: ignore
- def test_get_subsample_on_valid_mask(self, subsample: float | int) -> None:
- """Test the subsampling function called by all subclasses"""
-
- # Define a valid mask
- width = height = 50
- np.random.seed(42)
- valid_mask = np.random.randint(low=0, high=2, size=(width, height), dtype=bool)
-
- # Define a class with a subsample and random_state in the metadata
- coreg = Coreg(meta={"subsample": subsample, "random_state": 42})
- subsample_mask = coreg._get_subsample_on_valid_mask(valid_mask=valid_mask)
-
- # Check that it returns a same-shaped array that is boolean
- assert np.shape(valid_mask) == np.shape(subsample_mask)
- assert subsample_mask.dtype == bool
- # Check that the subsampled values are all within valid values
- assert all(valid_mask[subsample_mask])
-        # Check that the number of subsampled values is coherent, or the maximum possible
- if subsample <= 1:
- # If value lower than 1, fraction of valid pixels
- subsample_val: float | int = int(subsample * np.count_nonzero(valid_mask))
- else:
- # Otherwise the number of pixels
- subsample_val = subsample
- assert np.count_nonzero(subsample_mask) == min(subsample_val, np.count_nonzero(valid_mask))
-
- all_coregs = [
- coreg.VerticalShift,
- coreg.NuthKaab,
- coreg.ICP,
- coreg.Deramp,
- coreg.TerrainBias,
- coreg.DirectionalBias,
- ]
-
- @pytest.mark.parametrize("coreg", all_coregs) # type: ignore
- def test_subsample(self, coreg: Callable) -> None: # type: ignore
- warnings.simplefilter("error")
-
- # Check that default value is set properly
- coreg_full = coreg()
- argspec = inspect.getfullargspec(coreg)
- assert coreg_full._meta["subsample"] == argspec.defaults[argspec.args.index("subsample") - 1] # type: ignore
-
- # But can be overridden during fit
- coreg_full.fit(**self.fit_params, subsample=10000, random_state=42)
- assert coreg_full._meta["subsample"] == 10000
- # Check that the random state is properly set when subsampling explicitly or implicitly
- assert coreg_full._meta["random_state"] == 42
-
- # Test subsampled vertical shift correction
- coreg_sub = coreg(subsample=0.1)
- assert coreg_sub._meta["subsample"] == 0.1
-
- # Fit the vertical shift using 10% of the unmasked data using a fraction
- coreg_sub.fit(**self.fit_params, random_state=42)
- # Do the same but specify the pixel count instead.
- # They are not perfectly equal (np.count_nonzero(self.mask) // 2 would be exact)
- # But this would just repeat the subsample code, so that makes little sense to test.
- coreg_sub = coreg(subsample=self.tba.data.size // 10)
- assert coreg_sub._meta["subsample"] == self.tba.data.size // 10
- coreg_sub.fit(**self.fit_params, random_state=42)
-
- # Add a few performance checks
- coreg_name = coreg.__name__
- if coreg_name == "VerticalShift":
- # Check that the estimated vertical shifts are similar
- assert abs(coreg_sub._meta["vshift"] - coreg_full._meta["vshift"]) < 0.1
-
- elif coreg_name == "NuthKaab":
- # Calculate the difference in the full vs. subsampled matrices
- matrix_diff = np.abs(coreg_full.to_matrix() - coreg_sub.to_matrix())
-            # Check that the x/y/z differences do not exceed 50 cm
- assert np.count_nonzero(matrix_diff > 0.5) == 0
-
- elif coreg_name == "Tilt":
- # Check that the estimated biases are similar
- assert coreg_sub._meta["coefficients"] == pytest.approx(coreg_full._meta["coefficients"], rel=1e-1)
-
- def test_subsample__pipeline(self) -> None:
- """Test that the subsample argument works as intended for pipelines"""
-
- # Check definition during instantiation
- pipe = coreg.VerticalShift(subsample=200) + coreg.Deramp(subsample=5000)
-
- # Check the arguments are properly defined
- assert pipe.pipeline[0]._meta["subsample"] == 200
- assert pipe.pipeline[1]._meta["subsample"] == 5000
-
- # Check definition during fit
- pipe = coreg.VerticalShift() + coreg.Deramp()
- pipe.fit(**self.fit_params, subsample=1000)
- assert pipe.pipeline[0]._meta["subsample"] == 1000
- assert pipe.pipeline[1]._meta["subsample"] == 1000
-
- def test_subsample__errors(self) -> None:
- """Check proper errors are raised when using the subsample argument"""
-
- # A warning should be raised when overriding with fit if non-default parameter was passed during instantiation
- vshift = coreg.VerticalShift(subsample=100)
-
- with pytest.warns(
- UserWarning,
- match=re.escape(
- "Subsample argument passed to fit() will override non-default "
- "subsample value defined at instantiation. To silence this "
- "warning: only define 'subsample' in either fit(subsample=...) "
- "or instantiation e.g. VerticalShift(subsample=...)."
- ),
- ):
- vshift.fit(**self.fit_params, subsample=1000)
-
- # Same for a pipeline
- pipe = coreg.VerticalShift(subsample=200) + coreg.Deramp()
- with pytest.warns(
- UserWarning,
- match=re.escape(
- "Subsample argument passed to fit() will override non-default "
- "subsample values defined for individual steps of the pipeline. "
- "To silence this warning: only define 'subsample' in either "
- "fit(subsample=...) or instantiation e.g., VerticalShift(subsample=...)."
- ),
- ):
- pipe.fit(**self.fit_params, subsample=1000)
-
- # Same for a blockwise co-registration
- block = coreg.BlockwiseCoreg(coreg.VerticalShift(subsample=200), subdivision=4)
- with pytest.warns(
- UserWarning,
- match=re.escape(
- "Subsample argument passed to fit() will override non-default subsample "
- "values defined in the step within the blockwise method. To silence this "
- "warning: only define 'subsample' in either fit(subsample=...) or "
- "instantiation e.g., VerticalShift(subsample=...)."
- ),
- ):
- block.fit(**self.fit_params, subsample=1000)
-
- def test_coreg_raster_and_ndarray_args(self) -> None:
-
- # Create a small sample-DEM
- dem1 = xdem.DEM.from_array(
- np.arange(25, dtype="int32").reshape(5, 5),
- transform=rio.transform.from_origin(0, 5, 1, 1),
- crs=4326,
- nodata=-9999,
- )
- # Assign a funny value to one particular pixel. This is to validate that reprojection works perfectly.
- dem1.data[1, 1] = 100
-
- # Translate the DEM 1 "meter" right and add a vertical shift
- dem2 = dem1.reproject(dst_bounds=rio.coords.BoundingBox(1, 0, 6, 5), silent=True)
- dem2 += 1
-
- # Create a vertical shift correction for Rasters ("_r") and for arrays ("_a")
- vshiftcorr_r = coreg.VerticalShift()
- vshiftcorr_a = vshiftcorr_r.copy()
-
- # Fit the data
- vshiftcorr_r.fit(reference_dem=dem1, dem_to_be_aligned=dem2)
- vshiftcorr_a.fit(
- reference_dem=dem1.data,
- dem_to_be_aligned=dem2.reproject(dem1, silent=True).data,
- transform=dem1.transform,
- crs=dem1.crs,
- )
-
- # Validate that they ended up giving the same result.
- assert vshiftcorr_r._meta["vshift"] == vshiftcorr_a._meta["vshift"]
-
- # De-shift dem2
- dem2_r = vshiftcorr_r.apply(dem2)
- dem2_a, _ = vshiftcorr_a.apply(dem2.data, dem2.transform, dem2.crs)
-
- # Validate that the return formats were the expected ones, and that they are equal.
- # Issue - dem2_a does not have the same shape, the first dimension is being squeezed
- # TODO - Fix coreg.apply?
- assert isinstance(dem2_r, xdem.DEM)
- assert isinstance(dem2_a, np.ma.masked_array)
- assert np.ma.allequal(dem2_r.data.squeeze(), dem2_a)
-
- # If apply on a masked_array was given without a transform, it should fail.
- with pytest.raises(ValueError, match="'transform' must be given"):
- vshiftcorr_a.apply(dem2.data, crs=dem2.crs)
-
- # If apply on a masked_array was given without a crs, it should fail.
- with pytest.raises(ValueError, match="'crs' must be given"):
- vshiftcorr_a.apply(dem2.data, transform=dem2.transform)
-
- # If transform provided with input Raster, should raise a warning
- with pytest.warns(UserWarning, match="DEM .* overrides the given 'transform'"):
- vshiftcorr_a.apply(dem2, transform=dem2.transform)
-
- # If crs provided with input Raster, should raise a warning
- with pytest.warns(UserWarning, match="DEM .* overrides the given 'crs'"):
- vshiftcorr_a.apply(dem2, crs=dem2.crs)
-
- # Inputs contain: coregistration method, is implemented, comparison is "strict" or "approx"
- @pytest.mark.parametrize(
- "inputs",
- [
- [xdem.coreg.VerticalShift(), True, "strict"],
- [xdem.coreg.Tilt(), True, "strict"],
- [xdem.coreg.NuthKaab(), True, "approx"],
- [xdem.coreg.NuthKaab() + xdem.coreg.Tilt(), True, "approx"],
- [xdem.coreg.BlockwiseCoreg(step=xdem.coreg.NuthKaab(), subdivision=16), False, ""],
- [xdem.coreg.ICP(), False, ""],
- ],
- ) # type: ignore
- def test_apply_resample(self, inputs: list[Any]) -> None:
- """
- Test that the option resample of coreg.apply works as expected.
- For vertical correction only (VerticalShift, Deramp...), option True or False should yield same results.
- For horizontal shifts (NuthKaab etc), georef should differ, but DEMs should be the same after resampling.
- For others, the method is not implemented.
- """
- # Get test inputs
- coreg_method, is_implemented, comp = inputs
- ref_dem, tba_dem, outlines = load_examples() # Load example reference, to-be-aligned and mask.
-
- # Prepare coreg
- inlier_mask = ~outlines.create_mask(ref_dem)
- coreg_method.fit(tba_dem, ref_dem, inlier_mask=inlier_mask)
-
- # If not implemented, should raise an error
- if not is_implemented:
- with pytest.raises(NotImplementedError, match="Option `resample=False` not implemented for coreg method *"):
- dem_coreg_noresample = coreg_method.apply(tba_dem, resample=False)
- return
- else:
- dem_coreg_resample = coreg_method.apply(tba_dem)
- dem_coreg_noresample = coreg_method.apply(tba_dem, resample=False)
-
- if comp == "strict":
- # Both methods should yield the exact same output
- assert dem_coreg_resample == dem_coreg_noresample
- elif comp == "approx":
- # The georef should be different
- assert dem_coreg_noresample.transform != dem_coreg_resample.transform
-
- # After resampling, both results should be almost equal
- dem_final = dem_coreg_noresample.reproject(dem_coreg_resample)
- diff = dem_final - dem_coreg_resample
- assert np.all(np.abs(diff.data) == pytest.approx(0, abs=1e-2))
- # assert np.count_nonzero(diff.data) == 0
-
- # Test it works with different resampling algorithms
- dem_coreg_resample = coreg_method.apply(tba_dem, resample=True, resampling=rio.warp.Resampling.nearest)
- dem_coreg_resample = coreg_method.apply(tba_dem, resample=True, resampling=rio.warp.Resampling.cubic)
- with pytest.raises(ValueError, match="`resampling` must be a rio.warp.Resampling algorithm"):
- dem_coreg_resample = coreg_method.apply(tba_dem, resample=True, resampling=None)
-
- @pytest.mark.parametrize(
- "combination",
- [
- ("dem1", "dem2", "None", "None", "fit", "passes", ""),
- ("dem1", "dem2", "None", "None", "apply", "passes", ""),
- ("dem1.data", "dem2.data", "dem1.transform", "dem1.crs", "fit", "passes", ""),
- ("dem1.data", "dem2.data", "dem1.transform", "dem1.crs", "apply", "passes", ""),
- (
- "dem1",
- "dem2.data",
- "dem1.transform",
- "dem1.crs",
- "fit",
- "warns",
- "'reference_dem' .* overrides the given 'transform'",
- ),
- ("dem1.data", "dem2", "dem1.transform", "None", "fit", "warns", "'dem_to_be_aligned' .* overrides .*"),
- (
- "dem1.data",
- "dem2.data",
- "None",
- "dem1.crs",
- "fit",
- "error",
- "'transform' must be given if both DEMs are array-like.",
- ),
- (
- "dem1.data",
- "dem2.data",
- "dem1.transform",
- "None",
- "fit",
- "error",
- "'crs' must be given if both DEMs are array-like.",
- ),
- (
- "dem1",
- "dem2.data",
- "None",
- "dem1.crs",
- "apply",
- "error",
- "'transform' must be given if DEM is array-like.",
- ),
- (
- "dem1",
- "dem2.data",
- "dem1.transform",
- "None",
- "apply",
- "error",
- "'crs' must be given if DEM is array-like.",
- ),
- ("dem1", "dem2", "dem2.transform", "None", "apply", "warns", "DEM .* overrides the given 'transform'"),
- ("None", "None", "None", "None", "fit", "error", "Both DEMs need to be array-like"),
- ("dem1 + np.nan", "dem2", "None", "None", "fit", "error", "'reference_dem' had only NaNs"),
- ("dem1", "dem2 + np.nan", "None", "None", "fit", "error", "'dem_to_be_aligned' had only NaNs"),
- ],
- ) # type: ignore
- def test_coreg_raises(self, combination: tuple[str, str, str, str, str, str, str]) -> None:
- """
- Assert that the expected warnings/errors are triggered under different circumstances.
-
- The 'combination' param contains this in order:
- 1. The reference_dem (will be eval'd)
- 2. The dem to be aligned (will be eval'd)
- 3. The transform to use (will be eval'd)
- 4. The CRS to use (will be eval'd)
- 5. Which coreg method to assess
- 6. The expected outcome of the test.
- 7. The error/warning message (if applicable)
- """
- warnings.simplefilter("error")
-
- ref_dem, tba_dem, transform, crs, testing_step, result, text = combination
-
- # Create a small sample-DEM
- dem1 = xdem.DEM.from_array(
- np.arange(25, dtype="float64").reshape(5, 5),
- transform=rio.transform.from_origin(0, 5, 1, 1),
- crs=4326,
- nodata=-9999,
- )
- dem2 = dem1.copy() # noqa
-
- # Evaluate the parametrization (e.g. 'dem2.transform')
- ref_dem, tba_dem, transform, crs = map(eval, (ref_dem, tba_dem, transform, crs))
-
- # Use VerticalShift as a representative example.
- vshiftcorr = xdem.coreg.VerticalShift()
-
- def fit_func() -> Coreg:
- return vshiftcorr.fit(ref_dem, tba_dem, transform=transform, crs=crs)
-
- def apply_func() -> NDArrayf:
- return vshiftcorr.apply(tba_dem, transform=transform, crs=crs)
-
- # Try running the methods in order and validate the result.
- for method, method_call in [("fit", fit_func), ("apply", apply_func)]:
- with warnings.catch_warnings():
- if method != testing_step: # E.g. skip warnings for 'fit' if 'apply' is being tested.
- warnings.simplefilter("ignore")
-
- if result == "warns" and testing_step == method:
- with pytest.warns(UserWarning, match=text):
- method_call()
- elif result == "error" and testing_step == method:
- with pytest.raises(ValueError, match=text):
- method_call()
- else:
- method_call()
-
- if testing_step == "fit": # If we're testing 'fit', 'apply' does not have to be run.
- return
-
- def test_coreg_oneliner(self) -> None:
- """Test that a DEM can be coregistered in one line by chaining calls."""
- dem_arr = np.ones((5, 5), dtype="int32")
- dem_arr2 = dem_arr + 1
- transform = rio.transform.from_origin(0, 5, 1, 1)
- crs = rio.crs.CRS.from_epsg(4326)
-
- dem_arr2_fixed, _ = (
- coreg.VerticalShift()
- .fit(dem_arr, dem_arr2, transform=transform, crs=crs)
- .apply(dem_arr2, transform=transform, crs=crs)
- )
-
- assert np.array_equal(dem_arr, dem_arr2_fixed)
-
-
-class TestCoregPipeline:
-
- ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and mask.
- inlier_mask = ~outlines.create_mask(ref)
-
- fit_params = dict(
- reference_dem=ref.data,
- dem_to_be_aligned=tba.data,
- inlier_mask=inlier_mask,
- transform=ref.transform,
- crs=ref.crs,
- verbose=True,
- )
- # Create some 3D coordinates with Z coordinates being 0 to try the apply_pts functions.
- points = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [0, 0, 0, 0]], dtype="float64").T
-
- @pytest.mark.parametrize("coreg_class", [coreg.VerticalShift, coreg.ICP, coreg.NuthKaab]) # type: ignore
- def test_copy(self, coreg_class: Callable[[], Coreg]) -> None:
-
- # Create a pipeline, add some metadata, and copy it
- pipeline = coreg_class() + coreg_class()
- pipeline.pipeline[0]._meta["vshift"] = 1
-
- pipeline_copy = pipeline.copy()
-
- # Add some more metadata after copying (this should not be transferred)
- pipeline._meta["resolution"] = 30
- pipeline_copy.pipeline[0]._meta["offset_north_px"] = 0.5
-
- assert pipeline._meta != pipeline_copy._meta
- assert pipeline.pipeline[0]._meta != pipeline_copy.pipeline[0]._meta
- assert pipeline_copy.pipeline[0]._meta["vshift"]
-
- def test_pipeline(self) -> None:
- warnings.simplefilter("error")
-
- # Create a pipeline from two coreg methods.
- pipeline = coreg.CoregPipeline([coreg.VerticalShift(), coreg.NuthKaab()])
- pipeline.fit(**self.fit_params)
-
- aligned_dem, _ = pipeline.apply(self.tba.data, transform=self.ref.transform, crs=self.ref.crs)
-
- assert aligned_dem.shape == self.ref.data.squeeze().shape
-
- # Make a new pipeline with two vertical shift correction approaches.
- pipeline2 = coreg.CoregPipeline([coreg.VerticalShift(), coreg.VerticalShift()])
- # Set both "estimated" vertical shifts to be 1
- pipeline2.pipeline[0]._meta["vshift"] = 1
- pipeline2.pipeline[1]._meta["vshift"] = 1
-
- # Assert that the combined vertical shift is 2
- assert pipeline2.to_matrix()[2, 3] == 2.0
-
- all_coregs = [
- coreg.VerticalShift(),
- coreg.NuthKaab(),
- coreg.ICP(),
- coreg.Deramp(),
- coreg.TerrainBias(),
- coreg.DirectionalBias(),
- ]
-
- @pytest.mark.parametrize("coreg1", all_coregs) # type: ignore
- @pytest.mark.parametrize("coreg2", all_coregs) # type: ignore
- def test_pipeline_combinations__nobiasvar(self, coreg1: Coreg, coreg2: Coreg) -> None:
- """Test pipelines with all combinations of coregistration subclasses (without bias variables)"""
-
-        # Create a pipeline from any two coregistration methods (none requiring bias variables).
- pipeline = coreg.CoregPipeline([coreg1, coreg2])
- pipeline.fit(**self.fit_params)
-
- aligned_dem, _ = pipeline.apply(self.tba.data, transform=self.ref.transform, crs=self.ref.crs)
- assert aligned_dem.shape == self.ref.data.squeeze().shape
-
- @pytest.mark.parametrize("coreg1", all_coregs) # type: ignore
- @pytest.mark.parametrize(
- "coreg2",
- [
- coreg.BiasCorr1D(bias_var_names=["slope"], fit_or_bin="bin"),
- coreg.BiasCorr2D(bias_var_names=["slope", "aspect"], fit_or_bin="bin"),
- ],
- ) # type: ignore
- def test_pipeline_combinations__biasvar(self, coreg1: Coreg, coreg2: Coreg) -> None:
- """Test pipelines with all combinations of coregistration subclasses with bias variables"""
-
-        # Create a pipeline from one coregistration method and one bias-correction method requiring bias variables.
- pipeline = coreg.CoregPipeline([coreg1, coreg2])
- bias_vars = {"slope": xdem.terrain.slope(self.ref), "aspect": xdem.terrain.aspect(self.ref)}
- pipeline.fit(**self.fit_params, bias_vars=bias_vars)
-
- aligned_dem, _ = pipeline.apply(
- self.tba.data, transform=self.ref.transform, crs=self.ref.crs, bias_vars=bias_vars
- )
- assert aligned_dem.shape == self.ref.data.squeeze().shape
-
- def test_pipeline__errors(self) -> None:
- """Test pipeline raises proper errors."""
-
- pipeline = coreg.CoregPipeline([coreg.NuthKaab(), coreg.BiasCorr1D()])
- with pytest.raises(
- ValueError,
- match=re.escape(
- "No `bias_vars` passed to .fit() for bias correction step "
- " of the pipeline."
- ),
- ):
- pipeline.fit(**self.fit_params)
-
- pipeline2 = coreg.CoregPipeline([coreg.NuthKaab(), coreg.BiasCorr1D(), coreg.BiasCorr1D()])
- with pytest.raises(
- ValueError,
- match=re.escape(
- "No `bias_vars` passed to .fit() for bias correction step "
- "of the pipeline. As you are using several bias correction steps requiring"
- " `bias_vars`, don't forget to explicitly define their `bias_var_names` "
- "during instantiation, e.g. BiasCorr1D(bias_var_names=['slope'])."
- ),
- ):
- pipeline2.fit(**self.fit_params)
-
- with pytest.raises(
- ValueError,
- match=re.escape(
- "When using several bias correction steps requiring `bias_vars` in a pipeline,"
- "the `bias_var_names` need to be explicitly defined at each step's "
- "instantiation, e.g. BiasCorr1D(bias_var_names=['slope'])."
- ),
- ):
- pipeline2.fit(**self.fit_params, bias_vars={"slope": xdem.terrain.slope(self.ref)})
-
- pipeline3 = coreg.CoregPipeline([coreg.NuthKaab(), coreg.BiasCorr1D(bias_var_names=["slope"])])
- with pytest.raises(
- ValueError,
- match=re.escape(
- "Not all keys of `bias_vars` in .fit() match the `bias_var_names` defined during "
- "instantiation of the bias correction step : ['slope']."
- ),
- ):
- pipeline3.fit(**self.fit_params, bias_vars={"ncc": xdem.terrain.slope(self.ref)})
-
- def test_pipeline_pts(self) -> None:
- warnings.simplefilter("ignore")
-
- pipeline = coreg.NuthKaab() + coreg.GradientDescending()
- ref_points = self.ref.to_points(as_array=False, subset=5000, pixel_offset="center").ds
- ref_points["E"] = ref_points.geometry.x
- ref_points["N"] = ref_points.geometry.y
- ref_points.rename(columns={"b1": "z"}, inplace=True)
-
- # Check that this runs without error
- pipeline.fit_pts(reference_dem=ref_points, dem_to_be_aligned=self.tba)
-
- for part in pipeline.pipeline:
- assert np.abs(part._meta["offset_east_px"]) > 0
-
- assert pipeline.pipeline[0]._meta["offset_east_px"] != pipeline.pipeline[1]._meta["offset_east_px"]
-
- def test_coreg_add(self) -> None:
- warnings.simplefilter("error")
- # Test with a vertical shift of 4
- vshift = 4
-
- vshift1 = coreg.VerticalShift()
- vshift2 = coreg.VerticalShift()
-
- # Set the vertical shift attribute
- for vshift_corr in (vshift1, vshift2):
- vshift_corr._meta["vshift"] = vshift
-
- # Add the two coregs and check that the resulting vertical shift is 2* vertical shift
- vshift3 = vshift1 + vshift2
- assert vshift3.to_matrix()[2, 3] == vshift * 2
-
- # Make sure the correct exception is raised on incorrect additions
- with pytest.raises(ValueError, match="Incompatible add type"):
- vshift1 + 1 # type: ignore
-
- # Try to add a Coreg step to an already existing CoregPipeline
- vshift4 = vshift3 + vshift1
- assert vshift4.to_matrix()[2, 3] == vshift * 3
-
- # Try to add two CoregPipelines
- vshift5 = vshift3 + vshift3
- assert vshift5.to_matrix()[2, 3] == vshift * 4
-
- def test_pipeline_consistency(self) -> None:
-        """Check that pipeline properties are respected: commutativity, and fusion of identical coreg steps."""
-
- # Test 1: Fusion of same coreg
- # Many vertical shifts
- many_vshifts = coreg.VerticalShift() + coreg.VerticalShift() + coreg.VerticalShift()
- many_vshifts.fit(**self.fit_params, random_state=42)
- aligned_dem, _ = many_vshifts.apply(self.tba.data, transform=self.ref.transform, crs=self.ref.crs)
-
- # The last steps should have shifts of EXACTLY zero
- assert many_vshifts.pipeline[1]._meta["vshift"] == pytest.approx(0, abs=10e-5)
- assert many_vshifts.pipeline[2]._meta["vshift"] == pytest.approx(0, abs=10e-5)
-
- # Many horizontal + vertical shifts
- many_nks = coreg.NuthKaab() + coreg.NuthKaab() + coreg.NuthKaab()
- many_nks.fit(**self.fit_params, random_state=42)
- aligned_dem, _ = many_nks.apply(self.tba.data, transform=self.ref.transform, crs=self.ref.crs)
-
- # The last steps should have shifts of NEARLY zero
- assert many_nks.pipeline[1]._meta["vshift"] == pytest.approx(0, abs=0.02)
- assert many_nks.pipeline[1]._meta["offset_east_px"] == pytest.approx(0, abs=0.02)
- assert many_nks.pipeline[1]._meta["offset_north_px"] == pytest.approx(0, abs=0.02)
- assert many_nks.pipeline[2]._meta["vshift"] == pytest.approx(0, abs=0.02)
- assert many_nks.pipeline[2]._meta["offset_east_px"] == pytest.approx(0, abs=0.02)
- assert many_nks.pipeline[2]._meta["offset_north_px"] == pytest.approx(0, abs=0.02)
-
-        # Test 2: Commutativity
- # Those two pipelines should give almost the same result
- nk_vshift = coreg.NuthKaab() + coreg.VerticalShift()
- vshift_nk = coreg.VerticalShift() + coreg.NuthKaab()
-
- nk_vshift.fit(**self.fit_params, random_state=42)
- aligned_dem, _ = nk_vshift.apply(self.tba.data, transform=self.ref.transform, crs=self.ref.crs)
- vshift_nk.fit(**self.fit_params, random_state=42)
- aligned_dem, _ = vshift_nk.apply(self.tba.data, transform=self.ref.transform, crs=self.ref.crs)
-
- assert np.allclose(nk_vshift.to_matrix(), vshift_nk.to_matrix(), atol=10e-1)
-
-
-class TestBlockwiseCoreg:
- ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and mask.
- inlier_mask = ~outlines.create_mask(ref)
-
- fit_params = dict(
- reference_dem=ref.data,
- dem_to_be_aligned=tba.data,
- inlier_mask=inlier_mask,
- transform=ref.transform,
- crs=ref.crs,
- verbose=False,
- )
- # Create some 3D coordinates with Z coordinates being 0 to try the apply_pts functions.
- points = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [0, 0, 0, 0]], dtype="float64").T
-
- @pytest.mark.parametrize(
- "pipeline", [coreg.VerticalShift(), coreg.VerticalShift() + coreg.NuthKaab()]
- ) # type: ignore
- @pytest.mark.parametrize("subdivision", [4, 10]) # type: ignore
- def test_blockwise_coreg(self, pipeline: Coreg, subdivision: int) -> None:
- warnings.simplefilter("error")
-
- blockwise = coreg.BlockwiseCoreg(step=pipeline, subdivision=subdivision)
-
- # Results can not yet be extracted (since fit has not been called) and should raise an error
- with pytest.raises(AssertionError, match="No coreg results exist.*"):
- blockwise.to_points()
-
- blockwise.fit(**self.fit_params)
- points = blockwise.to_points()
-
- # Validate that the number of points is equal to the amount of subdivisions.
- assert points.shape[0] == subdivision
-
- # Validate that the points do not represent only the same location.
- assert np.sum(np.linalg.norm(points[:, :, 0] - points[:, :, 1], axis=1)) != 0.0
-
- z_diff = points[:, 2, 1] - points[:, 2, 0]
-
- # Validate that all values are different
- assert np.unique(z_diff).size == z_diff.size, "Each coreg cell should have different results."
-
- # Validate that the BlockwiseCoreg doesn't accept uninstantiated Coreg classes
- with pytest.raises(ValueError, match="instantiated Coreg subclass"):
- coreg.BlockwiseCoreg(step=coreg.VerticalShift, subdivision=1) # type: ignore
-
- # Metadata copying has been an issue. Validate that all chunks have unique ids
- chunk_numbers = [m["i"] for m in blockwise._meta["step_meta"]]
- assert np.unique(chunk_numbers).shape[0] == len(chunk_numbers)
-
- transformed_dem = blockwise.apply(self.tba)
-
- ddem_pre = (self.ref - self.tba)[~self.inlier_mask]
- ddem_post = (self.ref - transformed_dem)[~self.inlier_mask]
-
- # Check that the periglacial difference is lower after coregistration.
- assert abs(np.ma.median(ddem_post)) < abs(np.ma.median(ddem_pre))
-
- stats = blockwise.stats()
-
- # Check that nans don't exist (if they do, something has gone very wrong)
- assert np.all(np.isfinite(stats["nmad"]))
- # Check that offsets were actually calculated.
- assert np.sum(np.abs(np.linalg.norm(stats[["x_off", "y_off", "z_off"]], axis=0))) > 0
-
- def test_blockwise_coreg_large_gaps(self) -> None:
- """Test BlockwiseCoreg when large gaps are encountered, e.g. around the frame of a rotated DEM."""
- warnings.simplefilter("error")
- reference_dem = self.ref.reproject(dst_crs="EPSG:3413", dst_res=self.ref.res, resampling="bilinear")
- dem_to_be_aligned = self.tba.reproject(dst_ref=reference_dem, resampling="bilinear")
-
- blockwise = xdem.coreg.BlockwiseCoreg(xdem.coreg.NuthKaab(), 64, warn_failures=False)
-
- # This should not fail or trigger warnings as warn_failures is False
- blockwise.fit(reference_dem, dem_to_be_aligned)
-
- stats = blockwise.stats()
-
- # We expect holes in the blockwise coregistration, so there should not be 64 "successful" blocks.
- assert stats.shape[0] < 64
-
- # Statistics are only calculated on finite values, so all of these should be finite as well.
- assert np.all(np.isfinite(stats))
-
- # Copy the TBA DEM and set a square portion to nodata
- tba = self.tba.copy()
- mask = np.zeros(np.shape(tba.data), dtype=bool)
- mask[450:500, 450:500] = True
- tba.set_mask(mask=mask)
-
- blockwise = xdem.coreg.BlockwiseCoreg(xdem.coreg.NuthKaab(), 8, warn_failures=False)
-
- # Align the DEM and apply the blockwise to a zero-array (to get the zshift)
- aligned = blockwise.fit(self.ref, tba).apply(tba)
- zshift, _ = blockwise.apply(np.zeros_like(tba.data), transform=tba.transform, crs=tba.crs)
-
- # Validate that the zshift is not something crazy high and that no negative values exist in the data.
- assert np.nanmax(np.abs(zshift)) < 50
- assert np.count_nonzero(aligned.data.compressed() < -50) == 0
-
- # Check that coregistration improved the alignment
- ddem_post = (aligned - self.ref).data.compressed()
- ddem_pre = (tba - self.ref).data.compressed()
- assert abs(np.nanmedian(ddem_pre)) > abs(np.nanmedian(ddem_post))
- assert np.nanstd(ddem_pre) > np.nanstd(ddem_post)
-
-
-def test_apply_matrix() -> None:
- warnings.simplefilter("error")
- ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and mask.
- ref_arr = gu.raster.get_array_and_mask(ref)[0]
-
- # Test only vertical shift (it should just apply the vertical shift and not make anything else)
- vshift = 5
- matrix = np.diag(np.ones(4, float))
- matrix[2, 3] = vshift
- transformed_dem = apply_matrix(ref_arr, ref.transform, matrix)
- reverted_dem = transformed_dem - vshift
-
- # Check that the reverted DEM has the exact same values as the initial one
- # (resampling is not an exact science, so this will only apply for vertical shift corrections)
- assert np.nanmedian(reverted_dem) == np.nanmedian(np.asarray(ref.data))
-
- # Synthesize a shifted and vertically offset DEM
- pixel_shift = 11
- vshift = 5
- shifted_dem = ref_arr.copy()
- shifted_dem[:, pixel_shift:] = shifted_dem[:, :-pixel_shift]
- shifted_dem[:, :pixel_shift] = np.nan
- shifted_dem += vshift
-
- matrix = np.diag(np.ones(4, dtype=float))
- matrix[0, 3] = pixel_shift * tba.res[0]
- matrix[2, 3] = -vshift
-
- transformed_dem = apply_matrix(shifted_dem, ref.transform, matrix, resampling="bilinear")
- diff = np.asarray(ref_arr - transformed_dem)
-
- # Check that the median is very close to zero
- assert np.abs(np.nanmedian(diff)) < 0.01
- # Check that the NMAD is low
- assert spatialstats.nmad(diff) < 0.01
-
- def rotation_matrix(rotation: float = 30) -> NDArrayf:
- rotation = np.deg2rad(rotation)
- matrix = np.array(
- [
- [1, 0, 0, 0],
- [0, np.cos(rotation), -np.sin(rotation), 0],
- [0, np.sin(rotation), np.cos(rotation), 0],
- [0, 0, 0, 1],
- ]
- )
- return matrix
-
- rotation = 4
- centroid = (
- np.mean([ref.bounds.left, ref.bounds.right]),
- np.mean([ref.bounds.top, ref.bounds.bottom]),
- ref.data.mean(),
- )
- rotated_dem = apply_matrix(ref.data.squeeze(), ref.transform, rotation_matrix(rotation), centroid=centroid)
- # Make sure that the rotated DEM is way off, but is centered around the same approximate point.
- assert np.abs(np.nanmedian(rotated_dem - ref.data.data)) < 1
- assert spatialstats.nmad(rotated_dem - ref.data.data) > 500
-
- # Apply a rotation in the opposite direction
- unrotated_dem = (
- apply_matrix(rotated_dem, ref.transform, rotation_matrix(-rotation * 0.99), centroid=centroid) + 4.0
- ) # TODO: Check why the 0.99 rotation and +4 vertical shift were introduced.
-
- diff = np.asarray(ref.data.squeeze() - unrotated_dem)
-
- # if False:
- # import matplotlib.pyplot as plt
- #
- # vmin = 0
- # vmax = 1500
- # extent = (ref.bounds.left, ref.bounds.right, ref.bounds.bottom, ref.bounds.top)
- # plot_params = dict(
- # extent=extent,
- # vmin=vmin,
- # vmax=vmax
- # )
- # plt.figure(figsize=(22, 4), dpi=100)
- # plt.subplot(151)
- # plt.title("Original")
- # plt.imshow(ref.data.squeeze(), **plot_params)
- # plt.xlim(*extent[:2])
- # plt.ylim(*extent[2:])
- # plt.subplot(152)
- # plt.title(f"Rotated {rotation} degrees")
- # plt.imshow(rotated_dem, **plot_params)
- # plt.xlim(*extent[:2])
- # plt.ylim(*extent[2:])
- # plt.subplot(153)
- # plt.title(f"De-rotated {-rotation} degrees")
- # plt.imshow(unrotated_dem, **plot_params)
- # plt.xlim(*extent[:2])
- # plt.ylim(*extent[2:])
- # plt.subplot(154)
- # plt.title("Original vs. de-rotated")
- # plt.imshow(diff, extent=extent, vmin=-10, vmax=10, cmap="coolwarm_r")
- # plt.colorbar()
- # plt.xlim(*extent[:2])
- # plt.ylim(*extent[2:])
- # plt.subplot(155)
- # plt.title("Original vs. de-rotated")
- # plt.hist(diff[np.isfinite(diff)], bins=np.linspace(-10, 10, 100))
- # plt.tight_layout(w_pad=0.05)
- # plt.show()
-
- # Check that the median is very close to zero
- assert np.abs(np.nanmedian(diff)) < 0.5
- # Check that the NMAD is low
- assert spatialstats.nmad(diff) < 5
- print(np.nanmedian(diff), spatialstats.nmad(diff))
-
-
-def test_warp_dem() -> None:
- """Test that the warp_dem function works expectedly."""
- warnings.simplefilter("error")
-
- small_dem = np.zeros((5, 10), dtype="float32")
- small_transform = rio.transform.from_origin(0, 5, 1, 1)
-
- source_coords = np.array([[0, 0, 0], [0, 5, 0], [10, 0, 0], [10, 5, 0]]).astype(small_dem.dtype)
-
- dest_coords = source_coords.copy()
- dest_coords[0, 0] = -1e-5
-
- warped_dem = coreg.base.warp_dem(
- dem=small_dem,
- transform=small_transform,
- source_coords=source_coords,
- destination_coords=dest_coords,
- resampling="linear",
- trim_border=False,
- )
- assert np.nansum(np.abs(warped_dem - small_dem)) < 1e-6
-
- elev_shift = 5.0
- dest_coords[1, 2] = elev_shift
- warped_dem = coreg.base.warp_dem(
- dem=small_dem,
- transform=small_transform,
- source_coords=source_coords,
- destination_coords=dest_coords,
- resampling="linear",
- )
-
- # The warped DEM should have the value 'elev_shift' in the upper left corner.
- assert warped_dem[0, 0] == elev_shift
-    # The corner value should be zero, so the corner pixel (which represents the corner minus half a resolution) should be close.
-    # We select the pixel one row above the corner (index -2 along the row axis) to avoid NaN propagation on the bottom row.
- assert warped_dem[-2, -1] < 1
-
- # Synthesise some X/Y/Z coordinates on the DEM.
- source_coords = np.array(
- [
- [0, 0, 200],
- [480, 20, 200],
- [460, 480, 200],
- [10, 460, 200],
- [250, 250, 200],
- ]
- )
-
- # Copy the source coordinates and apply some shifts
- dest_coords = source_coords.copy()
- # Apply in the X direction
- dest_coords[0, 0] += 20
- dest_coords[1, 0] += 7
- dest_coords[2, 0] += 10
- dest_coords[3, 0] += 5
-
- # Apply in the Y direction
- dest_coords[4, 1] += 5
-
- # Apply in the Z direction
- dest_coords[3, 2] += 5
- test_shift = 6 # This shift will be validated below
- dest_coords[4, 2] += test_shift
-
- # Generate a semi-random DEM
- transform = rio.transform.from_origin(0, 500, 1, 1)
- shape = (500, 550)
- dem = misc.generate_random_field(shape, 100) * 200 + misc.generate_random_field(shape, 10) * 50
-
- # Warp the DEM using the source-destination coordinates.
- transformed_dem = coreg.base.warp_dem(
- dem=dem, transform=transform, source_coords=source_coords, destination_coords=dest_coords, resampling="linear"
- )
-
- # Try to undo the warp by reversing the source-destination coordinates.
- untransformed_dem = coreg.base.warp_dem(
- dem=transformed_dem,
- transform=transform,
- source_coords=dest_coords,
- destination_coords=source_coords,
- resampling="linear",
- )
- # Validate that the DEM is now more or less the same as the original.
- # Due to the randomness, the threshold is quite high, but would be something like 10+ if it was incorrect.
- assert spatialstats.nmad(dem - untransformed_dem) < 0.5
-
- if False:
- import matplotlib.pyplot as plt
-
- plt.figure(dpi=200)
- plt.subplot(141)
-
- plt.imshow(dem, vmin=0, vmax=300)
- plt.subplot(142)
- plt.imshow(transformed_dem, vmin=0, vmax=300)
- plt.subplot(143)
- plt.imshow(untransformed_dem, vmin=0, vmax=300)
-
- plt.subplot(144)
- plt.imshow(dem - untransformed_dem, cmap="coolwarm_r", vmin=-10, vmax=10)
- plt.show()
diff --git a/tests/test_coreg/test_biascorr.py b/tests/test_coreg/test_biascorr.py
deleted file mode 100644
index b7a7e6b8..00000000
--- a/tests/test_coreg/test_biascorr.py
+++ /dev/null
@@ -1,584 +0,0 @@
-"""Tests for the biascorr module (non-rigid coregistrations)."""
-from __future__ import annotations
-
-import re
-import warnings
-
-import geoutils as gu
-import numpy as np
-import pytest
-import scipy
-
-import xdem.terrain
-
-PLOT = False
-
-with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- from xdem import examples
- from xdem.coreg import biascorr
- from xdem.fit import polynomial_2d, sumsin_1d
-
-
-def load_examples() -> tuple[gu.Raster, gu.Raster, gu.Vector]:
- """Load example files to try coregistration methods with."""
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- reference_raster = gu.Raster(examples.get_path("longyearbyen_ref_dem"))
- to_be_aligned_raster = gu.Raster(examples.get_path("longyearbyen_tba_dem"))
- glacier_mask = gu.Vector(examples.get_path("longyearbyen_glacier_outlines"))
-
- return reference_raster, to_be_aligned_raster, glacier_mask
-
-
-class TestBiasCorr:
- ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and mask.
- inlier_mask = ~outlines.create_mask(ref)
-
- fit_params = dict(
- reference_dem=ref,
- dem_to_be_aligned=tba,
- inlier_mask=inlier_mask,
- verbose=True,
- )
- # Create some 3D coordinates with Z coordinates being 0 to try the apply_pts functions.
- points = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [0, 0, 0, 0]], dtype="float64").T
-
- def test_biascorr(self) -> None:
- """Test the parent class BiasCorr instantiation."""
-
- # Create a bias correction instance
- bcorr = biascorr.BiasCorr()
-
- # Check default "fit" metadata was set properly
- assert bcorr._meta["fit_func"] == biascorr.fit_workflows["norder_polynomial"]["func"]
- assert bcorr._meta["fit_optimizer"] == biascorr.fit_workflows["norder_polynomial"]["optimizer"]
- assert bcorr._meta["bias_var_names"] is None
-
- # Check that the _is_affine attribute is set correctly
- assert not bcorr._is_affine
- assert bcorr._fit_or_bin == "fit"
- assert bcorr._needs_vars is True
-
- # Or with default bin arguments
- bcorr2 = biascorr.BiasCorr(fit_or_bin="bin")
-
- assert bcorr2._meta["bin_sizes"] == 10
- assert bcorr2._meta["bin_statistic"] == np.nanmedian
- assert bcorr2._meta["bin_apply_method"] == "linear"
-
- assert bcorr2._fit_or_bin == "bin"
-
- # Or with default bin_and_fit arguments
- bcorr3 = biascorr.BiasCorr(fit_or_bin="bin_and_fit")
-
- assert bcorr3._meta["bin_sizes"] == 10
- assert bcorr3._meta["bin_statistic"] == np.nanmedian
- assert bcorr3._meta["fit_func"] == biascorr.fit_workflows["norder_polynomial"]["func"]
- assert bcorr3._meta["fit_optimizer"] == biascorr.fit_workflows["norder_polynomial"]["optimizer"]
-
- assert bcorr3._fit_or_bin == "bin_and_fit"
-
- # Or defining bias variable names on instantiation as iterable
- bcorr4 = biascorr.BiasCorr(bias_var_names=("slope", "ncc"))
- assert bcorr4._meta["bias_var_names"] == ["slope", "ncc"]
-
- # Same using an array
- bcorr5 = biascorr.BiasCorr(bias_var_names=np.array(["slope", "ncc"]))
- assert bcorr5._meta["bias_var_names"] == ["slope", "ncc"]
-
- def test_biascorr__errors(self) -> None:
- """Test the errors that should be raised by BiasCorr."""
-
- # And raises an error when "fit" or "bin" is wrongly passed
- with pytest.raises(ValueError, match="Argument `fit_or_bin` must be 'bin_and_fit', 'fit' or 'bin'."):
- biascorr.BiasCorr(fit_or_bin=True) # type: ignore
-
- # For fit function
- with pytest.raises(
- TypeError,
- match=re.escape(
- "Argument `fit_func` must be a function (callable) or the string '{}', "
- "got .".format("', '".join(biascorr.fit_workflows.keys()))
- ),
- ):
- biascorr.BiasCorr(fit_func="yay") # type: ignore
-
- # For fit optimizer
- with pytest.raises(
- TypeError, match=re.escape("Argument `fit_optimizer` must be a function (callable), " "got .")
- ):
- biascorr.BiasCorr(fit_optimizer=3) # type: ignore
-
- # For bin sizes
- with pytest.raises(
- TypeError,
- match=re.escape(
- "Argument `bin_sizes` must be an integer, or a dictionary of integers or iterables, "
- "got ."
- ),
- ):
- biascorr.BiasCorr(fit_or_bin="bin", bin_sizes={"a": 1.5}) # type: ignore
-
- # For bin statistic
- with pytest.raises(
- TypeError, match=re.escape("Argument `bin_statistic` must be a function (callable), " "got .")
- ):
- biascorr.BiasCorr(fit_or_bin="bin", bin_statistic="count") # type: ignore
-
- # For bin apply method
- with pytest.raises(
- TypeError,
- match=re.escape(
- "Argument `bin_apply_method` must be the string 'linear' or 'per_bin', " "got ."
- ),
- ):
- biascorr.BiasCorr(fit_or_bin="bin", bin_apply_method=1) # type: ignore
-
- @pytest.mark.parametrize(
- "fit_func", ("norder_polynomial", "nfreq_sumsin", lambda x, a, b: x[0] * a + b)
- ) # type: ignore
- @pytest.mark.parametrize(
- "fit_optimizer",
- [
- scipy.optimize.curve_fit,
- ],
- ) # type: ignore
- def test_biascorr__fit_1d(self, fit_func, fit_optimizer, capsys) -> None:
- """Test the _fit_func and apply_func methods of BiasCorr for the fit case (called by all its subclasses)."""
-
- # Create a bias correction object
- bcorr = biascorr.BiasCorr(fit_or_bin="fit", fit_func=fit_func, fit_optimizer=fit_optimizer)
-
- # Run fit using elevation as input variable
- elev_fit_params = self.fit_params.copy()
- bias_vars_dict = {"elevation": self.ref}
- elev_fit_params.update({"bias_vars": bias_vars_dict})
-
- # To speed up the tests, pass niter to basinhopping through "nfreq_sumsin"
- # Also fix random state for basinhopping
- if fit_func == "nfreq_sumsin":
- elev_fit_params.update({"niter": 1})
-
- # Run with input parameter, and using only 100 subsamples for speed
- bcorr.fit(**elev_fit_params, subsample=100, random_state=42)
-
- # Check that variable names are defined during fit
- assert bcorr._meta["bias_var_names"] == ["elevation"]
-
- # Apply the correction
- bcorr.apply(dem=self.tba, bias_vars=bias_vars_dict)
-
- @pytest.mark.parametrize(
- "fit_func", (polynomial_2d, lambda x, a, b, c, d: a * x[0] + b * x[1] + c**d)
- ) # type: ignore
- @pytest.mark.parametrize(
- "fit_optimizer",
- [
- scipy.optimize.curve_fit,
- ],
- ) # type: ignore
- def test_biascorr__fit_2d(self, fit_func, fit_optimizer) -> None:
- """Test the _fit_func and apply_func methods of BiasCorr for the fit case (called by all its subclasses)."""
-
- # Create a bias correction object
- bcorr = biascorr.BiasCorr(fit_or_bin="fit", fit_func=fit_func, fit_optimizer=fit_optimizer)
-
- # Run fit using elevation as input variable
- elev_fit_params = self.fit_params.copy()
- bias_vars_dict = {"elevation": self.ref, "slope": xdem.terrain.slope(self.ref)}
- elev_fit_params.update({"bias_vars": bias_vars_dict})
-
- # Run with input parameter, and using only 100 subsamples for speed
- # Passing p0 defines the number of parameters to solve for
- bcorr.fit(**elev_fit_params, subsample=100, p0=[0, 0, 0, 0], random_state=42)
-
- # Check that variable names are defined during fit
- assert bcorr._meta["bias_var_names"] == ["elevation", "slope"]
-
- # Apply the correction
- bcorr.apply(dem=self.tba, bias_vars=bias_vars_dict)
-
- @pytest.mark.parametrize("bin_sizes", (10, {"elevation": 20}, {"elevation": (0, 500, 1000)})) # type: ignore
- @pytest.mark.parametrize("bin_statistic", [np.median, np.nanmean]) # type: ignore
- def test_biascorr__bin_1d(self, bin_sizes, bin_statistic) -> None:
- """Test the _fit_func and apply_func methods of BiasCorr for the fit case (called by all its subclasses)."""
-
- # Create a bias correction object
- bcorr = biascorr.BiasCorr(fit_or_bin="bin", bin_sizes=bin_sizes, bin_statistic=bin_statistic)
-
- # Run fit using elevation as input variable
- elev_fit_params = self.fit_params.copy()
- bias_vars_dict = {"elevation": self.ref}
- elev_fit_params.update({"bias_vars": bias_vars_dict})
-
- # Run with input parameter, and using only 1000 subsamples for speed
- bcorr.fit(**elev_fit_params, subsample=1000, random_state=42)
-
- # Check that variable names are defined during fit
- assert bcorr._meta["bias_var_names"] == ["elevation"]
-
- # Apply the correction
- bcorr.apply(dem=self.tba, bias_vars=bias_vars_dict)
-
- @pytest.mark.parametrize("bin_sizes", (10, {"elevation": (0, 500, 1000), "slope": (0, 20, 40)})) # type: ignore
- @pytest.mark.parametrize("bin_statistic", [np.median, np.nanmean]) # type: ignore
- def test_biascorr__bin_2d(self, bin_sizes, bin_statistic) -> None:
- """Test the _fit_func and apply_func methods of BiasCorr for the fit case (called by all its subclasses)."""
-
- # Create a bias correction object
- bcorr = biascorr.BiasCorr(fit_or_bin="bin", bin_sizes=bin_sizes, bin_statistic=bin_statistic)
-
- # Run fit using elevation as input variable
- elev_fit_params = self.fit_params.copy()
- bias_vars_dict = {"elevation": self.ref, "slope": xdem.terrain.slope(self.ref)}
- elev_fit_params.update({"bias_vars": bias_vars_dict})
-
- # Run with input parameter, and using only 10000 subsamples for speed
- bcorr.fit(**elev_fit_params, subsample=10000, random_state=42)
-
- # Check that variable names are defined during fit
- assert bcorr._meta["bias_var_names"] == ["elevation", "slope"]
-
- # Apply the correction
- bcorr.apply(dem=self.tba, bias_vars=bias_vars_dict)
-
- @pytest.mark.parametrize(
- "fit_func", ("norder_polynomial", "nfreq_sumsin", lambda x, a, b: x[0] * a + b)
- ) # type: ignore
- @pytest.mark.parametrize(
- "fit_optimizer",
- [
- scipy.optimize.curve_fit,
- ],
- ) # type: ignore
- @pytest.mark.parametrize("bin_sizes", (10, {"elevation": np.arange(0, 1000, 100)})) # type: ignore
- @pytest.mark.parametrize("bin_statistic", [np.median, np.nanmean]) # type: ignore
- def test_biascorr__bin_and_fit_1d(self, fit_func, fit_optimizer, bin_sizes, bin_statistic) -> None:
- """Test the _fit_func and apply_func methods of BiasCorr for the bin_and_fit case (called by all subclasses)."""
-
- # Create a bias correction object
- bcorr = biascorr.BiasCorr(
- fit_or_bin="bin_and_fit",
- fit_func=fit_func,
- fit_optimizer=fit_optimizer,
- bin_sizes=bin_sizes,
- bin_statistic=bin_statistic,
- )
-
- # Run fit using elevation as input variable
- elev_fit_params = self.fit_params.copy()
- bias_vars_dict = {"elevation": self.ref}
- elev_fit_params.update({"bias_vars": bias_vars_dict})
-
- # To speed up the tests, pass niter to basinhopping through "nfreq_sumsin"
- # Also fix random state for basinhopping
- if fit_func == "nfreq_sumsin":
- elev_fit_params.update({"niter": 1})
-
- # Run with input parameter, and using only 100 subsamples for speed
- bcorr.fit(**elev_fit_params, subsample=100, random_state=42)
-
- # Check that variable names are defined during fit
- assert bcorr._meta["bias_var_names"] == ["elevation"]
-
- # Apply the correction
- bcorr.apply(dem=self.tba, bias_vars=bias_vars_dict)
-
- @pytest.mark.parametrize(
- "fit_func", (polynomial_2d, lambda x, a, b, c, d: a * x[0] + b * x[1] + c**d)
- ) # type: ignore
- @pytest.mark.parametrize(
- "fit_optimizer",
- [
- scipy.optimize.curve_fit,
- ],
- ) # type: ignore
- @pytest.mark.parametrize("bin_sizes", (10, {"elevation": (0, 500, 1000), "slope": (0, 20, 40)})) # type: ignore
- @pytest.mark.parametrize("bin_statistic", [np.median, np.nanmean]) # type: ignore
- def test_biascorr__bin_and_fit_2d(self, fit_func, fit_optimizer, bin_sizes, bin_statistic) -> None:
- """Test the _fit_func and apply_func methods of BiasCorr for the bin_and_fit case (called by all subclasses)."""
-
- # Create a bias correction object
- bcorr = biascorr.BiasCorr(
- fit_or_bin="bin_and_fit",
- fit_func=fit_func,
- fit_optimizer=fit_optimizer,
- bin_sizes=bin_sizes,
- bin_statistic=bin_statistic,
- )
-
- # Run fit using elevation as input variable
- elev_fit_params = self.fit_params.copy()
- bias_vars_dict = {"elevation": self.ref, "slope": xdem.terrain.slope(self.ref)}
- elev_fit_params.update({"bias_vars": bias_vars_dict})
-
- # Run with input parameter, and using only 100 subsamples for speed
- # Passing p0 defines the number of parameters to solve for
- bcorr.fit(**elev_fit_params, subsample=100, p0=[0, 0, 0, 0], random_state=42)
-
- # Check that variable names are defined during fit
- assert bcorr._meta["bias_var_names"] == ["elevation", "slope"]
-
- # Apply the correction
- bcorr.apply(dem=self.tba, bias_vars=bias_vars_dict)
-
- def test_biascorr1d(self) -> None:
- """
- Test the subclass BiasCorr1D, which defines default parameters for 1D.
- The rest is already tested in test_biascorr.
- """
-
- # Try default "fit" parameters instantiation
- bcorr1d = biascorr.BiasCorr1D()
-
- assert bcorr1d._meta["fit_func"] == biascorr.fit_workflows["norder_polynomial"]["func"]
- assert bcorr1d._meta["fit_optimizer"] == biascorr.fit_workflows["norder_polynomial"]["optimizer"]
- assert bcorr1d._needs_vars is True
-
- # Try default "bin" parameter instantiation
- bcorr1d = biascorr.BiasCorr1D(fit_or_bin="bin")
-
- assert bcorr1d._meta["bin_sizes"] == 10
- assert bcorr1d._meta["bin_statistic"] == np.nanmedian
- assert bcorr1d._meta["bin_apply_method"] == "linear"
-
- elev_fit_params = self.fit_params.copy()
- # Raise error when wrong number of parameters are passed
- with pytest.raises(
- ValueError, match="A single variable has to be provided through the argument 'bias_vars', " "got 2."
- ):
- bias_vars_dict = {"elevation": self.ref, "slope": xdem.terrain.slope(self.ref)}
- bcorr1d.fit(**elev_fit_params, bias_vars=bias_vars_dict)
-
- # Raise error when variables don't match
- with pytest.raises(
- ValueError,
- match=re.escape(
- "The keys of `bias_vars` do not match the `bias_var_names` defined during " "instantiation: ['ncc']."
- ),
- ):
- bcorr1d2 = biascorr.BiasCorr1D(bias_var_names=["ncc"])
- bias_vars_dict = {"elevation": self.ref}
- bcorr1d2.fit(**elev_fit_params, bias_vars=bias_vars_dict)
-
- def test_biascorr2d(self) -> None:
- """
- Test the subclass BiasCorr2D, which defines default parameters for 2D.
- The rest is already tested in test_biascorr.
- """
-
- # Try default "fit" parameters instantiation
- bcorr2d = biascorr.BiasCorr2D()
-
- assert bcorr2d._meta["fit_func"] == polynomial_2d
- assert bcorr2d._meta["fit_optimizer"] == scipy.optimize.curve_fit
- assert bcorr2d._needs_vars is True
-
- # Try default "bin" parameter instantiation
- bcorr2d = biascorr.BiasCorr2D(fit_or_bin="bin")
-
- assert bcorr2d._meta["bin_sizes"] == 10
- assert bcorr2d._meta["bin_statistic"] == np.nanmedian
- assert bcorr2d._meta["bin_apply_method"] == "linear"
-
- elev_fit_params = self.fit_params.copy()
- # Raise error when wrong number of parameters are passed
- with pytest.raises(
- ValueError, match="Exactly two variables have to be provided through the argument " "'bias_vars', got 1."
- ):
- bias_vars_dict = {"elevation": self.ref}
- bcorr2d.fit(**elev_fit_params, bias_vars=bias_vars_dict)
-
- # Raise error when variables don't match
- with pytest.raises(
- ValueError,
- match=re.escape(
- "The keys of `bias_vars` do not match the `bias_var_names` defined during "
- "instantiation: ['elevation', 'ncc']."
- ),
- ):
- bcorr2d2 = biascorr.BiasCorr2D(bias_var_names=["elevation", "ncc"])
- bias_vars_dict = {"elevation": self.ref, "slope": xdem.terrain.slope(self.ref)}
- bcorr2d2.fit(**elev_fit_params, bias_vars=bias_vars_dict)
-
- def test_directionalbias(self) -> None:
- """Test the subclass DirectionalBias."""
-
- # Try default "fit" parameters instantiation
- dirbias = biascorr.DirectionalBias(angle=45)
-
- assert dirbias._fit_or_bin == "bin_and_fit"
- assert dirbias._meta["fit_func"] == biascorr.fit_workflows["nfreq_sumsin"]["func"]
- assert dirbias._meta["fit_optimizer"] == biascorr.fit_workflows["nfreq_sumsin"]["optimizer"]
- assert dirbias._meta["angle"] == 45
- assert dirbias._needs_vars is False
-
- # Check that variable names are defined during instantiation
- assert dirbias._meta["bias_var_names"] == ["angle"]
-
- @pytest.mark.parametrize("angle", [20, 90, 210]) # type: ignore
- @pytest.mark.parametrize("nb_freq", [1, 2, 3]) # type: ignore
- def test_directionalbias__synthetic(self, angle, nb_freq) -> None:
- """Test the subclass DirectionalBias with synthetic data."""
-
- # Get along track
- xx = gu.raster.get_xy_rotated(self.ref, along_track_angle=angle)[0]
-
- # Get random parameters (3 parameters needed per frequency)
- np.random.seed(42)
- params = np.array([(5, 3000, np.pi), (1, 300, 0), (0.5, 100, np.pi / 2)]).flatten()
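- # Keep only the parameters of the first frequency (this overrides the parametrized nb_freq)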
- nb_freq = 1
- params = params[0 : 3 * nb_freq]
-
- # Create a synthetic bias and add to the DEM
- synthetic_bias = sumsin_1d(xx.flatten(), *params)
- bias_dem = self.ref - synthetic_bias.reshape(np.shape(self.ref.data))
-
- # For debugging
- if PLOT:
- synth = self.ref.copy(new_array=synthetic_bias.reshape(np.shape(self.ref.data)))
- import matplotlib.pyplot as plt
-
- synth.show()
- plt.show()
-
- dirbias = biascorr.DirectionalBias(angle=angle, fit_or_bin="bin", bin_sizes=10000)
- dirbias.fit(reference_dem=self.ref, dem_to_be_aligned=bias_dem, subsample=10000, random_state=42)
- xdem.spatialstats.plot_1d_binning(
- df=dirbias._meta["bin_dataframe"], var_name="angle", statistic_name="nanmedian", min_count=0
- )
- plt.show()
-
- # Try default "fit" parameters instantiation
- dirbias = biascorr.DirectionalBias(angle=angle, bin_sizes=300)
- bounds = [
- (2, 10),
- (500, 5000),
- (0, 2 * np.pi),
- (0.5, 2),
- (100, 500),
- (0, 2 * np.pi),
- (0, 0.5),
- (10, 100),
- (0, 2 * np.pi),
- ]
- dirbias.fit(
- reference_dem=self.ref,
- dem_to_be_aligned=bias_dem,
- subsample=10000,
- random_state=42,
- bounds_amp_wave_phase=bounds,
- niter=10,
- )
-
- # Check all parameters are the same within 10%
- fit_params = dirbias._meta["fit_params"]
- assert np.shape(fit_params) == np.shape(params)
- assert np.allclose(params, fit_params, rtol=0.1)
-
- # Run apply and check that 99% of the variance was corrected
- corrected_dem = dirbias.apply(bias_dem)
- assert np.nanvar(corrected_dem - self.ref) < 0.01 * np.nanvar(synthetic_bias)
-
- def test_deramp(self) -> None:
- """Test the subclass Deramp."""
-
- # Try default "fit" parameters instantiation
- deramp = biascorr.Deramp()
-
- assert deramp._fit_or_bin == "fit"
- assert deramp._meta["fit_func"] == polynomial_2d
- assert deramp._meta["fit_optimizer"] == scipy.optimize.curve_fit
- assert deramp._meta["poly_order"] == 2
- assert deramp._needs_vars is False
-
- # Check that variable names are defined during instantiation
- assert deramp._meta["bias_var_names"] == ["xx", "yy"]
-
- @pytest.mark.parametrize("order", [1, 2, 3, 4]) # type: ignore
- def test_deramp__synthetic(self, order: int) -> None:
- """Run the deramp for varying polynomial orders using a synthetic elevation difference."""
-
- # Get coordinates
- xx, yy = np.meshgrid(np.arange(0, self.ref.shape[1]), np.arange(0, self.ref.shape[0]))
-
- # Number of parameters for a 2D polynomial of order N, evaluated through NumPy's polyval2d
- nb_params = int((order + 1) * (order + 1))
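- # (e.g., an order-2 polynomial has (2 + 1) * (2 + 1) = 9 coefficients)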
-
- # Get a random number of parameters
- np.random.seed(42)
- params = np.random.normal(size=nb_params)
-
- # Create a synthetic bias and add to the DEM
- synthetic_bias = polynomial_2d((xx, yy), *params)
- bias_dem = self.ref - synthetic_bias
-
- # Fit
- deramp = biascorr.Deramp(poly_order=order)
- deramp.fit(reference_dem=self.ref, dem_to_be_aligned=bias_dem, subsample=10000, random_state=42)
-
- # Check high-order parameters are the same within 10%
- fit_params = deramp._meta["fit_params"]
- assert np.shape(fit_params) == np.shape(params)
- assert np.allclose(
- params.reshape(order + 1, order + 1)[-1:, -1:], fit_params.reshape(order + 1, order + 1)[-1:, -1:], rtol=0.1
- )
-
- # Run apply and check that 99% of the variance was corrected
- corrected_dem = deramp.apply(bias_dem)
- assert np.nanvar(corrected_dem - self.ref) < 0.01 * np.nanvar(synthetic_bias)
-
- def test_terrainbias(self) -> None:
- """Test the subclass TerrainBias."""
-
- # Try default "fit" parameters instantiation
- tb = biascorr.TerrainBias()
-
- assert tb._fit_or_bin == "bin"
- assert tb._meta["bin_sizes"] == 100
- assert tb._meta["bin_statistic"] == np.nanmedian
- assert tb._meta["terrain_attribute"] == "maximum_curvature"
- assert tb._needs_vars is False
-
- assert tb._meta["bias_var_names"] == ["maximum_curvature"]
-
- def test_terrainbias__synthetic(self) -> None:
- """Test the subclass TerrainBias."""
-
- # Get maximum curvature
- maxc = xdem.terrain.get_terrain_attribute(self.ref, attribute="maximum_curvature")
-
- # Create a bias depending on bins
- synthetic_bias = np.zeros(np.shape(self.ref.data))
-
- # For each bin, a fake bias value is set in the synthetic bias array
- bin_edges = np.array((-1, 0, 0.1, 0.5, 2, 5))
- bias_per_bin = np.array((-5, 10, -2, 25, 5))
- for i in range(len(bin_edges) - 1):
- synthetic_bias[np.logical_and(maxc.data >= bin_edges[i], maxc.data < bin_edges[i + 1])] = bias_per_bin[i]
-
- # Add bias to the second DEM
- bias_dem = self.ref - synthetic_bias
-
- # Run the binning
- tb = biascorr.TerrainBias(
- terrain_attribute="maximum_curvature",
- bin_sizes={"maximum_curvature": bin_edges},
- bin_apply_method="per_bin",
- )
- # We don't want to subsample here, otherwise it might be very hard to derive maximum curvature...
- # TODO: Add the option to get terrain attribute before subsampling in the fit subclassing logic?
- tb.fit(reference_dem=self.ref, dem_to_be_aligned=bias_dem, random_state=42)
-
- # Check that the bin edges match the synthetic ones, and that the binned medians match the synthetic biases within 10%
- bin_df = tb._meta["bin_dataframe"]
- assert [interval.left for interval in bin_df["maximum_curvature"].values] == list(bin_edges[:-1])
- assert [interval.right for interval in bin_df["maximum_curvature"].values] == list(bin_edges[1:])
- assert np.allclose(bin_df["nanmedian"], bias_per_bin, rtol=0.1)
-
- # Run apply and check that 99% of the variance was corrected
- # (we override the bias_var "max_curv" with that of the ref_dem to have a 1 on 1 match with the synthetic bias,
- # otherwise it is derived from the bias_dem which gives slightly different results than with ref_dem)
- corrected_dem = tb.apply(bias_dem, bias_vars={"maximum_curvature": maxc})
- assert np.nanvar(corrected_dem - self.ref) < 0.01 * np.nanvar(synthetic_bias)
diff --git a/tests/test_coreg/test_filters.py b/tests/test_coreg/test_filters.py
deleted file mode 100644
index 9d51106b..00000000
--- a/tests/test_coreg/test_filters.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Functions to test the coregistration filters."""
diff --git a/tests/test_coreg/test_workflows.py b/tests/test_coreg/test_workflows.py
deleted file mode 100644
index f95fbb4e..00000000
--- a/tests/test_coreg/test_workflows.py
+++ /dev/null
@@ -1,265 +0,0 @@
-"""Functions to test the coregistration workflows."""
-from __future__ import annotations
-
-import os
-import tempfile
-import warnings
-
-import numpy as np
-import pandas as pd
-import pytest
-from geoutils import Raster, Vector
-from geoutils.raster import RasterType
-
-import xdem
-from xdem import examples
-from xdem.coreg.workflows import create_inlier_mask, dem_coregistration
-
-
-def load_examples() -> tuple[RasterType, RasterType, Vector]:
- """Load example files to try coregistration methods with."""
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- reference_raster = Raster(examples.get_path("longyearbyen_ref_dem"))
- to_be_aligned_raster = Raster(examples.get_path("longyearbyen_tba_dem"))
- glacier_mask = Vector(examples.get_path("longyearbyen_glacier_outlines"))
-
- return reference_raster, to_be_aligned_raster, glacier_mask
-
-
-class TestWorkflows:
- def test_create_inlier_mask(self) -> None:
- """Test that the create_inlier_mask function works expectedly."""
- warnings.simplefilter("error")
-
- ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and outlines
-
- # - Assert that without filtering create_inlier_mask behaves as if calling Vector.create_mask - #
- # Masking inside - using Vector
- inlier_mask_comp = ~outlines.create_mask(ref, as_array=True)
- inlier_mask = create_inlier_mask(
- tba,
- ref,
- [
- outlines,
- ],
- filtering=False,
- )
- assert np.all(inlier_mask_comp == inlier_mask)
-
- # Masking inside - using string
- inlier_mask = create_inlier_mask(
- tba,
- ref,
- [
- outlines.name,
- ],
- filtering=False,
- )
- assert np.all(inlier_mask_comp == inlier_mask)
-
- # Masking outside - using Vector
- inlier_mask = create_inlier_mask(
- tba,
- ref,
- [
- outlines,
- ],
- inout=[
- -1,
- ],
- filtering=False,
- )
- assert np.all(~inlier_mask_comp == inlier_mask)
-
- # Masking outside - using string
- inlier_mask = create_inlier_mask(
- tba,
- ref,
- [
- outlines.name,
- ],
- inout=[-1],
- filtering=False,
- )
- assert np.all(~inlier_mask_comp == inlier_mask)
-
- # - Test filtering options only - #
- # Test the slope filter only
- slope = xdem.terrain.slope(ref)
- slope_lim = [1, 50]
- inlier_mask_comp2 = np.ones(tba.data.shape, dtype=bool)
- inlier_mask_comp2[slope.data < slope_lim[0]] = False
- inlier_mask_comp2[slope.data > slope_lim[1]] = False
- inlier_mask = create_inlier_mask(tba, ref, filtering=True, slope_lim=slope_lim, nmad_factor=np.inf)
- assert np.all(inlier_mask == inlier_mask_comp2)
-
- # Test the nmad_factor filter only
- nmad_factor = 3
- ddem = tba - ref
- inlier_mask_comp3 = (np.abs(ddem.data - np.median(ddem)) < nmad_factor * xdem.spatialstats.nmad(ddem)).filled(
- False
- )
- inlier_mask = create_inlier_mask(tba, ref, filtering=True, slope_lim=[0, 90], nmad_factor=nmad_factor)
- assert np.all(inlier_mask == inlier_mask_comp3)
-
- # Test the sum of both
- inlier_mask = create_inlier_mask(
- tba, ref, shp_list=[], inout=[], filtering=True, slope_lim=slope_lim, nmad_factor=nmad_factor
- )
- inlier_mask_all = inlier_mask_comp2 & inlier_mask_comp3
- assert np.all(inlier_mask == inlier_mask_all)
-
- # Test the dh_max filter only
- dh_max = 200
- inlier_mask_comp4 = (np.abs(ddem.data) < dh_max).filled(False)
- inlier_mask = create_inlier_mask(tba, ref, filtering=True, slope_lim=[0, 90], nmad_factor=np.inf, dh_max=dh_max)
- assert np.all(inlier_mask == inlier_mask_comp4)
-
- # - Test the sum of outlines + dh_max + slope - #
- # nmad_factor will have a different behavior because it calculates nmad from the inliers of previous filters
- inlier_mask = create_inlier_mask(
- tba,
- ref,
- shp_list=[
- outlines,
- ],
- inout=[
- -1,
- ],
- filtering=True,
- slope_lim=slope_lim,
- nmad_factor=np.inf,
- dh_max=dh_max,
- )
- inlier_mask_all = ~inlier_mask_comp & inlier_mask_comp2 & inlier_mask_comp4
- assert np.all(inlier_mask == inlier_mask_all)
-
- # - Test that proper errors are raised for wrong inputs - #
- with pytest.raises(ValueError, match="`shp_list` must be a list/tuple"):
- create_inlier_mask(tba, ref, shp_list=outlines)
-
- with pytest.raises(ValueError, match="`shp_list` must be a list/tuple of strings or geoutils.Vector instance"):
- create_inlier_mask(tba, ref, shp_list=[1])
-
- with pytest.raises(ValueError, match="`inout` must be a list/tuple"):
- create_inlier_mask(
- tba,
- ref,
- shp_list=[
- outlines,
- ],
- inout=1, # type: ignore
- )
-
- with pytest.raises(ValueError, match="`inout` must contain only 1 and -1"):
- create_inlier_mask(
- tba,
- ref,
- shp_list=[
- outlines,
- ],
- inout=[
- 0,
- ],
- )
-
- with pytest.raises(ValueError, match="`inout` must be of same length as shp"):
- create_inlier_mask(
- tba,
- ref,
- shp_list=[
- outlines,
- ],
- inout=[1, 1],
- )
-
- with pytest.raises(ValueError, match="`slope_lim` must be a list/tuple"):
- create_inlier_mask(tba, ref, filtering=True, slope_lim=1) # type: ignore
-
- with pytest.raises(ValueError, match="`slope_lim` must contain 2 elements"):
- create_inlier_mask(tba, ref, filtering=True, slope_lim=[30])
-
- with pytest.raises(ValueError, match=r"`slope_lim` must be a tuple/list of 2 elements in the range \[0-90\]"):
- create_inlier_mask(tba, ref, filtering=True, slope_lim=[-1, 40])
-
- with pytest.raises(ValueError, match=r"`slope_lim` must be a tuple/list of 2 elements in the range \[0-90\]"):
- create_inlier_mask(tba, ref, filtering=True, slope_lim=[1, 120])
-
- @pytest.mark.skip(reason="The test segfaults locally and in CI (2023-08-21)") # type: ignore
- def test_dem_coregistration(self) -> None:
- """
- Test that the dem_coregistration function works expectedly.
- Tests the features that are specific to dem_coregistration.
- For example, many features are tested in create_inlier_mask, so not tested again here.
- TODO: Add DEMs with different projection/grid to test that regridding works as expected.
- """
- # Load example reference, to-be-aligned and outlines
- ref_dem, tba_dem, outlines = load_examples()
-
- # - Check that it works with default parameters - #
- dem_coreg, coreg_method, coreg_stats, inlier_mask = dem_coregistration(tba_dem, ref_dem)
-
- # Assert that outputs have expected format
- assert isinstance(dem_coreg, xdem.DEM)
- assert isinstance(coreg_method, xdem.coreg.Coreg)
- assert isinstance(coreg_stats, pd.DataFrame)
-
- # Assert that default coreg_method is as expected
- assert hasattr(coreg_method, "pipeline")
- assert isinstance(coreg_method.pipeline[0], xdem.coreg.NuthKaab)
- assert isinstance(coreg_method.pipeline[1], xdem.coreg.VerticalShift)
-
- # The result should be similar to applying the same coreg by hand with:
- # - DEMs converted to Float32
- # - default inlier_mask
- # - no resampling
- coreg_method_ref = xdem.coreg.NuthKaab() + xdem.coreg.VerticalShift()
- inlier_mask = create_inlier_mask(tba_dem, ref_dem)
- coreg_method_ref.fit(ref_dem.astype("float32"), tba_dem.astype("float32"), inlier_mask=inlier_mask)
- dem_coreg_ref = coreg_method_ref.apply(tba_dem, resample=False)
- assert dem_coreg == dem_coreg_ref
-
- # Assert that coregistration improved the residuals
- assert abs(coreg_stats["med_orig"].values) > abs(coreg_stats["med_coreg"].values)
- assert coreg_stats["nmad_orig"].values > coreg_stats["nmad_coreg"].values
-
- # - Check some alternative arguments - #
- # Test with filename instead of DEMs
- dem_coreg2, _, _, _ = dem_coregistration(tba_dem.filename, ref_dem.filename)
- assert dem_coreg2 == dem_coreg
-
- # Test saving to file (mode = "w" is necessary to work on Windows)
- outfile = tempfile.NamedTemporaryFile(suffix=".tif", mode="w", delete=False)
- dem_coregistration(tba_dem, ref_dem, out_dem_path=outfile.name)
- dem_coreg2 = xdem.DEM(outfile.name)
- assert dem_coreg2 == dem_coreg
- outfile.close()
-
- # Test that shapefile is properly taken into account - inlier_mask should be False inside outlines
- # Need to use resample=True, to ensure that dem_coreg has same georef as inlier_mask
- dem_coreg, coreg_method, coreg_stats, inlier_mask = dem_coregistration(
- tba_dem,
- ref_dem,
- shp_list=[
- outlines,
- ],
- resample=True,
- )
- gl_mask = outlines.create_mask(dem_coreg, as_array=True)
- assert np.all(~inlier_mask[gl_mask])
-
- # Testing with plot
- out_fig = tempfile.NamedTemporaryFile(suffix=".png", mode="w", delete=False)
- assert os.path.getsize(out_fig.name) == 0
- dem_coregistration(tba_dem, ref_dem, plot=True, out_fig=out_fig.name)
- assert os.path.getsize(out_fig.name) > 0
- out_fig.close()
-
- # Testing different coreg method
- dem_coreg, coreg_method, coreg_stats, inlier_mask = dem_coregistration(
- tba_dem, ref_dem, coreg_method=xdem.coreg.Tilt()
- )
- assert isinstance(coreg_method, xdem.coreg.Tilt)
- assert abs(coreg_stats["med_orig"].values) > abs(coreg_stats["med_coreg"].values)
- assert coreg_stats["nmad_orig"].values > coreg_stats["nmad_coreg"].values
diff --git a/tests/test_doc.py b/tests/test_doc.py
deleted file mode 100644
index 094ba3bd..00000000
--- a/tests/test_doc.py
+++ /dev/null
@@ -1,74 +0,0 @@
-"""Functions to test the documentation."""
-import os
-import platform
-import shutil
-import warnings
-
-import sphinx.cmd.build
-
-
-class TestDocs:
- docs_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../", "doc/")
- n_threads = os.getenv("N_CPUS")
-
- def test_example_code(self) -> None:
- """Try running each python script in the doc/source/code\
- directory and check that it doesn't raise an error."""
- current_dir = os.getcwd()
- os.chdir(os.path.join(self.docs_dir, "source"))
-
- def run_code(filename: str) -> None:
- """Run a python script in one thread."""
- with open(filename) as infile:
- # Run everything except plt.show() calls.
- with warnings.catch_warnings():
- # When running the code asynchronously, matplotlib complains a bit
- ignored_warnings = [
- "Starting a Matplotlib GUI outside of the main thread",
- ".*fetching the attribute.*Polygon.*",
- ]
- # The Polygon warning is a GeoPandas issue; all other warnings are raised as errors
- warnings.simplefilter("error")
-
- for warning_text in ignored_warnings:
- warnings.filterwarnings("ignore", warning_text)
- try:
- exec(infile.read().replace("plt.show()", "plt.close()"))
- except Exception as exception:
- if isinstance(exception, DeprecationWarning):
- print(exception)
- else:
- raise RuntimeError(f"Failed on {filename}") from exception
-
- filenames = [os.path.join("code", filename) for filename in os.listdir("code/") if filename.endswith(".py")]
-
- for filename in filenames:
- run_code(filename)
- """
- with concurrent.futures.ThreadPoolExecutor(
- max_workers=int(self.n_threads) if self.n_threads is not None else None
- ) as executor:
- list(executor.map(run_code, filenames))
- """
-
- os.chdir(current_dir)
-
- def test_build(self) -> None:
- """Try building the doc and see if it works."""
-
- # Test only on Linux
- if platform.system() == "Linux":
- # Remove the build directory if it exists.
- if os.path.isdir(os.path.join(self.docs_dir, "build")):
- shutil.rmtree(os.path.join(self.docs_dir, "build"))
-
- return_code = sphinx.cmd.build.main(
- [
- "-j",
- "1",
- os.path.join(self.docs_dir, "source"),
- os.path.join(self.docs_dir, "build", "html"),
- ]
- )
-
- assert return_code == 0
diff --git a/tests/test_examples.py b/tests/test_examples.py
deleted file mode 100644
index 113d755a..00000000
--- a/tests/test_examples.py
+++ /dev/null
@@ -1,67 +0,0 @@
-"""Functions to test the example data."""
-from __future__ import annotations
-
-import geoutils as gu
-import numpy as np
-import pytest
-from geoutils import Raster, Vector
-
-from xdem import examples
-from xdem._typing import NDArrayf
-
-
-def load_examples() -> tuple[Raster, Raster, Vector, Raster]:
- """Load example files to try coregistration methods with."""
-
- ref_dem = Raster(examples.get_path("longyearbyen_ref_dem"))
- tba_dem = Raster(examples.get_path("longyearbyen_tba_dem"))
- glacier_mask = Vector(examples.get_path("longyearbyen_glacier_outlines"))
- ddem = Raster(examples.get_path("longyearbyen_ddem"))
-
- return ref_dem, tba_dem, glacier_mask, ddem
-
-
-class TestExamples:
-
- ref_dem, tba_dem, glacier_mask, ddem = load_examples()
-
- @pytest.mark.parametrize(
- "rst_and_truevals",
- [
- (ref_dem, np.array([868.6489, 623.42194, 180.57921, 267.30765, 601.67615], dtype=np.float32)),
- (tba_dem, np.array([875.2358, 625.0544, 182.9936, 272.6586, 606.2897], dtype=np.float32)),
- (
- ddem,
- np.array(
- [
- -0.012023926,
- -0.6956787,
- 0.14024353,
- 1.1026001,
- -5.9224243,
- ],
- dtype=np.float32,
- ),
- ),
- ],
- ) # type: ignore
- def test_array_content(self, rst_and_truevals: tuple[Raster, NDArrayf]) -> None:
- """Let's ensure the data arrays in the examples are always the same by checking randomly some values"""
-
- rst = rst_and_truevals[0]
- truevals = rst_and_truevals[1]
- np.random.seed(42)
- values = np.random.choice(rst.data.data.flatten(), size=5, replace=False)
-
- assert values == pytest.approx(truevals)
-
- # Note: Following PR #329, no gaps on DEM edges after coregistration
- @pytest.mark.parametrize("rst_and_truenodata", [(ref_dem, 0), (tba_dem, 0), (ddem, 0)]) # type: ignore
- def test_array_nodata(self, rst_and_truenodata: tuple[Raster, int]) -> None:
- """Let's also check that the data arrays have always the same number of not finite values"""
-
- rst = rst_and_truenodata[0]
- truenodata = rst_and_truenodata[1]
- mask = gu.raster.get_array_and_mask(rst)[1]
-
- assert np.sum(mask) == truenodata
diff --git a/tests/test_vcrs.py b/tests/test_vcrs.py
deleted file mode 100644
index d9f2c61f..00000000
--- a/tests/test_vcrs.py
+++ /dev/null
@@ -1,201 +0,0 @@
-"""Tests for vertical CRS transformation tools."""
-from __future__ import annotations
-
-import pathlib
-import re
-from typing import Any
-
-import numpy as np
-import pytest
-from pyproj import CRS
-
-import xdem
-import xdem.vcrs
-
-
-class TestVCRS:
- def test_parse_vcrs_name_from_product(self) -> None:
- """Test parsing of vertical CRS name from DEM product name."""
-
- # Check that the value for the key is returned by the function
- for product in xdem.vcrs.vcrs_dem_products.keys():
- assert xdem.vcrs._parse_vcrs_name_from_product(product) == xdem.vcrs.vcrs_dem_products[product]
-
- # And that, for any other product name, None is returned
- assert xdem.vcrs._parse_vcrs_name_from_product("BESTDEM") is None
-
- # Expect outputs for the inputs
- @pytest.mark.parametrize(
- "input_output",
- [
- (CRS("EPSG:4326"), None),
- (CRS("EPSG:4979"), "Ellipsoid"),
- (CRS("EPSG:4326+5773"), CRS("EPSG:5773")),
- (CRS("EPSG:32610"), None),
- (CRS("EPSG:32610").to_3d(), "Ellipsoid"),
- ],
- ) # type: ignore
- def test_vcrs_from_crs(self, input_output: tuple[CRS, CRS]) -> None:
- """Test the extraction of a vertical CRS from a CRS."""
-
- input = input_output[0]
- output = input_output[1]
-
- # Extract vertical CRS from CRS
- vcrs = xdem.vcrs._vcrs_from_crs(crs=input)
-
- # Check that the result is as expected
- if isinstance(output, CRS):
- assert vcrs.equals(input_output[1])
- elif isinstance(output, str):
- assert vcrs == "Ellipsoid"
- else:
- assert vcrs is None
-
- @pytest.mark.parametrize(
- "vcrs_input",
- [
- "EGM08",
- "EGM96",
- "us_noaa_geoid06_ak.tif",
- pathlib.Path("is_lmi_Icegeoid_ISN93.tif"),
- 3855,
- CRS.from_epsg(5773),
- ],
- ) # type: ignore
- def test_vcrs_from_user_input(self, vcrs_input: str | pathlib.Path | int | CRS) -> None:
- """Tests the function _vcrs_from_user_input for varying user inputs, for which it will return a CRS."""
-
- # Get user input
- vcrs = xdem.dem._vcrs_from_user_input(vcrs_input)
-
- # Check output type
- assert isinstance(vcrs, CRS)
- assert vcrs.is_vertical
-
- @pytest.mark.parametrize(
- "vcrs_input", ["Ellipsoid", "ellipsoid", "wgs84", 4326, 4979, CRS.from_epsg(4326), CRS.from_epsg(4979)]
- ) # type: ignore
- def test_vcrs_from_user_input__ellipsoid(self, vcrs_input: str | int) -> None:
- """Tests the function _vcrs_from_user_input for inputs where it returns "Ellipsoid"."""
-
- # Get user input
- vcrs = xdem.vcrs._vcrs_from_user_input(vcrs_input)
-
- # Check output type
- assert vcrs == "Ellipsoid"
-
- def test_vcrs_from_user_input__errors(self) -> None:
- """Tests errors of vcrs_from_user_input."""
-
- # Check that an error is raised when the type is wrong
- with pytest.raises(TypeError, match="New vertical CRS must be a string, path or VerticalCRS, received.*"):
- xdem.vcrs._vcrs_from_user_input(np.zeros(1)) # type: ignore
-
- # Check that an error is raised if the CRS is not vertical
- with pytest.raises(
- ValueError,
- match=re.escape(
- "New vertical CRS must have a vertical axis, 'WGS 84 / UTM "
- "zone 1N' does not (check with `CRS.is_vertical`)."
- ),
- ):
- xdem.vcrs._vcrs_from_user_input(32601)
-
- # Check that a warning is raised if the CRS has other dimensions than vertical
- with pytest.warns(
- UserWarning,
- match="New vertical CRS has a vertical dimension but also other components, "
- "extracting the vertical reference only.",
- ):
- xdem.vcrs._vcrs_from_user_input(CRS("EPSG:4326+5773"))
-
- @pytest.mark.parametrize(
- "grid", ["us_noaa_geoid06_ak.tif", "is_lmi_Icegeoid_ISN93.tif", "us_nga_egm08_25.tif", "us_nga_egm96_15.tif"]
- ) # type: ignore
- def test_build_vcrs_from_grid(self, grid: str) -> None:
- """Test that vertical CRS are correctly built from grid"""
-
- # Build vertical CRS
- vcrs = xdem.vcrs._build_vcrs_from_grid(grid=grid)
- assert vcrs.is_vertical
-
- # Check that the explicit construction yields the same CRS as "the old init way" (see function description)
- vcrs_oldway = xdem.vcrs._build_vcrs_from_grid(grid=grid, old_way=True)
- assert vcrs.equals(vcrs_oldway)
-
- # Test for WGS84 in 2D and 3D, UTM, CompoundCRS, everything should work
- @pytest.mark.parametrize(
- "crs", [CRS("EPSG:4326"), CRS("EPSG:4979"), CRS("32610"), CRS("EPSG:4326+5773")]
- ) # type: ignore
- @pytest.mark.parametrize("vcrs_input", [CRS("EPSG:5773"), "is_lmi_Icegeoid_ISN93.tif", "EGM96"]) # type: ignore
- def test_build_ccrs_from_crs_and_vcrs(self, crs: CRS, vcrs_input: CRS | str) -> None:
- """Test the function build_ccrs_from_crs_and_vcrs."""
-
- # Get the vertical CRS from user input
- vcrs = xdem.vcrs._vcrs_from_user_input(vcrs_input=vcrs_input)
-
- # Build the compound CRS
-
- # For a 3D horizontal CRS, a condition based on pyproj version is needed
- if len(crs.axis_info) > 2:
- import pyproj
- from packaging.version import Version
-
- # If the version is higher than 3.5.0, it should pass
- if Version(pyproj.__version__) > Version("3.5.0"):
- ccrs = xdem.vcrs._build_ccrs_from_crs_and_vcrs(crs=crs, vcrs=vcrs)
- # Otherwise, it should raise an error
- else:
- with pytest.raises(
- NotImplementedError,
- match="pyproj >= 3.5.1 is required to demote a 3D CRS to 2D and be able to compound "
- "with a new vertical CRS. Update your dependencies or pass the 2D source CRS "
- "manually.",
- ):
- xdem.vcrs._build_ccrs_from_crs_and_vcrs(crs=crs, vcrs=vcrs)
- return None
- # If the CRS is 2D, it should pass
- else:
- ccrs = xdem.vcrs._build_ccrs_from_crs_and_vcrs(crs=crs, vcrs=vcrs)
-
- assert isinstance(ccrs, CRS)
- assert ccrs.is_vertical
-
- def test_build_ccrs_from_crs_and_vcrs__errors(self) -> None:
- """Test errors are correctly raised from the build_ccrs function."""
-
- with pytest.raises(
- ValueError, match="Invalid vcrs given. Must be a vertical " "CRS or the literal string 'Ellipsoid'."
- ):
- xdem.vcrs._build_ccrs_from_crs_and_vcrs(crs=CRS("EPSG:4326"), vcrs="NotAVerticalCRS") # type: ignore
-
- # Compare to manually-extracted shifts at specific coordinates for the geoid grids
- egm96_chile = {"grid": "us_nga_egm96_15.tif", "lon": -68, "lat": -20, "shift": 42}
- egm08_chile = {"grid": "us_nga_egm08_25.tif", "lon": -68, "lat": -20, "shift": 42}
- geoid96_alaska = {"grid": "us_noaa_geoid06_ak.tif", "lon": -145, "lat": 62, "shift": 15}
- isn93_iceland = {"grid": "is_lmi_Icegeoid_ISN93.tif", "lon": -18, "lat": 65, "shift": 68}
-
- @pytest.mark.parametrize("grid_shifts", [egm08_chile, egm08_chile, geoid96_alaska, isn93_iceland]) # type: ignore
- def test_transform_zz(self, grid_shifts: dict[str, Any]) -> None:
- """Tests grids to convert vertical CRS."""
-
- # Using an arbitrary elevation of 100 m (no influence on the transformation)
- zz = 100
- xx = grid_shifts["lon"]
- yy = grid_shifts["lat"]
- crs_from = CRS.from_epsg(4326)
- ccrs_from = xdem.vcrs._build_ccrs_from_crs_and_vcrs(crs=crs_from, vcrs="Ellipsoid")
-
- # Build the compound CRS
- vcrs_to = xdem.vcrs._vcrs_from_user_input(vcrs_input=grid_shifts["grid"])
- ccrs_to = xdem.vcrs._build_ccrs_from_crs_and_vcrs(crs=crs_from, vcrs=vcrs_to)
-
- # Apply the transformation
- zz_trans = xdem.vcrs._transform_zz(crs_from=ccrs_from, crs_to=ccrs_to, xx=xx, yy=yy, zz=zz)
-
- # Compare the elevation difference
- z_diff = 100 - zz_trans
-
- # Check the shift is the one expected, within 10%
- assert z_diff == pytest.approx(grid_shifts["shift"], rel=0.1)
diff --git a/xdem/_typing.py b/xdem/_typing.py
deleted file mode 100644
index 13b89715..00000000
--- a/xdem/_typing.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from __future__ import annotations
-
-import sys
-from typing import Any
-
-import numpy as np
-
-# Only for Python >= 3.9
-if sys.version_info.minor >= 9:
-
- from numpy.typing import NDArray # this syntax works starting on Python 3.9
-
- NDArrayf = NDArray[np.floating[Any]]
- NDArrayb = NDArray[np.bool_]
- MArrayf = np.ma.masked_array[Any, np.dtype[np.floating[Any]]]
-
-else:
- NDArrayf = np.ndarray # type: ignore
- NDArrayb = np.ndarray # type: ignore
- MArrayf = np.ma.masked_array # type: ignore
diff --git a/xdem/coreg/__init__.py b/xdem/coreg/__init__.py
deleted file mode 100644
index 06a0b014..00000000
--- a/xdem/coreg/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-"""
-DEM coregistration classes and functions, including affine methods, bias corrections (i.e. non-affine) and filters.
-"""
-
-from xdem.coreg.affine import ( # noqa
- ICP,
- AffineCoreg,
- GradientDescending,
- NuthKaab,
- Tilt,
- VerticalShift,
-)
-from xdem.coreg.base import BlockwiseCoreg, Coreg, CoregPipeline, apply_matrix # noqa
-from xdem.coreg.biascorr import ( # noqa
- BiasCorr,
- BiasCorr1D,
- BiasCorr2D,
- BiasCorrND,
- Deramp,
- DirectionalBias,
- TerrainBias,
-)
-from xdem.coreg.workflows import dem_coregistration # noqa
diff --git a/xdem/coreg/affine.py b/xdem/coreg/affine.py
deleted file mode 100644
index 0061646a..00000000
--- a/xdem/coreg/affine.py
+++ /dev/null
@@ -1,1172 +0,0 @@
-"""Affine coregistration classes."""
-
-from __future__ import annotations
-
-import warnings
-from typing import Any, Callable, TypeVar
-
-try:
- import cv2
-
- _has_cv2 = True
-except ImportError:
- _has_cv2 = False
-import numpy as np
-import pandas as pd
-import rasterio as rio
-import scipy
-import scipy.interpolate
-import scipy.ndimage
-import scipy.optimize
-from geoutils.raster import Raster, RasterType, get_array_and_mask
-from tqdm import trange
-
-from xdem._typing import NDArrayb, NDArrayf
-from xdem.coreg.base import (
- Coreg,
- CoregDict,
- _get_x_and_y_coords,
- _mask_dataframe_by_dem,
- _residuals_df,
- _transform_to_bounds_and_res,
- deramping,
-)
-from xdem.spatialstats import nmad
-
-try:
- import pytransform3d.transformations
-
- _HAS_P3D = True
-except ImportError:
- _HAS_P3D = False
-
-try:
- from noisyopt import minimizeCompass
-
- _has_noisyopt = True
-except ImportError:
- _has_noisyopt = False
-
-######################################
-# Generic functions for affine methods
-######################################
-
-
-def apply_xy_shift(transform: rio.transform.Affine, dx: float, dy: float) -> rio.transform.Affine:
- """
- Apply a horizontal shift to a rasterio Affine transform.
-
- :param transform: The Affine transform of the raster.
- :param dx: Shift value in the X direction.
- :param dy: Shift value in the Y direction.
-
- :returns: The updated transform.
- """
- transform_shifted = rio.transform.Affine(
- transform.a, transform.b, transform.c + dx, transform.d, transform.e, transform.f + dy
- )
- return transform_shifted
-
-
-######################################
-# Functions for affine coregistrations
-######################################
-
-
-def _calculate_slope_and_aspect_nuthkaab(dem: NDArrayf) -> tuple[NDArrayf, NDArrayf]:
- """
- Calculate the tangent of slope and aspect of a DEM, in radians, as needed for the Nuth & Kaab algorithm.
-
- :param dem: A numpy array of elevation values.
-
- :returns: The tangent of slope and aspect (in radians) of the DEM.
- """
- # Old implementation
- # # Calculate the gradient of the slope
- gradient_y, gradient_x = np.gradient(dem)
- slope_tan = np.sqrt(gradient_x**2 + gradient_y**2)
- aspect = np.arctan2(-gradient_x, gradient_y)
- aspect += np.pi
-
- # xdem implementation
- # slope, aspect = xdem.terrain.get_terrain_attribute(
- # dem, attribute=["slope", "aspect"], resolution=1, degrees=False
- # )
- # slope_tan = np.tan(slope)
- # aspect = (aspect + np.pi) % (2 * np.pi)
-
- return slope_tan, aspect
-
-
-def get_horizontal_shift(
- elevation_difference: NDArrayf, slope: NDArrayf, aspect: NDArrayf, min_count: int = 20
-) -> tuple[float, float, float]:
- """
- Calculate the horizontal shift between two DEMs using the method presented in Nuth and Kääb (2011).
-
- :param elevation_difference: The elevation difference (reference_dem - aligned_dem).
- :param slope: A slope map with the same shape as elevation_difference (units = pixels?).
- :param aspect: An aspect map with the same shape as elevation_difference (units = radians).
- :param min_count: The minimum allowed bin size to consider valid.
-
- :raises ValueError: If very few finite values exist to analyse.
-
- :returns: The pixel offsets in easting, northing, and the c_parameter (altitude?).
- """
- input_x_values = aspect
-
- with np.errstate(divide="ignore", invalid="ignore"):
- input_y_values = elevation_difference / slope
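- # Following Nuth and Kääb (2011), dh / tan(slope) varies as a cosine of the aspect, whose fit parameters give the shift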
-
- # Remove non-finite values
- x_values = input_x_values[np.isfinite(input_x_values) & np.isfinite(input_y_values)]
- y_values = input_y_values[np.isfinite(input_x_values) & np.isfinite(input_y_values)]
-
- assert y_values.shape[0] > 0
-
- # Remove outliers
- lower_percentile = np.percentile(y_values, 1)
- upper_percentile = np.percentile(y_values, 99)
- valids = np.where((y_values > lower_percentile) & (y_values < upper_percentile) & (np.abs(y_values) < 200))
- x_values = x_values[valids]
- y_values = y_values[valids]
-
- # Slice the dataset into appropriate aspect bins
- step = np.pi / 36
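- # (a step of pi / 36 rad corresponds to 5 degrees, i.e. 72 aspect bins over the full circle)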
- slice_bounds = np.arange(start=0, stop=2 * np.pi, step=step)
- y_medians = np.zeros([len(slice_bounds)])
- count = y_medians.copy()
- for i, bound in enumerate(slice_bounds):
- y_slice = y_values[(bound < x_values) & (x_values < (bound + step))]
- if y_slice.shape[0] > 0:
- y_medians[i] = np.median(y_slice)
- count[i] = y_slice.shape[0]
-
- # Filter out bins with counts below threshold
- y_medians = y_medians[count > min_count]
- slice_bounds = slice_bounds[count > min_count]
-
- if slice_bounds.shape[0] < 10:
- raise ValueError("Less than 10 different cells exist.")
-
- # Make an initial guess of the a, b, and c parameters
- initial_guess: tuple[float, float, float] = (3 * np.std(y_medians) / (2**0.5), 0.0, np.mean(y_medians))
-
- def estimate_ys(x_values: NDArrayf, parameters: tuple[float, float, float]) -> NDArrayf:
- """
- Estimate y-values from x-values and the current parameters.
-
- y(x) = a * cos(b - x) + c
-
- :param x_values: The x-values to feed the above function.
- :param parameters: The a, b, and c parameters to feed the above function
-
- :returns: Estimated y-values with the same shape as the given x-values
- """
- return parameters[0] * np.cos(parameters[1] - x_values) + parameters[2]
-
- def residuals(parameters: tuple[float, float, float], y_values: NDArrayf, x_values: NDArrayf) -> NDArrayf:
- """
- Get the residuals between the estimated and measured values using the given parameters.
-
- err(x, y) = est_y(x) - y
-
- :param parameters: The a, b, and c parameters to use for the estimation.
- :param y_values: The measured y-values.
- :param x_values: The measured x-values
-
- :returns: An array of residuals with the same shape as the input arrays.
- """
- err = estimate_ys(x_values, parameters) - y_values
- return err
-
- # Estimate the a, b, and c parameters with least square minimisation
- results = scipy.optimize.least_squares(
- fun=residuals, x0=initial_guess, args=(y_medians, slice_bounds), xtol=1e-8, gtol=None, ftol=None
- )
-
- # Round results above the tolerance to get fixed results on different OS
- a_parameter, b_parameter, c_parameter = results.x
- c_parameter = np.round(c_parameter, 3)
-
- # Calculate the easting and northing offsets from the above parameters
- east_offset = np.round(a_parameter * np.sin(b_parameter), 3)
- north_offset = np.round(a_parameter * np.cos(b_parameter), 3)
-
- return east_offset, north_offset, c_parameter
-
-
-##################################
-# Affine coregistration subclasses
-##################################
-
-AffineCoregType = TypeVar("AffineCoregType", bound="AffineCoreg")
-
-
-class AffineCoreg(Coreg):
- """
- Generic affine coregistration class.
-
- Builds additional common affine methods on top of the generic Coreg class.
- Made to be subclassed.
- """
-
- _fit_called: bool = False # Flag to check if the .fit() method has been called.
- _is_affine: bool | None = None
-
- def __init__(
- self,
- subsample: float | int = 1.0,
- matrix: NDArrayf | None = None,
- meta: CoregDict | None = None,
- ) -> None:
- """Instantiate a generic AffineCoreg method."""
-
- super().__init__(meta=meta)
-
- # Define subsample size
- self._meta["subsample"] = subsample
-
- if matrix is not None:
- with warnings.catch_warnings():
- # This error is fixed in the upcoming 1.8
- warnings.filterwarnings("ignore", message="`np.float` is a deprecated alias for the builtin `float`")
- valid_matrix = pytransform3d.transformations.check_transform(matrix)
- self._meta["matrix"] = valid_matrix
- self._is_affine = True
-
- def to_matrix(self) -> NDArrayf:
- """Convert the transform to a 4x4 transformation matrix."""
- return self._to_matrix_func()
-
- def centroid(self) -> tuple[float, float, float] | None:
- """Get the centroid of the coregistration, if defined."""
- meta_centroid = self._meta.get("centroid")
-
- if meta_centroid is None:
- return None
-
- # Unpack the centroid in case it is in an unexpected format (an array, list or something else).
- return meta_centroid[0], meta_centroid[1], meta_centroid[2]
-
- @classmethod
- def from_matrix(cls, matrix: NDArrayf) -> AffineCoreg:
- """
- Instantiate a generic Coreg class from a transformation matrix.
-
- :param matrix: A 4x4 transformation matrix. Shape must be (4,4).
-
- :raises ValueError: If the matrix is incorrectly formatted.
-
- :returns: The instantiated generic Coreg class.
- """
- if np.any(~np.isfinite(matrix)):
- raise ValueError(f"Matrix has non-finite values:\n{matrix}")
- with warnings.catch_warnings():
- # This error is fixed in the upcoming 1.8
- warnings.filterwarnings("ignore", message="`np.float` is a deprecated alias for the builtin `float`")
- valid_matrix = pytransform3d.transformations.check_transform(matrix)
- return cls(matrix=valid_matrix)
-
- @classmethod
- def from_translation(cls, x_off: float = 0.0, y_off: float = 0.0, z_off: float = 0.0) -> AffineCoreg:
- """
- Instantiate a generic Coreg class from a X/Y/Z translation.
-
- :param x_off: The offset to apply in the X (west-east) direction.
- :param y_off: The offset to apply in the Y (south-north) direction.
- :param z_off: The offset to apply in the Z (vertical) direction.
-
- :raises ValueError: If the given translation contained invalid values.
-
- :returns: An instantiated generic Coreg class.
- """
- matrix = np.diag(np.ones(4, dtype=float))
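- # The translation components go in the last column of the 4x4 matrix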
- matrix[0, 3] = x_off
- matrix[1, 3] = y_off
- matrix[2, 3] = z_off
-
- return cls.from_matrix(matrix)
-
- def _to_matrix_func(self) -> NDArrayf:
- # FOR DEVELOPERS: This function needs to be implemented if the `self._meta['matrix']` keyword is not None.
-
- # Try to see if a matrix exists.
- meta_matrix = self._meta.get("matrix")
- if meta_matrix is not None:
- assert meta_matrix.shape == (4, 4), f"Invalid _meta matrix shape. Expected: (4, 4), got {meta_matrix.shape}"
- return meta_matrix
-
- raise NotImplementedError("This should be implemented by subclassing")
-
- def _fit_func(
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- weights: NDArrayf | None,
- bias_vars: dict[str, NDArrayf] | None = None,
- verbose: bool = False,
- **kwargs: Any,
- ) -> None:
- # FOR DEVELOPERS: This function needs to be implemented.
- raise NotImplementedError("This step has to be implemented by subclassing.")
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: dict[str, NDArrayf] | None = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
- # FOR DEVELOPERS: This function is only needed for non-rigid transforms.
- raise NotImplementedError("This should have been implemented by subclassing")
-
- def _apply_pts_func(self, coords: NDArrayf) -> NDArrayf:
- # FOR DEVELOPERS: This function is only needed for non-rigid transforms.
- raise NotImplementedError("This should have been implemented by subclassing")
-
-
-class VerticalShift(AffineCoreg):
- """
- DEM vertical shift correction.
-
- Estimates the mean (or median, weighted avg., etc.) vertical offset between two DEMs.
- """
-
- def __init__(
- self, vshift_func: Callable[[NDArrayf], np.floating[Any]] = np.average, subsample: float | int = 1.0
- ) -> None:  # pylint: disable=super-init-not-called
- """
- Instantiate a vertical shift correction object.
-
- :param vshift_func: The function to use for calculating the vertical shift. Default: (weighted) average.
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- self._meta: CoregDict = {} # All __init__ functions should instantiate an empty dict.
-
- super().__init__(meta={"vshift_func": vshift_func}, subsample=subsample)
-
- def _fit_func(
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- weights: NDArrayf | None,
- bias_vars: dict[str, NDArrayf] | None = None,
- verbose: bool = False,
- **kwargs: Any,
- ) -> None:
- """Estimate the vertical shift using the vshift_func."""
-
- if verbose:
- print("Estimating the vertical shift...")
- diff = ref_dem - tba_dem
-
- valid_mask = np.logical_and.reduce((inlier_mask, np.isfinite(diff)))
- subsample_mask = self._get_subsample_on_valid_mask(valid_mask=valid_mask)
-
- diff = diff[subsample_mask]
-
- if np.count_nonzero(np.isfinite(diff)) == 0:
- raise ValueError("No finite values in vertical shift comparison.")
-
- # Use weights if those were provided.
- vshift = (
- self._meta["vshift_func"](diff)
- if weights is None
- else self._meta["vshift_func"](diff, weights) # type: ignore
- )
-
- # TODO: We might need to define the type of bias_func with Callback protocols to get the optional argument,
- # TODO: once we have the weights implemented
-
- if verbose:
- print("Vertical shift estimated")
-
- self._meta["vshift"] = vshift
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: dict[str, NDArrayf] | None = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
- """Apply the VerticalShift function to a DEM."""
- return dem + self._meta["vshift"], transform
-
- def _apply_pts_func(self, coords: NDArrayf) -> NDArrayf:
- """Apply the VerticalShift function to a set of points."""
- new_coords = coords.copy()
- new_coords[:, 2] += self._meta["vshift"]
- return new_coords
-
- def _to_matrix_func(self) -> NDArrayf:
- """Convert the vertical shift to a transform matrix."""
- empty_matrix = np.diag(np.ones(4, dtype=float))
-
- empty_matrix[2, 3] += self._meta["vshift"]
-
- return empty_matrix
-
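-
-# A rough usage sketch of VerticalShift (illustrative only): two synthetic DEMs offset by a
-# constant 5 m, aligned by estimating the median vertical shift.
-def _example_vertical_shift() -> NDArrayf:
-    rng = np.random.default_rng(42)
-    ref = rng.normal(1000, 50, size=(50, 50))
-    tba = ref - 5.0  # The DEM to be aligned is 5 m too low.
-    transform = rio.transform.from_origin(0, 5000, 100, 100)  # 100 m pixels.
-    vshift = VerticalShift(vshift_func=np.median)
-    vshift.fit(ref, tba, transform=transform, crs=rio.CRS.from_epsg(32633))
-    # The estimated shift (close to +5 m) is stored in the metadata and encoded at [2, 3] of the matrix.
-    return vshift.to_matrix()
-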
-
-class ICP(AffineCoreg):
- """
- Iterative Closest Point DEM coregistration.
- Based on the 3D registration algorithm of Besl and McKay (1992), https://doi.org/10.1117/12.57955.
-
- Estimates a rigid transform (rotation + translation) between two DEMs.
-
- Requires the optional dependency 'opencv' (cv2).
- See the opencv documentation for more info: https://docs.opencv.org/master/dc/d9b/classcv_1_1ppf__match__3d_1_1ICP.html
- """
-
- def __init__(
- self,
- max_iterations: int = 100,
- tolerance: float = 0.05,
- rejection_scale: float = 2.5,
- num_levels: int = 6,
- subsample: float | int = 5e5,
- ) -> None:
- """
- Instantiate an ICP coregistration object.
-
- :param max_iterations: The maximum allowed iterations before stopping.
- :param tolerance: The residual change threshold after which to stop the iterations.
- :param rejection_scale: The threshold (std * rejection_scale) to consider points as outliers.
- :param num_levels: Number of octree levels to consider. A higher number is faster but may be less accurate.
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- if not _has_cv2:
- raise ValueError("Optional dependency needed. Install 'opencv'")
-
- # TODO: Move these to _meta?
- self.max_iterations = max_iterations
- self.tolerance = tolerance
- self.rejection_scale = rejection_scale
- self.num_levels = num_levels
-
- super().__init__(subsample=subsample)
-
- def _fit_func(
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- weights: NDArrayf | None,
- bias_vars: dict[str, NDArrayf] | None = None,
- verbose: bool = False,
- **kwargs: Any,
- ) -> None:
- """Estimate the rigid transform from tba_dem to ref_dem."""
-
- if weights is not None:
- warnings.warn("ICP was given weights, but does not support it.")
-
- bounds, resolution = _transform_to_bounds_and_res(ref_dem.shape, transform)
- # Generate the x and y coordinates for the reference_dem
- x_coords, y_coords = _get_x_and_y_coords(ref_dem.shape, transform)
- gradient_x, gradient_y = np.gradient(ref_dem)
-
- normal_east = np.sin(np.arctan(gradient_y / resolution)) * -1
- normal_north = np.sin(np.arctan(gradient_x / resolution))
- normal_up = 1 - np.linalg.norm([normal_east, normal_north], axis=0)
-
- valid_mask = np.logical_and.reduce(
- (inlier_mask, np.isfinite(ref_dem), np.isfinite(normal_east), np.isfinite(normal_north))
- )
- subsample_mask = self._get_subsample_on_valid_mask(valid_mask=valid_mask)
-
- ref_pts = pd.DataFrame(
- np.dstack(
- [
- x_coords[subsample_mask],
- y_coords[subsample_mask],
- ref_dem[subsample_mask],
- normal_east[subsample_mask],
- normal_north[subsample_mask],
- normal_up[subsample_mask],
- ]
- ).squeeze(),
- columns=["E", "N", "z", "nx", "ny", "nz"],
- )
-
- self._fit_pts_func(ref_dem=ref_pts, tba_dem=tba_dem, transform=transform, verbose=verbose, z_name="z")
-
- def _fit_pts_func(
- self,
- ref_dem: pd.DataFrame,
- tba_dem: RasterType | NDArrayf,
- transform: rio.transform.Affine | None,
- verbose: bool = False,
- z_name: str = "z",
- **kwargs: Any,
- ) -> None:
-
- if transform is None and hasattr(tba_dem, "transform"):
- transform = tba_dem.transform # type: ignore
- if hasattr(tba_dem, "transform"):
- tba_dem = tba_dem.data
-
- ref_dem = ref_dem.dropna(how="any", subset=["E", "N", z_name])
- bounds, resolution = _transform_to_bounds_and_res(tba_dem.shape, transform)
- points: dict[str, NDArrayf] = {}
- # Generate the x and y coordinates for the TBA DEM
- x_coords, y_coords = _get_x_and_y_coords(tba_dem.shape, transform)
- centroid = (np.mean([bounds.left, bounds.right]), np.mean([bounds.bottom, bounds.top]), 0.0)
- # Subtract by the bounding coordinates to avoid float32 rounding errors.
- x_coords -= centroid[0]
- y_coords -= centroid[1]
-
- gradient_x, gradient_y = np.gradient(tba_dem)
-
- # This CRS is temporary and doesn't affect the result. It's just needed for Raster instantiation.
- dem_kwargs = {"transform": transform, "crs": rio.CRS.from_epsg(32633), "nodata": -9999.0}
- normal_east = Raster.from_array(np.sin(np.arctan(gradient_y / resolution)) * -1, **dem_kwargs)
- normal_north = Raster.from_array(np.sin(np.arctan(gradient_x / resolution)), **dem_kwargs)
- normal_up = Raster.from_array(1 - np.linalg.norm([normal_east.data, normal_north.data], axis=0), **dem_kwargs)
-
- valid_mask = ~np.isnan(tba_dem) & ~np.isnan(normal_east.data) & ~np.isnan(normal_north.data)
-
- points["tba"] = np.dstack(
- [
- x_coords[valid_mask],
- y_coords[valid_mask],
- tba_dem[valid_mask],
- normal_east.data[valid_mask],
- normal_north.data[valid_mask],
- normal_up.data[valid_mask],
- ]
- ).squeeze()
-
- if any(col not in ref_dem for col in ["nx", "ny", "nz"]):
- for key, raster in [("nx", normal_east), ("ny", normal_north), ("nz", normal_up)]:
- raster.tags["AREA_OR_POINT"] = "Area"
- ref_dem[key] = raster.interp_points(
- ref_dem[["E", "N"]].values, shift_area_or_point=True, mode="nearest"
- )
-
- ref_dem["E"] -= centroid[0]
- ref_dem["N"] -= centroid[1]
-
- points["ref"] = ref_dem[["E", "N", z_name, "nx", "ny", "nz"]].values
-
- for key in points:
- points[key] = points[key][~np.any(np.isnan(points[key]), axis=1)].astype("float32")
- points[key][:, :2] -= resolution / 2
-
- icp = cv2.ppf_match_3d_ICP(self.max_iterations, self.tolerance, self.rejection_scale, self.num_levels)
- if verbose:
- print("Running ICP...")
- try:
- _, residual, matrix = icp.registerModelToScene(points["tba"], points["ref"])
- except cv2.error as exception:
- if "(expected: 'n > 0'), where" not in str(exception):
- raise exception
-
- raise ValueError(
- "Not enough valid points in input data."
- f"'reference_dem' had {points['ref'].size} valid points."
- f"'dem_to_be_aligned' had {points['tba'].size} valid points."
- )
-
- if verbose:
- print("ICP finished")
-
- assert residual < 1000, f"ICP coregistration failed: residual={residual}, threshold: 1000"
-
- self._meta["centroid"] = centroid
- self._meta["matrix"] = matrix
-
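-
-# A rough usage sketch of ICP (illustrative only; requires the optional 'opencv' dependency).
-# Unlike the translation-only methods, ICP stores a full rigid transform: the 4x4 matrix plus the
-# centroid that the rotation is defined around, both of which apply_matrix() can consume.
-def _example_icp(
-    ref_dem: NDArrayf, tba_dem: NDArrayf, transform: rio.transform.Affine, crs: rio.crs.CRS
-) -> tuple[NDArrayf, tuple[float, float, float] | None]:
-    icp = ICP(max_iterations=50)
-    icp.fit(ref_dem, tba_dem, transform=transform, crs=crs)
-    return icp.to_matrix(), icp.centroid()
-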
-
-class Tilt(AffineCoreg):
- """
- DEM tilting.
-
- Estimates a 2D plane correction of the difference between two DEMs.
- """
-
- def __init__(self, subsample: int | float = 5e5) -> None:
- """
- Instantiate a tilt correction object.
-
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- self.poly_order = 1
-
- super().__init__(subsample=subsample)
-
- def _fit_func(
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- weights: NDArrayf | None,
- bias_vars: dict[str, NDArrayf] | None = None,
- verbose: bool = False,
- **kwargs: Any,
- ) -> None:
- """Fit the dDEM between the DEMs to a least squares polynomial equation."""
- ddem = ref_dem - tba_dem
- ddem[~inlier_mask] = np.nan
- x_coords, y_coords = _get_x_and_y_coords(ref_dem.shape, transform)
- fit_ramp, coefs = deramping(
- ddem, x_coords, y_coords, degree=self.poly_order, subsample=self._meta["subsample"], verbose=verbose
- )
-
- self._meta["coefficients"] = coefs[0]
- self._meta["func"] = fit_ramp
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: dict[str, NDArrayf] | None = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
- """Apply the deramp function to a DEM."""
- x_coords, y_coords = _get_x_and_y_coords(dem.shape, transform)
-
- ramp = self._meta["func"](x_coords, y_coords)
-
- return dem + ramp, transform
-
- def _apply_pts_func(self, coords: NDArrayf) -> NDArrayf:
- """Apply the deramp function to a set of points."""
- new_coords = coords.copy()
-
- new_coords[:, 2] += self._meta["func"](new_coords[:, 0], new_coords[:, 1])
-
- return new_coords
-
- def _to_matrix_func(self) -> NDArrayf:
- """Return a transform matrix if possible."""
- if self.poly_order > 1:
- raise ValueError(
- "Nonlinear deramping degrees cannot be represented as transformation matrices."
- f" (max 1, given: {self.poly_order})"
- )
- if self.poly_order == 1:
- raise NotImplementedError("Vertical shift, rotation and horizontal scaling have to be implemented.")
-
- # If degree==0, it's just a bias correction
- empty_matrix = np.diag(np.ones(4, dtype=float))
-
- empty_matrix[2, 3] += self._meta["coefficients"][0]
-
- return empty_matrix
-
-
-class NuthKaab(AffineCoreg):
- """
- Nuth and Kääb (2011) DEM coregistration.
-
- Implemented after the paper:
- https://doi.org/10.5194/tc-5-271-2011
- """
-
- def __init__(self, max_iterations: int = 10, offset_threshold: float = 0.05, subsample: int | float = 5e5) -> None:
- """
- Instantiate a new Nuth and Kääb (2011) coregistration object.
-
- :param max_iterations: The maximum allowed iterations before stopping.
- :param offset_threshold: The residual offset threshold after which to stop the iterations.
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- self._meta: CoregDict
- self.max_iterations = max_iterations
- self.offset_threshold = offset_threshold
-
- super().__init__(subsample=subsample)
-
- def _fit_func(
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- weights: NDArrayf | None,
- bias_vars: dict[str, NDArrayf] | None = None,
- verbose: bool = False,
- **kwargs: Any,
- ) -> None:
- """Estimate the x/y/z offset between two DEMs."""
- if verbose:
- print("Running Nuth and Kääb (2011) coregistration")
-
- bounds, resolution = _transform_to_bounds_and_res(ref_dem.shape, transform)
- # Make a new DEM which will be modified inplace
- aligned_dem = tba_dem.copy()
-
- # Check that DEM CRS is projected, otherwise slope is not correctly calculated
- if not crs.is_projected:
- raise NotImplementedError(
- f"DEMs CRS is {crs}. NuthKaab coregistration only works with \
-projected CRS. First, reproject your DEMs in a local projected CRS, e.g. UTM, and re-run."
- )
-
- # Calculate slope and aspect maps from the reference DEM
- if verbose:
- print(" Calculate slope and aspect")
-
- slope_tan, aspect = _calculate_slope_and_aspect_nuthkaab(ref_dem)
-
- valid_mask = np.logical_and.reduce(
- (inlier_mask, np.isfinite(ref_dem), np.isfinite(tba_dem), np.isfinite(slope_tan))
- )
- subsample_mask = self._get_subsample_on_valid_mask(valid_mask=valid_mask)
-
- ref_dem[~subsample_mask] = np.nan
-
- # Make index grids for the east and north dimensions
- east_grid = np.arange(ref_dem.shape[1])
- north_grid = np.arange(ref_dem.shape[0])
-
- # Make a function to estimate the aligned DEM (used to construct an offset DEM)
- elevation_function = scipy.interpolate.RectBivariateSpline(
- x=north_grid, y=east_grid, z=np.where(np.isnan(aligned_dem), -9999, aligned_dem), kx=1, ky=1
- )
-
- # Make a function to estimate nodata gaps in the aligned DEM (used to fix the estimated offset DEM)
- # Use spline degree 1, as higher degrees will create instabilities around 1 and mess up the nodata mask
- nodata_function = scipy.interpolate.RectBivariateSpline(
- x=north_grid, y=east_grid, z=np.isnan(aligned_dem), kx=1, ky=1
- )
-
- # Initialise east and north pixel offset variables (these will be incremented up and down)
- offset_east, offset_north = 0.0, 0.0
-
- # Calculate initial dDEM statistics
- elevation_difference = ref_dem - aligned_dem
-
- vshift = np.nanmedian(elevation_difference)
- nmad_old = nmad(elevation_difference)
-
- if verbose:
- print(" Statistics on initial dh:")
- print(f" Median = {vshift:.2f} - NMAD = {nmad_old:.2f}")
-
- # Iteratively run the analysis until the maximum iterations or until the error gets low enough
- if verbose:
- print(" Iteratively estimating horizontal shift:")
-
- # If verbose is True, will use progressbar and print additional statements
- pbar = trange(self.max_iterations, disable=not verbose, desc=" Progress")
- for i in pbar:
-
- # Calculate the elevation difference and the residual (NMAD) between them.
- elevation_difference = ref_dem - aligned_dem
- vshift = np.nanmedian(elevation_difference)
- # Correct potential vertical shifts
- elevation_difference -= vshift
-
- # Estimate the horizontal shift from the implementation by Nuth and Kääb (2011)
- east_diff, north_diff, _ = get_horizontal_shift( # type: ignore
- elevation_difference=elevation_difference, slope=slope_tan, aspect=aspect
- )
- if verbose:
- pbar.write(f" #{i + 1:d} - Offset in pixels : ({east_diff:.2f}, {north_diff:.2f})")
-
- # Increment the offsets with the overall offset
- offset_east += east_diff
- offset_north += north_diff
-
- # Calculate new elevations from the offset x- and y-coordinates
- new_elevation = elevation_function(y=east_grid + offset_east, x=north_grid - offset_north)
-
- # Set NaNs where NaNs were in the original data
- new_nans = nodata_function(y=east_grid + offset_east, x=north_grid - offset_north)
- new_elevation[new_nans > 0] = np.nan
-
- # Assign the newly calculated elevations to the aligned_dem
- aligned_dem = new_elevation
-
- # Update statistics
- elevation_difference = ref_dem - aligned_dem
-
- vshift = np.nanmedian(elevation_difference)
- nmad_new = nmad(elevation_difference)
-
- nmad_gain = (nmad_new - nmad_old) / nmad_old * 100
-
- if verbose:
- pbar.write(f" Median = {vshift:.2f} - NMAD = {nmad_new:.2f} ==> Gain = {nmad_gain:.2f}%")
-
- # Stop if the NMAD is low and a few iterations have been made
- assert ~np.isnan(nmad_new), (offset_east, offset_north)
-
- offset = np.sqrt(east_diff**2 + north_diff**2)
- if i > 1 and offset < self.offset_threshold:
- if verbose:
- pbar.write(
- f" Last offset was below the residual offset threshold of {self.offset_threshold} -> stopping"
- )
- break
-
- nmad_old = nmad_new
-
- # Print final results
- if verbose:
- print(f"\n Final offset in pixels (east, north) : ({offset_east:f}, {offset_north:f})")
- print(" Statistics on coregistered dh:")
- print(f" Median = {vshift:.2f} - NMAD = {nmad_new:.2f}")
-
- self._meta["offset_east_px"] = offset_east
- self._meta["offset_north_px"] = offset_north
- self._meta["vshift"] = vshift
- self._meta["resolution"] = resolution
-
- def _fit_pts_func(
- self,
- ref_dem: pd.DataFrame,
- tba_dem: RasterType,
- transform: rio.transform.Affine | None,
- weights: NDArrayf | None,
- verbose: bool = False,
- order: int = 1,
- z_name: str = "z",
- ) -> None:
- """
- Estimate the x/y/z offset between a DEM and a point cloud.
-
- Differences from the raster-raster implementation:
- 1. The elevation_function and nodata_function are removed; the dataframe (points) is shifted instead of the DEM.
- 2. Latitude and longitude are not supported as inputs.
-
- :param z_name: the column name of the dataframe used for elevation differencing
-
- """
-
- if verbose:
- print("Running Nuth and Kääb (2011) coregistration. Shift pts instead of shifting dem")
-
- tba_arr, _ = get_array_and_mask(tba_dem)
-
- resolution = tba_dem.res[0]
- x_coords, y_coords = (ref_dem["E"].values, ref_dem["N"].values)
-
- # Assume that the coordinates represent the center of a theoretical pixel.
- # The raster sampling is done in the upper left corner, meaning all points have to be shifted accordingly.
- x_coords -= resolution / 2
- y_coords += resolution / 2
-
- pts = np.array((x_coords, y_coords)).T
- # This needs to be consistent, so it's hardcoded here
- area_or_point = "Area"
- # Make a new DEM which will be modified inplace
- aligned_dem = tba_dem.copy()
- aligned_dem.tags["AREA_OR_POINT"] = area_or_point
-
- # Calculate slope and aspect maps from the reference DEM
- if verbose:
- print(" Calculate slope and aspect")
- slope, aspect = _calculate_slope_and_aspect_nuthkaab(tba_arr)
-
- slope_r = tba_dem.copy(new_array=np.ma.masked_array(slope[None, :, :], mask=~np.isfinite(slope[None, :, :])))
- slope_r.tags["AREA_OR_POINT"] = area_or_point
- aspect_r = tba_dem.copy(new_array=np.ma.masked_array(aspect[None, :, :], mask=~np.isfinite(aspect[None, :, :])))
- aspect_r.tags["AREA_OR_POINT"] = area_or_point
-
- # Initialise east and north pixel offset variables (these will be incremented up and down)
- offset_east, offset_north, vshift = 0.0, 0.0, 0.0
-
- # Calculate initial DEM statistics
- slope_pts = slope_r.interp_points(pts, mode="nearest", shift_area_or_point=True)
- aspect_pts = aspect_r.interp_points(pts, mode="nearest", shift_area_or_point=True)
- tba_pts = aligned_dem.interp_points(pts, mode="nearest", shift_area_or_point=True)
-
- # Treat new_pts as a moving window: at each iteration it is shifted slightly to sample the DEM at the updated positions
- new_pts = pts.copy()
-
- elevation_difference = ref_dem[z_name].values - tba_pts
- vshift = float(np.nanmedian(elevation_difference))
- nmad_old = nmad(elevation_difference)
-
- if verbose:
- print(" Statistics on initial dh:")
- print(f" Median = {vshift:.3f} - NMAD = {nmad_old:.3f}")
-
- # Iteratively run the analysis until the maximum iterations or until the error gets low enough
- if verbose:
- print(" Iteratively estimating horizontal shit:")
-
- # If verbose is True, will use progressbar and print additional statements
- pbar = trange(self.max_iterations, disable=not verbose, desc=" Progress")
- for i in pbar:
-
- # Estimate the horizontal shift from the implementation by Nuth and Kääb (2011)
- east_diff, north_diff, _ = get_horizontal_shift( # type: ignore
- elevation_difference=elevation_difference, slope=slope_pts, aspect=aspect_pts
- )
- if verbose:
- pbar.write(f" #{i + 1:d} - Offset in pixels : ({east_diff:.3f}, {north_diff:.3f})")
-
- # Increment the offsets with the overall offset
- offset_east += east_diff
- offset_north += north_diff
-
- # Assign offset to the coordinates of the pts
- # Treat new_pts as a moving window: shift it slightly to sample the DEM at the updated positions
- new_pts += [east_diff * resolution, north_diff * resolution]
-
- # Get new values
- tba_pts = aligned_dem.interp_points(new_pts, mode="nearest", shift_area_or_point=True)
- elevation_difference = ref_dem[z_name].values - tba_pts
-
- # Mask out no data by dem's mask
- pts_, mask_ = _mask_dataframe_by_dem(new_pts, tba_dem)
-
- # Update values related to the shifted pts
- elevation_difference = elevation_difference[mask_]
- slope_pts = slope_r.interp_points(pts_, mode="nearest", shift_area_or_point=True)
- aspect_pts = aspect_r.interp_points(pts_, mode="nearest", shift_area_or_point=True)
- vshift = float(np.nanmedian(elevation_difference))
-
- # Update statistics
- elevation_difference -= vshift
- nmad_new = nmad(elevation_difference)
- nmad_gain = (nmad_new - nmad_old) / nmad_old * 100
-
- if verbose:
- pbar.write(f" Median = {vshift:.3f} - NMAD = {nmad_new:.3f} ==> Gain = {nmad_gain:.3f}%")
-
- # Stop if the NMAD is low and a few iterations have been made
- assert ~np.isnan(nmad_new), (offset_east, offset_north)
-
- offset = np.sqrt(east_diff**2 + north_diff**2)
- if i > 1 and offset < self.offset_threshold:
- if verbose:
- pbar.write(
- f" Last offset was below the residual offset threshold of {self.offset_threshold} -> stopping"
- )
- break
-
- nmad_old = nmad_new
-
- # Print final results
- if verbose:
- print(
- "\n Final offset in pixels (east, north, bais) : ({:f}, {:f},{:f})".format(
- offset_east, offset_north, vshift
- )
- )
- print(" Statistics on coregistered dh:")
- print(f" Median = {vshift:.3f} - NMAD = {nmad_new:.3f}")
-
- self._meta["offset_east_px"] = offset_east
- self._meta["offset_north_px"] = offset_north
- self._meta["vshift"] = vshift
- self._meta["resolution"] = resolution
- self._meta["nmad"] = nmad_new
-
- def _to_matrix_func(self) -> NDArrayf:
- """Return a transformation matrix from the estimated offsets."""
- offset_east = self._meta["offset_east_px"] * self._meta["resolution"]
- offset_north = self._meta["offset_north_px"] * self._meta["resolution"]
-
- matrix = np.diag(np.ones(4, dtype=float))
- matrix[0, 3] += offset_east
- matrix[1, 3] += offset_north
- matrix[2, 3] += self._meta["vshift"]
-
- return matrix
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: dict[str, NDArrayf] | None = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
- """Apply the Nuth & Kaab shift to a DEM."""
- offset_east = self._meta["offset_east_px"] * self._meta["resolution"]
- offset_north = self._meta["offset_north_px"] * self._meta["resolution"]
-
- updated_transform = apply_xy_shift(transform, -offset_east, -offset_north)
- vshift = self._meta["vshift"]
- return dem + vshift, updated_transform
-
- def _apply_pts_func(self, coords: NDArrayf) -> NDArrayf:
- """Apply the Nuth & Kaab shift to a set of points."""
- offset_east = self._meta["offset_east_px"] * self._meta["resolution"]
- offset_north = self._meta["offset_north_px"] * self._meta["resolution"]
-
- new_coords = coords.copy()
- new_coords[:, 0] += offset_east
- new_coords[:, 1] += offset_north
- new_coords[:, 2] += self._meta["vshift"]
-
- return new_coords
-
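-
-# The iterative fit above relies on the Nuth and Kääb (2011) relation (roughly, their Eq. 3):
-#     dh / tan(slope) = a * cos(b - aspect) + c
-# where a is the magnitude of the horizontal shift, b its direction, and c relates the mean
-# vertical bias to the mean slope. A rough usage sketch on two synthetic DEMs offset by two
-# pixels (illustrative only):
-def _example_nuth_kaab() -> NDArrayf:
-    rng = np.random.default_rng(42)
-    x, y = np.meshgrid(np.arange(100, dtype=float), np.arange(100, dtype=float))
-    ref = 50 * np.sin(x / 10) + 50 * np.cos(y / 15) + rng.normal(0, 0.1, size=(100, 100))
-    tba = np.roll(ref, shift=2, axis=1)  # Shift the DEM two pixels to the east.
-    transform = rio.transform.from_origin(0, 10000, 100, 100)  # 100 m pixels.
-    nuth_kaab = NuthKaab()
-    nuth_kaab.fit(ref, tba, transform=transform, crs=rio.CRS.from_epsg(32633))
-    # The estimated pixel offsets and vertical shift are stored in the metadata and cast to a matrix.
-    return nuth_kaab.to_matrix()
-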
-
-class GradientDescending(AffineCoreg):
- """
- Gradient descending coregistration by Zhihao (in preparation).
-
- Estimates the x/y/z offset between a DEM and a point cloud by iteratively minimizing the NMAD of the elevation differences.
- """
-
- def __init__(
- self,
- x0: tuple[float, float] = (0, 0),
- bounds: tuple[float, float] = (-3, 3),
- deltainit: int = 2,
- deltatol: float = 0.004,
- feps: float = 0.0001,
- subsample: int | float = 6000,
- ) -> None:
- """
- Instantiate gradient descending coregistration object.
-
- :param x0: The initial point of gradient descending iteration.
- :param bounds: The boundary of the maximum shift.
- :param deltainit: Initial pattern size.
- :param deltatol: Target pattern size, i.e. the precision you want to achieve.
- :param feps: Smallest difference in function value that the algorithm resolves.
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
-
- The algorithm terminates when the iteration is locally optimal at the target pattern size 'deltatol',
- or when the function value differs by less than the tolerance 'feps' along all directions.
-
- """
- self._meta: CoregDict
- self.bounds = bounds
- self.x0 = x0
- self.deltainit = deltainit
- self.deltatol = deltatol
- self.feps = feps
-
- super().__init__(subsample=subsample)
-
- def _fit_pts_func(
- self,
- ref_dem: pd.DataFrame,
- tba_dem: RasterType,
- verbose: bool = False,
- z_name: str = "z",
- weights: str | None = None,
- random_state: int = 42,
- **kwargs: Any,
- ) -> None:
- """Estimate the x/y/z offset between two DEMs.
- :param ref_dem: the dataframe used as ref
- :param tba_dem: the dem to be aligned
- :param z_name: the column name of dataframe used for elevation differencing
- :param weights: the column name of the dataframe used for weights; should have the same length as the z_name column
- :param random_state: The random state of the subsampling.
- """
- if not _has_noisyopt:
- raise ValueError("Optional dependency needed. Install 'noisyopt'")
-
- # Perform downsampling if subsample != None
- if self._meta["subsample"] and len(ref_dem) > self._meta["subsample"]:
- ref_dem = ref_dem.sample(frac=self._meta["subsample"] / len(ref_dem), random_state=random_state).copy()
- else:
- ref_dem = ref_dem.copy()
-
- resolution = tba_dem.res[0]
- # Assume that the coordinates represent the center of a theoretical pixel.
- # The raster sampling is done in the upper left corner, meaning all points have to be shifted accordingly.
- ref_dem["E"] -= resolution / 2
- ref_dem["N"] += resolution / 2
- area_or_point = "Area"
-
- old_aop = tba_dem.tags.get("AREA_OR_POINT", None)
- tba_dem.tags["AREA_OR_POINT"] = area_or_point
-
- if verbose:
- print("Running Gradient Descending Coreg - Zhihao (in preparation) ")
- if self._meta["subsample"]:
- print("Running on downsampling. The length of the gdf:", len(ref_dem))
-
- elevation_difference = _residuals_df(tba_dem, ref_dem, (0, 0), 0, z_name=z_name)
- nmad_old = nmad(elevation_difference)
- vshift = np.nanmedian(elevation_difference)
- print(" Statistics on initial dh:")
- print(f" Median = {vshift:.4f} - NMAD = {nmad_old:.4f}")
-
- # start iteration, find the best shifting px
- def func_cost(x: tuple[float, float]) -> np.floating[Any]:
- return nmad(_residuals_df(tba_dem, ref_dem, x, 0, z_name=z_name, weight=weights))
-
- res = minimizeCompass(
- func_cost,
- x0=self.x0,
- deltainit=self.deltainit,
- deltatol=self.deltatol,
- feps=self.feps,
- bounds=(self.bounds, self.bounds),
- disp=verbose,
- errorcontrol=False,
- )
-
- # Send the best solution to find all results
- elevation_difference = _residuals_df(tba_dem, ref_dem, (res.x[0], res.x[1]), 0, z_name=z_name)
-
- if old_aop is None:
- del tba_dem.tags["AREA_OR_POINT"]
- else:
- tba_dem.tags["AREA_OR_POINT"] = old_aop
-
- # results statistics
- vshift = np.nanmedian(elevation_difference)
- nmad_new = nmad(elevation_difference)
-
- # Print final results
- if verbose:
-
- print(f"\n Final offset in pixels (east, north) : ({res.x[0]:f}, {res.x[1]:f})")
- print(" Statistics on coregistered dh:")
- print(f" Median = {vshift:.4f} - NMAD = {nmad_new:.4f}")
-
- self._meta["offset_east_px"] = res.x[0]
- self._meta["offset_north_px"] = res.x[1]
- self._meta["vshift"] = vshift
- self._meta["resolution"] = resolution
-
- def _fit_func(
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- weights: NDArrayf | None,
- bias_vars: dict[str, NDArrayf] | None = None,
- verbose: bool = False,
- **kwargs: Any,
- ) -> None:
-
- ref_dem = (
- Raster.from_array(ref_dem, transform=transform, crs=crs, nodata=-9999.0)
- .to_points(as_array=False, pixel_offset="center")
- .ds
- )
- ref_dem["E"] = ref_dem.geometry.x
- ref_dem["N"] = ref_dem.geometry.y
- ref_dem.rename(columns={"b1": "z"}, inplace=True)
- tba_dem = Raster.from_array(tba_dem, transform=transform, crs=crs, nodata=-9999.0)
- self._fit_pts_func(ref_dem=ref_dem, tba_dem=tba_dem, transform=transform, **kwargs)
-
- def _to_matrix_func(self) -> NDArrayf:
- """Return a transformation matrix from the estimated offsets."""
- offset_east = self._meta["offset_east_px"] * self._meta["resolution"]
- offset_north = self._meta["offset_north_px"] * self._meta["resolution"]
-
- matrix = np.diag(np.ones(4, dtype=float))
- matrix[0, 3] += offset_east
- matrix[1, 3] += offset_north
- matrix[2, 3] += self._meta["vshift"]
-
- return matrix
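-
-# The fit above boils down to minimizing NMAD(dh) over a 2D pixel shift with a derivative-free
-# pattern search (noisyopt.minimizeCompass). A rough sketch of the cost evaluated internally,
-# for a dataframe with 'E', 'N' and elevation columns (illustrative only):
-def _example_gradient_descending_cost(tba_dem: RasterType, ref_pts: pd.DataFrame, z_name: str = "z") -> float:
-    candidate_shift_px = (1.0, 1.0)  # Candidate (east, north) shift, in pixels.
-    return float(nmad(_residuals_df(tba_dem, ref_pts, candidate_shift_px, 0, z_name=z_name)))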
diff --git a/xdem/coreg/base.py b/xdem/coreg/base.py
deleted file mode 100644
index 0ce059c1..00000000
--- a/xdem/coreg/base.py
+++ /dev/null
@@ -1,2155 +0,0 @@
-"""Base coregistration classes to define generic methods and pre/post-processing of input data."""
-
-from __future__ import annotations
-
-import concurrent.futures
-import copy
-import inspect
-import warnings
-from typing import (
- Any,
- Callable,
- Generator,
- Iterable,
- Literal,
- TypedDict,
- TypeVar,
- overload,
-)
-
-import affine
-
-try:
- import cv2
-
- _has_cv2 = True
-except ImportError:
- _has_cv2 = False
-import fiona
-import geoutils as gu
-import numpy as np
-import pandas as pd
-import rasterio as rio
-import rasterio.warp # pylint: disable=unused-import
-import scipy
-import scipy.interpolate
-import scipy.ndimage
-import scipy.optimize
-import skimage.transform
-from geoutils._typing import Number
-from geoutils.raster import (
- Mask,
- RasterType,
- get_array_and_mask,
- raster,
- subdivide_array,
- subsample_array,
-)
-from tqdm import tqdm
-
-from xdem._typing import MArrayf, NDArrayb, NDArrayf
-from xdem.spatialstats import nmad
-from xdem.terrain import get_terrain_attribute
-
-try:
- import pytransform3d.transformations
- from pytransform3d.transform_manager import TransformManager
-
- _HAS_P3D = True
-except ImportError:
- _HAS_P3D = False
-
-
-###########################################
-# Generic functions for preprocessing
-###########################################
-
-
-def _transform_to_bounds_and_res(
- shape: tuple[int, ...], transform: rio.transform.Affine
-) -> tuple[rio.coords.BoundingBox, float]:
- """Get the bounding box and (horizontal) resolution from a transform and the shape of a DEM."""
- bounds = rio.coords.BoundingBox(*rio.transform.array_bounds(shape[0], shape[1], transform=transform))
- resolution = (bounds.right - bounds.left) / shape[1]
-
- return bounds, resolution
-
-
-def _get_x_and_y_coords(shape: tuple[int, ...], transform: rio.transform.Affine) -> tuple[NDArrayf, NDArrayf]:
- """Generate center coordinates from a transform and the shape of a DEM."""
- bounds, resolution = _transform_to_bounds_and_res(shape, transform)
- x_coords, y_coords = np.meshgrid(
- np.linspace(bounds.left + resolution / 2, bounds.right - resolution / 2, num=shape[1]),
- np.linspace(bounds.bottom + resolution / 2, bounds.top - resolution / 2, num=shape[0])[::-1],
- )
- return x_coords, y_coords
-
-
-def _apply_xyz_shift_df(df: pd.DataFrame, dx: float, dy: float, dz: float, z_name: str) -> pd.DataFrame:
- """
- Apply a X/Y/Z shift to a dataframe of points.
-
- :param df: DataFrame with columns 'E', 'N' and z_name (height).
- :param dx: Shift to apply to the easting ('E') column.
- :param dy: Shift to apply to the northing ('N') column.
- :param dz: Vertical shift, subtracted from the z_name column.
- :param z_name: Name of the elevation column.
-
- :returns: The shifted DataFrame.
- """
-
- new_df = df.copy()
- new_df["E"] += dx
- new_df["N"] += dy
- new_df[z_name] -= dz
-
- return new_df
-
-
-def _residuals_df(
- dem: RasterType,
- df: pd.DataFrame,
- shift_px: tuple[float, float],
- dz: float,
- z_name: str,
- weight: str | None = None,
- **kwargs: Any,
-) -> NDArrayf:
- """
- Calculate the difference between the DEM and points (a dataframe with 'E', 'N', z_name) after applying a shift.
-
- :param dem: DEM raster.
- :param df: A dataframe with 'E' and 'N' columns, already subsetted to the DEM bounds and masks.
- :param shift_px: The shift in pixels (e_px, n_px).
- :param dz: The vertical bias.
- :param z_name: The column used to compare with the sampled DEM heights.
- :param weight: The column used as weights.
-
- :returns: An array of residuals.
- """
-
- # shift ee,nn
- ee, nn = (i * dem.res[0] for i in shift_px)
- df_shifted = _apply_xyz_shift_df(df, ee, nn, dz, z_name=z_name)
-
- # prepare DEM
- arr_ = dem.data.astype(np.float32)
-
- # get residual error at the point on DEM.
- i, j = dem.xy2ij(
- df_shifted["E"].values, df_shifted["N"].values, op=np.float32, shift_area_or_point=("AREA_OR_POINT" in dem.tags)
- )
-
- # ndimage return
- dem_h = scipy.ndimage.map_coordinates(arr_, [i, j], order=1, mode="nearest", **kwargs)
- weight_ = df[weight] if weight else 1
-
- return (df_shifted[z_name].values - dem_h) * weight_
-
-
-def _df_sampling_from_dem(
- dem: RasterType, tba_dem: RasterType, subsample: float | int = 10000, order: int = 1, offset: str | None = None
-) -> pd.DataFrame:
- """
- Generate a dataframe from a dem by random sampling.
-
- :param offset: The pixel’s center is returned by default, but a corner can be returned
- by setting offset to one of ul, ur, ll, lr.
-
- :returns dataframe: N,E coordinates and z of DEM at sampling points.
- """
-
- if offset is None:
- if dem.tags.get("AREA_OR_POINT", "").lower() == "area":
- offset = "ul"
- else:
- offset = "center"
-
- # Convert subsample to int
- valid_mask = np.logical_and(~dem.mask, ~tba_dem.mask)
- if (subsample <= 1) & (subsample > 0):
- npoints = int(subsample * np.count_nonzero(valid_mask))
- elif subsample > 1:
- npoints = int(subsample)
- else:
- raise ValueError("`subsample` must be > 0")
-
- # Avoid the DEM edges and masked-out areas when sampling
- width, length = dem.shape
- i, j = np.random.randint(10, width - 10, npoints), np.random.randint(10, length - 10, npoints)
- mask = dem.data.mask
-
- # Get value
- x, y = dem.ij2xy(i[~mask[i, j]], j[~mask[i, j]], offset=offset)
- z = scipy.ndimage.map_coordinates(
- dem.data.astype(np.float32), [i[~mask[i, j]], j[~mask[i, j]]], order=order, mode="nearest"
- )
- df = pd.DataFrame({"z": z, "N": y, "E": x})
-
- # mask out from tba_dem
- if tba_dem is not None:
- df, _ = _mask_dataframe_by_dem(df, tba_dem)
-
- return df
-
-
-def _mask_dataframe_by_dem(df: pd.DataFrame | NDArrayf, dem: RasterType) -> tuple[pd.DataFrame | NDArrayf, NDArrayb]:
- """
- Mask out the dataframe (with 'E', 'N' columns), or np.ndarray ([E, N]), using the DEM's mask.
-
- Return the new dataframe and the mask.
- """
-
- final_mask = ~dem.data.mask
- mask_raster = dem.copy(new_array=final_mask.astype(np.float32))
-
- if isinstance(df, pd.DataFrame):
- pts = np.array((df["E"].values, df["N"].values)).T
- elif isinstance(df, np.ndarray):
- pts = df
-
- ref_inlier = mask_raster.interp_points(pts, input_latlon=False, order=0)
- new_df = df[ref_inlier.astype(bool)].copy()
-
- return new_df, ref_inlier.astype(bool)
-
-
-def _calculate_ddem_stats(
- ddem: NDArrayf | MArrayf,
- inlier_mask: NDArrayb | None = None,
- stats_list: tuple[Callable[[NDArrayf], Number], ...] | None = None,
- stats_labels: tuple[str, ...] | None = None,
-) -> dict[str, float]:
- """
- Calculate standard statistics of ddem, e.g., to be used to compare before/after coregistration.
- Default statistics are: count, mean, median, NMAD and std.
-
- :param ddem: The DEM difference to be analyzed.
- :param inlier_mask: 2D boolean array of areas to include in the analysis (inliers=True).
- :param stats_list: Statistics to compute on the DEM difference.
- :param stats_labels: Labels of the statistics to compute (same length as stats_list).
-
- :returns: A dictionary containing the statistics.
- """
- # Default stats - Cannot be put in default args due to circular import with xdem.spatialstats.nmad.
- if (stats_list is None) or (stats_labels is None):
- stats_list = (np.size, np.mean, np.median, nmad, np.std)
- stats_labels = ("count", "mean", "median", "nmad", "std")
-
- # Check that stats_list and stats_labels are correct
- if len(stats_list) != len(stats_labels):
- raise ValueError("Number of items in `stats_list` and `stats_labels` should be identical.")
- for stat, label in zip(stats_list, stats_labels):
- if not callable(stat):
- raise ValueError(f"Item {stat} in `stats_list` should be a callable/function.")
- if not isinstance(label, str):
- raise ValueError(f"Item {label} in `stats_labels` should be a string.")
-
- # Get the mask of valid and inlier pixels
- nan_mask = ~np.isfinite(ddem)
- if inlier_mask is None:
- inlier_mask = np.ones(ddem.shape, dtype="bool")
- valid_ddem = ddem[~nan_mask & inlier_mask]
-
- # Calculate stats
- stats = {}
- for stat, label in zip(stats_list, stats_labels):
- stats[label] = stat(valid_ddem)
-
- return stats
-
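-
-# A rough usage sketch of _calculate_ddem_stats() (illustrative only): compare dh statistics
-# before and after a coregistration step.
-def _example_ddem_stats(ddem_before: NDArrayf, ddem_after: NDArrayf, inlier_mask: NDArrayb) -> pd.DataFrame:
-    stats_before = _calculate_ddem_stats(ddem_before, inlier_mask=inlier_mask)
-    stats_after = _calculate_ddem_stats(ddem_after, inlier_mask=inlier_mask)
-    return pd.DataFrame([stats_before, stats_after], index=["before", "after"])
-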
-
-def _mask_as_array(reference_raster: gu.Raster, mask: str | gu.Vector | gu.Raster) -> NDArrayf:
- """
- Convert a given mask into an array.
-
- :param reference_raster: The raster to use for rasterizing the mask if the mask is a vector.
- :param mask: A valid Vector, Raster or a respective filepath to a mask.
-
- :raises: ValueError: If the mask path is invalid.
- :raises: TypeError: If the wrong mask type was given.
-
- :returns: The mask as a squeezed array.
- """
- # Try to load the mask file if it's a filepath
- if isinstance(mask, str):
- # First try to load it as a Vector
- try:
- mask = gu.Vector(mask)
- # If the format is unsupported, try loading as a Raster
- except fiona.errors.DriverError:
- try:
- mask = gu.Raster(mask)
- # If that fails, raise an error
- except rio.errors.RasterioIOError:
- raise ValueError(f"Mask path not in a supported Raster or Vector format: {mask}")
-
- # At this point, the mask variable is either a Raster or a Vector
- # Now, convert the mask into an array by either rasterizing a Vector or by fetching a Raster's data
- if isinstance(mask, gu.Vector):
- mask_array = mask.create_mask(reference_raster, as_array=True)
- elif isinstance(mask, gu.Raster):
- # The true value is the maximum value in the raster, unless the maximum value is 0 or False
- true_value = np.nanmax(mask.data) if not np.nanmax(mask.data) in [0, False] else True
- mask_array = (mask.data == true_value).squeeze()
- else:
- raise TypeError(
- f"Mask has invalid type: {type(mask)}. Expected one of: " f"{[gu.Raster, gu.Vector, str, type(None)]}"
- )
-
- return mask_array
-
-
-def _preprocess_coreg_raster_input(
- reference_dem: NDArrayf | MArrayf | RasterType,
- dem_to_be_aligned: NDArrayf | MArrayf | RasterType,
- inlier_mask: NDArrayb | Mask | None = None,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
-) -> tuple[NDArrayf, NDArrayf, NDArrayb, affine.Affine, rio.crs.CRS]:
-
- # Validate that both inputs are valid array-like (or Raster) types.
- if not all(isinstance(dem, (np.ndarray, gu.Raster)) for dem in (reference_dem, dem_to_be_aligned)):
- raise ValueError(
- "Both DEMs need to be array-like (implement a numpy array interface)."
- f"'reference_dem': {reference_dem}, 'dem_to_be_aligned': {dem_to_be_aligned}"
- )
-
- # If both DEMs are Rasters, validate that 'dem_to_be_aligned' is in the right grid. Then extract its data.
- if isinstance(dem_to_be_aligned, gu.Raster) and isinstance(reference_dem, gu.Raster):
- dem_to_be_aligned = dem_to_be_aligned.reproject(reference_dem, silent=True)
-
- # If any input is a Raster, use its transform if 'transform is None'.
- # If 'transform' was given and any input is a Raster, trigger a warning.
- # Finally, extract only the data of the raster.
- new_transform = None
- new_crs = None
- for name, dem in [("reference_dem", reference_dem), ("dem_to_be_aligned", dem_to_be_aligned)]:
- if isinstance(dem, gu.Raster):
- # If a raster was passed, override the transform, reference raster has priority to set new_transform.
- if transform is None:
- new_transform = dem.transform
- elif transform is not None and new_transform is None:
- new_transform = dem.transform
- warnings.warn(f"'{name}' of type {type(dem)} overrides the given 'transform'")
- # Same for crs
- if crs is None:
- new_crs = dem.crs
- elif crs is not None and new_crs is None:
- new_crs = dem.crs
- warnings.warn(f"'{name}' of type {type(dem)} overrides the given 'crs'")
- # Override transform and CRS
- if new_transform is not None:
- transform = new_transform
- if new_crs is not None:
- crs = new_crs
-
- if transform is None:
- raise ValueError("'transform' must be given if both DEMs are array-like.")
-
- if crs is None:
- raise ValueError("'crs' must be given if both DEMs are array-like.")
-
- # Get a NaN array covering nodatas from the raster, masked array or integer-type array
- with warnings.catch_warnings():
- warnings.filterwarnings(action="ignore", category=UserWarning)
- ref_dem, ref_mask = get_array_and_mask(reference_dem, copy=True)
- tba_dem, tba_mask = get_array_and_mask(dem_to_be_aligned, copy=True)
-
- # Make sure that the mask has an expected format.
- if inlier_mask is not None:
- if isinstance(inlier_mask, Mask):
- inlier_mask = inlier_mask.data.filled(False).squeeze()
- else:
- inlier_mask = np.asarray(inlier_mask).squeeze()
- assert inlier_mask.dtype == bool, f"Invalid mask dtype: '{inlier_mask.dtype}'. Expected 'bool'"
-
- if np.all(~inlier_mask):
- raise ValueError("'inlier_mask' had no inliers.")
- else:
- inlier_mask = np.ones(np.shape(ref_dem), dtype=bool)
-
- if np.all(ref_mask):
- raise ValueError("'reference_dem' had only NaNs")
- if np.all(tba_mask):
- raise ValueError("'dem_to_be_aligned' had only NaNs")
-
- # Isolate all invalid values
- invalid_mask = np.logical_or.reduce((~inlier_mask, ref_mask, tba_mask))
-
- if np.all(invalid_mask):
- raise ValueError("All values of the inlier mask are NaNs in either 'reference_dem' or 'dem_to_be_aligned'.")
-
- return ref_dem, tba_dem, inlier_mask, transform, crs
-
-
-# TODO: Re-structure AffineCoreg apply function and move there?
-
-
-def deramping(
- ddem: NDArrayf | MArrayf,
- x_coords: NDArrayf,
- y_coords: NDArrayf,
- degree: int,
- subsample: float | int = 1.0,
- verbose: bool = False,
-) -> tuple[Callable[[NDArrayf, NDArrayf], NDArrayf], tuple[NDArrayf, int]]:
- """
- Calculate a deramping function to remove spatially correlated elevation differences that can be explained by \
- a polynomial of degree `degree`.
-
- :param ddem: The elevation difference array to analyse.
- :param x_coords: x-coordinates of the above array (must have the same shape as ddem)
- :param y_coords: y-coordinates of the above array (must have the same shape as ddem)
- :param degree: The polynomial degree to estimate the ramp.
- :param subsample: Subsample the input to increase performance. <1 is parsed as a fraction. >1 is a pixel count.
- :param verbose: Print the least squares optimization progress.
-
- :returns: A callable function to estimate the ramp and the output of scipy.optimize.leastsq
- """
- # Extract only valid pixels
- valid_mask = np.isfinite(ddem)
- ddem = ddem[valid_mask]
- x_coords = x_coords[valid_mask]
- y_coords = y_coords[valid_mask]
-
- # Formulate the 2D polynomial whose coefficients will be solved for.
- def poly2d(x_coords: NDArrayf, y_coords: NDArrayf, coefficients: NDArrayf) -> NDArrayf:
- """
- Estimate values from a 2D-polynomial.
-
- :param x_coords: x-coordinates of the difference array (must have the same shape as
- elevation_difference).
- :param y_coords: y-coordinates of the difference array (must have the same shape as
- elevation_difference).
- :param coefficients: The coefficients (a, b, c, etc.) of the polynomial.
- :param degree: The degree of the polynomial.
-
- :raises ValueError: If the length of the coefficients list is not compatible with the degree.
-
- :returns: The values estimated by the polynomial.
- """
- # Check that the coefficient size is correct.
- coefficient_size = (degree + 1) * (degree + 2) / 2
- if len(coefficients) != coefficient_size:
- raise ValueError(f"Expected {int(coefficient_size)} coefficients for degree {degree}, got {len(coefficients)}.")
-
- # Build the polynomial of degree `degree`
- estimated_values = np.sum(
- [
- coefficients[k * (k + 1) // 2 + j] * x_coords ** (k - j) * y_coords**j
- for k in range(degree + 1)
- for j in range(k + 1)
- ],
- axis=0,
- )
- return estimated_values # type: ignore
-
- def residuals(coefs: NDArrayf, x_coords: NDArrayf, y_coords: NDArrayf, targets: NDArrayf) -> NDArrayf:
- """Return the optimization residuals"""
- res = targets - poly2d(x_coords, y_coords, coefs)
- return res[np.isfinite(res)]
-
- if verbose:
- print("Estimating deramp function...")
-
- # reduce number of elements for speed
- rand_indices = subsample_array(x_coords, subsample=subsample, return_indices=True)
- x_coords = x_coords[rand_indices]
- y_coords = y_coords[rand_indices]
- ddem = ddem[rand_indices]
-
- # Optimize polynomial parameters
- coefs = scipy.optimize.leastsq(
- func=residuals,
- x0=np.zeros(shape=((degree + 1) * (degree + 2) // 2)),
- args=(x_coords, y_coords, ddem),
- )
-
- def fit_ramp(x: NDArrayf, y: NDArrayf) -> NDArrayf:
- """
- Get the elevation difference biases (ramp) at the given coordinates.
-
- :param x_coordinates: x-coordinates of interest.
- :param y_coordinates: y-coordinates of interest.
-
- :returns: The estimated elevation difference bias.
- """
- return poly2d(x, y, coefs[0])
-
- return fit_ramp, coefs
-
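-
-# A rough usage sketch of deramping() (illustrative only): recover a synthetic degree-1 ramp.
-def _example_deramping() -> NDArrayf:
-    x, y = np.meshgrid(np.arange(100, dtype=float), np.arange(50, dtype=float))
-    ddem = 0.1 * x - 0.05 * y + 2.0  # A linear ramp: dh = 0.1 x - 0.05 y + 2.
-    fit_ramp, _ = deramping(ddem, x, y, degree=1)
-    # fit_ramp(x, y) should reproduce the input ramp almost exactly.
-    return fit_ramp(x, y)
-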
-
-def invert_matrix(matrix: NDArrayf) -> NDArrayf:
- """Invert a transformation matrix."""
- with warnings.catch_warnings():
- # Deprecation warning from pytransform3d. Let's hope that is fixed in the near future.
- warnings.filterwarnings("ignore", message="`np.float` is a deprecated alias for the builtin `float`")
-
- checked_matrix = pytransform3d.transformations.check_matrix(matrix)
- # Invert the transform.
- return pytransform3d.transformations.invert_transform(checked_matrix)
-
-
-def apply_matrix(
- dem: NDArrayf,
- transform: rio.transform.Affine,
- matrix: NDArrayf,
- invert: bool = False,
- centroid: tuple[float, float, float] | None = None,
- resampling: int | str = "bilinear",
- fill_max_search: int = 0,
-) -> NDArrayf:
- """
- Apply a 3D transformation matrix to a 2.5D DEM.
-
- The transformation is applied as a value correction using linear deramping, and 2D image warping.
-
- 1. Convert the DEM into a point cloud (not for gridding; for estimating the DEM shifts).
- 2. Transform the point cloud in 3D using the 4x4 matrix.
- 3. Measure the difference in elevation between the original and transformed points.
- 4. Estimate a linear deramp from the elevation difference, and apply the correction to the DEM values.
- 5. Convert the horizontal coordinates of the transformed points to pixel index coordinates.
- 6. Apply the pixel-wise displacement in 2D using the new pixel coordinates.
- 7. Apply the same displacement to a nodata-mask to exclude previous and/or new nans.
-
- :param dem: The DEM to transform.
- :param transform: The Affine transform object (georeferencing) of the DEM.
- :param matrix: A 4x4 transformation matrix to apply to the DEM.
- :param invert: Invert the transformation matrix.
- :param centroid: The X/Y/Z transformation centroid. Irrelevant for pure translations. Defaults to the midpoint (Z=0)
- :param resampling: The resampling method to use. Can be `nearest`, `bilinear`, `cubic` or an integer from 0-5.
- :param fill_max_search: Set to > 0 value to fill the DEM before applying the transformation, to avoid spreading\
- gaps. The DEM will be filled with rasterio.fill.fillnodata with max_search_distance set to fill_max_search.\
- This is experimental, use at your own risk !
-
- :returns: The transformed DEM with NaNs as nodata values (replaces a potential mask of the input `dem`).
- """
- # Parse the resampling argument given.
- if isinstance(resampling, (int, np.integer)):
- resampling_order = resampling
- elif resampling == "cubic":
- resampling_order = 3
- elif resampling == "bilinear":
- resampling_order = 1
- elif resampling == "nearest":
- resampling_order = 0
- else:
- raise ValueError(
- f"`{resampling}` is not a valid resampling mode."
- " Choices: [`nearest`, `bilinear`, `cubic`] or an integer."
- )
- # Copy the DEM to make sure the original is not modified, and convert it into an ndarray
- demc = np.array(dem)
-
- # Check if the matrix only contains a Z correction. In that case, only shift the DEM values by the vertical shift.
- empty_matrix = np.diag(np.ones(4, float))
- empty_matrix[2, 3] = matrix[2, 3]
- if np.mean(np.abs(empty_matrix - matrix)) == 0.0:
- return demc + matrix[2, 3]
-
- # Opencv is required down from here
- if not _has_cv2:
- raise ValueError("Optional dependency needed. Install 'opencv'")
-
- nan_mask = ~np.isfinite(dem)
- assert np.count_nonzero(~nan_mask) > 0, "Given DEM had all nans."
- # Optionally, fill DEM around gaps to reduce spread of gaps
- if fill_max_search > 0:
- filled_dem = rio.fill.fillnodata(demc, mask=(~nan_mask).astype("uint8"), max_search_distance=fill_max_search)
- else:
- filled_dem = demc
-
- # Get the centre coordinates of the DEM pixels.
- x_coords, y_coords = _get_x_and_y_coords(demc.shape, transform)
-
- bounds, resolution = _transform_to_bounds_and_res(dem.shape, transform)
-
- # If a centroid was not given, default to the center of the DEM (at Z=0).
- if centroid is None:
- centroid = (np.mean([bounds.left, bounds.right]), np.mean([bounds.bottom, bounds.top]), 0.0)
- else:
- assert len(centroid) == 3, f"Expected centroid to be 3D X/Y/Z coordinate. Got shape of {len(centroid)}"
-
- # Shift the coordinates to centre around the centroid.
- x_coords -= centroid[0]
- y_coords -= centroid[1]
-
- # Create a point cloud of X/Y/Z coordinates
- point_cloud = np.dstack((x_coords, y_coords, filled_dem))
-
- # Shift the Z components by the centroid.
- point_cloud[:, :, 2] -= centroid[2]
-
- if invert:
- matrix = invert_matrix(matrix)
-
- # Transform the point cloud using the matrix.
- transformed_points = cv2.perspectiveTransform(
- point_cloud.reshape((1, -1, 3)),
- matrix,
- ).reshape(point_cloud.shape)
-
- # Estimate the vertical difference of old and new point cloud elevations.
- deramp, coeffs = deramping(
- (point_cloud[:, :, 2] - transformed_points[:, :, 2])[~nan_mask].flatten(),
- point_cloud[:, :, 0][~nan_mask].flatten(),
- point_cloud[:, :, 1][~nan_mask].flatten(),
- degree=1,
- )
- # Shift the elevation values of the soon-to-be-warped DEM.
- filled_dem -= deramp(x_coords, y_coords)
-
- # Create arrays of x and y coordinates to be converted into index coordinates.
- x_inds = transformed_points[:, :, 0].copy()
- x_inds[x_inds == 0] = np.nan
- y_inds = transformed_points[:, :, 1].copy()
- y_inds[y_inds == 0] = np.nan
-
- # Divide the coordinates by the resolution to create index coordinates.
- x_inds /= resolution
- y_inds /= resolution
- # Shift the x coords so that bounds.left is equivalent to xindex -0.5
- x_inds -= x_coords.min() / resolution
- # Shift the y coords so that bounds.top is equivalent to yindex -0.5
- y_inds = (y_coords.max() / resolution) - y_inds
-
- # Create a skimage-compatible array of the new index coordinates that the pixels shall have after warping.
- inds = np.vstack((y_inds.reshape((1,) + y_inds.shape), x_inds.reshape((1,) + x_inds.shape)))
-
- with warnings.catch_warnings():
- # A skimage warning that will hopefully be fixed soon. (2021-07-30)
- warnings.filterwarnings("ignore", message="Passing `np.nan` to mean no clipping in np.clip")
- # Warp the DEM
- transformed_dem = skimage.transform.warp(
- filled_dem, inds, order=resampling_order, mode="constant", cval=np.nan, preserve_range=True
- )
-
- assert np.count_nonzero(~np.isnan(transformed_dem)) > 0, "Transformed DEM has all nans."
-
- return transformed_dem
-
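-
-# A rough usage sketch of apply_matrix() (illustrative only): a matrix containing only a vertical
-# shift takes the early-return path above and does not require the optional 'opencv' dependency.
-def _example_apply_matrix(dem: NDArrayf, transform: rio.transform.Affine) -> NDArrayf:
-    matrix = np.diag(np.ones(4, dtype=float))
-    matrix[2, 3] = 3.0  # Shift the DEM up by 3 m.
-    return apply_matrix(dem, transform=transform, matrix=matrix)
-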
-
-###########################################
-# Generic coregistration processing classes
-###########################################
-
-
-class CoregDict(TypedDict, total=False):
- """
- Defining the type of each possible key in the metadata dictionary of Process classes.
- The parameter total=False means that the keys are not required. The recent PEP 655
- (https://peps.python.org/pep-0655/) provides an easy way to specify Required or NotRequired for each key, if we
- want to change this in the future.
- """
-
- # TODO: homogenize the naming mess!
- vshift_func: Callable[[NDArrayf], np.floating[Any]]
- func: Callable[[NDArrayf, NDArrayf], NDArrayf]
- vshift: np.floating[Any] | float | np.integer[Any] | int
- matrix: NDArrayf
- centroid: tuple[float, float, float]
- offset_east_px: float
- offset_north_px: float
- coefficients: NDArrayf
- step_meta: list[Any]
- resolution: float
- nmad: np.floating[Any]
-
- # The pipeline metadata can have any value of the above
- pipeline: list[Any]
-
- # Affine + BiasCorr classes
- subsample: int | float
- random_state: np.random.RandomState | np.random.Generator | int | None
-
- # BiasCorr classes generic metadata
-
- # 1/ Inputs
- fit_or_bin: Literal["fit"] | Literal["bin"]
- fit_func: Callable[..., NDArrayf]
- fit_optimizer: Callable[..., tuple[NDArrayf, Any]]
- bin_sizes: int | dict[str, int | Iterable[float]]
- bin_statistic: Callable[[NDArrayf], np.floating[Any]]
- bin_apply_method: Literal["linear"] | Literal["per_bin"]
- bias_var_names: list[str]
-
- # 2/ Outputs
- fit_params: NDArrayf
- fit_perr: NDArrayf
- bin_dataframe: pd.DataFrame
-
- # 3/ Specific inputs or outputs
- terrain_attribute: str
- angle: float
- poly_order: int
- nb_sin_freq: int
-
-
-CoregType = TypeVar("CoregType", bound="Coreg")
-
-
-class Coreg:
- """
- Generic co-registration processing class.
-
- Used to implement methods common to all processing steps (rigid alignment, bias corrections, filtering).
- Those are: instantiation, copying and addition (which casts to a Pipeline object).
-
- Made to be subclassed.
- """
-
- _fit_called: bool = False # Flag to check if the .fit() method has been called.
- _is_affine: bool | None = None
- _needs_vars: bool = False
-
- def __init__(self, meta: CoregDict | None = None) -> None:
- """Instantiate a generic processing step method."""
- self._meta: CoregDict = meta or {} # All __init__ functions should instantiate an empty dict.
-
- def copy(self: CoregType) -> CoregType:
- """Return an identical copy of the class."""
- new_coreg = self.__new__(type(self))
-
- new_coreg.__dict__ = {key: copy.copy(value) for key, value in self.__dict__.items()}
-
- return new_coreg
-
- def __add__(self, other: CoregType) -> CoregPipeline:
- """Return a pipeline consisting of self and the other processing function."""
- if not isinstance(other, Coreg):
- raise ValueError(f"Incompatible add type: {type(other)}. Expected 'Coreg' subclass")
- return CoregPipeline([self, other])
-
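- # A rough composition sketch (illustrative only):
- #     pipeline = VerticalShift() + NuthKaab()  # -> CoregPipeline([VerticalShift(), NuthKaab()])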
- @property
- def is_affine(self) -> bool:
- """Check if the transform be explained by a 3D affine transform."""
- # _is_affine is found by seeing if to_matrix() raises an error.
- # If this hasn't been done yet, it will be None
- if self._is_affine is None:
- try: # See if to_matrix() raises an error.
- self.to_matrix()
- self._is_affine = True
- except (ValueError, NotImplementedError):
- self._is_affine = False
-
- return self._is_affine
-
- def _get_subsample_on_valid_mask(self, valid_mask: NDArrayb, verbose: bool = False) -> NDArrayb:
- """
- Get a boolean mask of the values to subsample, taken within the valid mask.
-
- :param valid_mask: Mask of valid values (inlier and not nodata).
- """
-
- # This should never happen
- if self._meta["subsample"] is None:
- raise ValueError("Subsample should have been defined in metadata before reaching this class method.")
-
- # If subsample is not equal to one, subsampling should be performed.
- elif self._meta["subsample"] != 1.0:
-
- # Build a low memory masked array with invalid values masked to pass to subsampling
- ma_valid = np.ma.masked_array(data=np.ones(np.shape(valid_mask), dtype=bool), mask=~valid_mask)
- # Take a subsample within the valid values
- indices = gu.raster.subsample_array(
- ma_valid,
- subsample=self._meta["subsample"],
- return_indices=True,
- random_state=self._meta["random_state"],
- )
-
- # We return a boolean mask of the subsample within valid values
- subsample_mask = np.zeros(np.shape(valid_mask), dtype=bool)
- subsample_mask[indices[0], indices[1]] = True
- else:
- # If no subsample is taken, use all valid values
- subsample_mask = valid_mask
-
- if verbose:
- print(
- "Using a subsample of {} among {} valid values.".format(
- np.count_nonzero(subsample_mask), np.count_nonzero(valid_mask)
- )
- )
-
- return subsample_mask
-
- def fit(
- self: CoregType,
- reference_dem: NDArrayf | MArrayf | RasterType,
- dem_to_be_aligned: NDArrayf | MArrayf | RasterType,
- inlier_mask: NDArrayb | Mask | None = None,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- bias_vars: dict[str, NDArrayf | MArrayf | RasterType] | None = None,
- weights: NDArrayf | None = None,
- subsample: float | int | None = None,
- verbose: bool = False,
- random_state: None | np.random.RandomState | np.random.Generator | int = None,
- **kwargs: Any,
- ) -> CoregType:
- """
- Estimate the coregistration transform on the given DEMs.
-
- :param reference_dem: 2D array of elevation values acting as reference.
- :param dem_to_be_aligned: 2D array of elevation values to be aligned.
- :param inlier_mask: Optional. 2D boolean array of areas to include in the analysis (inliers=True).
- :param transform: Optional. Transform of the reference_dem. Mandatory if DEM provided as array.
- :param crs: Optional. CRS of the reference_dem. Mandatory if DEM provided as array.
- :param bias_vars: Optional, only for some bias correction classes. 2D array of bias variables used.
- :param weights: Optional. Per-pixel weights for the coregistration.
- :param subsample: Subsample the input to increase performance. <1 is parsed as a fraction. >1 is a pixel count.
- :param verbose: Print progress messages to stdout.
- :param random_state: Random state or seed number to use for calculations (to fix random sampling during testing)
- """
-
- if weights is not None:
- raise NotImplementedError("Weights have not yet been implemented")
-
- # Override subsample argument of instantiation if passed to fit
- if subsample is not None:
-
- # Check if subsample argument was also defined at instantiation (not default value), and raise warning
- argspec = inspect.getfullargspec(self.__class__)
- sub_meta = self._meta["subsample"]
- if argspec.defaults is None or "subsample" not in argspec.args:
- raise ValueError("The subsample argument and default need to be defined in this Coreg class.")
- sub_is_default = argspec.defaults[argspec.args.index("subsample") - 1] == sub_meta # type: ignore
- if not sub_is_default:
- warnings.warn(
- "Subsample argument passed to fit() will override non-default subsample value defined at "
- "instantiation. To silence this warning: only define 'subsample' in either fit(subsample=...) or "
- "instantiation e.g. VerticalShift(subsample=...)."
- )
-
- # In any case, override!
- self._meta["subsample"] = subsample
-
- # Save random_state if a subsample is used
- if self._meta["subsample"] != 1:
- self._meta["random_state"] = random_state
-
- # Pre-process the inputs, by reprojecting and subsampling
- ref_dem, tba_dem, inlier_mask, transform, crs = _preprocess_coreg_raster_input(
- reference_dem=reference_dem,
- dem_to_be_aligned=dem_to_be_aligned,
- inlier_mask=inlier_mask,
- transform=transform,
- crs=crs,
- )
-
- main_args = {
- "ref_dem": ref_dem,
- "tba_dem": tba_dem,
- "inlier_mask": inlier_mask,
- "transform": transform,
- "crs": crs,
- "weights": weights,
- "verbose": verbose,
- }
-
- # If bias_vars are defined, update dictionary content to array
- if bias_vars is not None:
- # Check if the current class actually requires bias_vars
- if self._is_affine:
- warnings.warn("This coregistration method is affine, ignoring `bias_vars` passed to fit().")
-
- for var in bias_vars.keys():
- bias_vars[var] = gu.raster.get_array_and_mask(bias_vars[var])[0]
-
- main_args.update({"bias_vars": bias_vars})
-
- # Run the associated fitting function
- self._fit_func(
- **main_args,
- **kwargs,
- )
-
- # Flag that the fitting function has been called.
- self._fit_called = True
-
- return self
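-
- # Usage sketch (illustrative only): 'ref' and 'tba' stand for two overlapping elevation Rasters,
- # 'stable_mask' for a boolean mask of stable terrain (inliers=True), and SomeCoregStep for any
- # instantiated Coreg subclass.
- #
- #   step = SomeCoregStep()
- #   step.fit(reference_dem=ref, dem_to_be_aligned=tba, inlier_mask=stable_mask,
- #            subsample=10000, random_state=42)  # subsample <1: fraction, >1: pixel count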
-
- def residuals(
- self,
- reference_dem: NDArrayf,
- dem_to_be_aligned: NDArrayf,
- inlier_mask: NDArrayb | None = None,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- subsample: float | int = 1.0,
- random_state: None | np.random.RandomState | np.random.Generator | int = None,
- ) -> NDArrayf:
- """
- Calculate the residual offsets (the difference) between two DEMs after applying the transformation.
-
- :param reference_dem: 2D array of elevation values acting as reference.
- :param dem_to_be_aligned: 2D array of elevation values to be aligned.
- :param inlier_mask: Optional. 2D boolean array of areas to include in the analysis (inliers=True).
- :param transform: Optional. Transform of the reference_dem. Mandatory in some cases.
- :param crs: Optional. CRS of the reference_dem. Mandatory in some cases.
- :param subsample: Subsample the input to increase performance. <1 is parsed as a fraction. >1 is a pixel count.
- :param random_state: Random state or seed number to use for calculations (to fix random sampling during testing)
-
- :returns: A 1D array of finite residuals.
- """
-
- # Apply the transformation to the dem to be aligned
- aligned_dem = self.apply(dem_to_be_aligned, transform=transform, crs=crs)[0]
-
- # Pre-process the inputs, by reprojecting and subsampling
- ref_dem, align_dem, inlier_mask, transform, crs = _preprocess_coreg_raster_input(
- reference_dem=reference_dem,
- dem_to_be_aligned=aligned_dem,
- inlier_mask=inlier_mask,
- transform=transform,
- crs=crs,
- )
-
- # Calculate the DEM difference
- diff = ref_dem - align_dem
-
- # Sometimes, the float minimum (for float32 = -3.4028235e+38) is returned. This and inf should be excluded.
- full_mask = np.isfinite(diff)
- if "float" in str(diff.dtype):
- full_mask[(diff == np.finfo(diff.dtype).min) | np.isinf(diff)] = False
-
- # Return the difference values within the full inlier mask
- return diff[full_mask]
-
- def fit_pts(
- self: CoregType,
- reference_dem: NDArrayf | MArrayf | RasterType | pd.DataFrame,
- dem_to_be_aligned: RasterType,
- inlier_mask: NDArrayb | Mask | None = None,
- transform: rio.transform.Affine | None = None,
- subsample: float | int = 1.0,
- verbose: bool = False,
- mask_high_curv: bool = False,
- order: int = 1,
- z_name: str = "z",
- weights: str | None = None,
- random_state: None | np.random.RandomState | np.random.Generator | int = None,
- ) -> CoregType:
- """
- Estimate the coregistration transform between a DEM and a reference point elevation data.
-
- :param reference_dem: Point elevation data acting as reference.
- :param dem_to_be_aligned: 2D array of elevation values to be aligned.
- :param inlier_mask: Optional. 2D boolean array of areas to include in the analysis (inliers=True).
- :param transform: Optional. Transform of the reference_dem. Mandatory in some cases.
- :param subsample: Subsample the input to increase performance. <1 is parsed as a fraction. >1 is a pixel count.
- :param verbose: Print progress messages to stdout.
- :param order: Interpolation order: 0=nearest, 1=linear, 2=cubic.
- :param z_name: The column name of the dataframe used for elevation differencing.
- :param mask_high_curv: Mask out high-curvature points (>5 maxc) to increase the robustness.
- :param weights: The column name of the dataframe used for weights; should have the same length as the z_name column.
- :param random_state: Random state or seed number to use for calculations (to fix random sampling during testing)
- """
-
- # Validate that the DEM to be aligned is a valid array-like (or Raster) type.
- if not isinstance(dem_to_be_aligned, (np.ndarray, gu.Raster)):
- raise ValueError(
- "The dem_to_be_aligned needs to be array-like (implement a numpy array interface)."
- f"'dem_to_be_aligned': {dem_to_be_aligned}"
- )
-
- # DEM to dataframe if ref_dem is raster
- # How to make sure sample points are located on stable terrain?
- if isinstance(reference_dem, (np.ndarray, gu.Raster)):
- reference_dem = _df_sampling_from_dem(
- reference_dem, dem_to_be_aligned, subsample=subsample, order=1, offset=None
- )
-
- # Validate that the reference elevation data is a valid point data type.
- if not isinstance(reference_dem, pd.DataFrame):
- raise ValueError(
- "The reference_dem needs to be point data format (pd.Dataframe)." f"'reference_dem': {reference_dem}"
- )
-
- # If the input is a Raster, use its transform if 'transform' is None.
- # If 'transform' was given and the input is a Raster, trigger a warning.
- # Finally, extract only the data of the raster.
- for name, dem in [("dem_to_be_aligned", dem_to_be_aligned)]:
- if hasattr(dem, "transform"):
- if transform is None:
- transform = dem.transform
- elif transform is not None:
- warnings.warn(f"'{name}' of type {type(dem)} overrides the given 'transform'")
-
- if transform is None:
- raise ValueError("'transform' must be given if the dem_to_be_align DEM is array-like.")
-
- _, tba_mask = get_array_and_mask(dem_to_be_aligned)
-
- if np.all(tba_mask):
- raise ValueError("'dem_to_be_aligned' had only NaNs")
-
- tba_dem = dem_to_be_aligned.copy()
- ref_valid = np.isfinite(reference_dem[z_name].values)
-
- if np.all(~ref_valid):
- raise ValueError("'reference_dem' point data only contains NaNs")
-
- ref_dem = reference_dem[ref_valid]
-
- if mask_high_curv:
- planc, profc = get_terrain_attribute(tba_dem, attribute=["planform_curvature", "profile_curvature"])
- maxc = np.maximum(np.abs(planc), np.abs(profc))
- # Mask very high curvatures to avoid resolution biases
- mask_hc = maxc.data > 5.0
- else:
- mask_hc = np.zeros(tba_dem.data.mask.shape, dtype=bool)
- if "planc" in ref_dem.columns and "profc" in ref_dem.columns:
- ref_dem = ref_dem.query("planc < 5 and profc < 5")
- else:
- print("Warning: There is no curvature in dataframe. Set mask_high_curv=True for more robust results")
-
- if any(col not in ref_dem for col in ["E", "N"]):
- if "geometry" in ref_dem:
- ref_dem["E"] = ref_dem.geometry.x
- ref_dem["N"] = ref_dem.geometry.y
- else:
- raise ValueError("Reference points need E/N columns or point geometries")
-
- points = np.array((ref_dem["E"].values, ref_dem["N"].values)).T
-
- # Make sure that the mask has an expected format.
- if inlier_mask is not None:
- if isinstance(inlier_mask, Mask):
- inlier_mask = inlier_mask.data.filled(False).squeeze()
- else:
- inlier_mask = np.asarray(inlier_mask).squeeze()
- assert inlier_mask.dtype == bool, f"Invalid mask dtype: '{inlier_mask.dtype}'. Expected 'bool'"
-
- if np.all(~inlier_mask):
- raise ValueError("'inlier_mask' had no inliers.")
-
- final_mask = np.logical_and.reduce((~tba_dem.data.mask, inlier_mask, ~mask_hc))
- else:
- final_mask = np.logical_and(~tba_dem.data.mask, ~mask_hc)
-
- mask_raster = tba_dem.copy(new_array=final_mask.astype(np.float32))
-
- ref_inlier = mask_raster.interp_points(points, order=0)
- ref_inlier = ref_inlier.astype(bool)
-
- if np.all(~ref_inlier):
- raise ValueError("Intersection of 'reference_dem' and 'dem_to_be_aligned' had only NaNs")
-
- ref_dem = ref_dem[ref_inlier]
-
- # If subsample is not equal to one, subsampling should be performed.
- if subsample != 1.0:
-
- # Randomly pick N inliers in the full_mask where N=subsample
- random_valids = subsample_array(
- ref_dem[z_name].values, subsample=subsample, return_indices=True, random_state=random_state
- )
-
- # Subset to the N random inliers
- ref_dem = ref_dem.iloc[random_valids]
-
- # Run the associated fitting function
- self._fit_pts_func(
- ref_dem=ref_dem,
- tba_dem=tba_dem,
- transform=transform,
- weights=weights,
- verbose=verbose,
- order=order,
- z_name=z_name,
- )
-
- # Flag that the fitting function has been called.
- self._fit_called = True
-
- return self
-
- @overload
- def apply(
- self,
- dem: MArrayf,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- bias_vars: dict[str, NDArrayf | MArrayf | RasterType] | None = None,
- resample: bool = True,
- **kwargs: Any,
- ) -> tuple[MArrayf, rio.transform.Affine]:
- ...
-
- @overload
- def apply(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- bias_vars: dict[str, NDArrayf | MArrayf | RasterType] | None = None,
- resample: bool = True,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
- ...
-
- @overload
- def apply(
- self,
- dem: RasterType,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- bias_vars: dict[str, NDArrayf | MArrayf | RasterType] | None = None,
- resample: bool = True,
- **kwargs: Any,
- ) -> RasterType:
- ...
-
- def apply(
- self,
- dem: RasterType | NDArrayf | MArrayf,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- bias_vars: dict[str, NDArrayf | MArrayf | RasterType] | None = None,
- resample: bool = True,
- **kwargs: Any,
- ) -> RasterType | tuple[NDArrayf, rio.transform.Affine] | tuple[MArrayf, rio.transform.Affine]:
- """
- Apply the estimated transform to a DEM.
-
- :param dem: A DEM array or Raster to apply the transform on.
- :param transform: Optional. The transform object of the DEM. Mandatory if 'dem' provided as array.
- :param crs: Optional. CRS of the DEM. Mandatory if 'dem' provided as array.
- :param bias_vars: Optional, only for some bias correction classes. 2D array of bias variables used.
- :param resample: If set to True, will reproject output Raster on the same grid as input. Otherwise, \
- only the transform might be updated and no resampling is done.
- :param kwargs: Any optional arguments to be passed to either self._apply_func or apply_matrix.
- Kwarg `resampling` can be set to any rio.warp.Resampling to use a different resampling in case \
- `resample` is True, default is bilinear.
-
- :returns: The transformed DEM.
- """
- if not self._fit_called and self._meta.get("matrix") is None:
- raise AssertionError(".fit() does not seem to have been called yet")
-
- if isinstance(dem, gu.Raster):
- if transform is None:
- transform = dem.transform
- else:
- warnings.warn(f"DEM of type {type(dem)} overrides the given 'transform'")
- if crs is None:
- crs = dem.crs
- else:
- warnings.warn(f"DEM of type {type(dem)} overrides the given 'crs'")
-
- else:
- if transform is None:
- raise ValueError("'transform' must be given if DEM is array-like.")
- if crs is None:
- raise ValueError("'crs' must be given if DEM is array-like.")
-
- # The array to provide the functions will be an ndarray with NaNs for masked out areas.
- dem_array, dem_mask = get_array_and_mask(dem)
-
- if np.all(dem_mask):
- raise ValueError("'dem' had only NaNs")
-
- main_args = {"dem": dem_array, "transform": transform, "crs": crs}
-
- # If bias_vars are defined, update dictionary content to array
- if bias_vars is not None:
- # Check if the current class actually requires bias_vars
- if self._is_affine:
- warnings.warn("This coregistration method is affine, ignoring `bias_vars` passed to apply().")
-
- for var in bias_vars.keys():
- bias_vars[var] = gu.raster.get_array_and_mask(bias_vars[var])[0]
-
- main_args.update({"bias_vars": bias_vars})
-
- # See if an _apply_func exists
- try:
- # arg `resample` must be passed to _apply_func, otherwise will be overwritten in CoregPipeline
- kwargs["resample"] = resample
-
- # Run the associated apply function
- applied_dem, out_transform = self._apply_func(
- **main_args, **kwargs
- ) # pylint: disable=assignment-from-no-return
-
- # If it doesn't exist, use apply_matrix()
- except NotImplementedError:
-
- # In this case, resampling is necessary
- if not resample:
- raise NotImplementedError(f"Option `resample=False` not implemented for coreg method {self.__class__}")
- kwargs.pop("resample") # Need to removed before passing to apply_matrix
-
- if self.is_affine: # This only works if it's affine, however.
-
- # Apply the matrix around the centroid (if defined, otherwise just from the center).
- applied_dem = apply_matrix(
- dem_array,
- transform=transform,
- matrix=self.to_matrix(),
- centroid=self._meta.get("centroid"),
- **kwargs,
- )
- out_transform = transform
- else:
- raise ValueError("Coreg method is non-rigid but has no implemented _apply_func")
-
- # Ensure the dtype is OK
- applied_dem = applied_dem.astype("float32")
-
- # Set default dst_nodata
- if isinstance(dem, gu.Raster):
- dst_nodata = dem.nodata
- else:
- dst_nodata = raster._default_nodata(applied_dem.dtype)
-
- # Resample the array on the original grid
- if resample:
- # Set default resampling method if not specified in kwargs
- resampling = kwargs.get("resampling", rio.warp.Resampling.bilinear)
- if not isinstance(resampling, rio.warp.Resampling):
- raise ValueError("`resampling` must be a rio.warp.Resampling algorithm")
-
- applied_dem, out_transform = rio.warp.reproject(
- applied_dem,
- destination=applied_dem,
- src_transform=out_transform,
- dst_transform=transform,
- src_crs=crs,
- dst_crs=crs,
- resampling=resampling,
- dst_nodata=dst_nodata,
- )
-
- # Calculate final mask
- final_mask = np.logical_or(~np.isfinite(applied_dem), applied_dem == dst_nodata)
-
- # If the DEM was a masked_array, copy the mask to the new DEM
- if isinstance(dem, (np.ma.masked_array, gu.Raster)):
- applied_dem = np.ma.masked_array(applied_dem, mask=final_mask) # type: ignore
- else:
- applied_dem[final_mask] = np.nan
-
- # If the input was a Raster, returns a Raster, else returns array and transform
- if isinstance(dem, gu.Raster):
- out_dem = dem.from_array(applied_dem, out_transform, crs, nodata=dem.nodata)
- return out_dem
- else:
- return applied_dem, out_transform
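-
- # Example sketch (illustrative only): once fitted, the same transform can be applied to a Raster
- # (its transform/crs are read automatically) or to a plain array (transform/crs must be given);
- # 'step', 'tba', 'tba_arr', 'tba_transform' and 'tba_crs' are placeholder names.
- #
- #   aligned = step.apply(tba)
- #   aligned_arr, out_transform = step.apply(tba_arr, transform=tba_transform, crs=tba_crs)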
-
- def apply_pts(self, coords: NDArrayf) -> NDArrayf:
- """
- Apply the estimated transform to a set of 3D points.
-
- :param coords: A (N, 3) array of X/Y/Z coordinates or one coordinate of shape (3,).
-
- :returns: The transformed coordinates.
- """
- if not self._fit_called and self._meta.get("matrix") is None:
- raise AssertionError(".fit() does not seem to have been called yet")
- # If the coordinates represent just one coordinate
- if np.shape(coords) == (3,):
- coords = np.reshape(coords, (1, 3))
-
- assert (
- len(np.shape(coords)) == 2 and np.shape(coords)[1] == 3
- ), f"'coords' shape must be (N, 3). Given shape: {np.shape(coords)}"
-
- coords_c = coords.copy()
-
- # See if an _apply_pts_func exists
- try:
- transformed_points = self._apply_pts_func(coords)
- # If it doesn't exist, use opencv's perspectiveTransform
- except NotImplementedError:
- if self.is_affine: # This only works if it's rigid, however.
- # Transform the points (around the centroid if it exists).
- if self._meta.get("centroid") is not None:
- coords_c -= self._meta["centroid"]
- transformed_points = cv2.perspectiveTransform(coords_c.reshape(1, -1, 3), self.to_matrix()).squeeze()
- if self._meta.get("centroid") is not None:
- transformed_points += self._meta["centroid"]
-
- else:
- raise ValueError("Coreg method is non-rigid but has not implemented _apply_pts_func")
-
- return transformed_points
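-
- # Example sketch (illustrative only): 'step' is an already-fitted Coreg instance and the
- # coordinates below are arbitrary placeholder X/Y/Z values in the DEM's CRS.
- #
- #   pts = np.array([[435000.0, 5210000.0, 1250.0], [436500.0, 5212000.0, 1380.0]])
- #   shifted_pts = step.apply_pts(pts)  # same (N, 3) shape, with the transform applied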
-
- @overload
- def error(
- self,
- reference_dem: NDArrayf,
- dem_to_be_aligned: NDArrayf,
- error_type: list[str],
- inlier_mask: NDArrayb | None = None,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- ) -> list[np.floating[Any] | float | np.integer[Any] | int]:
- ...
-
- @overload
- def error(
- self,
- reference_dem: NDArrayf,
- dem_to_be_aligned: NDArrayf,
- error_type: str = "nmad",
- inlier_mask: NDArrayb | None = None,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- ) -> np.floating[Any] | float | np.integer[Any] | int:
- ...
-
- def error(
- self,
- reference_dem: NDArrayf,
- dem_to_be_aligned: NDArrayf,
- error_type: str | list[str] = "nmad",
- inlier_mask: NDArrayb | None = None,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- ) -> np.floating[Any] | float | np.integer[Any] | int | list[np.floating[Any] | float | np.integer[Any] | int]:
- """
- Calculate the error of a coregistration approach.
-
- Choices:
- - "nmad": Default. The Normalized Median Absolute Deviation of the residuals.
- - "median": The median of the residuals.
- - "mean": The mean/average of the residuals
- - "std": The standard deviation of the residuals.
- - "rms": The root mean square of the residuals.
- - "mae": The mean absolute error of the residuals.
- - "count": The residual count.
-
- :param reference_dem: 2D array of elevation values acting as reference.
- :param dem_to_be_aligned: 2D array of elevation values to be aligned.
- :param error_type: The type of error measure to calculate. May be a list of error types.
- :param inlier_mask: Optional. 2D boolean array of areas to include in the analysis (inliers=True).
- :param transform: Optional. Transform of the reference_dem. Mandatory in some cases.
- :param crs: Optional. CRS of the reference_dem. Mandatory in some cases.
-
- :returns: The error measure of choice for the residuals.
- """
- if isinstance(error_type, str):
- error_type = [error_type]
-
- residuals = self.residuals(
- reference_dem=reference_dem,
- dem_to_be_aligned=dem_to_be_aligned,
- inlier_mask=inlier_mask,
- transform=transform,
- crs=crs,
- )
-
- def rms(res: NDArrayf) -> np.floating[Any]:
- return np.sqrt(np.mean(np.square(res)))
-
- def mae(res: NDArrayf) -> np.floating[Any]:
- return np.mean(np.abs(res))
-
- def count(res: NDArrayf) -> int:
- return res.size
-
- error_functions: dict[str, Callable[[NDArrayf], np.floating[Any] | float | np.integer[Any] | int]] = {
- "nmad": nmad,
- "median": np.median,
- "mean": np.mean,
- "std": np.std,
- "rms": rms,
- "mae": mae,
- "count": count,
- }
-
- try:
- errors = [error_functions[err_type](residuals) for err_type in error_type]
- except KeyError as exception:
- raise ValueError(
- f"Invalid 'error_type'{'s' if len(error_type) > 1 else ''}: "
- f"'{error_type}'. Choices: {list(error_functions.keys())}"
- ) from exception
-
- return errors if len(errors) > 1 else errors[0]
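-
- # A usage sketch (illustrative only): several error measures can be requested at once by passing
- # a list; 'step', 'ref', 'tba' and 'stable_mask' are the same placeholder names as above.
- #
- #   nmad_after = step.error(ref, tba, error_type="nmad", inlier_mask=stable_mask)
- #   nmad_after, median_after = step.error(ref, tba, error_type=["nmad", "median"], inlier_mask=stable_mask)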
-
- def _fit_func(
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- weights: NDArrayf | None,
- bias_vars: dict[str, NDArrayf] | None = None,
- verbose: bool = False,
- **kwargs: Any,
- ) -> None:
- # FOR DEVELOPERS: This function needs to be implemented.
- raise NotImplementedError("This step has to be implemented by subclassing.")
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: dict[str, NDArrayf] | None = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
- # FOR DEVELOPERS: This function is only needed for non-rigid transforms.
- raise NotImplementedError("This should have been implemented by subclassing")
-
- def _apply_pts_func(self, coords: NDArrayf) -> NDArrayf:
- # FOR DEVELOPERS: This function is only needed for non-rigid transforms.
- raise NotImplementedError("This should have been implemented by subclassing")
-
-
-class CoregPipeline(Coreg):
- """
- A sequential set of co-registration processing steps.
- """
-
- def __init__(self, pipeline: list[Coreg]) -> None:
- """
- Instantiate a new processing pipeline.
-
- :param pipeline: Processing steps to run in the sequence they are given.
- """
- self.pipeline = pipeline
-
- super().__init__()
-
- def __repr__(self) -> str:
- return f"Pipeline: {self.pipeline}"
-
- def copy(self: CoregType) -> CoregType:
- """Return an identical copy of the class."""
- new_coreg = self.__new__(type(self))
-
- new_coreg.__dict__ = {key: copy.copy(value) for key, value in self.__dict__.items() if key != "pipeline"}
- new_coreg.pipeline = [step.copy() for step in self.pipeline]
-
- return new_coreg
-
- def _parse_bias_vars(self, step: int, bias_vars: dict[str, NDArrayf] | None) -> dict[str, NDArrayf]:
- """Parse bias variables for a pipeline step requiring them."""
-
- # Get the number of non-affine coregistration steps requiring bias variables to be passed
- nb_needs_vars = sum(c._needs_vars for c in self.pipeline)
-
- # Get step object
- coreg = self.pipeline[step]
-
- # Check that all variable names of this step were passed
- var_names = coreg._meta["bias_var_names"]
-
- # Raise error if bias_vars is None
- if bias_vars is None:
- msg = f"No `bias_vars` passed to .fit() for bias correction step {coreg.__class__} of the pipeline."
- if nb_needs_vars > 1:
- msg += (
- " As you are using several bias correction steps requiring `bias_vars`, don't forget to "
- "explicitly define their `bias_var_names` during "
- "instantiation, e.g. {}(bias_var_names=['slope']).".format(coreg.__class__.__name__)
- )
- raise ValueError(msg)
-
- # Raise error if no variable were explicitly assigned and there is more than 1 step with bias_vars
- if var_names is None and nb_needs_vars > 1:
- raise ValueError(
- "When using several bias correction steps requiring `bias_vars` in a pipeline,"
- "the `bias_var_names` need to be explicitly defined at each step's "
- "instantiation, e.g. {}(bias_var_names=['slope']).".format(coreg.__class__.__name__)
- )
-
- # Raise error if the variables explicitly assigned don't match the ones passed in bias_vars
- if not all(n in bias_vars.keys() for n in var_names):
- raise ValueError(
- "Not all keys of `bias_vars` in .fit() match the `bias_var_names` defined during "
- "instantiation of the bias correction step {}: {}.".format(coreg.__class__, var_names)
- )
-
- # Add subset dict for this pipeline step to args of fit and apply
- return {n: bias_vars[n] for n in var_names}
-
- def fit(
- self: CoregType,
- reference_dem: NDArrayf | MArrayf | RasterType,
- dem_to_be_aligned: NDArrayf | MArrayf | RasterType,
- inlier_mask: NDArrayb | Mask | None = None,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- bias_vars: dict[str, NDArrayf | MArrayf | RasterType] | None = None,
- weights: NDArrayf | None = None,
- subsample: float | int | None = None,
- verbose: bool = False,
- random_state: None | np.random.RandomState | np.random.Generator | int = None,
- **kwargs: Any,
- ) -> CoregType:
-
- # Check if subsample arguments are different from their default value for any of the coreg steps:
- # get the default value in the argument spec and the "subsample" stored in meta, and check that both are consistent
- argspec = [inspect.getfullargspec(c.__class__) for c in self.pipeline]
- sub_meta = [c._meta["subsample"] for c in self.pipeline]
- sub_is_default = [
- argspec[i].defaults[argspec[i].args.index("subsample") - 1] == sub_meta[i] # type: ignore
- for i in range(len(argspec))
- ]
- if subsample is not None and not all(sub_is_default):
- warnings.warn(
- "Subsample argument passed to fit() will override non-default subsample values defined for"
- " individual steps of the pipeline. To silence this warning: only define 'subsample' in "
- "either fit(subsample=...) or instantiation e.g., VerticalShift(subsample=...)."
- )
-
- # Pre-process the inputs by reprojecting, without any subsampling (done in each step)
- ref_dem, tba_dem, inlier_mask, transform, crs = _preprocess_coreg_raster_input(
- reference_dem=reference_dem,
- dem_to_be_aligned=dem_to_be_aligned,
- inlier_mask=inlier_mask,
- transform=transform,
- crs=crs,
- )
-
- tba_dem_mod = tba_dem.copy()
- out_transform = transform
-
- for i, coreg in enumerate(self.pipeline):
- if verbose:
- print(f"Running pipeline step: {i + 1} / {len(self.pipeline)}")
-
- main_args_fit = {
- "reference_dem": ref_dem,
- "dem_to_be_aligned": tba_dem_mod,
- "inlier_mask": inlier_mask,
- "transform": out_transform,
- "crs": crs,
- "weights": weights,
- "verbose": verbose,
- "subsample": subsample,
- "random_state": random_state,
- }
-
- main_args_apply = {"dem": tba_dem_mod, "transform": out_transform, "crs": crs}
-
- # If non-affine method that expects a bias_vars argument
- if coreg._needs_vars:
- step_bias_vars = self._parse_bias_vars(step=i, bias_vars=bias_vars)
-
- main_args_fit.update({"bias_vars": step_bias_vars})
- main_args_apply.update({"bias_vars": step_bias_vars})
-
- coreg.fit(**main_args_fit)
-
- tba_dem_mod, out_transform = coreg.apply(**main_args_apply)
-
- # Flag that the fitting function has been called.
- self._fit_called = True
-
- return self
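-
- # Illustrative sketch only: a pipeline mixing an affine step with a bias-correction step that
- # needs `bias_vars`; the step classes, the 'slope' variable and the input names are placeholders.
- #
- #   pipeline = SomeAffineStep() + SomeBiasCorrStep(bias_var_names=["slope"])
- #   pipeline.fit(reference_dem=ref, dem_to_be_aligned=tba, inlier_mask=stable_mask,
- #                bias_vars={"slope": slope})
- #   aligned = pipeline.apply(tba, bias_vars={"slope": slope})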
-
- def _fit_pts_func(
- self: CoregType,
- ref_dem: NDArrayf | MArrayf | RasterType | pd.DataFrame,
- tba_dem: RasterType,
- verbose: bool = False,
- **kwargs: Any,
- ) -> CoregType:
-
- tba_dem_mod = tba_dem.copy()
-
- for i, coreg in enumerate(self.pipeline):
- if verbose:
- print(f"Running pipeline step: {i + 1} / {len(self.pipeline)}")
-
- coreg._fit_pts_func(ref_dem=ref_dem, tba_dem=tba_dem_mod, verbose=verbose, **kwargs)
- coreg._fit_called = True
-
- tba_dem_mod = coreg.apply(tba_dem_mod)
- return self
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: dict[str, NDArrayf] | None = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
- """Apply the coregistration steps sequentially to a DEM."""
- dem_mod = dem.copy()
- out_transform = copy.copy(transform)
-
- for i, coreg in enumerate(self.pipeline):
-
- main_args_apply = {"dem": dem_mod, "transform": out_transform, "crs": crs}
-
- # If non-affine method that expects a bias_vars argument
- if coreg._needs_vars:
- step_bias_vars = self._parse_bias_vars(step=i, bias_vars=bias_vars)
- main_args_apply.update({"bias_vars": step_bias_vars})
-
- dem_mod, out_transform = coreg.apply(**main_args_apply, **kwargs)
-
- return dem_mod, out_transform
-
- def _apply_pts_func(self, coords: NDArrayf) -> NDArrayf:
- """Apply the coregistration steps sequentially to a set of points."""
- coords_mod = coords.copy()
-
- for coreg in self.pipeline:
- coords_mod = coreg.apply_pts(coords_mod).reshape(coords_mod.shape)
-
- return coords_mod
-
- def __iter__(self) -> Generator[Coreg, None, None]:
- """Iterate over the pipeline steps."""
- yield from self.pipeline
-
- def __add__(self, other: list[Coreg] | Coreg | CoregPipeline) -> CoregPipeline:
- """Append a processing step or a pipeline to the pipeline."""
- if not isinstance(other, Coreg):
- other = list(other)
- else:
- other = [other]
-
- pipelines = self.pipeline + other
-
- return CoregPipeline(pipelines)
-
- def to_matrix(self) -> NDArrayf:
- """Convert the transform to a 4x4 transformation matrix."""
- return self._to_matrix_func()
-
- def _to_matrix_func(self) -> NDArrayf:
- """Try to join the coregistration steps to a single transformation matrix."""
- if not _HAS_P3D:
- raise ValueError("Optional dependency needed. Install 'pytransform3d'")
-
- transform_mgr = TransformManager()
-
- with warnings.catch_warnings():
- # Deprecation warning from pytransform3d. Let's hope that is fixed in the near future.
- warnings.filterwarnings("ignore", message="`np.float` is a deprecated alias for the builtin `float`")
- for i, coreg in enumerate(self.pipeline):
- new_matrix = coreg.to_matrix()
-
- transform_mgr.add_transform(i, i + 1, new_matrix)
-
- return transform_mgr.get_transform(0, len(self.pipeline))
-
-
-class BlockwiseCoreg(Coreg):
- """
- Block-wise co-registration processing class to run a step in segmented parts of the grid.
-
- A processing class of choice is run on an arbitrary subdivision of the raster. When the step is later applied,
- the optimal warping is interpolated based on X/Y/Z shifts from the coreg algorithm at the grid points.
-
- For instance: a subdivision of 4 triggers a division of the DEM into four equally sized parts. These parts are
- then processed separately, with 4 .fit() results. If the raster shape is not evenly divisible by the subdivision,
- the subdivision is made as even as possible to obtain approximately equal pixel counts.
- """
-
- def __init__(
- self,
- step: Coreg | CoregPipeline,
- subdivision: int,
- success_threshold: float = 0.8,
- n_threads: int | None = None,
- warn_failures: bool = False,
- ) -> None:
- """
- Instantiate a blockwise processing object.
-
- :param step: An instantiated co-registration step object to fit in the subdivided DEMs.
- :param subdivision: The number of chunks to divide the DEMs in. E.g. 4 means four different transforms.
- :param success_threshold: Raise an error if the fraction of successfully processed chunks is below this threshold.
- :param n_threads: The maximum number of threads to use. Default=auto.
- :param warn_failures: Trigger or ignore warnings for each exception/warning in each block.
- """
- if isinstance(step, type):
- raise ValueError(
- "The 'step' argument must be an instantiated Coreg subclass. " "Hint: write e.g. ICP() instead of ICP"
- )
- self.procstep = step
- self.subdivision = subdivision
- self.success_threshold = success_threshold
- self.n_threads = n_threads
- self.warn_failures = warn_failures
-
- super().__init__()
-
- self._meta: CoregDict = {"step_meta": []}
-
- def fit(
- self: CoregType,
- reference_dem: NDArrayf | MArrayf | RasterType,
- dem_to_be_aligned: NDArrayf | MArrayf | RasterType,
- inlier_mask: NDArrayb | Mask | None = None,
- transform: rio.transform.Affine | None = None,
- crs: rio.crs.CRS | None = None,
- bias_vars: dict[str, NDArrayf | MArrayf | RasterType] | None = None,
- weights: NDArrayf | None = None,
- subsample: float | int | None = None,
- verbose: bool = False,
- random_state: None | np.random.RandomState | np.random.Generator | int = None,
- **kwargs: Any,
- ) -> CoregType:
-
- # Check if subsample arguments are different from their default value for any of the coreg steps:
- # get the default value in the argument spec and the "subsample" stored in meta, and check that both are consistent
- if not isinstance(self.procstep, CoregPipeline):
- steps = [self.procstep]
- else:
- steps = list(self.procstep.pipeline)
- argspec = [inspect.getfullargspec(s.__class__) for s in steps]
- sub_meta = [s._meta["subsample"] for s in steps]
- sub_is_default = [
- argspec[i].defaults[argspec[i].args.index("subsample") - 1] == sub_meta[i] # type: ignore
- for i in range(len(argspec))
- ]
- if subsample is not None and not all(sub_is_default):
- warnings.warn(
- "Subsample argument passed to fit() will override non-default subsample values defined in the"
- " step within the blockwise method. To silence this warning: only define 'subsample' in "
- "either fit(subsample=...) or instantiation e.g., VerticalShift(subsample=...)."
- )
-
- # Pre-process the inputs by reprojecting, without any subsampling (done in each step)
- ref_dem, tba_dem, inlier_mask, transform, crs = _preprocess_coreg_raster_input(
- reference_dem=reference_dem,
- dem_to_be_aligned=dem_to_be_aligned,
- inlier_mask=inlier_mask,
- transform=transform,
- crs=crs,
- )
- groups = self.subdivide_array(tba_dem.shape)
-
- indices = np.unique(groups)
-
- progress_bar = tqdm(total=indices.size, desc="Processing chunks", disable=(not verbose))
-
- def process(i: int) -> dict[str, Any] | BaseException | None:
- """
- Process a chunk in a thread-safe way.
-
- :returns:
- * If it succeeds: A dictionary of the fitting metadata.
- * If it fails: The associated exception.
- * If the block is empty: None
- """
- inlier_mask = groups == i
-
- # Find the corresponding slice of the inlier_mask to subset the data
- rows, cols = np.where(inlier_mask)
- arrayslice = np.s_[rows.min() : rows.max() + 1, cols.min() : cols.max() + 1]
-
- # Copy a subset of the two DEMs, the mask, the coreg instance, and make a new subset transform
- ref_subset = ref_dem[arrayslice].copy()
- tba_subset = tba_dem[arrayslice].copy()
-
- if any(np.all(~np.isfinite(dem)) for dem in (ref_subset, tba_subset)):
- return None
- mask_subset = inlier_mask[arrayslice].copy()
- west, top = rio.transform.xy(transform, min(rows), min(cols), offset="ul")
- transform_subset = rio.transform.from_origin(west, top, transform.a, -transform.e) # type: ignore
- procstep = self.procstep.copy()
-
- # Try to run the coregistration. If it fails for any reason, skip it and save the exception.
- try:
- procstep.fit(
- reference_dem=ref_subset,
- dem_to_be_aligned=tba_subset,
- transform=transform_subset,
- inlier_mask=mask_subset,
- bias_vars=bias_vars,
- weights=weights,
- crs=crs,
- subsample=subsample,
- random_state=random_state,
- verbose=verbose,
- )
- nmad, median = procstep.error(
- reference_dem=ref_subset,
- dem_to_be_aligned=tba_subset,
- error_type=["nmad", "median"],
- inlier_mask=mask_subset,
- transform=transform_subset,
- crs=crs,
- )
- except Exception as exception:
- return exception
-
- meta: dict[str, Any] = {
- "i": i,
- "transform": transform_subset,
- "inlier_count": np.count_nonzero(mask_subset & np.isfinite(ref_subset) & np.isfinite(tba_subset)),
- "nmad": nmad,
- "median": median,
- }
- # Find the center of the inliers.
- inlier_positions = np.argwhere(mask_subset)
- mid_row = np.mean(inlier_positions[:, 0]).astype(int)
- mid_col = np.mean(inlier_positions[:, 1]).astype(int)
-
- # Find the indices of all finites within the mask
- finites = np.argwhere(np.isfinite(tba_subset) & mask_subset)
- # Calculate the distance between the approximate center and all finite indices
- distances = np.linalg.norm(finites - np.array([mid_row, mid_col]), axis=1)
- # Find the index representing the closest finite value to the center.
- closest = np.argwhere(distances == distances.min())
-
- # Assign the closest finite value as the representative point
- representative_row, representative_col = finites[closest][0][0]
- meta["representative_x"], meta["representative_y"] = rio.transform.xy(
- transform_subset, representative_row, representative_col
- )
-
- repr_val = ref_subset[representative_row, representative_col]
- if ~np.isfinite(repr_val):
- repr_val = 0
- meta["representative_val"] = repr_val
-
- # If the coreg is a pipeline, copy its metadata to the output meta
- if hasattr(procstep, "pipeline"):
- meta["pipeline"] = [step._meta.copy() for step in procstep.pipeline]
-
- # Copy all current metadata (except for the already existing keys like "i", "min_row", etc., and the
- # "coreg_meta" key).
- # This can then be iteratively restored when the apply function is called.
- meta.update(
- {key: value for key, value in procstep._meta.items() if key not in ["step_meta"] + list(meta.keys())}
- )
-
- progress_bar.update()
-
- return meta.copy()
-
- # Catch warnings; they are only raised to the user later if warn_failures is True.
- exceptions: list[BaseException | warnings.WarningMessage] = []
- with warnings.catch_warnings(record=True) as caught_warnings:
- warnings.simplefilter("default")
- with concurrent.futures.ThreadPoolExecutor(max_workers=None) as executor:
- results = executor.map(process, indices)
-
- exceptions += list(caught_warnings)
-
- empty_blocks = 0
- for result in results:
- if isinstance(result, BaseException):
- exceptions.append(result)
- elif result is None:
- empty_blocks += 1
- continue
- else:
- self._meta["step_meta"].append(result)
-
- progress_bar.close()
-
- # Stop if the success rate was below the threshold
- if ((len(self._meta["step_meta"]) + empty_blocks) / self.subdivision) <= self.success_threshold:
- message = f"Fitting failed for {len(exceptions)} chunks:\n" + "\n".join(map(str, exceptions[:5]))
- if len(exceptions) > 5:
- message += f"\n... and {len(exceptions) - 5} more"
- raise ValueError(message)
-
- if self.warn_failures:
- for exception in exceptions:
- warnings.warn(str(exception))
-
- # Set the _fit_called flags (only identical copies of self.procstep have actually been called)
- self.procstep._fit_called = True
- if isinstance(self.procstep, CoregPipeline):
- for step in self.procstep.pipeline:
- step._fit_called = True
-
- # Flag that the fitting function has been called.
- self._fit_called = True
-
- return self
-
- def _restore_metadata(self, meta: CoregDict) -> None:
- """
- Given some metadata, set it in the right place.
-
- :param meta: A metadata dictionary to update self._meta with.
- """
- self.procstep._meta.update(meta)
-
- if isinstance(self.procstep, CoregPipeline) and "pipeline" in meta:
- for i, step in enumerate(self.procstep.pipeline):
- step._meta.update(meta["pipeline"][i])
-
- def to_points(self) -> NDArrayf:
- """
- Convert the blockwise coregistration matrices to 3D (source -> destination) points.
-
- The returned shape is (N, 3, 2) where the dimensions represent:
- 0. The point index, where N is equal to the number of subdivisions.
- 1. The X/Y/Z coordinate of the point.
- 2. The old/new position of the point.
-
- To acquire the first point's original position: points[0, :, 0]
- To acquire the first point's new position: points[0, :, 1]
- To acquire the first point's Z difference: points[0, 2, 1] - points[0, 2, 0]
-
- :returns: An array of 3D source -> destination points.
- """
- if len(self._meta["step_meta"]) == 0:
- raise AssertionError("No coreg results exist. Has '.fit()' been called?")
- points = np.empty(shape=(0, 3, 2))
- for meta in self._meta["step_meta"]:
- self._restore_metadata(meta)
-
- # x_coord, y_coord = rio.transform.xy(meta["transform"], meta["representative_row"],
- # meta["representative_col"])
- x_coord, y_coord = meta["representative_x"], meta["representative_y"]
-
- old_position = np.reshape([x_coord, y_coord, meta["representative_val"]], (1, 3))
- new_position = self.procstep.apply_pts(old_position)
-
- points = np.append(points, np.dstack((old_position, new_position)), axis=0)
-
- return points
-
- def stats(self) -> pd.DataFrame:
- """
- Return statistics for each chunk in the blockwise coregistration.
-
- * center_{x,y,z}: The center coordinate of the chunk in georeferenced units.
- * {x,y,z}_off: The calculated offset in georeferenced units.
- * inlier_count: The number of pixels that were inliers in the chunk.
- * nmad: The NMAD of elevation differences (robust dispersion) after coregistration.
- * median: The median of elevation differences (vertical shift) after coregistration.
-
- :raises ValueError: If no coregistration results exist yet.
-
- :returns: A dataframe of statistics for each chunk.
- """
- points = self.to_points()
-
- chunk_meta = {meta["i"]: meta for meta in self._meta["step_meta"]}
-
- statistics: list[dict[str, Any]] = []
- for i in range(points.shape[0]):
- if i not in chunk_meta:
- continue
- statistics.append(
- {
- "center_x": points[i, 0, 0],
- "center_y": points[i, 1, 0],
- "center_z": points[i, 2, 0],
- "x_off": points[i, 0, 1] - points[i, 0, 0],
- "y_off": points[i, 1, 1] - points[i, 1, 0],
- "z_off": points[i, 2, 1] - points[i, 2, 0],
- "inlier_count": chunk_meta[i]["inlier_count"],
- "nmad": chunk_meta[i]["nmad"],
- "median": chunk_meta[i]["median"],
- }
- )
-
- stats_df = pd.DataFrame(statistics)
- stats_df.index.name = "chunk"
-
- return stats_df
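-
- # Usage sketch (illustrative only): after fitting, the per-chunk offsets and error statistics can
- # be inspected as a dataframe ('blockwise' is a fitted BlockwiseCoreg instance).
- #
- #   chunk_stats = blockwise.stats()
- #   print(chunk_stats[["x_off", "y_off", "z_off", "nmad", "inlier_count"]])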
-
- def subdivide_array(self, shape: tuple[int, ...]) -> NDArrayf:
- """
- Return the grid subdivision for a given DEM shape.
-
- :param shape: The shape of the input DEM.
-
- :returns: An array of shape 'shape' with 'self.subdivision' unique indices.
- """
- if len(shape) == 3 and shape[0] == 1: # Account for (1, row, col) shapes
- shape = (shape[1], shape[2])
- return subdivide_array(shape, count=self.subdivision)
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: dict[str, NDArrayf] | None = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
-
- if np.count_nonzero(np.isfinite(dem)) == 0:
- return dem, transform
-
- # Options other than resample=True are not implemented for this case
- if "resample" in kwargs and kwargs["resample"] is not True:
- raise NotImplementedError()
-
- points = self.to_points()
-
- bounds, resolution = _transform_to_bounds_and_res(dem.shape, transform)
-
- representative_height = np.nanmean(dem)
- edges_source = np.array(
- [
- [bounds.left + resolution / 2, bounds.top - resolution / 2, representative_height],
- [bounds.right - resolution / 2, bounds.top - resolution / 2, representative_height],
- [bounds.left + resolution / 2, bounds.bottom + resolution / 2, representative_height],
- [bounds.right - resolution / 2, bounds.bottom + resolution / 2, representative_height],
- ]
- )
- edges_dest = self.apply_pts(edges_source)
- edges = np.dstack((edges_source, edges_dest))
-
- all_points = np.append(points, edges, axis=0)
-
- warped_dem = warp_dem(
- dem=dem,
- transform=transform,
- source_coords=all_points[:, :, 0],
- destination_coords=all_points[:, :, 1],
- resampling="linear",
- )
-
- return warped_dem, transform
-
- def _apply_pts_func(self, coords: NDArrayf) -> NDArrayf:
- """Apply the scaling model to a set of points."""
- points = self.to_points()
-
- new_coords = coords.copy()
-
- for dim in range(0, 3):
- with warnings.catch_warnings():
- # ZeroDivisionErrors may happen when the transformation is empty (which is fine)
- warnings.filterwarnings("ignore", message="ZeroDivisionError")
- model = scipy.interpolate.Rbf(
- points[:, 0, 0],
- points[:, 1, 0],
- points[:, dim, 1] - points[:, dim, 0],
- function="linear",
- )
-
- new_coords[:, dim] += model(coords[:, 0], coords[:, 1])
-
- return new_coords
-
-
-def warp_dem(
- dem: NDArrayf,
- transform: rio.transform.Affine,
- source_coords: NDArrayf,
- destination_coords: NDArrayf,
- resampling: str = "cubic",
- trim_border: bool = True,
- dilate_mask: bool = True,
-) -> NDArrayf:
- """
- Warp a DEM using a set of source-destination 2D or 3D coordinates.
-
- :param dem: The DEM to warp. Allowed shapes are (1, row, col) or (row, col)
- :param transform: The Affine transform of the DEM.
- :param source_coords: The source 2D or 3D points. Must be X/Y/(Z) coords of shape (N, 2) or (N, 3).
- :param destination_coords: The destination 2D or 3D points. Must have the exact same shape as 'source_coords'.
- :param resampling: The resampling order to use. Choices: ['nearest', 'linear', 'cubic'].
- :param trim_border: Remove values outside of the interpolation regime (True) or leave them unmodified (False).
- :param dilate_mask: Dilate the nan mask to exclude edge pixels that could be wrong.
-
- :raises ValueError: If the inputs are poorly formatted.
- :raises AssertionError: For unexpected outputs.
-
- :returns: A warped DEM with the same shape as the input.
- """
- if source_coords.shape != destination_coords.shape:
- raise ValueError(
- f"Incompatible shapes: source_coords '({source_coords.shape})' and "
- f"destination_coords '({destination_coords.shape})' shapes must be the same"
- )
- if (len(source_coords.shape) > 2) or (source_coords.shape[1] < 2) or (source_coords.shape[1] > 3):
- raise ValueError(
- "Invalid coordinate shape. Expected 2D or 3D coordinates of shape (N, 2) or (N, 3). "
- f"Got '{source_coords.shape}'"
- )
- allowed_resampling_strs = ["nearest", "linear", "cubic"]
- if resampling not in allowed_resampling_strs:
- raise ValueError(f"Resampling type '{resampling}' not understood. Choices: {allowed_resampling_strs}")
-
- dem_arr, dem_mask = get_array_and_mask(dem)
-
- bounds, resolution = _transform_to_bounds_and_res(dem_arr.shape, transform)
-
- no_horizontal = np.sum(np.linalg.norm(destination_coords[:, :2] - source_coords[:, :2], axis=1)) < 1e-6
- no_vertical = source_coords.shape[1] > 2 and np.sum(np.abs(destination_coords[:, 2] - source_coords[:, 2])) < 1e-6
-
- if no_horizontal and no_vertical:
- warnings.warn("No difference between source and destination coordinates. Returning self.")
- return dem
-
- source_coords_scaled = source_coords.copy()
- destination_coords_scaled = destination_coords.copy()
- # Scale the coordinates to index-space
- for coords in (source_coords_scaled, destination_coords_scaled):
- coords[:, 0] = dem_arr.shape[1] * (coords[:, 0] - bounds.left) / (bounds.right - bounds.left)
- coords[:, 1] = dem_arr.shape[0] * (1 - (coords[:, 1] - bounds.bottom) / (bounds.top - bounds.bottom))
-
- # Generate a grid of x and y index coordinates.
- grid_y, grid_x = np.mgrid[0 : dem_arr.shape[0], 0 : dem_arr.shape[1]]
-
- if no_horizontal:
- warped = dem_arr.copy()
- else:
- # Interpolate the sparse source-destination points to a grid.
- # (row, col, 0) represents the destination y-coordinates of the pixels.
- # (row, col, 1) represents the destination x-coordinates of the pixels.
- new_indices = scipy.interpolate.griddata(
- source_coords_scaled[:, [1, 0]],
- destination_coords_scaled[:, [1, 0]], # Coordinates should be in y/x (not x/y) for some reason..
- (grid_y, grid_x),
- method="linear",
- )
-
- # If the border should not be trimmed, just assign the original indices to the missing values.
- if not trim_border:
- missing_ys = np.isnan(new_indices[:, :, 0])
- missing_xs = np.isnan(new_indices[:, :, 1])
- new_indices[:, :, 0][missing_ys] = grid_y[missing_ys]
- new_indices[:, :, 1][missing_xs] = grid_x[missing_xs]
-
- order = {"nearest": 0, "linear": 1, "cubic": 3}
-
- with warnings.catch_warnings():
- # A skimage warning that will hopefully be fixed soon. (2021-06-08)
- warnings.filterwarnings("ignore", message="Passing `np.nan` to mean no clipping in np.clip")
- warped = skimage.transform.warp(
- image=np.where(dem_mask, np.nan, dem_arr),
- inverse_map=np.moveaxis(new_indices, 2, 0),
- output_shape=dem_arr.shape,
- preserve_range=True,
- order=order[resampling],
- cval=np.nan,
- )
- new_mask = (
- skimage.transform.warp(
- image=dem_mask, inverse_map=np.moveaxis(new_indices, 2, 0), output_shape=dem_arr.shape, cval=False
- )
- > 0
- )
-
- if dilate_mask:
- new_mask = scipy.ndimage.binary_dilation(new_mask, iterations=order[resampling]).astype(new_mask.dtype)
-
- warped[new_mask] = np.nan
-
- # If the coordinates are 3D (N, 3), apply a Z correction as well.
- if not no_vertical:
- grid_offsets = scipy.interpolate.griddata(
- points=destination_coords_scaled[:, :2],
- values=destination_coords_scaled[:, 2] - source_coords_scaled[:, 2],
- xi=(grid_x, grid_y),
- method=resampling,
- fill_value=np.nan,
- )
- if not trim_border:
- grid_offsets[np.isnan(grid_offsets)] = np.nanmean(grid_offsets)
-
- warped += grid_offsets
-
- assert not np.all(np.isnan(warped)), "All-NaN output."
-
- return warped.reshape(dem.shape)
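-
-
-# Usage sketch for warp_dem() (illustrative only): three source points are each shifted by +10 m
-# in X and -5 m in Z; 'dem_arr' and 'dem_transform' are placeholders for a (row, col) array and
-# its affine transform.
-#
-#   src = np.array([[460000.0, 5200000.0, 1000.0],
-#                   [465000.0, 5200000.0, 1200.0],
-#                   [462000.0, 5205000.0, 1500.0]])
-#   dst = src + np.array([10.0, 0.0, -5.0])
-#   warped = warp_dem(dem=dem_arr, transform=dem_transform, source_coords=src,
-#                     destination_coords=dst, resampling="linear")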
diff --git a/xdem/coreg/biascorr.py b/xdem/coreg/biascorr.py
deleted file mode 100644
index d601d600..00000000
--- a/xdem/coreg/biascorr.py
+++ /dev/null
@@ -1,886 +0,0 @@
-"""Bias corrections (i.e., non-affine coregistration) classes."""
-from __future__ import annotations
-
-import inspect
-from typing import Any, Callable, Iterable, Literal, TypeVar
-
-import geoutils as gu
-import numpy as np
-import pandas as pd
-import rasterio as rio
-import scipy
-
-import xdem.spatialstats
-from xdem._typing import NDArrayb, NDArrayf
-from xdem.coreg.base import Coreg
-from xdem.fit import (
- polynomial_1d,
- polynomial_2d,
- robust_nfreq_sumsin_fit,
- robust_norder_polynomial_fit,
- sumsin_1d,
-)
-
-fit_workflows = {
- "norder_polynomial": {"func": polynomial_1d, "optimizer": robust_norder_polynomial_fit},
- "nfreq_sumsin": {"func": sumsin_1d, "optimizer": robust_nfreq_sumsin_fit},
-}
-
-BiasCorrType = TypeVar("BiasCorrType", bound="BiasCorr")
-
-
-class BiasCorr(Coreg):
- """
- Parent class of bias correction methods: non-rigid coregistrations.
-
- Made to be subclassed to pass default parameters/dimensions more intuitively, or to provide wrappers for specific
- types of bias corrections (directional, terrain, etc).
- """
-
- def __init__(
- self,
- fit_or_bin: Literal["bin_and_fit"] | Literal["fit"] | Literal["bin"] = "fit",
- fit_func: Callable[..., NDArrayf]
- | Literal["norder_polynomial"]
- | Literal["nfreq_sumsin"] = "norder_polynomial",
- fit_optimizer: Callable[..., tuple[NDArrayf, Any]] = scipy.optimize.curve_fit,
- bin_sizes: int | dict[str, int | Iterable[float]] = 10,
- bin_statistic: Callable[[NDArrayf], np.floating[Any]] = np.nanmedian,
- bin_apply_method: Literal["linear"] | Literal["per_bin"] = "linear",
- bias_var_names: Iterable[str] | None = None,
- subsample: float | int = 1.0,
- ):
- """
- Instantiate a bias correction object.
- """
- # Raise error if fit_or_bin is not defined
- if fit_or_bin not in ["fit", "bin", "bin_and_fit"]:
- raise ValueError(f"Argument `fit_or_bin` must be 'bin_and_fit', 'fit' or 'bin', got {fit_or_bin}.")
-
- # Pass the arguments to the class metadata
- if fit_or_bin in ["fit", "bin_and_fit"]:
-
- # Check input types for "fit" to raise user-friendly errors
- if not (callable(fit_func) or (isinstance(fit_func, str) and fit_func in fit_workflows.keys())):
- raise TypeError(
- "Argument `fit_func` must be a function (callable) "
- "or the string '{}', got {}.".format("', '".join(fit_workflows.keys()), type(fit_func))
- )
- if not callable(fit_optimizer):
- raise TypeError(
- "Argument `fit_optimizer` must be a function (callable), " "got {}.".format(type(fit_optimizer))
- )
-
- # If a workflow was called, override optimizer and pass proper function
- if isinstance(fit_func, str) and fit_func in fit_workflows.keys():
- # Looks like a typing bug here, see: https://github.com/python/mypy/issues/10740
- fit_optimizer = fit_workflows[fit_func]["optimizer"] # type: ignore
- fit_func = fit_workflows[fit_func]["func"] # type: ignore
-
- if fit_or_bin in ["bin", "bin_and_fit"]:
-
- # Check input types for "bin" to raise user-friendly errors
- if not (
- isinstance(bin_sizes, int)
- or (isinstance(bin_sizes, dict) and all(isinstance(val, (int, Iterable)) for val in bin_sizes.values()))
- ):
- raise TypeError(
- "Argument `bin_sizes` must be an integer, or a dictionary of integers or iterables, "
- "got {}.".format(type(bin_sizes))
- )
-
- if not callable(bin_statistic):
- raise TypeError(
- "Argument `bin_statistic` must be a function (callable), " "got {}.".format(type(bin_statistic))
- )
-
- if not isinstance(bin_apply_method, str):
- raise TypeError(
- "Argument `bin_apply_method` must be the string 'linear' or 'per_bin', "
- "got {}.".format(type(bin_apply_method))
- )
-
- list_bias_var_names = list(bias_var_names) if bias_var_names is not None else None
-
- # Now we write the relevant attributes to the class metadata
- # For fitting
- if fit_or_bin == "fit":
- meta_fit = {"fit_func": fit_func, "fit_optimizer": fit_optimizer, "bias_var_names": list_bias_var_names}
- # Somehow mypy doesn't understand that fit_func and fit_optimizer can only be callables now,
- # even writing the above "if" in a more explicit "if; else" loop with new variables names and typing
- super().__init__(meta=meta_fit) # type: ignore
-
- # For binning
- elif fit_or_bin == "bin":
- meta_bin = {
- "bin_sizes": bin_sizes,
- "bin_statistic": bin_statistic,
- "bin_apply_method": bin_apply_method,
- "bias_var_names": list_bias_var_names,
- }
- super().__init__(meta=meta_bin) # type: ignore
-
- # For both
- else:
- meta_bin_and_fit = {
- "fit_func": fit_func,
- "fit_optimizer": fit_optimizer,
- "bin_sizes": bin_sizes,
- "bin_statistic": bin_statistic,
- "bias_var_names": list_bias_var_names,
- }
- super().__init__(meta=meta_bin_and_fit) # type: ignore
-
- # Add subsample attribute
- self._meta["subsample"] = subsample
-
- # Update attributes
- self._fit_or_bin = fit_or_bin
- self._is_affine = False
- self._needs_vars = True
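-
- # Illustrative sketch only: the correction can be estimated by fitting a function, by binning, or
- # by binning then fitting; the "elevation" variable and the input names are placeholders.
- #
- #   corr = BiasCorr(fit_or_bin="bin_and_fit", fit_func="norder_polynomial", bin_sizes=100)
- #   corr.fit(reference_dem=ref, dem_to_be_aligned=tba, inlier_mask=stable_mask,
- #            bias_vars={"elevation": ref_arr})
- #   corrected = corr.apply(tba, bias_vars={"elevation": tba_arr})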
-
- def _fit_func( # type: ignore
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine, # Never None thanks to Coreg.fit() pre-process
- crs: rio.crs.CRS, # Never None thanks to Coreg.fit() pre-process
- bias_vars: None | dict[str, NDArrayf] = None,
- weights: None | NDArrayf = None,
- verbose: bool = False,
- **kwargs,
- ) -> None:
- """Should only be called through subclassing."""
-
- # This is called by subclasses, so bias_vars should always be defined
- if bias_vars is None:
- raise ValueError("At least one `bias_var` should be passed to the fitting function, got None.")
-
- # If bias var names were explicitly passed at instantiation, check that they match the one from the dict
- if self._meta["bias_var_names"] is not None:
- if not sorted(bias_vars.keys()) == sorted(self._meta["bias_var_names"]):
- raise ValueError(
- "The keys of `bias_vars` do not match the `bias_var_names` defined during "
- "instantiation: {}.".format(self._meta["bias_var_names"])
- )
- # Otherwise, store bias variable names from the dictionary
- else:
- self._meta["bias_var_names"] = list(bias_vars.keys())
-
- # Compute difference and mask of valid data
- # TODO: Move the check up to Coreg.fit()?
-
- diff = ref_dem - tba_dem
- valid_mask = np.logical_and.reduce(
- (inlier_mask, np.isfinite(diff), *(np.isfinite(var) for var in bias_vars.values()))
- )
-
- # Raise errors if all values are NaN after introducing masks from the variables
- # (Others are already checked in Coreg.fit())
- if np.all(~valid_mask):
- raise ValueError("Some 'bias_vars' have only NaNs in the inlier mask.")
-
- subsample_mask = self._get_subsample_on_valid_mask(valid_mask=valid_mask, verbose=verbose)
-
- # Get number of variables
- nd = len(bias_vars)
-
- # Remove the random_state keyword argument if it is not supported by the optimizer function
- if self._fit_or_bin in ["fit", "bin_and_fit"]:
- fit_func_args = inspect.getfullargspec(self._meta["fit_optimizer"]).args
- if "random_state" not in fit_func_args and "random_state" in kwargs:
- kwargs.pop("random_state")
-
- # We need to sort the bin sizes in the same order as the bias variables if a dict is passed for bin_sizes
- if self._fit_or_bin in ["bin", "bin_and_fit"]:
- if isinstance(self._meta["bin_sizes"], dict):
- var_order = list(bias_vars.keys())
- # Declare type to write integer or tuple to the variable
- bin_sizes: int | tuple[int, ...] | tuple[NDArrayf, ...] = tuple(
- np.array(self._meta["bin_sizes"][var]) for var in var_order
- )
- # Otherwise, write integer directly
- else:
- bin_sizes = self._meta["bin_sizes"]
-
- # Option 1: Run fit and save optimized function parameters
- if self._fit_or_bin == "fit":
-
- # Print if verbose
- if verbose:
- print(
- "Estimating bias correction along variables {} by fitting "
- "with function {}.".format(", ".join(list(bias_vars.keys())), self._meta["fit_func"].__name__)
- )
-
- results = self._meta["fit_optimizer"](
- f=self._meta["fit_func"],
- xdata=np.array([var[subsample_mask].flatten() for var in bias_vars.values()]).squeeze(),
- ydata=diff[subsample_mask].flatten(),
- sigma=weights[subsample_mask].flatten() if weights is not None else None,
- absolute_sigma=True,
- **kwargs,
- )
-
- # Option 2: Run binning and save dataframe of result
- elif self._fit_or_bin == "bin":
-
- if verbose:
- print(
- "Estimating bias correction along variables {} by binning "
- "with statistic {}.".format(", ".join(list(bias_vars.keys())), self._meta["bin_statistic"].__name__)
- )
-
- df = xdem.spatialstats.nd_binning(
- values=diff[subsample_mask],
- list_var=[var[subsample_mask] for var in bias_vars.values()],
- list_var_names=list(bias_vars.keys()),
- list_var_bins=bin_sizes,
- statistics=(self._meta["bin_statistic"], "count"),
- )
-
- # Option 3: Run binning, then fitting, and save both results
- else:
-
- # Print if verbose
- if verbose:
- print(
- "Estimating bias correction along variables {} by binning with statistic {} and then fitting "
- "with function {}.".format(
- ", ".join(list(bias_vars.keys())),
- self._meta["bin_statistic"].__name__,
- self._meta["fit_func"].__name__,
- )
- )
-
- df = xdem.spatialstats.nd_binning(
- values=diff[subsample_mask],
- list_var=[var[subsample_mask] for var in bias_vars.values()],
- list_var_names=list(bias_vars.keys()),
- list_var_bins=bin_sizes,
- statistics=(self._meta["bin_statistic"], "count"),
- )
-
- # Now, we need to pass this new data to the fitting function and optimizer
- # We use only the N-D binning estimates (maximum dimension, equal to length of variable list)
- df_nd = df[df.nd == len(bias_vars)]
-
- # We get the middle of the bin values for each variable, and the statistic for the diff
- new_vars = [pd.IntervalIndex(df_nd[var_name]).mid.values for var_name in bias_vars.keys()]
- new_diff = df_nd[self._meta["bin_statistic"].__name__].values
- # TODO: pass a new sigma based on "count" and original sigma (and correlation?)?
- # sigma values would have to be binned above also
-
- # Valid values for the binning output
- ind_valid = np.logical_and.reduce((np.isfinite(new_diff), *(np.isfinite(var) for var in new_vars)))
-
- if np.all(~ind_valid):
- raise ValueError("Only NaNs values after binning, did you pass the right bin edges?")
-
- results = self._meta["fit_optimizer"](
- f=self._meta["fit_func"],
- xdata=np.array([var[ind_valid].flatten() for var in new_vars]).squeeze(),
- ydata=new_diff[ind_valid].flatten(),
- sigma=weights[ind_valid].flatten() if weights is not None else None,
- absolute_sigma=True,
- **kwargs,
- )
-
- if verbose:
- print(f"{nd}D bias estimated.")
-
- # Save results if fitting was performed
- if self._fit_or_bin in ["fit", "bin_and_fit"]:
-
- # Write the results to metadata in different ways depending on optimizer returns
- if self._meta["fit_optimizer"] in (w["optimizer"] for w in fit_workflows.values()):
- params = results[0]
- order_or_freq = results[1]
- if self._meta["fit_optimizer"] == robust_norder_polynomial_fit:
- self._meta["poly_order"] = order_or_freq
- else:
- self._meta["nb_sin_freq"] = order_or_freq
-
- elif self._meta["fit_optimizer"] == scipy.optimize.curve_fit:
- params = results[0]
- # Calculation to get the error on parameters (see description of scipy.optimize.curve_fit)
- perr = np.sqrt(np.diag(results[1]))
- self._meta["fit_perr"] = perr
-
- else:
- params = results[0]
-
- self._meta["fit_params"] = params
-
- # Save results of binning if it was performed
- elif self._fit_or_bin in ["bin", "bin_and_fit"]:
- self._meta["bin_dataframe"] = df
-
- def _apply_func( # type: ignore
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine, # Never None thanks to Coreg.fit() pre-process
- crs: rio.crs.CRS, # Never None thanks to Coreg.fit() pre-process
- bias_vars: None | dict[str, NDArrayf] = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
-
- if bias_vars is None:
- raise ValueError("At least one `bias_var` should be passed to the `apply` function, got None.")
-
- # Check the bias_vars passed match the ones stored for this bias correction class
- if not sorted(bias_vars.keys()) == sorted(self._meta["bias_var_names"]):
- raise ValueError(
- "The keys of `bias_vars` do not match the `bias_var_names` defined during "
- "instantiation or fitting: {}.".format(self._meta["bias_var_names"])
- )
-
- # Apply function to get correction (including if binning was done before)
- if self._fit_or_bin in ["fit", "bin_and_fit"]:
- corr = self._meta["fit_func"](tuple(bias_vars.values()), *self._meta["fit_params"])
-
- # Apply binning to get correction
- else:
- if self._meta["bin_apply_method"] == "linear":
- # N-D interpolation of binning
- bin_interpolator = xdem.spatialstats.interp_nd_binning(
- df=self._meta["bin_dataframe"],
- list_var_names=list(bias_vars.keys()),
- statistic=self._meta["bin_statistic"],
- )
- corr = bin_interpolator(tuple(var.flatten() for var in bias_vars.values()))
- first_var = list(bias_vars.keys())[0]
- corr = corr.reshape(np.shape(bias_vars[first_var]))
-
- else:
- # Get N-D binning statistic for each pixel of the new list of variables
- corr = xdem.spatialstats.get_perbin_nd_binning(
- df=self._meta["bin_dataframe"],
- list_var=list(bias_vars.values()),
- list_var_names=list(bias_vars.keys()),
- statistic=self._meta["bin_statistic"],
- )
-
- dem_corr = dem + corr
-
- return dem_corr, transform
-
-
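- # Illustrative sketch, not part of this module: the "fit" branch of BiasCorr._fit_func above
- # reduces to a scipy.optimize.curve_fit call on the subsampled elevation differences. The
- # bias model `linear_bias` and the synthetic data below are hypothetical.
- import numpy as np
- import scipy.optimize
-
- def linear_bias(x, a, b):
-     # Hypothetical 1D bias model, linear in the single bias variable
-     return a * x + b
-
- rng = np.random.default_rng(42)
- var = rng.uniform(0, 30, 1000)  # e.g., a slope-like bias variable
- diff = 0.05 * var + 2.0 + rng.normal(0, 0.5, 1000)  # synthetic ref - tba differences
- params, cov = scipy.optimize.curve_fit(f=linear_bias, xdata=var, ydata=diff)
- perr = np.sqrt(np.diag(cov))  # parameter error, stored as "fit_perr" when curve_fit is the optimizer
-
-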
-class BiasCorr1D(BiasCorr):
- """
- Bias-correction along a single variable (e.g., angle, terrain attribute).
-
- The correction can be done by fitting a function along the variable, or binning with that variable.
- """
-
- def __init__(
- self,
- fit_or_bin: Literal["bin_and_fit"] | Literal["fit"] | Literal["bin"] = "fit",
- fit_func: Callable[..., NDArrayf]
- | Literal["norder_polynomial"]
- | Literal["nfreq_sumsin"] = "norder_polynomial",
- fit_optimizer: Callable[..., tuple[NDArrayf, Any]] = scipy.optimize.curve_fit,
- bin_sizes: int | dict[str, int | Iterable[float]] = 10,
- bin_statistic: Callable[[NDArrayf], np.floating[Any]] = np.nanmedian,
- bin_apply_method: Literal["linear"] | Literal["per_bin"] = "linear",
- bias_var_names: Iterable[str] | None = None,
- subsample: float | int = 1.0,
- ):
- """
- Instantiate a 1D bias correction.
-
- :param fit_or_bin: Whether to fit or bin. Use "fit" to correct by optimizing a function or
- "bin" to correct with a statistic of central tendency in defined bins.
- :param fit_func: Function to fit to the bias with variables later passed in .fit().
- :param fit_optimizer: Optimizer to minimize the function.
- :param bin_sizes: Size (if integer) or edges (if iterable) for binning variables later passed in .fit().
- :param bin_statistic: Statistic of central tendency (e.g., mean) to apply during the binning.
- :param bin_apply_method: Method to correct with the binned statistics, either "linear" to interpolate linearly
- between bins, or "per_bin" to apply the statistic for each bin.
- :param bias_var_names: (Optional) For pipelines, explicitly define bias variables names to use during .fit().
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- super().__init__(
- fit_or_bin,
- fit_func,
- fit_optimizer,
- bin_sizes,
- bin_statistic,
- bin_apply_method,
- bias_var_names,
- subsample,
- )
-
- def _fit_func( # type: ignore
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- bias_vars: dict[str, NDArrayf],
- transform: rio.transform.Affine, # Never None thanks to Coreg.fit() pre-process
- crs: rio.crs.CRS, # Never None thanks to Coreg.fit() pre-process
- weights: None | NDArrayf = None,
- verbose: bool = False,
- **kwargs,
- ) -> None:
- """Estimate the bias along the single provided variable using the bias function."""
-
- # Check number of variables
- if len(bias_vars) != 1:
- raise ValueError(
- "A single variable has to be provided through the argument 'bias_vars', "
- "got {}.".format(len(bias_vars))
- )
-
- super()._fit_func(
- ref_dem=ref_dem,
- tba_dem=tba_dem,
- inlier_mask=inlier_mask,
- bias_vars=bias_vars,
- transform=transform,
- crs=crs,
- weights=weights,
- verbose=verbose,
- **kwargs,
- )
-
-
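- # Illustrative usage sketch for BiasCorr1D, assuming the public fit()/apply() methods inherited
- # from Coreg accept DEM objects and forward the `bias_vars` dictionary to _fit_func/_apply_func
- # as done in this module; the file names come from the xdem example data.
- import xdem
-
- ref_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
- tba_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem"))
- slope = xdem.terrain.slope(ref_dem)
-
- biascorr1d = BiasCorr1D(fit_or_bin="bin", bin_sizes=20)
- biascorr1d.fit(ref_dem, tba_dem, bias_vars={"slope": slope})
- corrected = biascorr1d.apply(tba_dem, bias_vars={"slope": slope})
-
-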
-class BiasCorr2D(BiasCorr):
- """
- Bias-correction along two variables (e.g., X/Y coordinates, slope and curvature simultaneously).
- """
-
- def __init__(
- self,
- fit_or_bin: Literal["bin_and_fit"] | Literal["fit"] | Literal["bin"] = "fit",
- fit_func: Callable[..., NDArrayf] = polynomial_2d,
- fit_optimizer: Callable[..., tuple[NDArrayf, Any]] = scipy.optimize.curve_fit,
- bin_sizes: int | dict[str, int | Iterable[float]] = 10,
- bin_statistic: Callable[[NDArrayf], np.floating[Any]] = np.nanmedian,
- bin_apply_method: Literal["linear"] | Literal["per_bin"] = "linear",
- bias_var_names: Iterable[str] | None = None,
- subsample: float | int = 1.0,
- ):
- """
- Instantiate a 2D bias correction.
-
- :param fit_or_bin: Whether to fit or bin. Use "fit" to correct by optimizing a function or
- "bin" to correct with a statistic of central tendency in defined bins.
- :param fit_func: Function to fit to the bias with variables later passed in .fit().
- :param fit_optimizer: Optimizer to minimize the function.
- :param bin_sizes: Size (if integer) or edges (if iterable) for binning variables later passed in .fit().
- :param bin_statistic: Statistic of central tendency (e.g., mean) to apply during the binning.
- :param bin_apply_method: Method to correct with the binned statistics, either "linear" to interpolate linearly
- between bins, or "per_bin" to apply the statistic for each bin.
- :param bias_var_names: (Optional) For pipelines, explicitly define bias variables names to use during .fit().
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- super().__init__(
- fit_or_bin,
- fit_func,
- fit_optimizer,
- bin_sizes,
- bin_statistic,
- bin_apply_method,
- bias_var_names,
- subsample,
- )
-
- def _fit_func( # type: ignore
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- bias_vars: dict[str, NDArrayf],
- transform: rio.transform.Affine, # Never None thanks to Coreg.fit() pre-process
- crs: rio.crs.CRS, # Never None thanks to Coreg.fit() pre-process
- weights: None | NDArrayf = None,
- verbose: bool = False,
- **kwargs,
- ) -> None:
-
- # Check number of variables
- if len(bias_vars) != 2:
- raise ValueError(
- "Exactly two variables have to be provided through the argument 'bias_vars'"
- ", got {}.".format(len(bias_vars))
- )
-
- super()._fit_func(
- ref_dem=ref_dem,
- tba_dem=tba_dem,
- inlier_mask=inlier_mask,
- bias_vars=bias_vars,
- transform=transform,
- crs=crs,
- weights=weights,
- verbose=verbose,
- **kwargs,
- )
-
-
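- # Illustrative sketch of per-variable bin sizes for BiasCorr2D, as handled in BiasCorr._fit_func
- # above: when a dict is passed, its keys must match the keys of `bias_vars` given at fit time,
- # and each entry can be either a number of bins or explicit bin edges. Variable names are illustrative.
- import numpy as np
-
- biascorr2d = BiasCorr2D(fit_or_bin="bin", bin_sizes={"slope": 10, "aspect": np.linspace(0, 360, 37)})
- # At fit time, bias_vars={"slope": ..., "aspect": ...} must then use exactly these two keys.
-
-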
-class BiasCorrND(BiasCorr):
- """
- Bias-correction along N variables (e.g., simultaneously slope, curvature, aspect and elevation).
- """
-
- def __init__(
- self,
- fit_or_bin: Literal["bin_and_fit"] | Literal["fit"] | Literal["bin"] = "bin",
- fit_func: Callable[..., NDArrayf]
- | Literal["norder_polynomial"]
- | Literal["nfreq_sumsin"] = "norder_polynomial",
- fit_optimizer: Callable[..., tuple[NDArrayf, Any]] = scipy.optimize.curve_fit,
- bin_sizes: int | dict[str, int | Iterable[float]] = 10,
- bin_statistic: Callable[[NDArrayf], np.floating[Any]] = np.nanmedian,
- bin_apply_method: Literal["linear"] | Literal["per_bin"] = "linear",
- bias_var_names: Iterable[str] | None = None,
- subsample: float | int = 1.0,
- ):
- """
- Instantiate an N-D bias correction.
-
- :param fit_or_bin: Whether to fit or bin. Use "fit" to correct by optimizing a function or
- "bin" to correct with a statistic of central tendency in defined bins.
- :param fit_func: Function to fit to the bias with variables later passed in .fit().
- :param fit_optimizer: Optimizer to minimize the function.
- :param bin_sizes: Size (if integer) or edges (if iterable) for binning variables later passed in .fit().
- :param bin_statistic: Statistic of central tendency (e.g., mean) to apply during the binning.
- :param bin_apply_method: Method to correct with the binned statistics, either "linear" to interpolate linearly
- between bins, or "per_bin" to apply the statistic for each bin.
- :param bias_var_names: (Optional) For pipelines, explicitly define bias variables names to use during .fit().
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- super().__init__(
- fit_or_bin,
- fit_func,
- fit_optimizer,
- bin_sizes,
- bin_statistic,
- bin_apply_method,
- bias_var_names,
- subsample,
- )
-
- def _fit_func( # type: ignore
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- bias_vars: dict[str, NDArrayf], # Never None thanks to BiasCorr.fit() pre-process
- transform: rio.transform.Affine, # Never None thanks to Coreg.fit() pre-process
- crs: rio.crs.CRS, # Never None thanks to Coreg.fit() pre-process
- weights: None | NDArrayf = None,
- verbose: bool = False,
- **kwargs,
- ) -> None:
-
- # Check bias variable
- if bias_vars is None or len(bias_vars) <= 2:
- raise ValueError('At least three variables have to be provided through the argument "bias_vars".')
-
- super()._fit_func(
- ref_dem=ref_dem,
- tba_dem=tba_dem,
- inlier_mask=inlier_mask,
- bias_vars=bias_vars,
- transform=transform,
- crs=crs,
- weights=weights,
- verbose=verbose,
- **kwargs,
- )
-
-
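- # Illustrative usage sketch for BiasCorrND: at least three variables are required, passed as a
- # dictionary at fit time. Terrain attributes are derived here with the same helper used by
- # TerrainBias below; ref_dem and tba_dem are assumed loaded as in the BiasCorr1D sketch above.
- slope, aspect, maxc = xdem.terrain.get_terrain_attribute(
-     ref_dem, attribute=["slope", "aspect", "maximum_curvature"]
- )
- biascorr_nd = BiasCorrND(fit_or_bin="bin", bin_sizes=5)
- biascorr_nd.fit(ref_dem, tba_dem, bias_vars={"slope": slope, "aspect": aspect, "maximum_curvature": maxc})
-
-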
-class DirectionalBias(BiasCorr1D):
- """
- Bias correction for directional biases, for example along- or across-track of satellite angle.
- """
-
- def __init__(
- self,
- angle: float = 0,
- fit_or_bin: Literal["bin_and_fit"] | Literal["fit"] | Literal["bin"] = "bin_and_fit",
- fit_func: Callable[..., NDArrayf] | Literal["norder_polynomial"] | Literal["nfreq_sumsin"] = "nfreq_sumsin",
- fit_optimizer: Callable[..., tuple[NDArrayf, Any]] = scipy.optimize.curve_fit,
- bin_sizes: int | dict[str, int | Iterable[float]] = 100,
- bin_statistic: Callable[[NDArrayf], np.floating[Any]] = np.nanmedian,
- bin_apply_method: Literal["linear"] | Literal["per_bin"] = "linear",
- subsample: float | int = 1.0,
- ):
- """
- Instantiate a directional bias correction.
-
- :param angle: Angle in which to perform the directional correction (degrees).
- :param fit_or_bin: Whether to fit or bin. Use "fit" to correct by optimizing a function or
- "bin" to correct with a statistic of central tendency in defined bins.
- :param fit_func: Function to fit to the bias with variables later passed in .fit().
- :param fit_optimizer: Optimizer to minimize the function.
- :param bin_sizes: Size (if integer) or edges (if iterable) for binning variables later passed in .fit().
- :param bin_statistic: Statistic of central tendency (e.g., mean) to apply during the binning.
- :param bin_apply_method: Method to correct with the binned statistics, either "linear" to interpolate linearly
- between bins, or "per_bin" to apply the statistic for each bin.
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- super().__init__(
- fit_or_bin, fit_func, fit_optimizer, bin_sizes, bin_statistic, bin_apply_method, ["angle"], subsample
- )
- self._meta["angle"] = angle
- self._needs_vars = False
-
- def _fit_func( # type: ignore
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: None | dict[str, NDArrayf] = None,
- weights: None | NDArrayf = None,
- verbose: bool = False,
- **kwargs,
- ) -> None:
-
- if verbose:
- print("Estimating rotated coordinates.")
-
- x, _ = gu.raster.get_xy_rotated(
- raster=gu.Raster.from_array(data=ref_dem, crs=crs, transform=transform),
- along_track_angle=self._meta["angle"],
- )
-
- # Parameters dependent on resolution cannot be derived from the rotated x coordinates, so they need to be passed below
- if "hop_length" not in kwargs:
- # The hop length conditions the jump in function values, and needs to be larger than the average resolution
- average_res = (transform[0] + abs(transform[4])) / 2
- kwargs.update({"hop_length": average_res})
-
- super()._fit_func(
- ref_dem=ref_dem,
- tba_dem=tba_dem,
- inlier_mask=inlier_mask,
- bias_vars={"angle": x},
- transform=transform,
- crs=crs,
- weights=weights,
- verbose=verbose,
- **kwargs,
- )
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: None | dict[str, NDArrayf] = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
-
- # Define the coordinates for applying the correction
- x, _ = gu.raster.get_xy_rotated(
- raster=gu.Raster.from_array(data=dem, crs=crs, transform=transform),
- along_track_angle=self._meta["angle"],
- )
-
- return super()._apply_func(dem=dem, transform=transform, crs=crs, bias_vars={"angle": x}, **kwargs)
-
-
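- # Illustrative usage sketch for DirectionalBias, e.g. correcting along-track undulations at a
- # given angle. The rotated x coordinate is derived internally (see _fit_func above), so no
- # bias_vars are needed; ref_dem and tba_dem are assumed loaded as in the BiasCorr1D sketch above.
- dirbias = DirectionalBias(angle=20, fit_or_bin="bin", bin_sizes=200)
- dirbias.fit(ref_dem, tba_dem)
- corrected = dirbias.apply(tba_dem)
-
-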
-class TerrainBias(BiasCorr1D):
- """
- Correct a bias according to terrain, such as elevation or curvature.
-
- With elevation: often useful for nadir image DEM correction, where the focal length is slightly miscalculated.
- With curvature: often useful for a difference of DEMs with different effective resolution.
-
- DISCLAIMER: An elevation correction may introduce error when correcting non-photogrammetric biases, as generally
- elevation biases are interlinked with curvature biases.
- See Gardelle et al. (2012) (Figure 2), http://dx.doi.org/10.3189/2012jog11j175, for curvature-related biases.
- """
-
- def __init__(
- self,
- terrain_attribute: str = "maximum_curvature",
- fit_or_bin: Literal["bin_and_fit"] | Literal["fit"] | Literal["bin"] = "bin",
- fit_func: Callable[..., NDArrayf]
- | Literal["norder_polynomial"]
- | Literal["nfreq_sumsin"] = "norder_polynomial",
- fit_optimizer: Callable[..., tuple[NDArrayf, Any]] = scipy.optimize.curve_fit,
- bin_sizes: int | dict[str, int | Iterable[float]] = 100,
- bin_statistic: Callable[[NDArrayf], np.floating[Any]] = np.nanmedian,
- bin_apply_method: Literal["linear"] | Literal["per_bin"] = "linear",
- subsample: float | int = 1.0,
- ):
- """
- Instantiate a terrain bias correction.
-
- :param terrain_attribute: Terrain attribute to use for correction.
- :param fit_or_bin: Whether to fit or bin. Use "fit" to correct by optimizing a function or
- "bin" to correct with a statistic of central tendency in defined bins.
- :param fit_func: Function to fit to the bias with variables later passed in .fit().
- :param fit_optimizer: Optimizer to minimize the function.
- :param bin_sizes: Size (if integer) or edges (if iterable) for binning variables later passed in .fit().
- :param bin_statistic: Statistic of central tendency (e.g., mean) to apply during the binning.
- :param bin_apply_method: Method to correct with the binned statistics, either "linear" to interpolate linearly
- between bins, or "per_bin" to apply the statistic for each bin.
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
-
- super().__init__(
- fit_or_bin,
- fit_func,
- fit_optimizer,
- bin_sizes,
- bin_statistic,
- bin_apply_method,
- [terrain_attribute],
- subsample,
- )
- # This is the same as bias_var_names, but let's leave the duplicate for clarity
- self._meta["terrain_attribute"] = terrain_attribute
- self._needs_vars = False
-
- def _fit_func( # type: ignore
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: None | dict[str, NDArrayf] = None,
- weights: None | NDArrayf = None,
- verbose: bool = False,
- **kwargs,
- ) -> None:
-
- # Derive terrain attribute
- if self._meta["terrain_attribute"] == "elevation":
- attr = ref_dem
- else:
- attr = xdem.terrain.get_terrain_attribute(
- dem=ref_dem, attribute=self._meta["terrain_attribute"], resolution=(transform[0], abs(transform[4]))
- )
-
- # Run the parent function
- super()._fit_func(
- ref_dem=ref_dem,
- tba_dem=tba_dem,
- inlier_mask=inlier_mask,
- bias_vars={self._meta["terrain_attribute"]: attr},
- transform=transform,
- crs=crs,
- weights=weights,
- verbose=verbose,
- **kwargs,
- )
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: None | dict[str, NDArrayf] = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
-
- if bias_vars is None:
- # Derive terrain attribute
- if self._meta["terrain_attribute"] == "elevation":
- attr = dem
- else:
- attr = xdem.terrain.get_terrain_attribute(
- dem=dem, attribute=self._meta["terrain_attribute"], resolution=(transform[0], abs(transform[4]))
- )
- bias_vars = {self._meta["terrain_attribute"]: attr}
-
- return super()._apply_func(dem=dem, transform=transform, crs=crs, bias_vars=bias_vars, **kwargs)
-
-
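- # Illustrative usage sketch for TerrainBias: the terrain attribute is derived internally from
- # the reference DEM at fit time (and from the input DEM at apply time if not passed explicitly);
- # ref_dem and tba_dem are assumed loaded as in the BiasCorr1D sketch above.
- terbias = TerrainBias(terrain_attribute="maximum_curvature", bin_sizes=100)
- terbias.fit(ref_dem, tba_dem)
- corrected = terbias.apply(tba_dem)
-
-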
-class Deramp(BiasCorr2D):
- """
- Correct for a 2D polynomial along X/Y coordinates, for example from residual camera model deformations.
- """
-
- def __init__(
- self,
- poly_order: int = 2,
- fit_or_bin: Literal["bin_and_fit"] | Literal["fit"] | Literal["bin"] = "fit",
- fit_func: Callable[..., NDArrayf] = polynomial_2d,
- fit_optimizer: Callable[..., tuple[NDArrayf, Any]] = scipy.optimize.curve_fit,
- bin_sizes: int | dict[str, int | Iterable[float]] = 10,
- bin_statistic: Callable[[NDArrayf], np.floating[Any]] = np.nanmedian,
- bin_apply_method: Literal["linear"] | Literal["per_bin"] = "linear",
- subsample: float | int = 5e5,
- ):
- """
- Instantiate a deramping (2D polynomial) bias correction.
-
- :param poly_order: Order of the 2D polynomial to fit.
- :param fit_or_bin: Whether to fit or bin. Use "fit" to correct by optimizing a function or
- "bin" to correct with a statistic of central tendency in defined bins.
- :param fit_func: Function to fit to the bias with variables later passed in .fit().
- :param fit_optimizer: Optimizer to minimize the function.
- :param bin_sizes: Size (if integer) or edges (if iterable) for binning variables later passed in .fit().
- :param bin_statistic: Statistic of central tendency (e.g., mean) to apply during the binning.
- :param bin_apply_method: Method to correct with the binned statistics, either "linear" to interpolate linearly
- between bins, or "per_bin" to apply the statistic for each bin.
- :param subsample: Subsample the input for speed-up. <1 is parsed as a fraction. >1 is a pixel count.
- """
- super().__init__(
- fit_or_bin,
- fit_func,
- fit_optimizer,
- bin_sizes,
- bin_statistic,
- bin_apply_method,
- ["xx", "yy"],
- subsample,
- )
- self._meta["poly_order"] = poly_order
- self._needs_vars = False
-
- def _fit_func( # type: ignore
- self,
- ref_dem: NDArrayf,
- tba_dem: NDArrayf,
- inlier_mask: NDArrayb,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: dict[str, NDArrayf] | None = None,
- weights: None | NDArrayf = None,
- verbose: bool = False,
- **kwargs,
- ) -> None:
-
- # The number of parameters in the first guess defines the polynomial order when calling np.polyval2d
- p0 = np.ones(shape=((self._meta["poly_order"] + 1) * (self._meta["poly_order"] + 1)))
-
- # Coordinates (we don't need the actual ones, just array coordinates)
- xx, yy = np.meshgrid(np.arange(0, ref_dem.shape[1]), np.arange(0, ref_dem.shape[0]))
-
- super()._fit_func(
- ref_dem=ref_dem,
- tba_dem=tba_dem,
- inlier_mask=inlier_mask,
- bias_vars={"xx": xx, "yy": yy},
- transform=transform,
- crs=crs,
- weights=weights,
- verbose=verbose,
- p0=p0,
- **kwargs,
- )
-
- def _apply_func(
- self,
- dem: NDArrayf,
- transform: rio.transform.Affine,
- crs: rio.crs.CRS,
- bias_vars: None | dict[str, NDArrayf] = None,
- **kwargs: Any,
- ) -> tuple[NDArrayf, rio.transform.Affine]:
-
- # Define the coordinates for applying the correction
- xx, yy = np.meshgrid(np.arange(0, dem.shape[1]), np.arange(0, dem.shape[0]))
-
- return super()._apply_func(dem=dem, transform=transform, crs=crs, bias_vars={"xx": xx, "yy": yy}, **kwargs)
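-
-
- # Illustrative usage sketch for Deramp: fits a 2D polynomial of the chosen order to the elevation
- # differences along array coordinates (xx, yy); ref_dem and tba_dem are assumed loaded as in the
- # BiasCorr1D sketch above.
- deramp = Deramp(poly_order=1)
- deramp.fit(ref_dem, tba_dem)
- corrected = deramp.apply(tba_dem)
- # With poly_order=1, the fit estimates (1 + 1) * (1 + 1) = 4 parameters (see p0 in _fit_func above).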
diff --git a/xdem/coreg/filters.py b/xdem/coreg/filters.py
deleted file mode 100644
index a8e1ac3d..00000000
--- a/xdem/coreg/filters.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Coregistration filters (coming soon)."""
diff --git a/xdem/coreg/workflows.py b/xdem/coreg/workflows.py
deleted file mode 100644
index 44e66471..00000000
--- a/xdem/coreg/workflows.py
+++ /dev/null
@@ -1,294 +0,0 @@
-"""Coregistration pipelines pre-defined with convenient user inputs and parameters."""
-
-from __future__ import annotations
-
-import geoutils as gu
-import matplotlib.pyplot as plt
-import numpy as np
-import pandas as pd
-import rasterio as rio
-from geoutils._typing import Number
-from geoutils.raster import RasterType
-
-from xdem._typing import NDArrayf
-from xdem.coreg.affine import NuthKaab, VerticalShift
-from xdem.coreg.base import Coreg
-from xdem.dem import DEM
-from xdem.spatialstats import nmad
-from xdem.terrain import slope
-
-
-def create_inlier_mask(
- src_dem: RasterType,
- ref_dem: RasterType,
- shp_list: list[str | gu.Vector | None] | tuple[str | gu.Vector] | tuple[()] = (),
- inout: list[int] | tuple[int] | tuple[()] = (),
- filtering: bool = True,
- dh_max: Number | None = None,
- nmad_factor: Number = 5,
- slope_lim: list[Number] | tuple[Number, Number] = (0.1, 40),
-) -> NDArrayf:
- """
- Create a mask of inlier pixels to be used for coregistration. The following pixels can be excluded:
- - pixels within polygons of file(s) in shp_list (with corresponding inout element set to 1) - useful for \
- masking unstable terrain like glaciers.
- - pixels outside polygons of file(s) in shp_list (with corresponding inout element set to -1) - useful to \
-delineate a known stable area.
- - pixels whose absolute dh (=src-ref) is larger than a given threshold
- - pixels where dh differs from the median dh by more than a set threshold (with \
-filtering=True and nmad_factor)
- - pixels with low/high slope (with filtering=True and set slope_lim values)
-
- :param src_dem: the source DEM to be coregistered, as a Raster or DEM instance.
- :param ref_dem: the reference DEM, must have same grid as src_dem. To be used for filtering only.
- :param shp_list: a list of one or several paths to shapefiles to use for masking. Default is none.
- :param inout: a list of same size as shp_list. For each shapefile, set to 1 (resp. -1) to specify whether \
-to mask inside (resp. outside) of the polygons. Defaults to masking inside polygons for all shapefiles.
- :param filtering: if set to True, pixels will be removed based on dh values or slope (see next arguments).
- :param dh_max: remove pixels where abs(src - ref) is more than this value.
- :param nmad_factor: remove pixels where (src - ref) deviates from the median by more than nmad_factor * NMAD.
- :param slope_lim: a list/tuple of min and max slope values, in degrees. Pixels outside this slope range will \
-be excluded.
-
- :returns: A boolean array of the same shape as src_dem, set to True for inlier pixels.
- """
- # - Sanity check on inputs - #
- # Check correct input type of shp_list
- if not isinstance(shp_list, (list, tuple)):
- raise ValueError("`shp_list` must be a list/tuple")
- for el in shp_list:
- if not isinstance(el, (str, gu.Vector)):
- raise ValueError("`shp_list` must be a list/tuple of strings or geoutils.Vector instance")
-
- # Check correct input type of inout
- if not isinstance(inout, (list, tuple)):
- raise ValueError("`inout` must be a list/tuple")
-
- if len(shp_list) > 0:
- if len(inout) == 0:
- # Fill inout with 1
- inout = [1] * len(shp_list)
- elif len(inout) == len(shp_list):
- # Check that inout contains only 1 and -1
- not_valid = [el for el in np.unique(inout) if ((el != 1) & (el != -1))]
- if len(not_valid) > 0:
- raise ValueError("`inout` must contain only 1 and -1")
- else:
- raise ValueError("`inout` must be of same length as shp")
-
- # Check slope_lim type
- if not isinstance(slope_lim, (list, tuple)):
- raise ValueError("`slope_lim` must be a list/tuple")
- if len(slope_lim) != 2:
- raise ValueError("`slope_lim` must contain 2 elements")
- for el in slope_lim:
- if (not isinstance(el, (int, float, np.integer, np.floating))) or (el < 0) or (el > 90):
- raise ValueError("`slope_lim` must be a tuple/list of 2 elements in the range [0-90]")
-
- # Initialize inlier_mask with no masked pixel
- inlier_mask = np.ones(src_dem.data.shape, dtype="bool")
-
- # - Create mask based on shapefiles - #
- if len(shp_list) > 0:
- for k, shp in enumerate(shp_list):
- if isinstance(shp, str):
- outlines = gu.Vector(shp)
- else:
- outlines = shp
- mask_temp = outlines.create_mask(src_dem, as_array=True).reshape(np.shape(inlier_mask))
- # Append mask for given shapefile to final mask
- if inout[k] == 1:
- inlier_mask[mask_temp] = False
- elif inout[k] == -1:
- inlier_mask[~mask_temp] = False
-
- # - Filter possible outliers - #
- if filtering:
- # Calculate dDEM
- ddem = src_dem - ref_dem
-
- # Remove gross blunders with absolute threshold
- if dh_max is not None:
- inlier_mask[np.abs(ddem.data) > dh_max] = False
-
- # Remove blunders where dh deviates from the median by more than nmad_factor * NMAD
- nmad_val = nmad(ddem.data[inlier_mask])
- med = np.ma.median(ddem.data[inlier_mask])
- inlier_mask = inlier_mask & (np.abs(ddem.data - med) < nmad_factor * nmad_val).filled(False)
-
- # Exclude steep slopes for coreg
- slp = slope(ref_dem)
- inlier_mask[slp.data < slope_lim[0]] = False
- inlier_mask[slp.data > slope_lim[1]] = False
-
- return inlier_mask
-
-
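- # Illustrative usage sketch for create_inlier_mask, masking glacier outlines and filtering
- # outliers; the example file names are assumed available through xdem.examples.
- import geoutils as gu
- import xdem
-
- ref = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
- src = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem")).reproject(ref, silent=True)
- glacier_outlines = gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines"))
-
- inlier_mask = create_inlier_mask(
-     src_dem=src,
-     ref_dem=ref,
-     shp_list=[glacier_outlines],  # mask inside the glacier polygons (inout defaults to 1)
-     nmad_factor=5,
-     slope_lim=(0.1, 40),
- )
-
-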
-def dem_coregistration(
- src_dem_path: str | RasterType,
- ref_dem_path: str | RasterType,
- out_dem_path: str | None = None,
- coreg_method: Coreg | None = NuthKaab() + VerticalShift(),
- grid: str = "ref",
- resample: bool = False,
- resampling: rio.warp.Resampling | None = rio.warp.Resampling.bilinear,
- shp_list: list[str | gu.Vector] | tuple[str | gu.Vector] | tuple[()] = (),
- inout: list[int] | tuple[int] | tuple[()] = (),
- filtering: bool = True,
- dh_max: Number | None = None,
- nmad_factor: Number = 5,
- slope_lim: list[Number] | tuple[Number, Number] = (0.1, 40),
- plot: bool = False,
- out_fig: str | None = None,
- verbose: bool = False,
-) -> tuple[DEM, Coreg, pd.DataFrame, NDArrayf]:
- """
- A one-line function to coregister a selected DEM to a reference DEM.
-
- Reads both DEMs, reprojects them onto the same grid, masks pixels based on shapefile(s), filters steep slopes and \
-outliers, runs the coregistration, and returns the coregistered DEM and some statistics.
- Optionally, save the coregistered DEM to file and make a figure.
- For details on masking options, see `create_inlier_mask` function.
-
- :param src_dem_path: Path to the input DEM to be coregistered
- :param ref_dem_path: Path to the reference DEM
- :param out_dem_path: Path where to save the coregistered DEM. If set to None (default), will not save to file.
- :param coreg_method: The xdem coregistration method, or pipeline.
- :param grid: The grid to be used during coregistration, set either to "ref" or "src".
- :param resample: If set to True, will reproject output Raster on the same grid as input. Otherwise, only \
-the array/transform will be updated (if possible) and no resampling is done. Useful to avoid spreading data gaps.
- :param resampling: The resampling algorithm to be used if `resample` is True. Default is bilinear.
- :param shp_list: A list of one or several paths to shapefiles to use for masking.
- :param inout: A list of same size as shp_list. For each shapefile, set to 1 (resp. -1) to specify whether \
-to mask inside (resp. outside) of the polygons. Defaults to masking inside polygons for all shapefiles.
- :param filtering: If set to True, filtering will be applied prior to coregistration.
- :param dh_max: Remove pixels where abs(src - ref) is more than this value.
- :param nmad_factor: Remove pixels where (src - ref) deviates from the median by more than nmad_factor * NMAD.
- :param slope_lim: A list/tuple of min and max slope values, in degrees. Pixels outside this slope range will \
-be excluded.
- :param plot: Set to True to plot a figure of elevation diff before/after coregistration.
- :param out_fig: Path to the output figure. If None will display to screen.
- :param verbose: Set to True to print details on screen during coregistration.
-
- :returns: A tuple containing 1) coregistered DEM as an xdem.DEM instance 2) the coregistration method \
-3) DataFrame of coregistration statistics (count of obs, median and NMAD over stable terrain) before and after \
-coregistration and 4) the inlier_mask used.
- """
- # Check inputs
- if not isinstance(coreg_method, Coreg):
- raise ValueError("`coreg_method` must be an xdem.coreg instance (e.g. xdem.coreg.NuthKaab())")
-
- if isinstance(ref_dem_path, str):
- if not isinstance(src_dem_path, str):
- raise ValueError(
- f"`ref_dem_path` is string but `src_dem_path` has type {type(src_dem_path)}."
- "Both must have same type."
- )
- elif isinstance(ref_dem_path, gu.Raster):
- if not isinstance(src_dem_path, gu.Raster):
- raise ValueError(
- f"`ref_dem_path` is of Raster type but `src_dem_path` has type {type(src_dem_path)}."
- "Both must have same type."
- )
- else:
- raise ValueError("`ref_dem_path` must be either a string or a Raster")
-
- if grid not in ["ref", "src"]:
- raise ValueError(f"`grid` must be either 'ref' or 'src' - currently set to {grid}")
-
- # Load both DEMs
- if verbose:
- print("Loading and reprojecting input data")
-
- if isinstance(ref_dem_path, str):
- if grid == "ref":
- ref_dem, src_dem = gu.raster.load_multiple_rasters([ref_dem_path, src_dem_path], ref_grid=0)
- elif grid == "src":
- ref_dem, src_dem = gu.raster.load_multiple_rasters([ref_dem_path, src_dem_path], ref_grid=1)
- else:
- ref_dem = ref_dem_path
- src_dem = src_dem_path
- if grid == "ref":
- src_dem = src_dem.reproject(ref_dem, silent=True)
- elif grid == "src":
- ref_dem = ref_dem.reproject(src_dem, silent=True)
-
- # Convert to DEM instance with Float32 dtype
- # TODO: Could only convert types int into float, but any other float dtype should yield very similar results
- ref_dem = DEM(ref_dem.astype(np.float32))
- src_dem = DEM(src_dem.astype(np.float32))
-
- # Create raster mask
- if verbose:
- print("Creating mask of inlier pixels")
-
- inlier_mask = create_inlier_mask(
- src_dem,
- ref_dem,
- shp_list=shp_list,
- inout=inout,
- filtering=filtering,
- dh_max=dh_max,
- nmad_factor=nmad_factor,
- slope_lim=slope_lim,
- )
-
- # Calculate dDEM
- ddem = src_dem - ref_dem
-
- # Calculate dDEM statistics on pixels used for coreg
- inlier_data = ddem.data[inlier_mask].compressed()
- nstable_orig, mean_orig = len(inlier_data), np.mean(inlier_data)
- med_orig, nmad_orig = np.median(inlier_data), nmad(inlier_data)
-
- # Coregister to reference - Note: this will spread NaN
- coreg_method.fit(ref_dem, src_dem, inlier_mask, verbose=verbose)
- dem_coreg = coreg_method.apply(src_dem, resample=resample, resampling=resampling)
-
- # Calculate coregistered ddem (might need resampling if resample set to False), needed for stats and plot only
- ddem_coreg = dem_coreg.reproject(ref_dem, silent=True) - ref_dem
-
- # Calculate new stats
- inlier_data = ddem_coreg.data[inlier_mask].compressed()
- nstable_coreg, mean_coreg = len(inlier_data), np.mean(inlier_data)
- med_coreg, nmad_coreg = np.median(inlier_data), nmad(inlier_data)
-
- # Plot results
- if plot:
- # Max colorbar value - 98th percentile rounded to nearest 5
- vmax = np.percentile(np.abs(ddem.data.compressed()), 98) // 5 * 5
-
- plt.figure(figsize=(11, 5))
-
- ax1 = plt.subplot(121)
- plt.imshow(ddem.data.squeeze(), cmap="coolwarm_r", vmin=-vmax, vmax=vmax)
- cb = plt.colorbar()
- cb.set_label("Elevation change (m)")
- ax1.set_title(f"Before coreg\n\nmean = {mean_orig:.1f} m - med = {med_orig:.1f} m - NMAD = {nmad_orig:.1f} m")
-
- ax2 = plt.subplot(122, sharex=ax1, sharey=ax1)
- plt.imshow(ddem_coreg.data.squeeze(), cmap="coolwarm_r", vmin=-vmax, vmax=vmax)
- cb = plt.colorbar()
- cb.set_label("Elevation change (m)")
- ax2.set_title(
- f"After coreg\n\n\nmean = {mean_coreg:.1f} m - med = {med_coreg:.1f} m - NMAD = {nmad_coreg:.1f} m"
- )
-
- plt.tight_layout()
- if out_fig is None:
- plt.show()
- else:
- plt.savefig(out_fig, dpi=200)
- plt.close()
-
- # Save coregistered DEM
- if out_dem_path is not None:
- dem_coreg.save(out_dem_path, tiled=True)
-
- # Save stats to DataFrame
- out_stats = pd.DataFrame(
- ((nstable_orig, med_orig, nmad_orig, nstable_coreg, med_coreg, nmad_coreg),),
- columns=("nstable_orig", "med_orig", "nmad_orig", "nstable_coreg", "med_coreg", "nmad_coreg"),
- )
-
- return dem_coreg, coreg_method, out_stats, inlier_mask
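-
-
- # Illustrative usage sketch for dem_coregistration, the one-line workflow defined above, using the
- # default NuthKaab() + VerticalShift() pipeline; file names are assumed available through xdem.examples.
- import xdem
-
- dem_coreg, coreg_method, out_stats, inlier_mask = dem_coregistration(
-     src_dem_path=xdem.examples.get_path("longyearbyen_tba_dem"),
-     ref_dem_path=xdem.examples.get_path("longyearbyen_ref_dem"),
-     shp_list=[xdem.examples.get_path("longyearbyen_glacier_outlines")],
-     verbose=True,
- )
- print(out_stats)  # count, median and NMAD on stable terrain, before and after coregistration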
diff --git a/xdem/vcrs.py b/xdem/vcrs.py
deleted file mode 100644
index ab83c6b3..00000000
--- a/xdem/vcrs.py
+++ /dev/null
@@ -1,333 +0,0 @@
-"""Routines for vertical CRS transformation (fully based on pyproj)."""
-from __future__ import annotations
-
-import http.client
-import os
-import pathlib
-import warnings
-from typing import Literal, TypedDict
-
-import pyproj
-from pyproj import CRS
-from pyproj.crs import BoundCRS, CompoundCRS, GeographicCRS, VerticalCRS
-from pyproj.crs.coordinate_system import Ellipsoidal3DCS
-from pyproj.crs.enums import Ellipsoidal3DCSAxis
-from pyproj.transformer import TransformerGroup
-
-from xdem._typing import MArrayf, NDArrayf
-
-# Sources for defining vertical references:
-# AW3D30: https://www.eorc.jaxa.jp/ALOS/en/aw3d30/aw3d30v11_format_e.pdf
-# SRTMGL1: https://lpdaac.usgs.gov/documents/179/SRTM_User_Guide_V3.pdf
-# SRTMv4.1: http://www.cgiar-csi.org/data/srtm-90m-digital-elevation-database-v4-1
-# ASTGTM2/ASTGTM3: https://lpdaac.usgs.gov/documents/434/ASTGTM_User_Guide_V3.pdf
-# NASADEM: https://lpdaac.usgs.gov/documents/592/NASADEM_User_Guide_V1.pdf, HGTS is ellipsoid, HGT is EGM96 geoid !!
-# ArcticDEM (mosaic and strips): https://www.pgc.umn.edu/data/arcticdem/
-# REMA (mosaic and strips): https://www.pgc.umn.edu/data/rema/
-# TanDEM-X 90m global: https://geoservice.dlr.de/web/dataguide/tdm90/
-# COPERNICUS DEM: https://spacedata.copernicus.eu/web/cscda/dataset-details?articleId=394198
-vcrs_dem_products = {
- "ArcticDEM/REMA/EarthDEM": "Ellipsoid",
- "TDM1": "Ellipsoid",
- "NASADEM-HGTS": "Ellipsoid",
- "AW3D30": "EGM96",
- "SRTMv4.1": "EGM96",
- "ASTGTM2": "EGM96",
- "ASTGTM3": "EGM96",
- "NASADEM-HGT": "EGM96",
- "COPDEM": "EGM08",
-}
-
-
-def _parse_vcrs_name_from_product(product: str) -> str | None:
- """
- Parse vertical CRS name from DEM product name.
-
- :param product: Product name (typically from satimg.parse_metadata_from_fn).
-
- :return: vcrs_name: Vertical CRS name.
- """
-
- if product in vcrs_dem_products.keys():
- vcrs_name = vcrs_dem_products[product]
- else:
- vcrs_name = None
-
- return vcrs_name
-
-
-def _build_ccrs_from_crs_and_vcrs(crs: CRS, vcrs: CRS | Literal["Ellipsoid"]) -> CompoundCRS | CRS:
- """
- Build a compound CRS from a horizontal CRS and a vertical CRS.
-
- :param crs: Horizontal CRS.
- :param vcrs: Vertical CRS.
-
- :return: Compound CRS (horizontal + vertical).
- """
-
- # If a vertical CRS was passed, build a compound CRS with horizontal + vertical
- # This requires transforming the horizontal CRS to 2D in case it was 3D
- # Using CRS() because rasterio.CRS does not allow calling .name otherwise...
- if isinstance(vcrs, CRS):
- # If pyproj >= 3.5.1, we can use CRS.to_2d()
- from packaging.version import Version
-
- if Version(pyproj.__version__) > Version("3.5.0"):
- crs_from = CRS(crs).to_2d()
- ccrs = CompoundCRS(
- name="Horizontal: " + CRS(crs).name + "; Vertical: " + vcrs.name,
- components=[crs_from, vcrs],
- )
- # Otherwise, we have to raise an error if the horizontal CRS is already 3D
- else:
- crs_from = CRS(crs)
- # If 3D
- if len(crs_from.axis_info) > 2:
- raise NotImplementedError(
- "pyproj >= 3.5.1 is required to demote a 3D CRS to 2D and be able to compound "
- "with a new vertical CRS. Update your dependencies or pass the 2D source CRS "
- "manually."
- )
- # If 2D
- else:
- ccrs = CompoundCRS(
- name="Horizontal: " + CRS(crs).name + "; Vertical: " + vcrs.name,
- components=[crs_from, vcrs],
- )
-
- # Else if "Ellipsoid" was passed, there is no vertical reference
- # We still have to return the CRS in 3D
- elif isinstance(vcrs, str) and vcrs.lower() == "ellipsoid":
- ccrs = CRS(crs).to_3d()
- else:
- raise ValueError("Invalid vcrs given. Must be a vertical CRS or the literal string 'Ellipsoid'.")
-
- return ccrs
-
-
-def _build_vcrs_from_grid(grid: str, old_way: bool = False) -> CompoundCRS:
- """
- Build a compound CRS from a vertical CRS grid path.
-
- :param grid: Path to grid for vertical reference.
- :param old_way: Whether to use the new or old way of building the compound CRS with pyproj (for testing purposes).
-
- :return: Compound CRS (horizontal + vertical).
- """
-
- if not os.path.exists(os.path.join(pyproj.datadir.get_data_dir(), grid)):
- warnings.warn(
- "Grid not found in "
- + str(pyproj.datadir.get_data_dir())
- + ". Attempting to download from https://cdn.proj.org/..."
- )
- from pyproj.sync import _download_resource_file
-
- try:
- _download_resource_file(
- file_url=os.path.join("https://cdn.proj.org/", grid),
- short_name=grid,
- directory=pyproj.datadir.get_data_dir(),
- verbose=False,
- )
- except http.client.InvalidURL:
- raise ValueError(
- "The provided grid '{}' does not exist at https://cdn.proj.org/. "
- "Provide an existing grid.".format(grid)
- )
-
- # The old way: see https://gis.stackexchange.com/questions/352277/.
- if old_way:
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore", module="pyproj")
- ccrs = pyproj.Proj(init="EPSG:4326", geoidgrids=grid).crs
- bound_crs = ccrs.sub_crs_list[1]
-
- # The clean way
- else:
- # First, we build a bound CRS (the vertical CRS relative to geographic)
- vertical_crs = VerticalCRS(
- name="unknown using geoidgrids=" + grid, datum='VDATUM["unknown using geoidgrids=' + grid + '"]'
- )
- geographic3d_crs = GeographicCRS(
- name="WGS 84",
- ellipsoidal_cs=Ellipsoidal3DCS(axis=Ellipsoidal3DCSAxis.LATITUDE_LONGITUDE_HEIGHT),
- )
- bound_crs = BoundCRS(
- source_crs=vertical_crs,
- target_crs=geographic3d_crs,
- transformation={
- "$schema": "https://proj.org/schemas/v0.2/projjson.schema.json",
- "type": "Transformation",
- "name": "unknown to WGS84 ellipsoidal height",
- "source_crs": vertical_crs.to_json_dict(),
- "target_crs": geographic3d_crs.to_json_dict(),
- "method": {"name": "GravityRelatedHeight to Geographic3D"},
- "parameters": [
- {
- "name": "Geoid (height correction) model file",
- "value": grid,
- "id": {"authority": "EPSG", "code": 8666},
- }
- ],
- },
- )
-
- return bound_crs
-
-
-# Define types of common Vertical CRS dictionary
-class VCRSMetaDict(TypedDict, total=False):
- grid: str
- epsg: int
-
-
-_vcrs_meta: dict[str, VCRSMetaDict] = {
- "EGM08": {"grid": "us_nga_egm08_25.tif", "epsg": 3855}, # EGM2008 at 2.5 minute resolution
- "EGM96": {"grid": "us_nga_egm96_15.tif", "epsg": 5773}, # EGM1996 at 15 minute resolution
-}
-
-
-def _vcrs_from_crs(crs: CRS) -> CRS:
- """Get the vertical CRS from a CRS."""
-
- # Check if CRS is 3D
- if len(crs.axis_info) > 2:
-
- # Check if CRS has a vertical compound
- if any(subcrs.is_vertical for subcrs in crs.sub_crs_list):
- # Then we get the first vertical CRS (should be only one anyway)
- vcrs = [subcrs for subcrs in crs.sub_crs_list if subcrs.is_vertical][0]
- # Otherwise, it's a 3D CRS based on an ellipsoid
- else:
- vcrs = "Ellipsoid"
- # Otherwise, the CRS is 2D and there is no vertical CRS
- else:
- vcrs = None
-
- return vcrs
-
-
-def _vcrs_from_user_input(
- vcrs_input: Literal["Ellipsoid"] | Literal["EGM08"] | Literal["EGM96"] | str | pathlib.Path | CRS | int,
-) -> VerticalCRS | BoundCRS | Literal["Ellipsoid"]:
- """
- Parse vertical CRS from user input.
-
- :param vcrs_input: Vertical coordinate reference system either as a name ("Ellipsoid", "EGM08", "EGM96"),
- an EPSG code or pyproj.crs.VerticalCRS, or a path to a PROJ grid file (https://github.com/OSGeo/PROJ-data).
-
- :return: Vertical CRS.
- """
-
- # Raise errors if input type is wrong (allow CRS instead of VerticalCRS for broader error messages below)
- if not isinstance(vcrs_input, (str, pathlib.Path, CRS, int)):
- raise TypeError(f"New vertical CRS must be a string, path or VerticalCRS, received {type(vcrs_input)}.")
-
- # If input is ellipsoid
- if (
- (isinstance(vcrs_input, str) and (vcrs_input.lower() == "ellipsoid" or vcrs_input.upper() == "WGS84"))
- or (isinstance(vcrs_input, int) and vcrs_input in [4326, 4979])
- or (isinstance(vcrs_input, CRS) and vcrs_input.to_epsg() in [4326, 4979])
- ):
- return "Ellipsoid"
-
- # Define CRS in case EPSG or CRS was passed
- if isinstance(vcrs_input, (int, CRS)):
- if isinstance(vcrs_input, int):
- vcrs = CRS.from_epsg(vcrs_input)
- else:
- vcrs = vcrs_input
-
- # Raise errors if the CRS constructed is not vertical or has other components
- if isinstance(vcrs, CRS) and not vcrs.is_vertical:
- raise ValueError(
- "New vertical CRS must have a vertical axis, '{}' does not "
- "(check with `CRS.is_vertical`).".format(vcrs.name)
- )
- elif isinstance(vcrs, CRS) and vcrs.is_vertical and len(vcrs.axis_info) > 2:
- warnings.warn(
- "New vertical CRS has a vertical dimension but also other components, "
- "extracting the vertical reference only."
- )
- vcrs = _vcrs_from_crs(vcrs)
-
- # If a string was passed
- else:
- # If a name is passed, define CRS based on dict
- if isinstance(vcrs_input, str) and vcrs_input.upper() in _vcrs_meta.keys():
- vcrs_meta = _vcrs_meta[vcrs_input]
- vcrs = CRS.from_epsg(vcrs_meta["epsg"])
- # Otherwise, attempt to read a grid from the string
- else:
- if isinstance(vcrs_input, pathlib.Path):
- grid = vcrs_input.name
- else:
- grid = vcrs_input
- vcrs = _build_vcrs_from_grid(grid=grid)
-
- return vcrs
-
-
-def _grid_from_user_input(vcrs_input: str | pathlib.Path | int | CRS) -> str | None:
-
- # If a grid or name was passed, get grid name
- if isinstance(vcrs_input, (str, pathlib.Path)):
- # If the string is within the supported names
- if isinstance(vcrs_input, str) and vcrs_input in _vcrs_meta.keys():
- grid = _vcrs_meta[vcrs_input]["grid"]
- # If it's a pathlib path
- elif isinstance(vcrs_input, pathlib.Path):
- grid = vcrs_input.name
- # Or an ellipsoid
- elif vcrs_input.lower() == "ellipsoid":
- grid = None
- # Or a string path
- else:
- grid = vcrs_input
- # Otherwise, there is none
- else:
- grid = None
-
- return grid
-
-
-def _transform_zz(
- crs_from: CRS, crs_to: CRS, xx: NDArrayf, yy: NDArrayf, zz: MArrayf | NDArrayf | int | float
-) -> MArrayf | NDArrayf | int | float:
- """
- Transform elevation to a new 3D CRS.
-
- :param crs_from: Source CRS.
- :param crs_to: Destination CRS.
- :param xx: X coordinates.
- :param yy: Y coordinates.
- :param zz: Z coordinates.
-
- :return: Transformed Z coordinates.
- """
-
- # Find all possible transforms
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore", "Best transformation is not available")
- trans_group = TransformerGroup(crs_from=crs_from, crs_to=crs_to, always_xy=True)
-
- # If the best available grid is not on disk, download it and re-initiate the transformer group
- if not trans_group.best_available:
- trans_group.download_grids()
- trans_group = TransformerGroup(crs_from=crs_from, crs_to=crs_to, always_xy=True)
-
- # If the best available grid is still not there, raise a warning
- if not trans_group.best_available:
- warnings.warn(
- category=UserWarning,
- message="Best available grid for transformation could not be downloaded, "
- "applying the next best available (caution: might apply no transform at all).",
- )
- transformer = trans_group.transformers[0]
-
- # Will preserve the mask of the masked-array since pyproj 3.4
- zz_trans = transformer.transform(xx, yy, zz)[2]
-
- return zz_trans
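-
-
- # Illustrative end-to-end sketch tying the private helpers of this module together; the UTM zone,
- # coordinates and elevation below are arbitrary values chosen for illustration.
- import numpy as np
- from pyproj import CRS
-
- vcrs = _vcrs_from_user_input("EGM96")  # resolved to EPSG:5773 via _vcrs_meta
- ccrs_ellipsoid = _build_ccrs_from_crs_and_vcrs(CRS.from_epsg(32633), "Ellipsoid")
- ccrs_egm96 = _build_ccrs_from_crs_and_vcrs(CRS.from_epsg(32633), vcrs)
-
- xx, yy = np.array([500000.0]), np.array([8670000.0])
- zz_ellipsoid = np.array([200.0])
- # Transform ellipsoidal heights to EGM96 geoid heights (may download the geoid grid on first use)
- zz_egm96 = _transform_zz(crs_from=ccrs_ellipsoid, crs_to=ccrs_egm96, xx=xx, yy=yy, zz=zz_ellipsoid)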