
Conversation

@weiji14 weiji14 (Owner) commented Jun 14, 2019

Extends #146 to actually use the Antarctic Snow Accumulation dataset by Arthern et al. 2006 in our adapted Enhanced Super Resolution Generative Adversarial Network model!

4 input ESRGAN model with newly added Antarctic Snow Accumulation input

TODO:

  • Establish a 'pre-' baseline without the Snow Accumulation input (7dec5b5)
  • Extend the DeepBedMap ESRGAN model to take in Snow Accumulation as input W3 (054e295)
  • Retune hyperparameters, set new hyperparameter defaults, update figures (83e956d, 75266fc, e8ae274)

@weiji14 weiji14 added the enhancement ✨ and model 🏗️ labels Jun 14, 2019
@weiji14 weiji14 added this to the v0.9.0 milestone Jun 14, 2019
@weiji14 weiji14 self-assigned this Jun 14, 2019
@review-notebook-app commented

Check out this pull request on ReviewNB: https://app.reviewnb.com/weiji14/deepbedmap/pull/151

@weiji14 weiji14 force-pushed the model/with_arthern2006accumulation branch from 7ceba5a to c882753 on June 14, 2019 at 13:02
Some pre-emptive work: training a new 'baseline' model before we add the Arthern2006Accumulation dataset as a new input to our Enhanced Super Resolution Generative Adversarial Network (ESRGAN)! Using a new quilt hash and an updated ONNX graph. Also replaced various deprecated functions in optuna and comet-ml with their new equivalents. For better qualitative comparison, we're now also logging the test area's predicted DEM grid image and figure to comet-ml!
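For reference, a minimal sketch of how that comet-ml logging might look (the experiment setup, array, and names here are hypothetical stand-ins, not the actual notebook code):

```python
import matplotlib.pyplot as plt
import numpy as np
from comet_ml import Experiment

# Hypothetical experiment; assumes COMET_API_KEY is set in the environment
experiment = Experiment(project_name="deepbedmap")
predicted_dem = np.random.RandomState(42).rand(32, 32)  # stand-in for the model's DEM output

# Log the raw grid as an image for quick visual checks in the comet-ml UI
experiment.log_image(image_data=predicted_dem, name="predicted_dem_grid")

# Log a proper matplotlib figure (with a colourbar) alongside it
fig, ax = plt.subplots()
mappable = ax.imshow(predicted_dem, cmap="BrBG")
fig.colorbar(mappable, ax=ax, label="Elevation (m)")
experiment.log_figure(figure_name="predicted_dem_figure", figure=fig)
```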

This "before" baseline is needed as we've made heaps of changes since v0.8.0, see e.g. e27aafb...3e2c512 or https://www.comet.ml/weiji14/deepbedmap/47e92ab8663c48afae293e0f019c03d4/5feb810e58414a779e79d34abf4037a5/compare. Granted, we've only achieved 88.94 this time (see https://www.comet.ml/weiji14/deepbedmap/47e92ab8663c48afae293e0f019c03d4), but that's on a few experiments runs (not very patient am I), using untuned hyperparameters from our previous baseline at e27ac4a.
@weiji14 weiji14 force-pushed the model/with_arthern2006accumulation branch from c882753 to 7dec5b5 on June 14, 2019 at 13:35
Our DeepBedMap Enhanced Super Resolution Generative Adversarial Network's Generator Model now takes in an extra W3 input - Antarctic Snow Accumulation! Updated functions in srgan_train.ipynb and deepbedmap.ipynb for this, plus some of the markdown documentation in srgan_train.ipynb (especially the YUML figures). Unit tests have been updated, and we've included the new W3 Antarctic Snow Accumulation dataset in a new upload to the quilt 'test' package covering the 2007tx.nc grid's extent, changing the quilt dataset hash from df0d28b24283c642f5dbe1a9baa22b605d8ae02ec1875c2edd067a614e99e5a4 to ad18f48a7f606b19a0db92fc249e10a85765fc5dbd2f952db77a67530a88383d.

The ESRGAN model was trained for 20 epochs only, just to ensure that everything runs smoothly and to update the ONNX graph. The RMSE_test value is at 100.32 (see https://www.comet.ml/weiji14/deepbedmap/43dfae28dfd340119928387b402ae24b), but of course we can do better than that! It's good to see that one epoch still takes only about 7.5 seconds to run, even though the extra W3 input increases the parameter count from 9088641 to 9107393. Maybe it's because we're down from 2499 to 2347 training tiles, but perhaps also the GPU works faster on a size 128 concatenate layer than a size 96 one? TODO: fix issue with deepbedmap.save_array_to_grid possibly transferring wrong coordinates when going from numpy.array -> geotiff -> netcdf.
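A minimal sketch of how such a four-input concatenation at the head of the generator might look in Chainer (layer names and channel counts are illustrative assumptions, not the actual DeepBedMap architecture, and all inputs are assumed to already share a common spatial size):

```python
import chainer
import chainer.functions as F
import chainer.links as L


class FourInputHead(chainer.Chain):
    """Illustrative input block: each of the 4 inputs gets its own convolution,
    and the resulting feature maps are concatenated along the channel axis."""

    def __init__(self, out_channels: int = 32):
        super().__init__()
        with self.init_scope():
            # One convolution per input; W3 (snow accumulation) is the new one
            self.conv_x = L.Convolution2D(None, out_channels, ksize=3, pad=1)
            self.conv_w1 = L.Convolution2D(None, out_channels, ksize=3, pad=1)
            self.conv_w2 = L.Convolution2D(None, out_channels, ksize=3, pad=1)
            self.conv_w3 = L.Convolution2D(None, out_channels, ksize=3, pad=1)

    def forward(self, x, w1, w2, w3):
        # 4 inputs x 32 channels each -> a size 128 concatenated feature map
        # (with only 3 inputs this would have been size 96)
        feats = [self.conv_x(x), self.conv_w1(w1), self.conv_w2(w2), self.conv_w3(w3)]
        return F.concat(feats, axis=1)
```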
weiji14 added 4 commits June 15, 2019 17:31
Patch 054e295 so that the RMSE_test calculated in deepbedmap.ipynb matches that of srgan_train.get_deepbedmap_test_result. Basically, run pygmt.grdtrack on an xarray.DataArray grid only, rather than on an xarray.DataArray grid in srgan_train.ipynb and a NetCDF file grid in deepbedmap.ipynb, which produced slightly different results! The main issue with this is that the grdtrack algorithm now samples fewer points than before, down from 38112 to 37829, because the edges of the grid are not properly sampled. The issue is documented in #152.
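The consistency fix boils down to sampling the same in-memory grid object in both places. A minimal sketch with synthetic stand-ins for the predicted DEM and the groundtruth track points (none of this is the actual notebook code):

```python
import numpy as np
import pandas as pd
import pygmt
import xarray as xr

# Synthetic predicted DEM grid and groundtruth track points, for illustration
rng = np.random.RandomState(seed=42)
grid = xr.DataArray(
    data=rng.rand(50, 50) * 100,
    coords={"y": np.arange(50.0), "x": np.arange(50.0)},
    dims=("y", "x"),
)
points = pd.DataFrame({"x": rng.uniform(1, 48, 100), "y": rng.uniform(1, 48, 100)})
points["z"] = rng.rand(100) * 100  # pretend groundtruth elevations

# Sample the SAME xarray.DataArray in both notebooks, so that
# srgan_train.get_deepbedmap_test_result and deepbedmap.ipynb agree exactly
track = pygmt.grdtrack(points=points, grid=grid, newcolname="z_predicted")
track = track.dropna()  # points falling off the grid edges come back as NaN

rmse_test = np.sqrt(np.mean((track.z_predicted - track.z) ** 2))
print(rmse_test)
```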
Here we report on the series of hyperparameter tuning experiments to see what's the best we can get with the new W3 Antarctic Snow Accumulation input dataset (hint: it is comparable to our v0.8.0 model, if not slightly better). We kept to using 12 Residual-in-Residual Blocks for our adapted Enhanced Super Resolution Generative Adversarial Network. Primarily tuned the learning rate and number of epochs, with a secondary focus on the residual scaling factor and batch size. Details of the 3 hyperparameter tuning frenzies are below.

1st hyperparameter tuning frenzy achieved a best RMSE_test result of 53.32 at https://www.comet.ml/weiji14/deepbedmap/afd927fa856648bebb00551c68c25016 using these hyperparameter settings: learning rate 1e-4, residual scaling 0.15, batch size 64, num_epochs 69. These came from ~100 experimental trials tuned using these range settings:

Learning rate: 1e-3 to 1e-4
Residual scaling: 0.1 to 0.3
Batch size: 64 or 128
Num epochs: 50 to 100

2nd hyperparameter tuning frenzy achieved a best RMSE_test result of 43.58 at https://www.comet.ml/weiji14/deepbedmap/0b9b232394da42e394998b112f628696 using these hyperparameter settings: learning rate 7.5e-5, residual scaling 0.15, batch size 128, num_epochs 84. These came from ~50 experimental trials tuned using these range settings (Note: continued to use the same train.db from the 1st run):

Learning rate: 7.5e-4 to 7.5e-5
Residual scaling: 0.1 to 0.3
Batch size: 64 or 128
Num epochs: 60 to 90

3rd hyperparameter tuning frenzy achieved a best RMSE_test result of 45.40 at https://www.comet.ml/weiji14/deepbedmap/908108f973c142d7ba4fba58297b95ea using these hyperparameter settings: learning rate 5e-5, residual scaling 0.15, batch size 128, num_epochs 87. These came from ~50 experimental trials tuned using these range settings (Note: continued to use the same train.db from the 2nd run):

Learning rate: 5e-4 to 5e-5
Residual scaling: 0.1 to 0.3
Batch size: 64 or 128
Num epochs: 60 to 120

It seems as though a pretty low learning rate works well, even when the number of epochs required is less than 100. I'm a bit skeptical about what the optuna hyperparameter tuning package is doing, as it doesn't seem to sample the hyperparameter search space very well in subsequent tuning frenzies. But overall, the RMSE_test results do stay below 100 in the 2nd and 3rd frenzies, so we'll let it be for now.
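For the record, each frenzy boils down to an optuna study along these lines. This is a hedged sketch using optuna's current API, with the search ranges from the 3rd frenzy above; train_and_evaluate is a hypothetical stand-in for the actual srgan_train training loop:

```python
import optuna


def train_and_evaluate(**hyperparameters) -> float:
    """Hypothetical stand-in: train the ESRGAN with the given hyperparameters
    and return the RMSE_test value evaluated on the 2007tx.nc test area."""
    raise NotImplementedError


def objective(trial: optuna.trial.Trial) -> float:
    # Search ranges from the 3rd tuning frenzy above
    return train_and_evaluate(
        learning_rate=trial.suggest_float("learning_rate", 5e-5, 5e-4, log=True),
        residual_scaling=trial.suggest_float("residual_scaling", 0.1, 0.3),
        batch_size=trial.suggest_categorical("batch_size", [64, 128]),
        num_epochs=trial.suggest_int("num_epochs", 60, 120),
    )


# Reusing the same SQLite storage is what lets the 2nd and 3rd frenzies
# continue from the 1st run's train.db
study = optuna.create_study(
    study_name="deepbedmap_tuning",
    storage="sqlite:///train.db",
    direction="minimize",
    load_if_exists=True,
)
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```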
Update the 2D, 3D and histogram plots in deepbedmap.ipynb for the 2007tx.nc test area using our newly trained Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) model from https://www.comet.ml/weiji14/deepbedmap/0b9b232394da42e394998b112f628696. Had to change our residual_scaling hyperparameter default setting from 0.2 to 0.15 in a few places following the last commit in 83e956d. Really though, we need to find a way to set the correct residual_scaling and num_residual_blocks settings when loading from a trained .npz model file.
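One possible fix, sketched here as an assumption rather than what the repo actually does: write the architecture hyperparameters to a JSON sidecar next to the .npz weights, and read them back before instantiating the generator (GeneratorModel and the file paths are hypothetical names):

```python
import json

import chainer

# On save: weights go to .npz, architecture settings go to a JSON sidecar
hyperparameters = {"residual_scaling": 0.15, "num_residual_blocks": 12}
with open("model/weights/srgan_generator_model.json", mode="w") as f:
    json.dump(hyperparameters, f)

# On load: read the sidecar first, then build a matching model before
# deserializing the weights into it
with open("model/weights/srgan_generator_model.json") as f:
    settings = json.load(f)
generator = GeneratorModel(  # hypothetical class name for the generator
    residual_scaling=settings["residual_scaling"],
    num_residual_blocks=settings["num_residual_blocks"],
)
chainer.serializers.load_npz("model/weights/srgan_generator_model.npz", generator)
```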

Showcasing the best RMSE_test result of 43.57 achieved in the 2nd hyperparameter tuning frenzy in 83e956d. Note that the result is actually about 45.59 if we account for the borders properly (see issue #152), making it not too different from the 45.35 reported in e27ac4a. However, the peak of the elevation error histogram is actually closer to that of the groundtruth, with a mean of -25.37 instead of -94.75 (i.e. nearer to 0)! There are some checkerboard artifacts, sure, and the errors at the 4 corners are off the chart for some reason, but I think we're definitely getting somewhere!!
Checking the performance of our newly trained Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with the W3 Antarctic Snow Accumulation input on the Pine Island Glacier catchment area! The DeepBedMap RMSE on this Pine Island Glacier area is 66.47, compared to a bicubic baseline of 72.66, not bad! This was tested not just on 2007tx.nc but also on 2010tr.nc and istarxx.nc, after I finally got the multi-grid grdtrack method to work (see next paragraph). Also did a big reorganization of the deepbedmap.ipynb jupyter notebook, adding proper sections, moving some cells around for a more logical flow, and tidying up some markdown docs, given how much has changed since I last wrote it months ago!

Renamed "Crossover Analysis" section to "Elevation 'error' analysis", and got the multi-grid grdtrack going using pygmt.grdtrack (adios !gmt command line!) and some python list comprehensions. The deepbedmap.save_array_to_grid function not only saves the grid to a file, but also returns an xarray.DataArray now, which can be helpful for working on the grid in-memory straightaway. That said, pygmt.grdtrack can't work on too big an xarray.DataArray grid, so we've just ran it on the NetCDF file instead. Another bit I'm proud of is the deepbedmap.plot_3d_view refactor which tidied up the insane mess that was my copy-pasted 3D matplotlib code. Now it flows better, and is more easy-going on the eyes.
@weiji14 weiji14 marked this pull request as ready for review June 18, 2019 09:00
@weiji14 weiji14 merged commit e8ae274 into master Jun 18, 2019
weiji14 added a commit that referenced this pull request Jun 18, 2019
Closes #151 Train and tune ESRGAN with Antarctic Snow Accumulation input.
@weiji14 weiji14 deleted the model/with_arthern2006accumulation branch June 18, 2019 09:06
weiji14 added a commit that referenced this pull request Apr 15, 2020
Updating the DeepBedMap model architecture figure to include the Antarctic Snow Accumulation (W3) input. Basically patching 0146d6c in light of #151. The png image is downloaded from http://quantarctica.npolar.no/opencms/export/sites/quantarctica/data-catalog/images/glac_albmap_snowacca.png.