@weiji14 weiji14 commented Aug 16, 2019

Breaking changes to our DeepBedMap neural network model before the v1.0 release (hopefully soon!). We're making better use of recently added datasets such as the Carlson Inlet one (#164), but it's the new (and still awkwardly sized) 450m resolution MEaSUREs phase-based Ice Velocity data (#165) that we're really trying to utilize better here. Also introducing 'topographic loss' and 'structural loss' to our neural network's cost function, and using some Deformable Convolution layers!

TODO:

Improving the accuracy of our surface ice velocity layer! Replacing [MEaSUREs InSAR-Based Antarctica Ice Velocity Map, Version 2](https://nsidc.org/data/nsidc-0484/versions/2) with [MEaSUREs Phase-Based Antarctica Ice Velocity Map, Version 1](https://nsidc.org/data/nsidc-0754/versions/1), which offers a factor of 10 better precision than prior maps based on feature and speckle tracking over 80% of the Antarctic continent.

Also decided we don't really need [Landsat 8 Ice Speed of Antarctica (LISA), Version 1](https://nsidc.org/data/NSIDC-0733/versions/1) anymore, so we're getting rid of that. Not because there are no data gaps left (there are still some holes near the pole and a couple here and there), but because it's not really worth the processing effort. Plus we get to simplify our code a bit.
To improve our neural network's sense of (ice flow) direction, we're separating the previous ice speed (sqrt(VX^2 + VY^2)) into its respective x_velocity (VX) and y_velocity (VY) components. In other words, the new 'W2' ice surface velocity grid input will have 2 bands/channels concatenated together (one for Velocity_X, one for Velocity_Y), whereas we simply had 1 band/channel, i.e. Speed (magnitude), before (see the sketch below). Only processing 3 tiles to show it works, instead of the full ~2493 tiles (too slow), because we're gonna make some breaking changes later to use the 450m native resolution instead of resampling to 500m.
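As a minimal sketch of the band change, assuming `vx` and `vy` are 2D arrays already read from the velocity grids:

```python
import numpy as np

# `vx` and `vy` are assumed 2D arrays read from the VX/VY grids.
speed = np.hypot(vx, vy)         # old single-band input: sqrt(VX^2 + VY^2)
w2 = np.stack([vx, vy], axis=0)  # new 2-band (channel-first) W2 input
```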

Note that we've changed from using a GeoTIFF to a netCDF file, and really, we're only doing this split because the netCDF file doesn't come with an ice flow magnitude (i.e. Speed) variable, only VX and VY... The fancy way we access the VX/VY variables directly from the netCDF file is via rasterio's Advanced Dataset Identifier (see https://github.com/mapbox/rasterio/blob/7f2caadecdee9fa9424245877ab4b9faae76b997/docs/topics/datasets.rst#dataset-identifiers), as sketched below. Also removed the ugly LISA m/day to m/year bash scripts committed at 906ff7b.
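A minimal example of the Advanced Dataset Identifier syntax, using the netCDF filename from the commit above:

```python
import rasterio

# Read just the VX variable out of the multi-variable netCDF file.
with rasterio.open("NETCDF:antarctic_ice_vel_phase_map_v01.nc:VX") as dataset:
    vx = dataset.read(1)  # x-velocity component as a 2D array
```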
So that we can use the native 450m resolution MEaSUREs Phase-based Ice Velocity dataset (#165) properly, we're going to use a 9000m grid now, which is exactly divisible by 450m. The 250m resolution groundtruth tiles thus need to increase from 32 to 36 pixels (250m * 36 pixels = 9000m). We've also changed the step size from 4 to 3 to get more data. Overall, the total number of training tiles has now increased from 2493 to 3379!
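A quick sanity check on the tile arithmetic (the old 32-pixel groundtruth tiles implied an 8000m grid, which 450m does not divide evenly):

```python
# The new 9000 m tile size divides evenly into every input resolution.
tile_size = 9000              # metres
assert tile_size % 450 == 0   # MEaSUREs ice velocity: 20 pixels
assert tile_size % 250 == 0   # groundtruth tiles: 36 pixels
assert 8000 % 450 != 0        # the old 8 km grid did not line up
```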

More to come, lots of architectural changes to our Convolutional Neural Network's kernel_size, padding and stride settings.
@weiji14 weiji14 added enhancement ✨ New feature or request data 🗃️ Pull requests that update input datasets model 🏗️ Pull requests that update neural network model labels Aug 16, 2019
@weiji14 weiji14 added this to the v0.9.4 milestone Aug 16, 2019
@weiji14 weiji14 self-assigned this Aug 16, 2019
@review-notebook-app

Check out this pull request on ReviewNB: https://app.reviewnb.com/weiji14/deepbedmap/pull/166

You'll be able to see notebook diffs and discuss changes. Powered by ReviewNB.

weiji14 added 17 commits August 16, 2019 19:21
Incorporating the new 9km sized tiles from 9d79983 which will align nicely with the 450m resolution MEaSUREs Ice Velocity dataset now!
Create 3379 new **unpadded** 9km grids for each of our datasets (X, W1, W2, W3, Y), undoing 78be31a. Also using native 450m resolution of MEaSUREs Ice Velocity now instead of resampling to 500m, undoing a8863e4! New tiles uploaded to quilt (still using version 2) alongside an antarctic_ice_vel_phase_map_v01_VX_VY.nc file which only contains the VX, VY variables from the original antarctic_ice_vel_phase_map_v01.nc file (3.5G instead of 6.5G). Note to self, use that quilt package to properly seed the binder deployment with data.
Major change to our adapted Enhanced Super Resolution Generative Adversarial Network (ESRGAN)'s Generator Network component! Got the deepbedmap.get_deepbedmap_model_inputs function to use the new MEaSUREs phase-based ice velocity input and 0 padding for all datasets. ONNX graph updated accordingly, alongside plenty of unit tests, and a new 'weiji14/deepbedmap/model/test/' dataset reuploaded to quilt that has the new W2 MEaSUREs Ice Velocity 2-band tiles.

The DeepBedMap input block now uses a 'wider field of view' and has less context available at the borders, because we've respectively switched to a kernel size of 4 instead of 3, and a 'same-ish' padding of 1 instead of 0. The W2 MEaSUREs Ice Velocity dataset with its 2 bands has a slightly different setup with a bilinear resize first (can't get away from 500m resolution...) and then a convolution. Still, they all convolve to an 8x8 pixel tensor which is concatenated together and passed into the RRDB blocks. There is a new 'pre_upsample_conv_layer' that turns the 8x8 shaped tensor back to a 9x9 shaped tensor, which deviates from the original ESRGAN paper. The final 4x super resolution predicted output is thus 36x36 (250m resolution, 9km grid).
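A hedged Chainer sketch of the W2 branch described above; the resize target shape and channel count are assumptions, chosen so that a ksize=4, pad=1, stride=1 convolution yields 8x8, since out = (in + 2*pad - ksize) // stride + 1:

```python
import chainer.functions as F
import chainer.links as L

# Hypothetical W2 branch: bilinear resize, then a 4-pixel-kernel conv.
w2_conv = L.Convolution2D(in_channels=2, out_channels=32, ksize=4, pad=1)

def w2_block(w2_tile):
    resized = F.resize_images(w2_tile, output_shape=(9, 9))  # bilinear
    return w2_conv(resized)  # (9 + 2*1 - 4) // 1 + 1 = 8, i.e. 8x8
```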

Also set CuDNN deterministic mode in Chainer to hopefully better reproduce training results! Made a small tweak to features/environment.py for better debugging (previously the script confused the temporary .py file with the jupytext-paired .py file, giving the wrong line numbers).
Closes #165 Use new MEaSUREs InSAR phase-based ice velocity v1 instead of tracking-based products.
Implement a 'topographic loss' that tries to ensure the predicted high resolution DeepBedMap DEM is topographically similar to the low resolution BEDMAP2 DEM. Currently hardcoded to work on 4x upsampling only, and I've removed some old references in the GeneratorModel class that had a 'scaling' setting for other upsampling factors (e.g. 2, 6, 8, etc.) which was never implemented. Also quickly patching 75b7493, as the YUML diagram did not mention the bilinear resampling on W2...
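A hedged sketch of one way such a topographic loss could look, assuming it pools the 4x prediction back down and takes the mean absolute error against the BEDMAP2 tile (the MAE part is grounded in a later commit; the average pooling is an assumption):

```python
import chainer.functions as F

def topographic_loss(y_pred, x_bedmap2):
    # Hardcoded for 4x upsampling: pool the 36x36 prediction down to the
    # 9x9 BEDMAP2 grid, then compare elevations directly.
    y_pooled = F.average_pooling_2d(y_pred, ksize=4)
    return F.mean_absolute_error(y_pooled, x_bedmap2)
```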
Bumps [optuna](https://github.com/pfnet/optuna) from 0.13.0 to 0.14.0. Includes various enhancements to the TPESampler such as setting [hyperopt](https://github.com/hyperopt/hyperopt) compatible parameters!
- [Release notes](https://github.com/pfnet/optuna/releases/tag/v0.14.0)
- [Commits](optuna/optuna@v0.13.0...v0.14.0)
Optuna just brought in this hyperopt-compatible TPE setting in v0.14, so we might as well use it to make reproducibility easier! The main change (that is visible to me) is n_startup_trials increasing from 10 to 20, which should sample the hyperparameter combination space a bit better (example below).
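One way to opt in, assuming the `TPESampler.hyperopt_parameters()` helper this release added:

```python
import optuna

# Use hyperopt-compatible TPE defaults (n_startup_trials=20, etc.).
sampler = optuna.samplers.TPESampler(
    **optuna.samplers.TPESampler.hyperopt_parameters()
)
study = optuna.create_study(sampler=sampler)
```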

Here we report the 2nd best result from the 3rd of our hyperparameter tuning frenzies, an RMSE_test value of 58.90 at https://www.comet.ml/weiji14/deepbedmap/13ce79f397214197a331488db41ebf7c. My hunch was that we actually need to set a pretty high residual scaling now, e.g. >0.45! Visual inspection shows that the border predictions have become particularly bad so we might actually bring the padding back... Anyways, the details of our 4 GPU (2 Tesla V100s, 2 Tesla P100s) hyperparameter tuning frenzy experiments are as follows:

1st frenzy, 25 trials each
- learning rate from 1.2e-4 to 6.0e-5
- batch_size 64 or 128
- residual_scaling 0.1 to 0.5
- num_residual_blocks from 10 to 14
- num_epochs from 60 to 120

2nd frenzy, 45 trials each
- learning rate from 1.4e-4 to 7.0e-5
- batch_size 64 or 128
- residual_scaling 0.25 to 0.5
- num_residual_blocks from 11 to 14
- num_epochs from 40 to 90

3rd frenzy, 30 trials each
- learning rate from 1.4e-4 to 9.0e-5
- batch_size 64 or 128
- residual_scaling 0.45 to 0.6
- num_residual_blocks from 12 to 14
- num_epochs from 40 to 90

There were actually 3 previous hyperparameter tuning frenzy rounds prior to the UNIX servers' physical migration and the Optuna upgrade at cbe064f, but I'm ignoring those as the results were pretty bad... You can check them out at comet.ml/weiji14/deepbedmap though.
Patching the previous discriminator patches 6f8d7ae and 3e487da because I'm blind or not paying enough attention. The 4-pixel-kernel Conv2D layers are the ones with stride 2, and the 3-pixel ones have stride 1, not vice versa. The VGG128 discriminator was slightly refactored upstream, see https://github.com/xinntao/BasicSR/blame/35b1ee7739e038ca359152ee58e6066c6d101505/codes/models/modules/discriminator_vgg_arch.py#L6-L59, and it appears that only the first Conv2D layer has a bias set now. Also adjusted the BatchNormalization layers' epsilon from Keras's default of 0.001 to PyTorch's default of 1e-5 so as to align better with the original implementation. Overall, parameter counts increased by 165,632 or ~1.6%.
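An illustrative Chainer pair following the corrected pattern (channel counts are assumptions, not the actual discriminator settings):

```python
import chainer.links as L

# 3-pixel kernels keep spatial size (stride 1); 4-pixel kernels halve it
# (stride 2). Bias is off on all but the first layer, per upstream.
conv_k3s1 = L.Convolution2D(64, 64, ksize=3, stride=1, pad=1, nobias=True)
conv_k4s2 = L.Convolution2D(64, 128, ksize=4, stride=2, pad=1, nobias=True)
bnorm = L.BatchNormalization(size=128, eps=1e-5)  # PyTorch default epsilon
```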
Setting fire to all that code that gapfills a raster with another raster, as it's very messy and we only need it for REMA now, since we are no longer gapfilling MEaSUREs with #165 merged in. Still keeping the option to gapfill with a single floating point number, but we're removing the selective_tile_old function that has sat alongside selective_tile since aac21fb in #156 of v0.9.2. Temporarily using a bilinearly resampled 200m REMA in deepbedmap.ipynb. Will follow up with code to produce a gapfilled 100m resolution REMA geotiff!
With great precision comes great need for optimization. Forcing our data_prep.selective_tile function to precisely bilinearly interpolate the Z value down to every XY point, instead of 'slicing', which might still have sub-pixel discrepancies. Extends the work done in 7fd3345. The out_shape option is replaced with a 'resolution' setting now, and the geotiff/netcdf files are loaded in as a dask array by default.

Yes, this slows things down by a fair bit, but we've wrapped the computationally heavy interpolation and masking methods using dask.delayed (see the sketch below), so that tasks can be gathered up and processed in one go (if I can get the right scheduler settings...). Technically the code should be able to run nicely in parallel now, but GeoTIFFs aren't exactly built for parallel reads, and big ol' REMA is a RAM-hungry beast, so we're stopping short of using dask.distributed here.
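A hedged sketch of the dask.delayed wrapping described above; the function name, arguments and bounds are illustrative, not the actual data_prep code:

```python
import dask

@dask.delayed
def interpolate_tile(filepath, bounds):
    ...  # heavy bilinear interpolation and masking for one tile

bounds_list = [(0, 0, 9000, 9000), (9000, 0, 18000, 9000)]  # toy example
tasks = [interpolate_tile("lowres.tif", b) for b in bounds_list]
tiles = dask.compute(*tasks)  # gather and run everything in one go
```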
Write script for gapfilling the 100m REMA Ice Surface Elevation DEM with the seamless 200m version (bilinear interpolated), i.e. a more proper version of cd72a30 of #64. Based on the old selective_tiling code's gapfill_raster section that we deprecated in 690c365. I've experimented with alternative methods such as making a virtual (.vrt) GeoTIFF, mosaicking using pure GDAL and rasterio's merge tool, even considered GMT's grdblend, but nothing really merges the two together (with REMA_100 as highest priority, then REMA_200) properly the way I want it, in a reasonable-ish amount of time. Might be good to actually output this homemade 100m gapfilled REMA to a Cloud-Optimized GeoTIFF, NetCDF or Zarr, but we'll stick to good ol' GeoTIFF for now, even if it is 9.9GB.
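The core priority rule might look like this minimal numpy sketch, assuming both arrays are already aligned on the same 100m grid (`rema_100` and `rema_200_upsampled` are hypothetical names):

```python
import numpy as np

# Keep the 100 m REMA pixel where valid; otherwise fall back to the
# bilinearly interpolated 200 m mosaic.
filled = np.where(np.isnan(rema_100), rema_200_upsampled, rema_100)
```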
Keeping things DRY again, moving the save_array_to_grid function from deepbedmap.ipynb to data_prep.ipynb since we're using it to write the REMA_100_dem_filled GeoTIFF. Made it into a more reusable function, though there are a lot of nasty hardcoded default settings specific to DeepBedMap (who uses a -2000m value for NaN?!) and far too many options that probably should be turned into kwargs. It now takes in an ndim=3 array in CHW format instead of NCHW as before, has optional saving to NetCDF, and defaults to creating a BigTIFF!

It was really just the GeoTIFF compression I wanted, because Quilt wasn't accepting the >9GB REMA_100m_dem_filled.tif file. Using LZW compression, the filesize is now down to about 4.7GB, and we are trading speed for size here, i.e. reading from this compressed REMA GeoTIFF can be significantly slower than the original uncompressed version. Would preferably have used ZSTD compression (see https://gis.stackexchange.com/questions/1104/should-gdal-be-set-to-produce-geotiff-files-with-compression-which-algorithm-sh/333578#333578), if only the rasterio wheels actually had it (see rasterio/rasterio-wheels#23). Was also doing some detective work on why rasterio has GDAL 2.4.1 whereas our conda version is 2.4.2; might need to set a GDAL_DRIVER_PATH?
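A hedged sketch of the kind of creation options involved (the input filename and the gapfilling step are assumed; the -2000 nodata value comes from the commit above):

```python
import rasterio

with rasterio.open("REMA_100m_dem.tif") as src:  # assumed source file
    profile = src.profile
    dem = src.read()  # CHW-shaped array, gapfilled elsewhere

profile.update(compress="lzw", bigtiff="yes", nodata=-2000.0)
with rasterio.open("REMA_100m_dem_filled.tif", "w", **profile) as dst:
    dst.write(dem)
```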
Bringing back the 1km padding to resolve lousy predictions at the border areas, and resample the MEaSUREs Ice Velocity dataset from 450m to 500m resolution again, sigh. Basically reverting 7cd28af, but with a bit of a twist. Also using the new REMA_100m_dem_filled.tif we've recently created. Quilt packages reuploaded, now with hash e9494ecd4b4fe1bc1f8b28d4d73a287d093c83dc76eb173822b7e90753d92f27.
Sorry, didn't upload the correct BEDMAP2 (X_data) tiles in my haste yesterday. Properly reuploading those padded tiles with shape (3379,1,11,11) instead of (3379,1,9,9), patching 50ddb5e. Correct quilt data hash to use now is d092c5c00e9b6ceaac3a3bf431dd070f0a2809ef4f552e55faa9d01c1d5dd270.
Towards even better pixel registration at the expense of more computing cost! Patches #150 properly. The xyz_to_grid function has been giving us gridline node registered grids since forever, because that's `gmt surface`'s default setting. Now converting it to pixel node registration to align better with rasterio and other geospatial packages. Also using simple slicing for our high resolution tiles to avoid interpolating to NaNs at the edges! Total tile counts have increased from 3379 to 4028, O.O, mostly from the Basler grid around the Siple Coast region. New training tiles updated on Quilt with hash af86cf135ffe5ed9f78fc65231e3aa0bfc90f45e33b6bccda9f08f392c090113.

Though `surface` does have an `-r` setting to set pixel node registration directly, I tried it and it gave strange diagonal strips for 2007tx.nc, so nope, we'll use `grdsample` instead, the only fault being that we lose data lineage/provenance when checking the grid using `gmt grdinfo`. Hopefully the half pixel ghost doesn't haunt us ever again. Also, note to self: write wrappers for `gmt blockmedian` and `gmt grdsample` to reduce boilerplate code in the xyz_to_grid function.

The reason for doing this was that the more precise selective_tile script (since e7936db) was so exact that it realized our bounding boxes were outside of the (gridline registered) groundtruth grids, and gave us NULL values! This problem didn't surface until I was trying to train the neural network and was strangely getting NaN values in the discriminator loss after just one epoch, no matter what hyperparameters I used.
To get those border predictions better, we are very carefully reverting some aspects of 75b7493. Specifically, we're going for valid padding in our ESRGAN model's input block convolutions, and directly using the MEaSUREs Ice Velocity we resampled back to 500m in 50ddb5e instead of resampling on the fly. Keeping the 4km kernels/filters though! The amazing part is that the parameter counts all stay the same, +1 for the efficiency of convolutions! Also made small adjustments to the hardcoded Topographic Loss Mean Absolute Error function in d599ee8. Test dataset reuploaded, so again we have a new quilt hash to use - e11988479975a091dd52e44b142370c37a03409f41cb6fec54fd7382ee1f99bc.
Closes #167 Slicing and interpolating tiles as accurately as possible.
@ghost

ghost commented Sep 3, 2019

DeepCode Report (#398375)

DeepCode analyzed this pull request.
There are 4 new info reports. 2 warnings and 6 info reports were fixed.

The new precise selective_tile function is super slow sometimes, so we're going to force using the cached test dataset from quilt. Also moving all that hardcoded stuff from our behave test scripts into the actual deepbedmap.get_deepbedmap_model_inputs function itself. Hopefully we can start some good ol' hyperparameter tuning again!
@weiji14 weiji14 force-pushed the enh/revise_deepbedmap branch from 2c3d8df to fba406e Compare September 3, 2019 22:36
Thinking of using some non-standard neural network layers (e.g. DeformableConv2D) which aren't supported by ONNX (and probably won't be in the near future). Still want to save the model architecture/computational graph somehow though, so we're gonna use the Graphviz [DOT](https://en.wikipedia.org/wiki/DOT_%28graph_description_language%29) format. Also logging the DOT graph text to Comet.ML now using experiment.set_model_graph!
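A hedged sketch of how that logging could look, assuming a Chainer forward-pass output named `y_pred`:

```python
import chainer.computational_graph as ccg
from comet_ml import Experiment

experiment = Experiment()  # reads COMET_API_KEY from the environment
dot_graph = ccg.build_computational_graph([y_pred]).dump()  # Graphviz DOT text
experiment.set_model_graph(dot_graph)
```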
Towards better visualization of what's going wrong (or right) with our model early in the training process. Efficiently tracking the RMSE_test value and predicted grid image in Comet.ML on every epoch! Building upon fba406e, we are now caching the fixed groundtruth and xyz points inputs too, so that srgan_train.get_deepbedmap_test_result runs in mere milliseconds. Uses the handy functools.lru_cache decorator. Also a quick patch to remove the unused onnx_chainer and warnings libs in our previous commit 66d958b.
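The caching pattern is roughly this (the loader name and filename are illustrative, not the actual code):

```python
from functools import lru_cache

import numpy as np

@lru_cache(maxsize=1)
def load_fixed_test_inputs():
    # Expensive disk reads happen only on the first call; every-epoch
    # evaluation afterwards gets the cached array back instantly.
    return np.load("groundtruth.npz")["groundtruth"]
```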
Hopefully the last revert for that bad bad commit at 75b7493. Switch back to using 3km padding in the input block instead of 4km, and remove the pre_upsample_conv_layer that was used to convert our 8x8px tensor to 9x9px before doing the 4x upsampling. Tried using 9x9px tensors in the main RRDB block and since it wasn't noticeably slower (because we're not using tensor cores?), we'll stick with it. Would have updated the ONNX graph, but well, it's (properly) gone now, and I've updated the model/README.md file to reference the new graphviz DOT file stored on Comet.ML, thereby patching 66d958b.
@weiji14 weiji14 force-pushed the enh/revise_deepbedmap branch from 8afbd46 to e4f2f14 Compare September 7, 2019 04:35
Finding the right hyperparameters sooner by stopping those experiments that perform worse than the median. Made possible because we've been calculating intermediate RMSE_test values since ba87043. Really should call it RMSE_dev now, but we'll stick with 'test' for apples-to-apples legacy compatibility. Each experiment is currently given 15 epochs of warm-up time before it will be brutally pruned if its RMSE_test value is not below the median.
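In Optuna terms, the setup above amounts to something like:

```python
import optuna

# Prune a trial whose intermediate RMSE_test is above the median of
# previous trials, but only after a 15 epoch warm-up period.
pruner = optuna.pruners.MedianPruner(n_warmup_steps=15)
study = optuna.create_study(pruner=pruner)
```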
Apparently it wasn't a good idea to simply remove the onnx.txt file back in d68f2c7 because the _download_model_weight_from_comet script needs that directory to exist! Creating that folder (if it doesn't exist) should help fix the deepbedmap.ipynb integration test that was failing. Also ensuring we close the written .npz file properly using a with context manager.
Experimenting with using Deformable Convolution layers in our DeepBedMap input block and final output layers! This has huge potential for improving the realism of our predicted grid, as the kernel filters now have more flexibility in sampling different spatial locations, rather than being fixed to a regular grid.

Hyperparameters seem to be rather out of tune, with this setup's best performance (RMSE_test value) currently at 295.07, see https://www.comet.ml/weiji14/deepbedmap/8ff4fea7fb2e48268b8c5ff1de9068dc. Not so great, but the plots (especially if you look at them in 3D) seem to capture the streaklines better and show fewer pixelated artifacts. The next tuning stage should probably increase the learning rate (to >2e-4) and/or the number of epochs (>100??).
Continuing on from 6f4a6e5, we're also changing the pre_residual, post_residual and post_upsampling Convolution layers to Deformable ones. Adjusted the learning rate to be higher, previously tuning from 1-2e-4, now from 2-4e-4 (note that ESRGAN uses 2e-4, EDVR uses 4e-4); num_residual_blocks fixed at 12. Also dropping the intermediate_values column when reporting the top ten best values in the last cell of srgan_train.ipynb.

Achieved an RMSE_test value of 216.67 at https://www.comet.ml/weiji14/deepbedmap/b5a3f17d2c1a4fb18c73893bb80986ff with this setup, somewhat better than previous. Will consider changing all our Residual-in-Residual Dense Block layers to use Deformable Conv2D next, and look into using Structural similarity (SSIM) loss (perhaps swap out topographic loss for that).
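As a hedged illustration of the layer swap being experimented with in these two commits (channel counts are assumptions, not the actual model settings):

```python
import chainer.links as L

# A plain convolution samples a fixed regular grid; the deformable
# version learns per-location offsets for its sampling points.
plain_conv = L.Convolution2D(64, 64, ksize=3, stride=1, pad=1)
deform_conv = L.DeformableConvolution2D(64, 64, ksize=3, stride=1, pad=1)
```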
Differentiable structural similarity index that works on Chainer! Repository at https://github.com/higumachan/ssim-chainer.
Incorporating a Structural Similarity (SSIM) Index based loss function into our adapted ESRGAN Generator Network's loss function, currently set with a weighting of 1e-2 that matches the L1 content loss weighting. Properly creating a unit-tested function that wraps around the differentiable [SSIM](https://github.com/higumachan/ssim-chainer) chainer module, in case things change down the line. Also flipped the y_true/y_pred kwarg positioning in psnr() to ease my OCD, and correctly renamed d_train_loss to d_dev_loss (not major).
For better structural reconstruction of our DEM, we revise our SSIM Loss weighting from 1e-2 up to 5.25e-2. Based on [Zhao et al. 2017](https://doi.org/10.1109/TCI.2016.2644865)'s paper, which empirically weighted MS-SSIM loss at 0.84 and L1 loss at 0.16 (1-0.84), a ratio of 5.25x. Yes, it's only an empirical setting, but I'm too lazy to tune those weightings (though someone probably should in the future). The current best SSIM score we get is ~0.25, which is a long way off from a perfect 1.00, so setting a higher structural weighting should encourage our model to produce images more structurally similar to the groundtruth.

Even though an RMSE_test of 1655.87 isn't so great, nor is the actual SSIM score of 0.1885, qualitative 3D evaluation of the result at https://www.comet.ml/weiji14/deepbedmap/88b073324a644fd695aecf47109dd2bc does show a pretty nice terrain. Tempted to use SSIM as 'the' tuning metric instead of RMSE_test now, but we'll see.
Closes #172 Add Structural Similarity Loss/Metric.
Adjust our Content, Adversarial, Topographic and Structural Loss weightings to be on a more equal footing, with priority towards better SSIM scores (patching #172). The Content and Topographic Losses (~35) were overpowering the Adversarial and Structural Losses (~0.1-10) by an order of magnitude (i.e. there wasn't really any adversarial impact or structural improvement)! Loss weighting changes are as follows:

| Loss | Content | Adversarial | Topographic | Structural |
|------|---------|-------------|-------------|------------|
| Old  | 1e-2    | 5e-3        | 5e-3        | 5.25e-3    |
| New  | 1e-2    | 2e-2        | 2e-3        | 5.25e-0    |

This is all to do with our domain specific (DEM generation) task. Ideally we would scale our images to lie in the range of 0-1 like those out in the computer vision world, which could be easily done by converting metres to kilometres (divide by 1000). The workaround instead is to scale down the content and topographic losses relative to the adversarial and structural losses. Also, because we were recording so much metric information that it caused I/O errors when writing to the sqlite database, I've changed the code so that our 2 Tesla V100 GPUs and 2 Tesla P100 GPUs write their training results to separate databases named after the hostname.
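Put together, the generator's cost function now weights its four terms roughly like this hedged sketch (variable names are illustrative, not the actual code):

```python
def generator_loss(content, adversarial, topographic, structural):
    # Weightings from the table above; structural is the SSIM-based loss.
    return (
        1e-2 * content
        + 2e-2 * adversarial
        + 2e-3 * topographic
        + 5.25 * structural  # 5.25e-0
    )
```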

The best tuned score had an RMSE_test of 215.60 and SSIM of 0.6195 at https://www.comet.ml/weiji14/deepbedmap/699ecfa6f14448c09cf0e450edf64f30. The results aren't so good when run on the whole Pine Island Glacier area (2007tx, 2010tr, istarxx), but they're not as bad compared to the others, and you can really see the realistic DEM textures now!
Tone down our use of Deformable Convolution layers, applying them only at our final two convolutional layers rather than in the other places done in 6f4a6e5 and 88a364c. After re-reading [Dai et al. 2017](http://arxiv.org/abs/1703.06211)'s paper, it turns out that they only applied deformable convolution at the final three ResNet blocks (confusing use of the word 'top' for those layers), rather than at the input blocks, for reasons I don't quite fully understand. Anyway, our network does seem to perform better now, generating bed elevation models closer to BEDMAP2 but, of course, with some bumpy terrain!

Best RMSE_test result of 66.70, with an SSIM score of 0.7625 at https://www.comet.ml/weiji14/deepbedmap/72e783d7b96d4ef5ac39cc00b808198f! The RMSE for the full Pine Island Glacier area is 214.03, which is not ideal, but the (deformable convolution applied) model is starting to capture the topography a lot better, phew! Tweaking our default residual scaling to 0.1 as that seems to be where things are at, and re-instated our MedianPruner to use n_warmup_steps=15 as was in f52be2d. Training is still a bit fiddly, with occasional shingle-like artifacts, but when we get lucky, the results can turn out well!
Closes #171 Switch from standard Convolution to Deformable Convolution.
Tentative new DeepBedMap DEM! Compare this with the last v0.9.2 version at aac21fb. Using the trained model at https://www.comet.ml/weiji14/deepbedmap/055b697548e048b78202cfebb78d6d8c with an RMSE_test of 60.43, an SSIM score of 0.8544, and an RMSE of 51.01 over the Pine Island Glacier catchment! The big tiles have been reduced from 250km squares to 250x125km rectangles to fit the bigger model and data into our 16GB of GPU memory, and we've cropped out areas 10km outside the grounding line.

TODO: fix the 0-valued pixels at the very right of the full map, reduce the tile-looking artifacts, and use GMT or PyGMT to produce a more publication-ready map!
Managed to get back the square 250x250km tiles, reduce the tile edge artifacts in the full mosaic, and get a pretty PyGMT map out! The solution to the GPU memory limitation was to have chainer disable the 'enable_backprop' setting, which means we don't store the memory-hungry computational graph during inference, and we're also making sure to use cudnn's deterministic mode here (see the sketch below). The 'whole of Antarctica' inference script now uses a dataclass (a Python 3.7+ feature) that has a namedtuple-like 'y' and 'x' accessor to make it harder to mess things up, especially important with all the extra padding and clipping we're doing to remove the tile edge artifacts. Also removed the 10km grounding line clip as the ocean predictions don't look too bad now (though they're not exactly 'valid'). Also waiting on GenericMappingTools/pygmt#126 so that we can use color palettes properly in PyGMT.
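A minimal sketch of those inference settings (`generator` and `x_tile` are assumed names for the trained model and an input tile):

```python
import chainer

with chainer.using_config("enable_backprop", False), chainer.using_config(
    "cudnn_deterministic", True
):
    y_tile = generator(x_tile)  # no computational graph is stored
```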
@weiji14 weiji14 marked this pull request as ready for review September 21, 2019 01:44
Simplified the DeepBedMap figure to remove annotations and gridlines (keeping only the tick marks), and put it in the main README.md! Other than that, the full DEM geotiff file is exactly the same as was in db6568e. Did try to use gmt grdblend to stitch together a more seamless mosaic but the results were not as good as I hoped. Also fixed some pylint errors to please deepcode.ai such as unused imports, singleton comparisons and whatnot.
@weiji14 weiji14 force-pushed the enh/revise_deepbedmap branch from 9708040 to 3983751 Compare September 21, 2019 02:07
@weiji14 weiji14 merged commit 3983751 into master Sep 21, 2019
@weiji14 weiji14 deleted the enh/revise_deepbedmap branch September 21, 2019 02:26