Satheshkumar Kaliyugarasan and Alexander S. Lundervold
Department of Computer Science, Electrical Engineering and Mathematical Sciences, Faculty of Engineering and Science, Western Norway University of Applied Sciences, Bergen, Norway.
The figure shows one of our model's predictions on validation data from an external dataset used for pretraining.
BibTeX entry:
@article{kaliyugarasan2022lab,
title={{LAB-Net}: Lidar and aerial image-based building segmentation using {U-Nets}},
author={Kaliyugarasan, Satheshkumar and Lundervold, Alexander Selvikv{\aa}g},
journal={Nordic Machine Intelligence},
volume={2},
number={3},
year={2023}
}
See also our team's code in the competition repo: https://github.com/Sjyhne/MapAI-Competition (team_hvlml).
Details TBA. Results on validation data will be added.
If you only want to produce predictions on new data, then you can install our inference environment by following the instructions below. However, if you're going to re-run or modify the training process, please install the libraries in our more extensive training environment.
Click for installation instructions
Inference environment:

git clone https://github.com/skaliy/MapAI_challenge
cd MapAI_challenge
conda env update -f environment-inference.yml

Training environment:

git clone --recurse-submodules https://github.com/skaliy/MapAI_challenge
cd MapAI_challenge
conda env update -f environment-training.yml
conda activate mapai
Follow the instructions at https://pytorch.org/get-started/locally/. E.g.,
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install -c fastchan fastai
pip install datasets
pip install kornia
pip install -e 'semantic_segmentation_augmentations[dev]'
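A quick sanity check (not part of our pipeline) can confirm that the main libraries resolved and that PyTorch sees a GPU:

```python
# Quick environment sanity check: the main libraries import and a GPU is visible.
import torch
import fastai
import kornia
import datasets

print("fastai", fastai.__version__, "| torch", torch.__version__, "| kornia", kornia.__version__)
print("CUDA available:", torch.cuda.is_available())
```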
Note that the TOC is a bit outdated.
| Notebook | Description |
|---|---|
| 00a_mapai_prepare_data.ipynb | Loads the MapAI data and records, for each image, whether its "ground truth" mask contains buildings and what fraction of the pixels they cover (sketched below). |
| 00b_inria_prepare_data.ipynb | Loads the INRIA dataset used for pretraining and extracts image patches from it. |
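To give an idea of what 00a_mapai_prepare_data.ipynb stores, here is a minimal sketch of the bookkeeping step. It assumes single-channel masks where building pixels are nonzero; the paths, file extension, and column names are illustrative, not the notebook's actual code.

```python
# Record, for each ground-truth mask, whether it contains buildings and to what extent.
# Paths, file extension, and column names are illustrative placeholders.
from pathlib import Path

import numpy as np
import pandas as pd
from PIL import Image

mask_dir = Path("data/train/masks")  # hypothetical location of the MapAI masks

records = []
for mask_path in sorted(mask_dir.glob("*.tif")):
    mask = np.array(Image.open(mask_path))
    building_fraction = float((mask > 0).mean())  # share of pixels labeled as building
    records.append({
        "image_id": mask_path.stem,
        "has_building": building_fraction > 0,
        "building_fraction": building_fraction,
    })

pd.DataFrame(records).to_csv("building_coverage.csv", index=False)
```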
| Notebook | Description |
|---|---|
| 01a_classifier.ipynb | Trains a building detection classifier used to discover mislabeled data (see the sketch after this table). |
| 01b_inspect_diff.ipynb | Our filtering process for finding mislabeled images is only partially automatic. This notebook contains code for a manual step investigating possible mislabels. |
| 01c_manual_find_error.ipynb | Code for a further manual step investigating possible mislabels. |
| 01d_segmentation_cleaning.ipynb | Repeats the above filtering process to discover even more mislabeled data, this time using a segmentation model. |
| 01e_segmentation-pretraining-cleaning.ipynb | Uses the pretrained segmentation model to filter out mislabeled data. |
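To illustrate the idea behind the 01* notebooks, here is a hedged sketch (not the notebooks' actual code) of how a classifier's predictions can be compared with the mask-derived labels to flag candidates for manual inspection. The column names and threshold are assumptions.

```python
# Flag images where a trained building classifier confidently contradicts the
# label derived from the ground-truth mask, so they can be inspected manually.
# Column names and the threshold are hypothetical.
import pandas as pd

def flag_possible_mislabels(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Return rows where the classifier confidently disagrees with the mask label.

    Expects columns:
      - 'has_building': bool derived from the ground-truth mask
      - 'p_building':   classifier probability that the image contains a building
    """
    says_building = df["p_building"] >= threshold
    says_empty = df["p_building"] <= 1 - threshold
    disagreement = (says_building & ~df["has_building"]) | (says_empty & df["has_building"])
    return df[disagreement].sort_values("p_building")

# candidates = flag_possible_mislabels(predictions_df)  # then inspect in 01b/01c
```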
| Notebook | Description |
|---|---|
| 02a_segmentation-pretraining.ipynb | Pretrains our segmentation models on the INRIA dataset described above. |
| 02b_segmentation-pretraining-evaluate.ipynb | Evaluates the pretrained model and visualizes its predictions. |
| 02d_segmentation-aerial.ipynb | Fine-tunes the pretrained model on the MapAI aerial data (see the sketch after this table). |
| 02g_segmentation-aerial-lidar.ipynb | Trains a segmentation model on the lidar data. |
| 02j_segmentation_aerial-lidar_create_visualizations_get_info_ensemble.ipynb | Creates visualizations and collects information for the aerial and lidar model ensemble. |
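The 02* notebooks follow a pretrain-then-fine-tune pattern. Below is a minimal fastai sketch of that pattern; the data layout, codes, checkpoint names, and hyperparameters are placeholders, not the values used for our submission.

```python
# Minimal fastai sketch of pretraining/fine-tuning a U-Net for building segmentation.
# Paths, codes, checkpoint names and hyperparameters are placeholders.
from fastai.vision.all import (Dice, SegmentationDataLoaders, get_image_files,
                               resnet34, unet_learner)

path = "data/mapai"                       # hypothetical data layout
codes = ["background", "building"]

def label_func(fn):
    return f"{path}/masks/{fn.name}"      # mask assumed to share the image's filename

dls = SegmentationDataLoaders.from_label_func(
    path, bs=8, fnames=get_image_files(f"{path}/images"),
    label_func=label_func, codes=codes)

learn = unet_learner(dls, resnet34, metrics=Dice())
# learn.load("inria_pretrained")          # optionally start from INRIA-pretrained weights
learn.fine_tune(5)                        # fine-tune on the MapAI data
learn.save("mapai_aerial")
```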
| Notebook | Description |
|---|---|
| 03a_inference_aerial.ipynb | Inference on new aerial images using our top-performing model ensemble (see the sketch after this table). |
| 03b_inference_lidar.ipynb | Inference on new lidar data using our top-performing model ensemble. |
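The inference notebooks combine several trained models. A simple way to ensemble segmentation models is to average their per-pixel class probabilities and threshold the result; the sketch below shows that idea with hypothetical learner names and is not the notebooks' actual code.

```python
# Average per-pixel softmax probabilities from several models, then threshold.
# The learners and the input batch are assumed to live on the same device.
import torch

@torch.no_grad()
def ensemble_predict(learners, batch, threshold=0.5):
    """Average class probabilities over models for one batch of images."""
    probs = []
    for learn in learners:
        learn.model.eval()
        logits = learn.model(batch)                  # (B, 2, H, W) class logits
        probs.append(torch.softmax(logits, dim=1))
    mean_probs = torch.stack(probs).mean(dim=0)      # average over models
    building_prob = mean_probs[:, 1]                 # probability of the building class
    return (building_prob > threshold).long()        # binary building mask
```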