
Commit dc87acf: Updates README.md
Parent: a02f3a2

1 file changed: README.md (+15 −4)
@@ -2,11 +2,11 @@

A toolkit for semantic segmentation of volumetric data using PyTorch deep learning models.

-![example workflow](https://github.com/DiamondLightSource/volume-segmantics/actions/workflows/tests.yml/badge.svg)
+![example workflow](https://github.com/DiamondLightSource/volume-segmantics/actions/workflows/tests.yml/badge.svg) ![example workflow](https://github.com/DiamondLightSource/volume-segmantics/actions/workflows/release.yml/badge.svg)

-Given a 3d image volume and corresponding dense labels (the segmentation), a 2d model is trained on image slices taken along the x, y, and z axes. The method is optimised for small training datasets, e.g a single $384^3$ pixel dataset. To achieve this, all models use pretrained encoders and image augmentations are used to expand the size of the training dataset.
+Given a 3d image volume and corresponding dense labels (the segmentation), a 2d model is trained on image slices taken along the x, y, and z axes. The method is optimised for small training datasets, e.g. a single dataset of between $128^3$ and $512^3$ pixels. To achieve this, all models use pre-trained encoders, and image augmentations are used to expand the size of the training dataset.

-This work utilises the abilities afforded by the excellent [segmentation-models-pytorch](https://github.com/qubvel/segmentation_models.pytorch) library. Also the metrics and loss functions used make use of the hard work done by Adrian Wolny in his [pytorch-3dunet](https://github.com/wolny/pytorch-3dunet) repository.
+This work utilises the abilities afforded by the excellent [segmentation-models-pytorch](https://github.com/qubvel/segmentation_models.pytorch) library in combination with augmentations made available via [Albumentations](https://albumentations.ai/). The metrics and loss functions also make use of the work done by Adrian Wolny in his [pytorch-3dunet](https://github.com/wolny/pytorch-3dunet) repository.

## Requirements
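The description added in the hunk above outlines the core approach: 2D slices are taken from the 3D volume along each of the three axes, and augmentations enlarge what would otherwise be a very small training set. The snippet below is a minimal sketch of that idea only, not the package's own training code; it assumes numpy and albumentations are installed and uses randomly generated placeholder data.

```python
# Minimal sketch: slice a 3D volume and its dense labels along each axis, then
# augment the 2D slices. Placeholder data only; the real pipeline (pre-trained
# encoders, loss functions, settings files) is handled by the package itself.
import albumentations as A
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((128, 128, 128), dtype=np.float32)  # placeholder 3D image volume
labels = (volume > 0.5).astype(np.uint8)                # placeholder dense labels

# Collect 2D image/label slice pairs along the z, y and x axes.
slice_pairs = []
for axis in range(3):
    for idx in range(volume.shape[axis]):
        image_slice = np.take(volume, idx, axis=axis)
        label_slice = np.take(labels, idx, axis=axis)
        slice_pairs.append((image_slice, label_slice))

# Augmentations expand the effective size of a small training set.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

image_slice, label_slice = slice_pairs[0]
augmented = augment(image=image_slice, mask=label_slice)
print(augmented["image"].shape, augmented["mask"].shape)  # (128, 128) (128, 128)
```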

@@ -44,4 +44,15 @@ The input data will be segmented using the input model following the settings sp

## Using the API

-You can use the functionality of the package in your own program via the API, which is [documented here](https://diamondlightsource.github.io/volume-segmantics/). This interface is the one used by [SuRVoS2](https://github.com/DiamondLightSource/SuRVoS2), a client/server GUI application that allows fast annotation and segmentation of volumetric data.
+You can use the functionality of the package in your own program via the API, which is [documented here](https://diamondlightsource.github.io/volume-segmantics/). This interface is the one used by [SuRVoS2](https://github.com/DiamondLightSource/SuRVoS2), a client/server GUI application that allows fast annotation and segmentation of volumetric data.
+
+## References
+
+**Albumentations**
+Buslaev, A., Iglovikov, V. I., Khvedchenya, E., Parinov, A., Druzhinin, M. and Kalinin, A. A. (2020). Albumentations: Fast and Flexible Image Augmentations. Information 11. [https://doi.org/10.3390/info11020125](https://doi.org/10.3390/info11020125)
+
+**Segmentation Models PyTorch**
+Yakubovskiy, P. (2020). Segmentation Models PyTorch (GitHub). [https://github.com/qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch)
+
+**PyTorch-3dUnet**
+Wolny, A., Cerrone, L., Vijayan, A., Tofanelli, R., Barro, A. V., Louveaux, M., Wenzl, C., Strauss, S., Wilson-Sánchez, D., Lymbouridou, R., et al. (2020). Accurate and versatile 3D segmentation of plant tissues at cellular resolution. eLife 9, e57613. [https://doi.org/10.7554/eLife.57613](https://doi.org/10.7554/eLife.57613)
