Update BMZ README #322

Open · wants to merge 9 commits into main
Conversation

@jdeschamps (Member) commented Dec 9, 2024

Description

Following #278, this PR reorganizes the exported BMZ README:

  • Move all configuration to the end
  • Fix the title level of the algorithm section
  • Put the data description first
  • Remove the training section
  • Add a validation section
  • Fix the link to the documentation

In addition, the API for the BMZ export has changed:

  • data_description is now mandatory
  • model_version is an optional parameter that allows versioning models
  • cover is an optional parameter (see the example call below)
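
A hypothetical call with the updated parameters might look like the sketch below; the exact signature of `careamist.export_bmz` is not spelled out in this PR description, so everything except `data_description`, `model_version` and `cover` is an illustrative assumption.

```python
# Sketch only: `careamist` is assumed to be an existing CAREamist instance,
# and the first argument is a placeholder, not necessarily the real parameter name.
careamist.export_bmz(
    "noise2void_model.zip",          # placeholder for the archive path
    data_description="My data",      # now mandatory
    model_version="0.1.0",           # optional, used to version the model
    cover="cover.png",               # optional path to a cover image (not validated)
)
```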

Users can now provide a path to a cover image; no validation is performed on that path. If no cover is provided, then we generate one automatically, in a way that is probably not optimal for multi-channel data:

  • For 2D images, the input and output images are split horizontally, or vertically if they have different heights. If the widths differ as well, an error is raised.
  • For Z stacks, the middle slice is selected.
  • For images with 2 channels, the channels are used as blue and green.
  • For images with 3 channels, they are interpreted as RGB.
  • For images with 4 or more channels, the first 4 channels are used. The pixel values are normalised and multiplied with the RGB vectors of 4 pre-defined colors, and all channels are summed into an RGB image (sketched below).

Note that this is not a great way to represent scientific data; we should apply LUTs to the grey-scale channels and recompose overlays. Look-up tables have been worked out for scientific figures, and we could use those.

If that is acceptable for now, it will be easy to replace later.
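
For clarity, here is a minimal NumPy sketch of the channel-mixing strategy described above; the exact palette and normalisation used in `cover_factory.py` are assumptions.

```python
import numpy as np


def channels_to_rgb(image: np.ndarray) -> np.ndarray:
    """Mix up to 4 channels (C, Y, X) into a single RGB image.

    Sketch of the strategy described above; the actual colours and
    normalisation in `cover_factory.py` may differ.
    """
    # 4 pre-defined colours as RGB vectors (assumed palette)
    colors = np.array(
        [
            [1.0, 0.0, 0.0],  # red
            [0.0, 1.0, 0.0],  # green
            [0.0, 0.0, 1.0],  # blue
            [1.0, 0.0, 1.0],  # magenta
        ]
    )
    rgb = np.zeros((*image.shape[1:], 3), dtype=float)
    for channel, color in zip(image[:4], colors):
        # normalise each channel to [0, 1] before applying its colour
        cmin, cmax = channel.min(), channel.max()
        norm = (channel - cmin) / (cmax - cmin + 1e-12)
        rgb += norm[..., None] * color
    # summing the coloured channels can exceed 1, so clip to a valid range
    return np.clip(rgb, 0.0, 1.0)
```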

Changes Made

  • Added:
    • cover_factory.py
    • helper scripts to inspect BMZ README and covers.
  • Modified: all BMZ-related modules.

Related Issues

#278
#176

Breaking changes

Any call to careamist.export_bmz will need to be updated.

Additional Notes and Examples

The resulting README looks like this:

# Noise2Void - CAREamics

## Data description

Mydata

## Algorithm description:

Noise2Void is a UNet-based self-supervised algorithm that uses blind-spot training to denoise images. In short, in every patch during training, random pixels are selected and their values are replaced by neighboring pixel values. The network is then trained to predict the original pixel values. The algorithm relies on the continuity of the signal (neighboring pixels have similar values) and the pixel-wise independence of the noise (the noise in a pixel is not correlated with the noise in neighboring pixels).

## Configuration

Noise2Void was trained using CAREamics (version 0.1.0) with the following configuration:


```yaml
algorithm_config:
  algorithm: n2v
  loss: n2v
  lr_scheduler:
    name: ReduceLROnPlateau
    parameters: {}
  model:
    architecture: UNet
    conv_dims: 2
    depth: 2
    final_activation: None
    in_channels: 1
    independent_channels: true
    n2v2: false
    num_channels_init: 32
    num_classes: 1
  optimizer:
    name: Adam
    parameters:
      lr: 0.0001
data_config:
  axes: YX
  batch_size: 2
  data_type: array
  patch_size:
  - 64
  - 64
  transforms:
  - flip_x: true
    flip_y: true
    name: XYFlip
    p: 0.5
  - name: XYRandomRotate90
    p: 0.5
  - masked_pixel_percentage: 0.2
    name: N2VManipulate
    roi_size: 11
    strategy: uniform
    struct_mask_axis: none
    struct_mask_span: 5
experiment_name: export_bmz_readme
training_config:
  accumulate_grad_batches: 1
  check_val_every_n_epoch: 1
  checkpoint_callback:
    auto_insert_metric_name: false
    mode: min
    monitor: val_loss
    save_last: true
    save_top_k: 3
    save_weights_only: false
    verbose: false
  enable_progress_bar: true
  gradient_clip_algorithm: norm
  max_steps: -1
  num_epochs: 10
  precision: '32'
version: 0.1.0
```

## Validation

In order to validate the model, we encourage users to acquire a test dataset with ground-truth data. Additionally, inspecting the residual image (difference between input and predicted image) can be helpful to identify whether real signal is removed from the input image.

## References

Krull, A., Buchholz, T.O. and Jug, F., 2019. "Noise2Void - Learning denoising from single noisy images". In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2129-2137). doi: 10.1109/cvpr.2019.00223

## Links

- [CAREamics repository](https://github.com/CAREamics/careamics)
- [CAREamics documentation](https://careamics.github.io/)

Please ensure your PR meets the following requirements:

  • Code builds and passes tests locally, including doctests
  • New tests have been added (for bug fixes/features)
  • Pre-commit passes
  • PR to the documentation exists (for bug fixes / features)

@jdeschamps marked this pull request as ready for review December 12, 2024 19:31