.. _do3d:

3D segmentation
------------------------------------

Input format
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tiffs with multiple planes and multiple channels are supported in the GUI (by
drag-and-drop) and when running in a notebook.
To open the GUI with z-stack support, use ``python -m cellpose --Zstack``.
Multiplane images should have shape nplanes x nchannels x nY x nX or
nplanes x nY x nX. You can check the shape in python with

::

    import tifffile
    data = tifffile.imread('img.tif')
    print(data.shape)

If drag-and-drop of the tiff into the GUI does not work correctly, the shape
of the tiff is likely incorrect. If drag-and-drop works (you can see a tiff
with multiple planes), the GUI will automatically run 3D segmentation and
display the result. Watch the command line for progress. A GPU is recommended
to speed up processing.

In the CLI or a notebook, you can use the ``channel_axis`` and/or ``z_axis``
parameters to specify which (0-based) axis of the image corresponds to the
channels and which to Z. For example, an image with 2 channels and shape
(1024, 1024, 2, 105, 1) can be specified with ``channel_axis=2`` and
``z_axis=3``. If ``channel_axis=None``, cellpose will try to determine the
channel axis automatically by choosing the dimension with the minimal size
after squeezing. If ``z_axis=None``, cellpose will select the first
non-channel axis of the image as the Z axis. These parameters can be
specified on the command line with ``--channel_axis`` and ``--z_axis``, or as
inputs to ``model.eval`` for the ``Cellpose`` or ``CellposeModel`` models.

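The automatic axis detection described above can be sketched in a few lines. This is a simplified illustration of the documented heuristic, not cellpose's actual implementation; note that for the (1024, 1024, 2, 105, 1) example the default Z guess (first non-channel axis) would land on axis 0, which is exactly why ``z_axis=3`` must be given explicitly there.

```python
def guess_axes(shape):
    # keep only non-singleton dimensions (like np.squeeze)
    dims = [(i, s) for i, s in enumerate(shape) if s > 1]
    # documented heuristic: the channel axis is the smallest remaining dimension
    channel_axis = min(dims, key=lambda d: d[1])[0]
    # and the Z axis defaults to the first non-channel axis
    z_axis = next(i for i, s in dims if i != channel_axis)
    return channel_axis, z_axis

# for the (1024, 1024, 2, 105, 1) example: channels correctly found on axis 2,
# but Z defaults to axis 0 -- hence the explicit z_axis=3 above
print(guess_axes((1024, 1024, 2, 105, 1)))  # -> (2, 0)
```
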
Volumetric stacks do not always have the same sampling in XY as in Z.
You can therefore set an ``anisotropy`` parameter in the CLI or a notebook to
account for the difference in sampling, e.g. set it to 2.0 if Z is sampled
half as densely as X and Y; the algorithm will then upsample Z by 2x.

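As a toy illustration of what the ``anisotropy`` parameter implies (a nearest-neighbour repeat shown purely for illustration; the array shape and values here are made up):

```python
import numpy as np

# toy volume: Z sampled half as densely as XY
vol = np.zeros((10, 64, 64))   # (Z, Y, X)
anisotropy = 2.0

# conceptually, Z is upsampled by the anisotropy factor so that voxels become
# roughly isotropic before segmentation
vol_iso = np.repeat(vol, int(anisotropy), axis=0)
print(vol_iso.shape)  # (20, 64, 64)
```
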
Segmentation settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The default segmentation in the GUI is 2.5D segmentation: the flows are
computed on each YX, ZY and ZX slice and averaged, and the dynamics are then
run in 3D. Specify this segmentation mode in a notebook with ``do_3D=True``
or in the CLI with ``--do_3D`` (the CLI will segment all tiffs in the folder
as 3D tiffs if possible).

If you see many fragmented cells, you can smooth the flows before the
dynamics are run in 3D using the ``dP_smooth`` parameter, which specifies the
standard deviation of a Gaussian used to smooth the flows. The default is
0.0, which means no smoothing. Alternatively or additionally, you may want to
train a model on 2D slices from your 3D data to improve the segmentation (see below).

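To make the effect concrete, here is a minimal numpy-only sketch of Gaussian smoothing applied to a flow field. The function name and the separable-convolution approach are illustrative assumptions, not cellpose's internal implementation:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # normalized 1D Gaussian kernel
    if radius is None:
        radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_flows(dP, sigma):
    # dP: (3, Z, Y, X) flow components; smooth each component along each
    # spatial axis with a separable 1D Gaussian (sigma in pixels)
    if sigma <= 0:
        return dP
    k = gaussian_kernel(sigma)
    out = dP.astype(np.float64).copy()
    for comp in range(out.shape[0]):
        for axis in range(out[comp].ndim):
            out[comp] = np.apply_along_axis(
                lambda m: np.convolve(m, k, mode="same"), axis, out[comp])
    return out
```

Smoothing averages out noisy flow directions, which is what lets the 3D dynamics converge to fewer, larger basins instead of fragmenting one cell into several masks.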
The network rescales images using the user diameter and the model
``diam_mean`` (usually 30), so if, for example, you input a diameter of 90
and the model was trained with a diameter of 30, the image will be
downsampled by a factor of 3 for computing the flows. If ``resample`` is
enabled, the image is then upsampled again for finding the masks. This takes
additional CPU and GPU memory, so for 3D you may want to set
``resample=False``, or in the CLI ``--no_resample`` (more details here:
:ref:`resample`).

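The rescaling arithmetic is easy to check; a sketch using the illustrative numbers from the example above:

```python
# diameter-based rescaling (illustrative numbers from the example above)
diam_mean = 30.0    # diameter the model was trained with
diameter = 90.0     # user-supplied diameter
rescale = diam_mean / diameter

ny, nx = 1024, 1024
print(rescale)                               # 0.333... -> downsample by 3x
print(int(ny * rescale), int(nx * rescale))  # flows computed on a ~341 x 341 image
# with resample enabled, the flows are upsampled back to 1024 x 1024 for
# mask computation, which is where the extra CPU/GPU memory goes
```
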
There may be additional differences in the YZ and XZ slices that make them
unsuitable for 3D segmentation. If the segmentation is failing, we recommend
viewing the volume in those dimensions using the orthoviews (activated in the
bottom left of the GUI). In those cases, you may want to turn off 3D
segmentation (``do_3D=False``) and instead run with ``stitch_threshold>0``.
Cellpose will then create ROIs in 2D on each XY slice and stitch them across
slices if the IoU between the mask on the current slice and the next slice is
greater than or equal to ``stitch_threshold``. Alternatively, you can train a
separate model for YX slices vs ZY and ZX slices, and specify the model for
the ZY/ZX slices using the ``pretrained_model_ortho`` option in
``CellposeModel``.

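The stitching rule can be sketched for a pair of adjacent slices. This is a toy numpy version for two slices only; cellpose's implementation stitches the whole stack and handles label bookkeeping more carefully:

```python
import numpy as np

def iou(a, b):
    # intersection-over-union of two boolean masks
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def stitch_two_slices(m0, m1, stitch_threshold=0.25):
    # relabel ROIs in slice m1: reuse a label from slice m0 if the masks
    # overlap enough, otherwise assign a fresh label (0 = background)
    out = np.zeros_like(m1)
    next_label = int(m0.max())
    for lab in np.unique(m1):
        if lab == 0:
            continue
        cur = m1 == lab
        best, best_iou = 0, 0.0
        for lab0 in np.unique(m0):
            if lab0 == 0:
                continue
            j = iou(cur, m0 == lab0)
            if j > best_iou:
                best, best_iou = lab0, j
        if best_iou >= stitch_threshold:
            out[cur] = best          # continue the ROI from the slice below
        else:
            next_label += 1
            out[cur] = next_label    # start a new ROI
    return out
```
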
3D segmentation ignores the ``flow_threshold`` because we did not find that
it helped to filter out false positives in our test 3D cell volume. Instead,
we found that setting ``min_size`` is a good way to remove false positives.

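A minimal sketch of size-based filtering (``min_size`` does this kind of filtering internally; the function name and this voxel-count version are illustrative only):

```python
import numpy as np

def filter_min_size(masks, min_size=15):
    # remove labelled ROIs smaller than min_size pixels/voxels (toy version)
    counts = np.bincount(masks.ravel())
    small = np.flatnonzero(counts < min_size)
    small = small[small > 0]          # never remove the background label 0
    out = masks.copy()
    out[np.isin(out, small)] = 0
    return out
```
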
Training for 3D segmentation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can create image crops from z-stacks (in YX, YZ and XZ) using the script
``cellpose/gui/make_train.py``. If you have anisotropic volumes, set the
``--anisotropy`` flag to the ratio between the pixel size in Z and in YX,
e.g. ``--anisotropy 5`` for a pixel size of 1.0 um in YX and 5.0 um in Z.
You can then drag-and-drop an image from the folder into the GUI and start to
re-train a model by labeling your crops and using the ``Train`` option in the
GUI (see the Cellpose2 tutorial for more advice). If the model trained on all
crops isn't working well, you can instead separate the crops into two folders
(YX and ZY/ZX), train separate networks, and use ``pretrained_model_ortho``
when declaring your model.

See the help message for more information:

::

    python cellpose/gui/make_train.py --help
    usage: make_train.py [-h] [--dir DIR] [--image_path IMAGE_PATH] [--look_one_level_down] [--img_filter IMG_FILTER]
                         [--channel_axis CHANNEL_AXIS] [--z_axis Z_AXIS] [--chan CHAN] [--chan2 CHAN2] [--invert]
                         [--all_channels] [--anisotropy ANISOTROPY] [--sharpen_radius SHARPEN_RADIUS]
                         [--tile_norm TILE_NORM] [--nimg_per_tif NIMG_PER_TIF] [--crop_size CROP_SIZE]

    cellpose parameters

    options:
      -h, --help            show this help message and exit

    input image arguments:
      --dir DIR             folder containing data to run or train on.
      --image_path IMAGE_PATH
                            if given and --dir not given, run on single image instead of folder (cannot train with this
                            option)
      --look_one_level_down
                            run processing on all subdirectories of current folder
      --img_filter IMG_FILTER
                            end string for images to run on
      --channel_axis CHANNEL_AXIS
                            axis of image which corresponds to image channels
      --z_axis Z_AXIS       axis of image which corresponds to Z dimension
      --chan CHAN           channel to segment; 0: GRAY, 1: RED, 2: GREEN, 3: BLUE. Default: 0
      --chan2 CHAN2         nuclear channel (if cyto, optional); 0: NONE, 1: RED, 2: GREEN, 3: BLUE. Default: 0
      --invert              invert grayscale channel
      --all_channels        use all channels in image if using own model and images with special channels
      --anisotropy ANISOTROPY
                            anisotropy of volume in 3D

    algorithm arguments:
      --sharpen_radius SHARPEN_RADIUS
                            high-pass filtering radius. Default: 0.0
      --tile_norm TILE_NORM
                            tile normalization block size. Default: 0
      --nimg_per_tif NIMG_PER_TIF
                            number of crops in XY to save per tiff. Default: 10
      --crop_size CROP_SIZE
                            size of random crop to save. Default: 512