1. Network Type
This tab defines the deep learning network to use.
Network Selections
Options | Description |
---|---|
RCAN | For denoising and super-resolution. This is also the model used in our Nature Methods paper: https://www.nature.com/articles/s41592-021-01155-x |
UNet | For virtual staining and segmentation |
Network Shape
Options: 2D or 3D
Description: By default, choose the 2D model for 2D image data and the 3D model for 3D data. You can, however, train a 2D model on 3D data; the image is then processed slice-by-slice.
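As an illustration, a minimal NumPy sketch of slice-by-slice processing (`model_2d` here is a hypothetical callable standing in for the trained 2D model, not Aivia's API):

```python
import numpy as np

def apply_2d_model_to_stack(stack, model_2d):
    """Apply a 2D model to a 3D stack shaped (Z, Y, X) slice-by-slice."""
    # Each Z-slice is processed independently, then the outputs are
    # re-assembled into a volume with the same number of slices.
    return np.stack([model_2d(stack[z]) for z in range(stack.shape[0])])
```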
RCAN Specific Parameters
Number of Filters
Default: 32
Description: Number of features (i.e. number of output channels of each convolution layer).
How to use it: Increase it for a more complex model; reduce it for a smaller model.
Number of Residual Blocks
Default: 3
Description: Number of residual blocks in each residual group.
How to use it: Increase it for a more complex model; reduce it for a smaller model.
Number of Residual Groups
Default: 3
Description: Number of residual groups.
How to use it: Increase it for a more complex model; reduce it for a smaller model.
Channel Reduction Factor
Default: 8
Description: Channel reduction factor for the squeeze-and-excitation module; see Squeeze-and-Excitation Networks (Hu et al., 2018): https://arxiv.org/abs/1709.01507
How to use it: Increase the channel reduction factor for better performance.
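To show where these four parameters enter the architecture, here is a minimal Keras-style sketch of one residual channel attention block, the unit RCAN stacks into residual groups. This is an illustration of the general RCAN design, not Aivia's exact implementation:

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, num_filters=32, reduction=8):
    """Squeeze-and-excitation: re-weight channels by learned importance."""
    w = layers.GlobalAveragePooling2D()(x)                            # squeeze to (B, C)
    w = layers.Dense(num_filters // reduction, activation='relu')(w)  # bottleneck: C / reduction
    w = layers.Dense(num_filters, activation='sigmoid')(w)            # per-channel weights in [0, 1]
    w = layers.Reshape((1, 1, num_filters))(w)
    return layers.Multiply()([x, w])                                  # excite: scale each channel

def residual_block(x, num_filters=32, reduction=8):
    """One residual channel attention block; x must already have num_filters channels.

    Number of Residual Blocks = how many of these per group;
    Number of Residual Groups = how many groups are chained.
    """
    y = layers.Conv2D(num_filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(num_filters, 3, padding='same')(y)
    y = channel_attention(y, num_filters, reduction)
    return layers.Add()([x, y])
```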
UNet Specific Parameters
Depth
Default: 4
Description: Depth of the UNet architecture (the number of down/up-sampling steps).
How to use it: Increase it to build a more complex model; reduce it for a smaller model.
Number of Initial Filters
Default: 64
Description: Number of filters in the first convolution layer.
How to use it: Increase it to build a more complex model; reduce it for a smaller model.
Filter Growth Factor
Default: 64
Description: Number of filters added/subtracted when down/up-sampling.
How to use it: Increase it to build a more complex model; reduce it for a smaller model.
Normalization Type
Default: None
Description: Normalization method applied in the residual block. Three methods ("batch", "instance", and "group") are currently supported; note that the number of groups for group normalization is hard-coded to 16. No normalization is performed if None is given.
How to use it: Try different normalization methods to see which works best for your dataset.
Channel Reduction Factor
Default: 8
Description: Channel reduction factor for the squeeze-and-excitation module; see Squeeze-and-Excitation Networks (Hu et al., 2018): https://arxiv.org/abs/1709.01507
How to use it: Increase the channel reduction factor for better performance.
Use Attention Gate
Default: False
Description: If True, attention gates are applied to skip-connection signals.
How to use it: Attention gates automatically learn to focus on target structures of varying shapes and sizes. Try toggling this option to see whether it helps on your dataset.
Activation Type at the Last Layer
Default: Sigmoid
Description: Activation function applied to the output.
How to use it: Try different last-layer activation functions to see which works best for your dataset.
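As a small sketch of how Depth, Number of Initial Filters, and Filter Growth Factor interact (assuming the growth factor is added at each down-sampling step, as the descriptions above suggest):

```python
def filters_per_level(depth=4, initial_filters=64, growth_factor=64):
    """Number of filters at each encoder level of the UNet.

    With the defaults this yields [64, 128, 192, 256, 320]: the growth
    factor is added at every down-sampling step and subtracted again
    on the way back up the decoder.
    """
    return [initial_filters + level * growth_factor for level in range(depth + 1)]

print(filters_per_level())  # [64, 128, 192, 256, 320]
```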
2. Training Parameters
This tab defines general parameters and how Aivia updates the model weights during training.
Intensity Normalization Method
Options | Description | When to use |
---|---|---|
None | Use the raw input to train the model. | Choose this option if you want to train on the original data or your input images have already been normalized. |
Percentile | Normalize input images with the percentile method: intensities are rescaled so that the 2nd and 99th percentiles map to 0 and 1, respectively. | Generally good for fluorescence images. |
Divide by Max | Divide by the maximum intensity value to normalize images. | Useful for normalizing segmentation masks. |
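For reference, the Percentile option corresponds to the following NumPy computation (a sketch of the behavior described above, not Aivia's source):

```python
import numpy as np

def percentile_normalize(image, low=2.0, high=99.0):
    """Rescale intensities so the 2nd percentile maps to 0 and the 99th to 1."""
    p_low, p_high = np.percentile(image, (low, high))
    return (image.astype(np.float32) - p_low) / (p_high - p_low)
```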
Data Augmentation
Options | Description | When to use |
---|---|---|
None | No augmentation. | If you believe you have enough image pairs. |
Rotate_and_flip | Randomly rotate and flip the data to increase input variety. Note that when this option is selected, the Block Size width and height must be equal. | If you have a small amount of data, enabling augmentation generally gives better results and prevents overfitting. |
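The Rotate_and_flip option behaves like the following sketch; the 90-degree rotations are also why the block width and height must match:

```python
import numpy as np

def rotate_and_flip(block, rng=None):
    """Randomly rotate a (Y, X) or (Y, X, C) block by a multiple of
    90 degrees and randomly flip it.

    A 90-degree rotation swaps the Y and X axes, which is why the block
    must be square in width/height when this augmentation is enabled.
    """
    if rng is None:
        rng = np.random.default_rng()
    block = np.rot90(block, k=int(rng.integers(4)), axes=(0, 1))
    if rng.integers(2):
        block = np.flip(block, axis=0)
    return block
```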
Block Size
Default: 256, 256, 16 (width, height, depth)
How to adjust: If your GPU has limited memory, reduce each default dimension step by step until training runs on your computer without out-of-memory errors. Do not make the block size too small, or the patches may not have enough pixels/voxels to pass through the convolutional network.
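One related constraint worth checking: each down-sampling step halves the block in X/Y, so the width and height should stay divisible throughout (a rule of thumb, assuming halving per step as in a UNet of the given Depth):

```python
def min_block_multiple(depth=4):
    """Block width/height should be a multiple of 2**depth so every
    down-sampling step yields whole-pixel feature maps."""
    return 2 ** depth

print(256 % min_block_multiple(4) == 0)  # True: the 256-pixel default survives depth 4
```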
Foreground Patch Selection
Options | Description | When to use |
---|---|---|
Intensity threshold | A patch is selected for training based on whether its intensity exceeds this threshold. | Set the threshold when your images have little foreground. Try starting with a small number such as 0.05. |
Area ratio threshold | A patch is selected for training based on whether its foreground area ratio exceeds this threshold. | Set the threshold when your images have little foreground. Try starting with 0.25. |
Note that if the image is 8-bit or 16-bit, Aivia will try to use the threshold as a percentage value.
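Both criteria can be pictured with the sketch below; the exact rule Aivia applies is not spelled out here, so treat this as an assumption, with `patch` normalized to [0, 1]:

```python
import numpy as np

def is_foreground_patch(patch, intensity_threshold=0.05, area_ratio_threshold=0.25):
    """Keep a training patch only if enough of it exceeds the intensity threshold."""
    foreground = patch > intensity_threshold           # per-pixel foreground mask
    return foreground.mean() >= area_ratio_threshold   # fraction of foreground pixels
```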
Optimizer
Initial Learning Rate
Default: 0.0001
How to adjust: Reduce it if overfitting.
Learning Rate Scheduling Method
Options | Description | When to use |
---|---|---|
Staircase exponential decay | Drops the learning rate by half every 100 epochs. | Default |
Exponential Decay | Exponentially reduces the learning rate every epoch using: learning_rate = initial_learning_rate * 0.5^(epoch/100) | If staircase exponential decay does not work for your model |
Reduce on Plateau | Reduces the learning rate to 0.1 * learning_rate when validation loss has stopped improving for more than 10 epochs. | For models that are harder to train |
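Expressed as code, the two exponential schedules look like this (a sketch matching the formulas above; in Keras they could be wired in with `tf.keras.callbacks.LearningRateScheduler`):

```python
def staircase_decay(epoch, initial_lr=1e-4):
    """Halve the learning rate every 100 epochs, in discrete steps."""
    return initial_lr * 0.5 ** (epoch // 100)

def exponential_decay(epoch, initial_lr=1e-4):
    """Halve the learning rate smoothly over every 100 epochs."""
    return initial_lr * 0.5 ** (epoch / 100)
```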
Early Stopping
Default: False
How to use: Check this if you want to stop training when validation loss has stopped improving for more than 10 epochs.
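In Keras terms this option corresponds to a standard callback (shown for illustration; Aivia configures this internally):

```python
import tensorflow as tf

# Stop training once validation loss has not improved for 10 epochs.
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
# model.fit(..., callbacks=[early_stopping])
```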
Batch Size
Default:
How to use:
Number of Epochs
Steps Per Epoch
Loss Function
Metrics
3. Apply Parameters
This tab defines how the trained model is applied to your images.