...
Options | Description | When to use |
---|---|---|
None | Use the raw input to train deep learning models. | Choose this option if you want to train on the original data or your input images have already been normalized. |
Percentile | Normalize input images with the percentile method: image intensities are rescaled so that the 2nd and 99th percentiles map to 0 and 1, respectively. | Generally good for fluorescence images. |
Divide by Max | Divide by the maximum intensity value to normalize images. | Useful for normalizing segmentation masks. |
(To be implemented) If the image is 8-bit or 16-bit, Aivia will use the % value as a threshold.
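As a sketch of what the two normalization options compute (assuming NumPy and a floating-point image array; the function names here are illustrative, not Aivia's API):

```python
import numpy as np

def percentile_normalize(image, low=2.0, high=99.0):
    """Rescale intensities so the 2nd and 99th percentiles map to 0 and 1."""
    p_low, p_high = np.percentile(image, [low, high])
    return (image - p_low) / (p_high - p_low + 1e-8)

def divide_by_max(image):
    """Divide by the maximum intensity; useful for label/segmentation masks."""
    return image / (image.max() + 1e-8)
```

Note that percentile normalization can produce values slightly below 0 or above 1 for pixels outside the chosen percentile range.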
Data Augmentation
Options | Description | When to use |
---|---|---|
None | No augmentation. | If you believe you have enough image pair samples. |
Rotate_and_flip | Randomly rotate and flip data to increase input data variety. Note that when this option is selected, the Block Size width and height must be the same. | If you have a small amount of data, enabling data augmentation generally gives better results and helps prevent overfitting. |
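A minimal sketch of rotate-and-flip augmentation (assuming NumPy; the function name is illustrative). It also shows why the block must be square: a 90° rotation of a non-square block would change its shape.

```python
import numpy as np

def random_rotate_flip(block, rng):
    """Apply a random rotation (0/90/180/270 deg) and optional flip to a square block."""
    k = int(rng.integers(4))        # number of quarter turns
    out = np.rot90(block, k)        # requires width == height to keep the shape
    if rng.integers(2):             # flip half the time
        out = np.flip(out, axis=1)
    return out
```

Each call returns one of eight possible orientations of the same block, multiplying the effective variety of the training set without collecting new data.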
...
Optimizer
Options | Description | When to use |
---|---|---|
...
...
Initial Learning Rate
Default: 0.0001
...
How to use: Check this if you want to stop training when validation loss has not improved for more than 10 epochs.
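The stopping rule above is standard early stopping with a patience of 10 epochs. A small sketch of the logic in plain Python (the function name is illustrative):

```python
def should_stop(val_losses, patience=10):
    """True when the best validation loss occurred more than `patience` epochs ago."""
    if len(val_losses) <= patience:
        return False
    best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
    return (len(val_losses) - 1 - best_epoch) >= patience
```

Training halts once the most recent `patience` epochs have all failed to beat the best loss seen so far.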
Batch Size
Default: 1
How to adjust: Increase it if you have more GPU RAM, to speed up training.
Number of Epochs
Default: 300
How to adjust: Reduce it to shorten training time. Increase it if the model still has room to improve and is not overfitting.
Steps Per Epoch
Default: 256
Description: steps × batch_size examples are given to the model each epoch to update the weights.
How to adjust: Increase it if you want your model to see more examples per epoch.
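To make the steps × batch_size relationship concrete, here is the arithmetic with the defaults stated above (the function name is illustrative):

```python
def examples_per_epoch(steps_per_epoch, batch_size):
    # Each step feeds one batch to the model, so the model sees
    # steps_per_epoch * batch_size examples per epoch.
    return steps_per_epoch * batch_size

# With the defaults (Steps Per Epoch = 256, Batch Size = 1),
# the model sees 256 examples per epoch.
```

Doubling the batch size while keeping steps fixed doubles the examples seen per epoch (and the per-epoch compute).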
Loss Function
The objective function the optimizer tries to minimize when updating model weights. Usually, the lower the loss, the better the results.
Options | Description | When to use |
---|---|---|
Mean absolute error | Measures the mean absolute error (MAE) between each element in the input x and target y. | Default for Denoising, Super-Resolution, and Virtual Staining |
Balanced binary cross-entropy (to be implemented) | Weighted version of binary cross-entropy loss for imbalanced data. | Default for Segmentation |
Mean squared error | Measures the mean squared error (MSE) between each element in the input x and target y. | More sensitive to outliers compared with mean absolute error. |
Binary cross-entropy | Standard cross-entropy loss for binary targets. | Good for segmentation, but only when the data is balanced. |
Dice loss | Loss based on the Dice overlap coefficient. | Also good for imbalanced data. |
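A sketch of three of the losses above (assuming NumPy and predictions/targets as float arrays; function names are illustrative). It also shows concretely why MSE is more outlier-sensitive than MAE: squaring magnifies large errors.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    """Mean squared error; squaring penalizes outliers more heavily."""
    return float(np.mean((y_true - y_pred) ** 2))

def dice_loss(y_true, y_pred, eps=1e-8):
    """1 - Dice coefficient; overlap-based, so insensitive to class imbalance."""
    inter = np.sum(y_true * y_pred)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps))
```

For a target of zeros and a single outlier error of 3, MAE averages it to 1 over three elements while MSE averages the squared error to 3, illustrating the outlier sensitivity.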
Metrics
Options | Description | When to use |
---|---|---|
PSNR | Computes the peak signal-to-noise ratio between two images. Note that the maximum signal value is assumed to be 1. | Denoising, Super-Resolution, and Virtual Staining |
SSIM | Computes the structural similarity index between two images. Note that the maximum signal value is assumed to be 1. | Denoising, Super-Resolution, and Virtual Staining |
Accuracy | Fraction of correctly predicted pixels. | Segmentation |
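As a sketch of the PSNR metric listed above, assuming NumPy and signals scaled to [0, 1] as the table notes (the function name is illustrative):

```python
import numpy as np

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; assumes signals scaled to [0, max_val]."""
    m = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / m))
```

Higher PSNR means the restored image is closer to the target; identical images give infinite PSNR, so it is only meaningful when some error remains.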
3. Apply Parameters
This tab defines how the trained model is applied to images.
...
Intensity Normalization Method
This should be the same as the intensity normalization method in Training Parameters.
Block Size
Unless your GPU can process a larger block at a time, use the same block size as in Training Parameters. Do not choose a block size smaller than the training block size; the neural network will not have enough information to pass down.
Block Overlap Size
The overlap size between neighboring blocks.
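To illustrate how block size and overlap interact when tiling an image for inference, here is a sketch that computes the start offsets of overlapping blocks along one axis (assuming plain Python; the function name is illustrative, not Aivia's API):

```python
def block_starts(length, block, overlap):
    """Start offsets of overlapping blocks covering an axis of `length` pixels."""
    step = block - overlap                         # stride between block starts
    starts = list(range(0, max(length - block, 0) + 1, step))
    if starts[-1] + block < length:                # ensure the last block reaches the edge
        starts.append(length - block)
    return starts
```

For a 10-pixel axis with block size 4 and overlap 2, this yields starts at 0, 2, 4, and 6, so every pixel is covered and neighboring blocks share 2 pixels.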