To open the hyperparameter settings, click Edit beneath the Hyperparameters dropdown menu; a dialog opens where you can set custom hyperparameters.

...

1. Network Type

This tab defines the deep learning network we would like to use.

...

RCAN
Description: For denoising and super-resolution. This is also the model used in our Nature Methods paper: https://www.nature.com/articles/s41592-021-01155-x

UNet
Description: For virtual staining and segmentation. See [1505.04597] U-Net: Convolutional Networks for Biomedical Image Segmentation (arxiv.org)

Network Shape

Options: 2D or 3D

...

How to use it: Increase it for a more complex model; reduce it for a smaller model.

Number of Residual Groups

...

How to use it: Increase it for a more complex model; reduce it for a smaller model.

Channel Reduction Factor

...

Description: Channel reduction factor for the squeeze-and-excitation module.

How to use it: Increase the channel reduction factor for better performance.

...

How to use it: Increase it to build a more complex model; reduce it for a smaller model.

...

None
Description: Use the raw input to train deep learning models.
When to use: Choose this option if you want to train on the original data or your input images have already been normalized. (Note: if the image is 8-bit or 16-bit, the scripts will error out and ask you to choose one of the normalization methods.)

Percentile
Description: Normalize input images using the percentile method: the image intensity is rescaled so that the 2nd and 99th percentiles map to 0 and 1, respectively (see the sketch after this table).
When to use: Generally good for fluorescence images.

Divide by Max
Description: Normalize images by dividing by the maximum intensity value.
When to use: Useful for normalizing segmentation masks.
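
For reference, here is a minimal NumPy sketch of the two normalization options above; the function names and the epsilon guard are assumptions, not the tool's actual implementation.

import numpy as np

def percentile_normalize(image, low=2, high=99):
    # Map the low/high intensity percentiles (2nd and 99th by default) to 0 and 1.
    p_low, p_high = np.percentile(image, [low, high])
    scale = max(p_high - p_low, np.finfo(np.float32).eps)  # avoid divide-by-zero
    return (image.astype(np.float32) - p_low) / scale

def max_normalize(image):
    # "Divide by Max": scale by the maximum intensity value.
    image = image.astype(np.float32)
    return image / max(image.max(), np.finfo(np.float32).eps)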

...

Data Augmentation

None
Description: No augmentation.
When to use: If you believe you have enough image pair samples.

Rotate_and_flip
Description: Randomly rotate and flip data to increase input data variety. Note that when this option is selected, you need to make sure the Block Size width and height are the same (see the sketch after this table).
When to use: If you have a limited amount of data, allowing data augmentation generally gives better results and prevents overfitting.
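
To illustrate why the block width and height must match for Rotate_and_flip, here is a hedged NumPy sketch of this kind of augmentation on patches laid out as (depth, height, width); the axis order and function name are assumptions.

import numpy as np

def rotate_and_flip(patch, rng=np.random):
    # Rotate by a random multiple of 90 degrees in the height/width plane.
    # A 90-degree rotation swaps the height and width axes, which is why
    # the block width and height must be equal for this augmentation.
    patch = np.rot90(patch, rng.randint(4), axes=(1, 2))
    if rng.rand() < 0.5:
        patch = np.flip(patch, axis=1)  # random flip along height
    if rng.rand() < 0.5:
        patch = np.flip(patch, axis=2)  # random flip along width
    return patch

# In practice the same random transform is applied to the input patch and
# its ground-truth patch so the pair stays aligned.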

Block Size

Default: 256, 256, 16 (width, height, depth)

...

Intensity threshold
Description: If intensity_threshold > 0, pixels whose intensities are greater than this threshold are considered foreground.
When to use: Set this threshold when your images have few foreground pixels. Start with a small number such as 0.25.

Area ratio threshold
Description: If intensity_threshold > 0, the generator calculates the ratio of foreground pixels in a target patch and rejects the patch if the ratio is smaller than this threshold (see the sketch after this table).
When to use: Set this threshold when your images have few foreground signals. Start with 0.05.
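
A minimal sketch of how the two thresholds might interact when sampling patches (assumed logic, not the generator's actual code):

import numpy as np

def accept_patch(target_patch, intensity_threshold=0.25, area_ratio_threshold=0.05):
    # With intensity_threshold <= 0, every patch is accepted.
    if intensity_threshold <= 0:
        return True
    # Fraction of pixels brighter than the intensity threshold.
    foreground_ratio = np.mean(target_patch > intensity_threshold)
    # Reject patches that contain too little foreground.
    return foreground_ratio >= area_ratio_threshold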

...

Staircase exponential decay
Description: Drops the learning rate by half every 100 epochs.
When to use: Default.

Exponential Decay
Description: Exponentially reduces the learning rate every epoch using: learning_rate = learning_rate * 0.5^(epoch/100)
When to use: If staircase exponential decay does not work for your model.

Reduce on Plateau
Description: Reduces the learning rate to 0.1 * learning_rate when the validation loss has stopped improving for more than 10 epochs (see the Keras sketch after this table).
When to use: For models that are harder to train.
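
For reference only, these schedules could be approximated with standard Keras callbacks roughly as follows; the base learning rate and the choice of exactly one scheduler are assumptions, and this is not the tool's own code.

import tensorflow as tf

BASE_LR = 1e-4  # assumed initial learning rate

def staircase_decay(epoch):
    # Staircase exponential decay: halve the learning rate every 100 epochs.
    return BASE_LR * 0.5 ** (epoch // 100)

def exponential_decay(epoch):
    # Exponential decay: smooth per-epoch version of the same schedule.
    return BASE_LR * 0.5 ** (epoch / 100)

# Use one of the schedulers, or ReduceLROnPlateau, but not both at once.
callbacks = [
    tf.keras.callbacks.LearningRateScheduler(staircase_decay),
    # Reduce on Plateau: multiply the learning rate by 0.1 when the
    # validation loss has stopped improving for more than 10 epochs.
    # tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10),
]
# model.fit(..., callbacks=callbacks)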

...

Mean absolute error
Description: Measures the mean absolute error (MAE) between each element of the Prediction (Pred) and the Ground Truth (GT).
When to use: The default for Denoising, Super-Resolution, and Virtual Staining.

Balanced binary cross-entropy
Description: Weighted version of the binary cross-entropy loss for imbalanced data.
When to use: Default for Segmentation.

Mean squared error
Description: Measures the mean squared error (MSE) between each element of the Prediction (Pred) and the Ground Truth (GT).
When to use: More sensitive to outliers compared to mean absolute error.

Binary cross-entropy
Description: BCE compares each predicted probability to the binary Ground Truth.
When to use: Good for segmentation, but only when the data is balanced.

Dice loss
Description: Based on the Dice coefficient, 2|Pred ∩ GT| / (|Pred| + |GT|) (see the sketch after this table).
When to use: Also good for imbalanced data.
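
As an illustration of the Dice loss entry, a hedged Keras-style sketch (the smoothing constant and soft-probability formulation are assumptions; the tool's implementation may differ):

import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1e-6):
    # Dice loss = 1 - Dice coefficient, computed on soft probabilities.
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    return 1.0 - dice

# Example usage (hypothetical): model.compile(optimizer='adam', loss=dice_loss)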

...

PSNR
Description: Computes the peak signal-to-noise ratio between two images. Note that the maximum signal value is assumed to be 1 (see the sketch after this table).
When to use: Denoising, Super-Resolution, and Virtual Staining.

SSIM
Description: Computes the structural similarity index between two images. Note that the maximum signal value is assumed to be 1.
When to use: Denoising, Super-Resolution, and Virtual Staining.

Accuracy
Description: Correct outputs / Total outputs.
When to use: Segmentation.
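
If you want to compute the same evaluation metrics outside the tool, TensorFlow's built-in image metrics can be used; a minimal sketch assuming the images are already normalized to [0, 1], so max_val=1.0 matches the note above.

import tensorflow as tf

def psnr_metric(y_true, y_pred):
    # Peak signal-to-noise ratio; maximum signal value assumed to be 1.
    return tf.image.psnr(y_true, y_pred, max_val=1.0)

def ssim_metric(y_true, y_pred):
    # Structural similarity index; maximum signal value assumed to be 1.
    return tf.image.ssim(y_true, y_pred, max_val=1.0)

# Example usage (hypothetical):
# model.compile(..., metrics=[psnr_metric, ssim_metric, 'accuracy'])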

3. Apply Parameters

...

Intensity Normalization Method

It should be the same as the intensity normalization method in the Training parameters.

...