Aivia Software
Deep Learning Processor
In Aivia’s Deep Learning Processor, you can train and apply deep learning models.
Interface
To open the Deep Learning Processor, go to Analysis > Deep Learning Processor or run the Deep Learning Processor command from the search bar.
The Deep Learning Processor interface (see below) is composed of two main sections:
Local Library
Jobs Panel
File menu
The File menu is located in the upper-left corner of the Deep Learning Processor window. There are five options in the File menu as described in the table below.
Option | Description |
---|---|
Settings | Opens the Batch & Cloud Processor Settings dialog, which has options for specifying the number of GPUs to use as well as the Python executable path |
Validate Python Modules | Checks that all necessary Python modules are installed and allows you to install any missing modules |
Add Python Module | Opens the Import Python Module dialog, which allows you to specify modules to install |
Wiki Help | Launches the help wiki for Aivia Cloud |
Close | Closes the Deep Learning Processor window; any jobs already started continue to process while the window is closed
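The Validate Python Modules option checks that the necessary modules are installed for the Python executable configured in Settings. If you would like to run a similar check yourself from a terminal, a minimal sketch is shown below; the module names are placeholders for illustration only, not Aivia's official requirements list.

```python
# Illustrative sketch only: confirm that a list of Python modules can be found
# by the current interpreter. The module names below are placeholders, not
# Aivia's official requirements list.
import importlib.util

modules_to_check = ["numpy", "tifffile"]  # hypothetical examples

missing = [name for name in modules_to_check
           if importlib.util.find_spec(name) is None]

if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("All listed modules are importable.")
```

Run the sketch with the same Python executable that Aivia is configured to use; otherwise the result will not reflect the environment Aivia actually runs.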
Local Library
The Local Library section shows the contents of the current local computer folder. You can specify the folder location by clicking on the Add Local Folder icon, navigating to the desired location, and then clicking Select Folder.
You can update the Local Library by clicking on the Refresh button in the lower-left corner of the window.
Jobs Panel
The Jobs Panel section lets you create processing jobs for training or applying deep learning models. There are two tabs:
The Create Job tab lets you set up a training or applying run. Depending on the type of job, you will have different interface options for specifying the input images and applications.
The Progress Queue tab provides you with status updates on the current and previous jobs that you have created.
Status indicators
The Progress Queue tab shows the status of the current and previous jobs. Each task associated with a job has a status indicator shown once the task has been processed. There are three status indicator icons, which are described in the following table.
State | Description |
---|---|
Success | Indicates the operation or task has been successfully completed |
Warning | Indicates the operation or task has timed out or is in an unknown state |
Error | Indicates the operation or task has failed |
General usage
Train a deep learning model
To train a deep learning model, you will need to have a minimum of two pairs of ground truth images and raw input images. The ground truth should be the desired output (or target result) for the model, and the raw input should be the images that you wish to transform. Typically the raw input should adhere to your standard experimental procedure, while the ground truth should represent the "best-case scenario."
Select application
Before adding images, go to the Create Job tab and select the Training option. In the Hyperparameters dropdown menu, select the application you wish to use. There are three 2D and three 3D applications, each with its own requirements for raw and ground truth data and its own default hyperparameters:
RCAN
Segmentation
Virtual Staining
You may click Edit underneath the Hyperparameters dropdown menu to open a dialog (see right) where you can set custom hyperparameters. Use the Load and Save buttons under the Hyperparameters dropdown menu to load hyperparameters from and save your hyperparameters to .AIVIADLPARAM files.
You may choose to augment an existing model instead of training one from scratch by checking the Transfer Learning checkbox and specifying the model to augment. Transfer learning can speed up the training process or adapt the specified model to a broader range of applications.
Select training images
To assign images to their respective classes, first select the images in your Local Library. Then, right-click on the selected images to open the context menu and select one of the Assign as... options, or drag the selected files and drop them into their respective boxes in the Jobs Panel. Ground truth data should be assigned to the Example section, while raw input data should be assigned to the Raw Input section.
In the Jobs Panel, image pairs are indicated by matching green numbers in the top-left corners of the thumbnails.
Please ensure the Raw Input images and Example images are paired and aligned. Mismatched images may result in poorly trained or invalid models.
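If you would like to pre-check your training data outside of Aivia before assigning it, a minimal sketch is shown below. It assumes that paired Raw Input and Example images are single-channel TIFFs stored in two folders and that paired files share the same file name; the folder paths, the naming convention, and the use of the third-party tifffile package are assumptions for illustration, not requirements of the Deep Learning Processor.

```python
# A minimal pairing pre-check sketch. Assumes Raw Input and Example (ground
# truth) TIFFs live in two folders and that paired files share the same name;
# the folder layout below is a hypothetical example.
from pathlib import Path
import tifffile

raw_dir = Path("training/raw_input")        # hypothetical folder layout
example_dir = Path("training/ground_truth")

for raw_path in sorted(raw_dir.glob("*.tif")):
    example_path = example_dir / raw_path.name
    if not example_path.exists():
        print(f"{raw_path.name}: no matching Example image found")
        continue
    raw = tifffile.imread(raw_path)
    example = tifffile.imread(example_path)
    # Paired images should share dimensions and bit depth (see Data best practices).
    if raw.shape != example.shape or raw.dtype != example.dtype:
        print(f"{raw_path.name}: mismatch ({raw.shape}, {raw.dtype}) "
              f"vs ({example.shape}, {example.dtype})")
    else:
        print(f"{raw_path.name}: OK ({raw.shape}, {raw.dtype})")
```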
Initiate job
After specifying the Example and Raw Input image pairs, the application and hyperparameters, and the output name and folder, click Create in the lower-right corner of the Jobs Panel to create the job.
Once the job begins, the Progress Queue tab is automatically shown. The Progress Queue informs you about the progress of your job. When you initiate a training job, there are two tasks that the Deep Learning Processor may perform, which are given in the table below.
Task | Description |
---|---|
Validating | This task tests the input images for minimum viable shape and congruence. |
Processing | This task is the deep learning model training. You can monitor the progress with real-time plots of training accuracy and error rates displayed at the bottom of the Progress Queue tab. |
When training is finished, the trained model is output to the folder you specified.
With the launch of Aivia 10.5, .ACMODEL model output and usage are deprecated and replaced by .AIVIADL.
Apply a deep learning model
The Applying mode lets you apply a supported deep learning model to your images. To start, click on the Applying button in the Create Job tab to enter apply mode.
Select application
In the Hyperparameters dropdown menu, pick the desired application. Applications define many general characteristics about the data you wish to process; for more advanced users, the application defines the hyperparameters used for applying the model. You may click Edit underneath the Hyperparameters dropdown menu to open a dialog where you can set custom hyperparameters. Use the Load and Save buttons under the Hyperparameters dropdown menu to load hyperparameters from and save your hyperparameters to .AIVIADLPARAM files. In the Model dropdown, select the model you want to use. If you have a newly trained model, click Refresh to update the model list.
Select images
Select the files you wish to apply the model to in the Local Library section. Right-click on the images and select Assign as raw input from the context menu to designate the images for processing. Alternatively, you can drag and drop the selected files into the Raw Input section in the Jobs Panel.
Initiate job
When you have finished selecting the images, the output folder, the hyperparameters, and the model to apply, click Create to create a new apply job.
Once the job begins, the Progress Queue tab is automatically shown. The Progress Queue informs you about the progress of your job. Once the job is finished, the results will be in the output folder you specified.
Data best practices
The Deep Learning Processor supports common image formats such as TIFF, JPG and PNG. Prior to training, it is highly recommended that you follow the best practices below and format your data accordingly.
Same image dimensions and bit-depth: Each pair of the training images (Raw Input and Example) should have the same XYZ and time dimensions; all images should have the same bit-depth.
Minimum image size: For training 3D models, each input image must be 256 x 256 x 16 or greater in all dimensions.
Minimum number of training images: You need at least two (2) pairs of Raw Input and Example images (for a total of 4 images) for training.
Convert images to common formats: It is strongly encouraged that you convert your images to a standard, non-proprietary format such as TIFF (preferred), JPG or PNG; a conversion sketch is shown after this list. Proprietary formats may not be recognized by the Deep Learning Processor in Aivia and could result in invalid models.
Single channels only: Make sure each image contains only a single image channel; additional channels will not be read and may cause the training or apply to fail.
Single image or volume (3D only) per file: The Deep Learning Processor supports multi-frame training and apply for 3D applications only; by default, the Deep Learning Processor will read any T-series or Z-stacks as a 3D image; 3D+time images are not supported.
Minimize out-of-focus areas: Having the image feature in focus will enable the model to be trained more effectively; if the results from your model are not desirable, you may consider cropping the image to eliminate out-of-focus areas.
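Regarding the conversion and single-channel recommendations above, the sketch below shows one way to batch-convert a folder of PNG images to single-channel TIFFs using the third-party Pillow and tifffile packages. The folder names are placeholders, and converting with mode "L" collapses the data to 8-bit grayscale, which is only appropriate if your source images are 8-bit to begin with.

```python
# Illustrative batch conversion of PNGs to single-channel TIFFs.
# Folder names are placeholders; mode "L" yields 8-bit grayscale, so use a
# different conversion if your source data is 16-bit.
from pathlib import Path
import numpy as np
from PIL import Image
import tifffile

src = Path("source_images")    # hypothetical input folder
dst = Path("converted_tiffs")  # hypothetical output folder
dst.mkdir(exist_ok=True)

for path in sorted(src.glob("*.png")):
    img = np.asarray(Image.open(path).convert("L"))  # force a single grayscale channel
    tifffile.imwrite(dst / (path.stem + ".tif"), img)
    print(f"Converted {path.name} -> {path.stem}.tif ({img.shape}, {img.dtype})")
```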
Images will be tested for minimum viable shape and for congruence between paired inputs.
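If you want to screen your files against these guidelines before creating a job, a rough pre-flight sketch along the following lines may help. It assumes single-channel 3D TIFFs (Z-stacks) stored in one folder with axes ordered Z, Y, X; the folder path is a placeholder, and the 256 x 256 x 16 minimum is the 3D training requirement quoted above.

```python
# Rough pre-flight check for 3D training data. Assumes single-channel Z-stack
# TIFFs with axes ordered Z, Y, X; the folder path is a placeholder.
from pathlib import Path
import tifffile

MIN_XY, MIN_Z = 256, 16                 # minimum size for 3D training (see above)
folder = Path("training/raw_input")     # hypothetical folder

bit_depths = set()
for path in sorted(folder.glob("*.tif")):
    img = tifffile.imread(path)
    bit_depths.add(img.dtype)
    if img.ndim != 3:
        print(f"{path.name}: expected a single-channel Z-stack, got shape {img.shape}")
        continue
    z, y, x = img.shape
    if z < MIN_Z or y < MIN_XY or x < MIN_XY:
        print(f"{path.name}: below the 256 x 256 x 16 minimum ({x} x {y} x {z})")
    else:
        print(f"{path.name}: OK ({x} x {y} x {z}, {img.dtype})")

if len(bit_depths) > 1:
    print("Warning: not all images share the same bit depth:", bit_depths)
```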