Segment by Example

The Segment by Example (SBE) tool segments objects in an image based on annotations drawn by the user.

Interface

When you select Segment by Example, the tool selects the most recently used segmenter or, if you have an image currently open in the viewer, starts a new segmenter.

The main Segment by Example GUI is shown below. There are 7 sections in the interface:

  • The SBE Toolbar along the top of the GUI allows you to perform file actions related to the Segment by Example tool.

  • The Tutorial section displays helpful tips to get you started.

  • The Input & Output section lets you select your input channel, your output object set, the name of the output object set, and the color of the output object set.

  • The Drawing section lets you select a tool for defining training regions in the Image Panel.

  • The Parameters section lets you adjust the estimated parameters after they’ve been calculated from your drawings.

  • The Apply Controls along the bottom of the GUI allow you to preview and run a segmenter and specify regions of interest (ROIs) for previewing and applying.

  • The Warning section indicates anything that is preventing you from running segmentation or estimating parameters.

    image-20250305-224507.png
    Segment by Example GUI

SBE Toolbar

The SBE Toolbar is located at the top of the SBE tab. The toolbar lets you perform file actions related to the SBE, such as creating a new segmenter, saving a segmenter and/or annotations, and loading segmenter files.

The function for each item in the toolbar is summarized in the table below.

image-20250305-232830.png
SBE Toolbar

 

| Function | Icon | Description |
| --- | --- | --- |
| Select Segmenter |  | Shows the name and type (2D or 3D) of the currently selected segmenter; click the item to expand a dropdown menu from which you may select a loaded or created segmenter for estimation and/or applying |
| Create New | image-20250305-233613.png | Creates a new, blank segmenter for estimation |
| Delete | image-20250305-233659.png | Removes the current segmenter from Aivia; if the segmenter was loaded from a file, the file itself is not deleted |
| Remove All Segmenters |  | Removes all currently loaded segmenters |
| Load |  | Loads a saved segmenter (.segmenter), training set (.sbetraining), or set of annotations (.annotations) from a file |
| Save |  | Saves the current segmenter, segmenter training set, or annotations to a file |

Tutorial

The Tutorial section contains example images to help you get acquainted with the SBE tool. From left to right, it shows the steps to follow to successfully create a segmenter.

image-20250310-191148.png
Tutorial Section

Input & Output

The Input & Output section lets you set the following:

  • The input channel to use for estimating when creating a new segmenter and which channel to use when running segmentation.

  • The output object set. By default, it outputs to a new object set.

  • The output object set's name and color.

 

image-20250307-190814.png
Input & Output section

 

 

Drawing

The SBE uses annotations to estimate the parameters used in the segmentation step. The Drawing section provides a variety of drawing tools for annotating cells in the image. For best results, draw at least five cells covering a range of sizes; annotate some of the smallest and largest cells in your image. All drawings must be done in Main View (2D).

 

Drawing section

 

 

Drawing tools

The function and description for each of the drawing tools are summarized in the table below.

| Function | Icon | Description |
| --- | --- | --- |
| Paint |  | Paints a circle under the cursor; the brush size is adjustable |
| Erase |  | Erases a circle under the cursor; the eraser size is adjustable |
| Flood |  | Fills in enclosed regions |
| Region Drawing |  | Draws a freehand region that automatically closes |
| Auto Draw |  | Predicts the contour on the next z-plane based on the previous drawing; see the "Auto Draw mode" section for more details |
| Magic Wand |  | Uses flood-filling to paint a region that has similar intensity and is connected to the selected location; see the "Magic Wand" section for more details |
| Size Selector |  | Adjusts the size of the brush or eraser |
| Jump to Previous Teaching Frame |  | Jumps to the previous frame with an annotation drawn on it, moving back through Z and then through time |
| Jump to Next Teaching Frame |  | Jumps to the next frame with an annotation drawn on it, moving forward through Z and then through time |
| Automatically Move to Next Frame (only available for 3D images) |  | Sets the mode for moving between z-frames after a drawing is completed: Stay Still (default; the viewer stays on the current z-frame), Up (the viewer moves up in the z-stack), Down (the viewer moves down in the z-stack), or Auto (the viewer moves in the last z-direction you cycled through) |

Auto Draw mode

Auto Draw Options

When Auto Draw mode is selected, a set of options is shown underneath the row of drawing tools (see above). First, select the channel to use for prediction in the dropdown menu on the left, which shows "GFP" in the example above. Click and drag the mouse to draw the outline of a region on the image; when the mouse button is released, the ends of the drawn line are automatically connected to create a closed loop. Use the Interpolate Down and Interpolate Up buttons (or hotkeys S and D) to interpolate the next drawing down or up in Z, respectively. The Step to Bottom and Step to Top buttons (or hotkeys A and F) can be used to visit the bottom and top drawings, respectively. Click on the Clear icon to remove all drawings from the current z-slice, or on the Reset icon to clear all drawings. Click on Finish Drawing to assign the drawn regions (enclosed by the blue outlines) on all slices.
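Aivia does not document the exact prediction method, but interpolating a drawn region between z-planes is commonly illustrated with signed distance maps. The sketch below is a generic example of that idea, using assumed function names and a simple linear blend; it is not the SBE implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the region, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_mask(mask_a, mask_b, fraction=0.5):
    """Blend two drawn regions from different z-planes (illustrative only).

    mask_a, mask_b -- boolean 2D arrays of regions drawn on two z-planes
    fraction       -- 0.0 reproduces mask_a's shape, 1.0 reproduces mask_b's shape
    """
    sdf = (1.0 - fraction) * signed_distance(mask_a) + fraction * signed_distance(mask_b)
    return sdf > 0
```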

Magic Wand

Magic Wand Options

Options for the Magic Wand tool are provided below the row of drawing tools when the Magic Wand is selected (see above). The first step is to select the image channel to use from the dropdown menu, which shows "GFP" in the image above.

In Main View (2D), expand the Thresholding Mode dropdown menu to choose one of the following modes:

  1. Bidirectional mode accepts connected pixels within the sensitivity limit above and below the intensity at the selected location.

  2. Peak Finding mode accepts connected pixels within the sensitivity limit above the intensity at the selected location.

  3. Valley Finding mode accepts connected pixels within the sensitivity limit below the intensity at the selected location.

Selected location (small green square), search limits (large green square), and region preview (blue) for the Magic Wand

Select a location on the image for the Magic Wand to search from. A preview of the region to be painted is shown in blue (see right). Adjust the Sensitivity slider (or use hotkeys W and R) to change the tolerance for intensity differences. The narrowest range of accepted intensity levels is used when the slider is all the way to the left. Adjust the Search Range slider (or use Ctrl + Shift + Mouse Scroll over the Image Panel) to increase or decrease the area over which the Magic Wand will search. No pixels outside of the search region are included in the painted region. The search range is indicated by the green box around the cursor when the Magic Wand is used (see right). Check the Fill Multiple Z Frames box to extend the user-defined search range limits in the z-direction; when this box is not checked, the Magic Wand only searches on the current z-frame. Finally, click on Paint (or press E) to paint the region and assign it to the selected class.

When the Fill Multiple Z Frames option is used, the painted regions may not exactly match the blue preview.
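The Magic Wand's behavior in Bidirectional mode can be pictured with a standard connected flood fill. The sketch below is a minimal illustration using scikit-image's flood function; the seed point, tolerance, and search-box cropping stand in for the clicked location, Sensitivity, and Search Range, and are an assumed approximation rather than Aivia's actual implementation.

```python
import numpy as np
from skimage.segmentation import flood  # connected-region fill with an intensity tolerance

def magic_wand_preview(image, seed_rc, sensitivity, search_range):
    """Rough stand-in for the Magic Wand preview (Bidirectional mode).

    image        -- 2D numpy array of one channel (e.g. the selected GFP channel)
    seed_rc      -- (row, col) of the clicked location
    sensitivity  -- accepted intensity difference above/below the seed intensity
    search_range -- half-width of the search box, in pixels
    """
    r, c = seed_rc
    # Limit the search to a box around the seed, like the Search Range slider.
    r0, r1 = max(0, r - search_range), min(image.shape[0], r + search_range + 1)
    c0, c1 = max(0, c - search_range), min(image.shape[1], c + search_range + 1)
    crop = image[r0:r1, c0:c1]

    # Bidirectional mode: accept connected pixels within +/- sensitivity of the seed intensity.
    local_mask = flood(crop, (r - r0, c - c0), tolerance=sensitivity)

    # Paste the cropped result back into a full-size mask; nothing outside the box is painted.
    mask = np.zeros(image.shape, dtype=bool)
    mask[r0:r1, c0:c1] = local_mask
    return mask
```

Peak Finding and Valley Finding modes would additionally restrict the accepted pixels to intensities above or below the seed intensity, respectively.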

Hotkeys for drawing

| Drawing mode | Hotkey | Description |
| --- | --- | --- |
| Paint | Ctrl | Switches to erasing while held down |
| Paint | Ctrl + Shift + Mouse Scroll | Adjusts the brush size when done over the Image Panel |
| Erase | Ctrl | Switches to painting while held down |
| Erase | Ctrl + Shift + Mouse Scroll | Adjusts the eraser size when done over the Image Panel |
| Auto Draw | A | Goes to the bottom drawing |
| Auto Draw | S | Predicts the next drawing down |
| Auto Draw | D | Predicts the next drawing up |
| Auto Draw | F | Goes to the top drawing |
| Auto Draw | E | Clears all drawings |
| Magic Wand | W | Decreases the sensitivity |
| Magic Wand | R | Increases the sensitivity |
| Magic Wand | E | Paints the selected region |
| Magic Wand | Ctrl + Shift + Mouse Scroll | Adjusts the search range when done over the Image Panel |

Parameters section

The Parameters section is where you can adjust the parameters estimated by the algorithm. These parameters are passed to the underlying algorithm to run the segmentation. Once you have drawn some annotations and estimation has been run, the estimated values populate this area, overwriting any previously estimated parameters. (A short illustrative sketch of how such parameters are typically used follows the figure below.)

  • Diameter: The estimated average diameter of the cells.

  • Probability Threshold: The sensitivity of cell membrane detection. Higher values result in fewer detected cell membranes.

  • Cellpose Algorithm: This parameter is available for 3D images only.

    • Standard: Original Cellpose-based algorithm.

    • Flowbased: Faster mask generation with better z-separation for 3D images.

 

image-20250310-142601.png
Parameters Section
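
As a rough analogy only, these controls resemble the diameter and cell-probability threshold arguments of the open-source cellpose Python package. The sketch below shows how such parameters are typically passed to a Cellpose-style model using the v3-style API; it is not Aivia's internal implementation, and the model choice, channel settings, and placeholder image are assumptions.

```python
import numpy as np
from cellpose import models  # open-source Cellpose package, v3-style API

# Placeholder input; substitute the 2D array of your selected input channel.
image = np.random.rand(256, 256).astype(np.float32)

# Hypothetical values standing in for the SBE "Diameter" and "Probability Threshold" controls.
diameter_px = 30.0            # estimated average cell diameter, in pixels
probability_threshold = 0.0   # higher values -> fewer detected objects

model = models.Cellpose(gpu=False, model_type="cyto")  # generic cytoplasm model (assumption)
masks, flows, styles, diams = model.eval(
    image,
    diameter=diameter_px,
    cellprob_threshold=probability_threshold,
    channels=[0, 0],          # single grayscale channel
)
print(f"Detected {masks.max()} objects")
```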

Apply Controls

The Apply Controls are located at the bottom of the SBE and let you preview and run segmentation. They also let you specify an ROI for previewing and, optionally, one for applying. Prior to running segmentation, you must have run estimation at least once.

 

| Function | Icon | Description |
| --- | --- | --- |
| Preview ROI |  | Selects the single ROI to use as your preview region; previewing requires at least one ROI smaller than 300x300 |
| Create Preview ROI | image-20250312-161527.png | Creates an ROI in the center of the viewer and sets it as the preview ROI |
| Toggle Preview | image-20250312-161700.png | Toggles the preview on/off |
| Delete ROI | image-20250312-161801.png | Deletes the currently selected preview ROI |
| Use for Final Segmentation |  | Toggles whether the selected preview ROI is also used as the ROI when running the full segmentation |
| Run Segmentation |  | Runs the segmentation on the image based on the current inputs and parameters |

Preview

Click on the Preview icon (see above) to initiate preview generation; while previewing is toggled on, the preview updates live as you make changes to the drawings/annotations. All output types may be previewed in Main View (2D).

image-20250307-222019.png
Apply Controls

Warnings

image-20250312-183155.png
Example warning message

While annotating an image for estimation or running segmentation, warnings may appear in the bottom left of the GUI, in orange just below the Apply Controls. These warnings indicate anything that may interfere with generating good parameter estimates or running segmentation.

General usage of the SBE

Create a new segmenter

To create a new segmenter, click on the Create New icon in the SBE Toolbar. A blank segmenter will then be created. The general workflow for using a new segmenter is as follows:

1. Draw teaching regions using any tools in the Drawing section to paint desired features of interest.

You need at least 5 objects drawn to estimate, preview, and run a segmenter.

2. Preview the results. Use smaller ROIs for faster previews.

3. Add more drawings to the image, following the same instructions as in Step 1; keep previewing to determine the quality of the estimation as you add more teaching regions.

4. Run the Segmentation to detect and create cells on your image.

Update/use a segmenter over multiple sessions

There are two (2) types of files that you can save and load back into Aivia for estimating at a later date: annotation files (.annotations) and training set files (.sbetraining). 

Annotation files

Annotation files contain the drawn teaching regions for an image. To save annotations for the displayed image and current segmenter, click on the downward-pointing triangle next to the Save icon in the SBE Toolbar and then select Save Annotations for the current image (see right); this opens a window where you may specify the location and name for the .annotations file.

It is assumed that when annotations are loaded, the image that they are from and the segmenter that was being trained are already open and displayed. To load annotations, click on the Load icon in the SBE Toolbar, navigate to the .annotations file, and then click on Open. The previously drawn regions are then displayed and may be edited and used for previewing, estimating, and running segmenters.

Training set files

Training set files include the following:

  • Path to the image used for the estimation.

  • Drawn teaching regions (annotations)

  • Segmenter settings

  • The segmenter

To save a training set file, click on the downward-pointing triangle next to the Save icon in the SBE Toolbar and select Save Training Set; this opens a window where you may specify the location and name for the .sbetraining file.

To load a training set, click on the Load icon in the SBE Toolbar, navigate to the .sbetraining file, and then click on Open, or drag and drop the .sbetraining file onto the SBE interface; this will load the image in the training set into the workspace as well as the previously drawn training regions and segmenter settings for the in-progress segmenter. You may then continue drawing teaching regions and otherwise building upon and/or using your segmenter.

Changing the file location of the image in a training set can prevent the training set from loading. If you move or copy the image to a new location, place a copy of the image in the same folder as the training set file; the training set will then find the local version of the image.

Delayed loading of training files

image-20250312-170325.png
Delay loaded training

The SBE records saved SBE files and reloads them when Aivia is restarted. Plain segmenter files fully reload as normal, but SBE training files are not fully loaded until you tell them to be. Until then, only the saved segmenter is loaded, and the drawing controls are hidden. To fully load the training set, including the image used, click the “Load Example Data to Continue Annotating” button.

Save a segmenter

image-20250312-164713.png
Saving options

When you have a taught segmenter that needs no further tuning, you may save the segmenter as a .segmenter file; to do so, click on the downward-pointing triangle next to the Save icon in the SBE Toolbar, select Save Segmenter, choose the name and location for the file, and then click on Save. You may then load the segmenter by clicking on the Load icon in the SBE Toolbar, navigating to the .segmenter file, and then clicking on Open, or by dragging and dropping the .segmenter file onto the Segment by Example interface.

If you would like to save an estimated segmenter but there is a possibility you may want to change its settings or teaching regions later, save the training set instead of (or in addition to) the segmenter. The training set file includes the taught segmenter, if one is available, so that the segmenter can be applied immediately in the future but can also be re-estimated.

Technical details

The Segment by Example pipeline consists of two (2) fundamental steps:

  1. Parameter estimation:

    • Process: The user paints the desired regions, and the algorithm generates a set of parameters that represents the characteristics of the drawn regions.

    • Algorithm: Diameters are derived from the user-drawn objects on the images, and the probability thresholds are determined from the corresponding regions in the probability map generated by deep learning models (a simplified sketch follows this list).

  2. Run Segmentation: The estimated parameters are passed to the underlying algorithm to detect the objects in the image.

    • Process: The estimated parameters are fed into the deep learning segmentation algorithm. If needed, the user can manually adjust these parameters.

    • Algorithm: Deep learning models. For 2D images, we use our accelerated Cellpose pipeline, which is similar to the official Cellpose. For 3D images, based on user selection, we use either our accelerated Cellpose pipeline or our specialized Flow-based Cellpose.
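
The estimation step can be pictured with standard image-analysis tools. The sketch below is a simplified illustration under assumed details (it is not Aivia's implementation): the diameter is derived from the user-drawn regions via their equivalent circular diameter, and the probability threshold is taken from the probability-map values inside those regions, with an arbitrarily chosen percentile.

```python
import numpy as np
from skimage.measure import label, regionprops

def estimate_parameters(drawn_mask, probability_map, percentile=10):
    """Simplified stand-in for the SBE parameter-estimation step.

    drawn_mask      -- boolean 2D array of the user-drawn teaching regions
    probability_map -- 2D float array from a deep learning model, same shape as the image
    """
    # Diameter: average equivalent diameter of the drawn objects
    # (diameter of a circle with the same area as each region).
    regions = regionprops(label(drawn_mask))
    diameters = [2.0 * np.sqrt(r.area / np.pi) for r in regions]
    diameter = float(np.mean(diameters))

    # Probability threshold: a low percentile of the probability values inside
    # the drawn regions, so that most annotated pixels survive thresholding
    # (the percentile choice is an assumption for illustration).
    threshold = float(np.percentile(probability_map[drawn_mask], percentile))

    return diameter, threshold


# Example usage with synthetic placeholders for a real image and model output.
mask = np.zeros((128, 128), dtype=bool)
mask[20:40, 20:40] = True                          # one drawn teaching region
prob = np.random.rand(128, 128).astype(np.float32)
print(estimate_parameters(mask, prob))
```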