# Piximi: Installation-free segmentation and classification in the browser
## A computer exercise using webtool - Piximi
Beth Cimini, Le Liu, Esteban Miglietta, Paula Llanos, Nodar Gogoberidze

Broad Institute of MIT and Harvard, Cambridge, MA.

Piximi is a modern, no-programming image analysis tool leveraging deep learning. Implemented as a web application at [https://piximi.app/](https://piximi.app/), Piximi requires no installation and can be accessed by any modern web browser. Its client-only architecture preserves the security of researcher data by running all* computation locally.
Piximi is interoperable with existing tools and workflows by supporting import and export of common data and model formats. The intuitive interface and easy access to Piximi allows biological researchers to obtain insights into images within just a few minutes. Piximi aims to bring deep learning-powered image analysis to a broader community by eliminating barriers to entry.
\* except for the segmentations using Cellpose, which are sent to a remote server (with the permission of the user).
In this exercise, you will familiarize yourself with Piximi’s main functionalities of annotation, segmentation, classification, measurement and visualization and use it to analyze a sample image dataset from a translocation experiment. The goal of this experiment is to determine the **lowest effective dose** of Wortmannin required to induce GFP-tagged FOXO1A nuclear localization (Figure 1). You will segment the images using one of the deep learning models available in Piximi, check and curate the segmentation, then train an image classifier to classify the individual cells as having “nuclear-GFP”, “cytoplasmic-GFP” or “no-GFP”. Finally, you will make measurements and plot them to answer the biological question.
In this experiment, researchers imaged fixed U2OS osteosarcoma (bone cancer) cells expressing a FOXO1A-GFP fusion protein and stained with DAPI to label the nuclei. FOXO1 is a transcription factor that plays a key role in regulating gluconeogenesis and glycogenolysis through insulin signaling. FOXO1A dynamically shuttles between the cytoplasm and nucleus in response to various stimuli. Wortmannin, a PI3K inhibitor, can block nuclear export, resulting in the accumulation of FOXO1A in the nucleus.
#### **Materials necessary for this exercise**
The materials needed in this exercise can be downloaded from: [PiximiTutorial](./downloads/Piximi_Translocation_Tutorial_RGB.zip). The “Piximi Translocation Tutorial RGB.zip” file contains a Piximi project, including all the images, already labeled with the corresponding treatment (Wortmannin concentration or Control). Download this file but **do NOT unzip it**!
#### **Exercise instructions**

Read through the steps below and follow instructions where stated. Steps where you need to take action are marked 🔴 TO DO.

* Load the example project: Click “Open” - “Project” - “Project from Zip”, as shown in Figure 2, to upload the project file for this tutorial. You can optionally change the project name in the top left panel, for example to “Piximi Exercise”. As the project loads, you can watch the progress in the top left corner logo <img src="./img/tutorial_images/Piximi_logo.png" width="80">.

2. ##### **Check the loaded images and explore the Piximi interface**
These 17 images represent Wortmannin treatments at eight different concentrations (expressed in nM), as well as mock treatments (0nM). Note the DAPI channel (Nuclei) is shown in magenta and that the GFP channel (FOXOA1) is shown in green.
As you hover over an image, a color label is displayed in its left corner. These annotations come from metadata in the zipped file we just uploaded. In this tutorial, the different colored labels indicate the concentration of Wortmannin, while the numbers represent the number of images in each category.
Optionally, you can annotate the images manually by clicking “+ Category”, entering your label, selecting the relevant images, and then annotating the selected images by clicking **“Categorize”**. In this tutorial, we’ll skip this step since the labels were already uploaded at the beginning.

Please note that the previous steps were performed on your local machine, meaning your images are stored locally. However, Cellpose inference runs in the cloud, which means your images will be uploaded for processing. If your images are highly sensitive, please exercise caution when using cloud-based services.
4. ##### **Visualize segmentation result and fix the segmentation errors**
🔴 TO DO
* Click on the **CELLPOSE_CELLS** tab to check the individual cells that have been segmented. Then click on the “IMAGE” tab and “Annotate” to check the segmentation on the whole image.
* Optionally, here you can manually refine the segmentation using the annotator tools. The Piximi annotator provides several options to **add**, **subtract**, or **intersect** annotations. Additionally, the **selection tool** allows you to **resize** or **delete** specific annotations. To begin editing, select specific or all images by clicking the checkbox at the top.
* Optionally, you can adjust channels: Although there are two channels in this experiment, the nuclei signal is duplicated in both the red and blue channels. This design is intended to be **color-blind friendly** and to produce a **magenta color** for nuclei. The **green channel** carries the cytoplasmic GFP signal.
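
One way to build such a composite can be sketched outside Piximi. Below is a minimal NumPy illustration with made-up 4×4 "images" (array sizes and intensity values are purely illustrative, not tutorial data): copying the nuclei channel into both red and blue makes nuclei render magenta, while the GFP channel stays green.

```python
import numpy as np

# Made-up single-channel "images", intensities scaled to [0, 1].
h, w = 4, 4
dapi = np.zeros((h, w)); dapi[1, 1] = 1.0   # one bright "nucleus" pixel
gfp = np.zeros((h, w));  gfp[2, 2] = 0.8    # one bright "GFP" pixel

rgb = np.stack([dapi,   # red   <- nuclei
                gfp,    # green <- GFP signal
                dapi],  # blue  <- nuclei again, so red + blue = magenta
               axis=-1)

print(rgb[1, 1])  # nucleus pixel: red and blue on -> magenta
print(rgb[2, 2])  # GFP pixel: only green on -> green
```
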

Reason for doing this: We want to classify the 'CELLPOSE_CELLS' based on the GFP distribution.

🔴 TO DO
* Go to the **CELLPOSE_CELLS** tab that displays the segmented objects (arrow 1, figure 6)
* Click on the **Classification** tab on the left panel (arrow 2, figure 6).
* Create three new categories by clicking **“+ Category”**: “Cytoplasmic_GFP”, “Nuclear_GFP”, and “No_GFP” (Arrow 3, Figure 6).
* Click on the images that match your criteria. You can select multiple cells by holding **Command (⌘)** on Mac or **Shift** on Linux. Aim to assign **~20–40 cells per category**. Once selected, click **“Categorize”** to assign the labels to the selected cells.

**Figure 6**: Classifying individual cells based on GFP presence and localization.
6. ##### **Train the Classifier model**
🔴 TO DO
* Click the “<img src="./img/tutorial_images/Fit_model.png" alt="Fit model icon" width="20px"> - Fit Model” icon to open the model hyperparameter settings. For today’s exercise, we’ll adjust a few parameters:
* Click on “Architecture Settings” and set the Model Architecture to **SimpleCNN**.
* Update the Input Dimensions to:
- Input rows: 48
- Input cols: 48
- Channels: 3 (since our images are in RGB format)
* Click on the “Dataset Setting” tab and set the Training Percentage to 0.75, which reserves 25% of the labeled data for validation.
* When you click **"Fit Classifier"** in Piximi, two training plots will appear “**Accuracy vs Epochs”** and **“Loss vs Epochs”**. Each plot shows curves for both **training** and **validation** data.

These plots help you understand how the model is learning and whether adjustments are needed.

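
A Training Percentage of 0.75 simply means three quarters of your labeled cells are used to fit the model, and the remaining quarter is held out to compute the validation curves. The split can be sketched in a few lines of Python (the function name and fixed seed are illustrative, not Piximi's actual implementation):

```python
import random

def train_val_split(items, training_percentage=0.75, seed=0):
    """Shuffle labeled items and split them into training and validation sets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible sketch
    n_train = int(len(items) * training_percentage)
    return items[:n_train], items[n_train:]

# 100 labeled cells -> 75 used for training, 25 held out for validation
cells = [f"cell_{i}" for i in range(100)]
train, val = train_val_split(cells, training_percentage=0.75)
print(len(train), len(val))  # 75 25
```

Holding data out this way is what lets the validation curves reveal overfitting: if training accuracy keeps rising while validation accuracy stalls, the model is memorizing rather than generalizing.
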
* Click **“Predict Model” (figure 8, arrow 1)** to apply the model we just trained. This step will generate predictions on the cells we did not annotate.
* You can review the predictions in the CELLPOSE_CELLS tab and delete any wrongly assigned categories.
* Optionally, you can continue using the labels to refine the ground truth and improve the classifier. This process is part of the **Human-in-the-loop classification**, where you iteratively correct and train the model based on human input.
* Click **“Evaluate Model” (figure 8, arrow 2)** to evaluate the model we just trained. The confusion matrix and evaluation metrics compare the predictions against the ground truth.
* Click **“Accept Prediction (Hold)”** to assign the predicted labels to all the objects.
8. ##### **Measurement**

Once you are satisfied with the classification, we will proceed to measure the objects.

🔴 TO DO
* Click “Measurement” in the top right corner.
* Click Tables (Arrow 1), select Image, and click “Confirm” (Arrow 2).
* Choose "MEASUREMENT" in the left panel; note that the measurement step may take some time to process.
* Click on 'Category' to include all categories in the measurement.
* Under 'Total', click on 'Channel 1' (Arrow 3) to select the measurement for GFP. You will see the measurement in the “DATA GRID” tab. Measurements are presented as either mean or median values, and the full dataset is available upon exporting the .csv file.
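
If you export the .csv file, the per-category summaries can be reproduced with a few lines of standard Python. This sketch assumes column names like `category` and `intensity-total-channel-1` (check the header of your actual export) and uses made-up rows purely for illustration:

```python
import csv
import statistics
from collections import defaultdict
from io import StringIO

# A few illustrative rows mimicking an exported measurements table.
# Column names and values are assumptions, not real tutorial data.
example_csv = """category,intensity-total-channel-1
Nuclear_GFP,1520.5
Nuclear_GFP,1610.0
Cytoplasmic_GFP,980.2
No_GFP,110.7
No_GFP,95.3
"""

# Group the per-cell total GFP intensities by classifier category.
per_category = defaultdict(list)
for row in csv.DictReader(StringIO(example_csv)):
    per_category[row["category"]].append(float(row["intensity-total-channel-1"]))

# Summarize each category with its mean intensity.
for category, values in sorted(per_category.items()):
    print(category, statistics.mean(values))
```

To run this on a real export, replace `StringIO(example_csv)` with `open("measurements.csv")` (the filename here is hypothetical).
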

After generating the measurements, you can plot them.
🔴 TO DO
* Click on 'PLOTS' (Figure 10, Arrow 1) to visualize the measurements.
* Set the plot type to 'Swarm' and choose a color theme based on your preference.
* Select 'Y-axis' as 'intensity-total-channel-1' and set 'SwarmGroup' to 'category'; this will generate a curve showing how GFP intensity varies across different categories (Figure 10, Arrow 2).
* Selecting 'Show Statistics' will display the mean, as well as the upper and lower confidence boundaries, on the plot.
* Optionally, you can experiment with different plot types and axes to see if the data reveals additional insights.
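
Conceptually, answering the biological question from the classification results comes down to a threshold rule: find the smallest dose at which most cells show nuclear GFP. A sketch with entirely made-up numbers (both the per-dose fractions and the 0.5 threshold are illustrative choices, not results from this dataset):

```python
# Illustrative only: invented fractions of cells classified "Nuclear_GFP"
# at each Wortmannin dose (in nM); 0 is the mock/control treatment.
fraction_nuclear = {
    0: 0.05,
    10: 0.08,
    30: 0.12,
    50: 0.55,
    100: 0.78,
    150: 0.81,
}

def lowest_effective_dose(fractions, threshold=0.5):
    """Smallest nonzero dose whose nuclear-GFP fraction reaches the threshold."""
    effective = [dose for dose, frac in fractions.items()
                 if dose > 0 and frac >= threshold]
    return min(effective) if effective else None

print(lowest_effective_dose(fraction_nuclear))  # 50 with these made-up numbers
```

In the real exercise you would read the equivalent answer directly off the swarm plot, by looking for the lowest dose whose cells cluster with the nuclear-GFP category.
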