Commit ba85db1

Updated tutorial (#54)

* correct tutorial figures
* edit figure insertion and create quick Spanish translation
* add Spanish tutorial to toc file
* Update _toc.yml
* Update _toc.yml

1 parent eb00635 commit ba85db1

File tree

5 files changed: +299, -30 lines


piximi-documentation/_toc.yml

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ parts:
  - caption: Tutorials
    chapters:
      - file: translocation_tutorial
+     - file: translocation_tutorial_ES
      - file: classify-example-eukaryotic-image
      - file: classify-example-eukaryotic-object
  - caption: How-to Guides
Two updated tutorial figure image files (376 KB and 374 KB); binary content not shown.

piximi-documentation/translocation_tutorial.md

Lines changed: 69 additions & 30 deletions
@@ -1,6 +1,6 @@
# Piximi: Installation-free segmentation and classification in the browser

-## A computer exercise using webtool \- Piximi
+## A computer exercise using the web tool Piximi

Beth Cimini, Le Liu, Esteban Miglietta, Paula Llanos, Nodar Gogoberidze

@@ -12,20 +12,18 @@ Broad Institute of MIT and Harvard, Cambridge, MA.

Piximi is a modern, no-programming image analysis tool leveraging deep learning. Implemented as a web application at [https://piximi.app/](https://piximi.app/), Piximi requires no installation and can be accessed by any modern web browser. Its client-only architecture preserves the security of researcher data by running all\* computation locally.

-Piximi is interoperable with existing tools and workflows by supporting import and export of common data and model formats. The intuitive researcher interface and easy access to Piximi allows biological researchers to obtain insights into images within just a few minutes. Piximi aims to bring deep learning-powered image analysis to a broader community by eliminating barriers to entry.
+Piximi is interoperable with existing tools and workflows by supporting import and export of common data and model formats. The intuitive interface and easy access to Piximi allow biological researchers to obtain insights into images within just a few minutes. Piximi aims to bring deep learning-powered image analysis to a broader community by eliminating barriers to entry.

\* except for the segmentations using Cellpose, which are sent to a remote server (with the permission of the user).

Core functionalities: **Annotator, Segmentor, Classifier, Measurements.**

#### **Goal of the exercise**

-In this exercise, you will familiarize yourself with Piximi’s main functionalities of annotation, segmentation, classification, measurement and visualization and use it to analyze a sample image dataset from a translocation experiment. The goal of this experiment is to determine the **lowest effective dose** of Wortmannin required to induce GFP-tagged FOXO1A nuclear localization (Figure 1\)**.** You will segment the images using one of the deep learning models available in Piximi, check and curate the segmentation, then train an image classifier to classify the individual cells as having “nuclear-GFP”, “cytoplasmic-GFP” or “no-GFP”. Finally, you will make measurements and plot them to answer the biological question.
+In this exercise, you will familiarize yourself with Piximi’s main functionalities of annotation, segmentation, classification, measurement and visualization, and use them to analyze a sample image dataset from a translocation experiment. The goal of this experiment is to determine the **lowest effective dose** of Wortmannin required to induce GFP-tagged FOXO1A nuclear localization (Figure 1). You will segment the images using one of the deep learning models available in Piximi, check and curate the segmentation, then train an image classifier to classify the individual cells as having “nuclear-GFP”, “cytoplasmic-GFP” or “no-GFP”. Finally, you will make measurements and plot them to answer the biological question.

#### **Context of the sample experiment**

-<img src="./img/tutorial_images/Figure1.png" style="float: right;" alt="Figure 1" width="200px">
-
In this experiment, researchers imaged fixed U2OS osteosarcoma (bone cancer) cells expressing a FOXO1A-GFP fusion protein and stained with DAPI to label the nuclei. FOXO1 is a transcription factor that plays a key role in regulating gluconeogenesis and glycogenolysis through insulin signaling. FOXO1A dynamically shuttles between the cytoplasm and nucleus in response to various stimuli. Wortmannin, a PI3K inhibitor, can block nuclear export, resulting in the accumulation of FOXO1A in the nucleus.

@@ -40,7 +38,7 @@ In this experiment, researchers imaged fixed U2OS osteosarcoma (bone cancer) cel

#### **Materials necessary for this exercise**

-The materials needed in this exercise can be downloaded from: [PiximiTutorial](./downloads/Piximi_Translocation_Tutorial_RGB.zip). The “Piximi Translocation Tutorial RGB.zip” file contains a Piximi project, including all the images, already labeled with the corresponding treatment (Wortmannin concentration or Control). Download this file but **do NOT unzip it**\!
+The materials needed for this exercise can be downloaded from: [PiximiTutorial](./downloads/Piximi_Translocation_Tutorial_RGB.zip). The “Piximi Translocation Tutorial RGB.zip” file contains a Piximi project, including all the images, already labeled with the corresponding treatment (Wortmannin concentration or Control). Download this file but **do NOT unzip it**!

#### **Exercise instructions**

@@ -54,20 +52,29 @@ Read through the steps below and follow instructions where stated. Steps where y

* Load the example project: click “Open” \- “Project” \- “Project from Zip”, as shown in Figure 2, to upload the project file for this tutorial. You can optionally change the project name in the top left panel, for example to “Piximi Exercise”. As it loads, you can see the progress in the top left corner logo <img src="./img/tutorial_images/Piximi_logo.png" width="80">.

-<img src="./img/tutorial_images/Figure2.png" alt="Figure 2" width="600px">
+```{figure} ./img/tutorial_images/Figure2.png
+:width: 600
+:align: center
+
+**Figure 2**: Loading a project file.
+```

2. ##### **Check the loaded images and explore the Piximi interface**

-These 17 images represent Wortmannin treatments at eight different concentrations (expressed in nM), as well as mock treatments (0uM). Note the DAPI channel (Nuclei) is shown in magenta and that the GFP channel (FOXOA1) is shown in green.
+These 17 images represent Wortmannin treatments at eight different concentrations (expressed in nM), as well as mock treatments (0 nM). Note that the DAPI channel (nuclei) is shown in magenta and that the GFP channel (FOXO1A) is shown in green.

As you hover over an image, color labels are displayed in the left corner of the image. These annotations come from the metadata in the zipped file we just uploaded. In this tutorial, the different colored labels indicate the concentration of Wortmannin, while the numbers represent the number of images in each category.

Optionally, you can annotate the images manually by clicking “+ Category”, entering your label, selecting the relevant images, and then clicking **“Categorize”** to label them. In this tutorial, we’ll skip this step since the labels were already uploaded at the beginning.

-<img src="./img/tutorial_images/Figure3.png" alt="Figure 3" width="600px">
+```{figure} ./img/tutorial_images/Figure3.png
+:width: 600
+:align: center

+**Figure 3**: Exploring the images and labels.
+```

-3. ##### **Segment Cells \- find out the cells from the background**
+3. ##### **Segment Cells - separate the cells from the background**

🔴 TO DO

@@ -79,18 +86,27 @@ Optionally, you can annotate the images manually by clicking “+ Category”, e
* It will take a few minutes to finish the segmentation.


-<img src="./img/tutorial_images/Figure4.png" alt="Figure 1" width="600px">
+```{figure} ./img/tutorial_images/Figure4.png
+:width: 600
+:align: center

+**Figure 4**: Loading a segmentation model.
+```

Please note that the previous steps were performed on your local machine, meaning your images are stored locally. However, Cellpose inference runs in the cloud, which means your images will be uploaded for processing. If your images are highly sensitive, please exercise caution when using cloud-based services.

4. ##### **Visualize segmentation result and fix the segmentation errors**

🔴 TO DO

-* Click on the **CELLPOSE\_CELLS** tab to check the individual cells that have been segmented Click on the “IMAGE” tab and then “Annotate”, you can check the segmentation on the whole image.
+* Click on the **CELLPOSE_CELLS** tab to check the individual cells that have been segmented. Click on the “IMAGE” tab and then “Annotate” to check the segmentation on the whole image.
+
+```{figure} ./img/tutorial_images/Figure5.png
+:width: 600
+:align: center

-<img src="./img/tutorial_images/Figure5.png" alt="Figure 5" width="600px">
+**Figure 5**: Piximi's annotator tool.
+```

* Optionally, here you can manually refine the segmentation using the annotator tools. The Piximi annotator provides several options to **add**, **subtract**, or **intersect** annotations. Additionally, the **selection tool** allows you to **resize** or **delete** specific annotations. To begin editing, select specific or all images by clicking the checkbox at the top.
* Optionally, you can adjust channels: Although there are two channels in this experiment, the nuclei signal is duplicated in both the red and blue channels. This design is intended to be **color-blind friendly** and to produce a **magenta color** for nuclei. The **green channel** also includes cytoplasmic signals.
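
The cloud caveat above applies to the Cellpose option inside Piximi, which sends images to a remote server. If your images are too sensitive for that, Cellpose can also be run locally through its Python API. The sketch below is illustrative only and is not part of the Piximi workflow; it assumes the `cellpose` package (2.x API) is installed, and the file name and channel indices are assumptions chosen to match the RGB tutorial images (GFP in green, nuclei visible in blue).

```python
# Illustrative local Cellpose run (not part of the Piximi tutorial workflow).
# Assumes `pip install cellpose` (2.x API); file name and channels are assumptions.
from cellpose import io, models

img = io.imread("wortmannin_example.png")  # hypothetical RGB tutorial image

# channels=[cytoplasm, nucleus] with 1=red, 2=green, 3=blue:
# GFP (cytoplasm) is in the green channel; the DAPI signal also appears in blue.
model = models.Cellpose(gpu=False, model_type="cyto")
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[2, 3])

print(f"{int(masks.max())} cells segmented")  # masks: 0 = background, 1..N = cells
```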
@@ -105,28 +121,37 @@ Reason for doing this: We want to classify the 'CELLPOSE\_CELLS' based on GFP di

🔴 TO DO

-* Go to the **CELLPOSE\_CELLS** tab that displays the segmented objects (arrow 1, figure 6\)
+* Go to the **CELLPOSE_CELLS** tab that displays the segmented objects (arrow 1, Figure 6).
* Click on the **Classification** tab on the left panel (arrow 2, Figure 6).
-* Create new categories by clicking **“+ Category”**. Adding “Cytoplasmatic\_GFP”, “Nuclear \_GFP”, “No GFP” three categories (Arrow 3, Figure 6).
+* Create new categories by clicking **“+ Category”**, adding the three categories “Cytoplasmic_GFP”, “Nuclear_GFP” and “No_GFP” (arrow 3, Figure 6).
* Click on the images that match your criteria. You can select multiple cells by holding **Command (⌘)** on Mac or **Shift** on Linux. Aim to assign **\~20–40 cells per category**. Once selected, click **“Categorize”** to assign the labels to the selected cells.

-<img src="./img/tutorial_images/Figure6.png" alt="Figure 6" width="600px">
+```{figure} ./img/tutorial_images/Figure6.png
+:width: 600
+:align: center
+
+**Figure 6**: Classifying individual cells based on GFP presence and localization.
+```

6. ##### **Train the Classifier model**

🔴 TO DO

-* Click the ”<img src="./img/tutorial_images/Fit_model.png" alt="Fit model icon" width="20px"> - fit model” icon to open the model hyperparameter settings. For today’s exercise, we’ll adjust a few parameters:
-* Click on “Architecture Settings” and set the Model Architecture to SimpleCNN.
+* Click the “<img src="./img/tutorial_images/Fit_model.png" alt="Fit model icon" width="20px"> - Fit Model” icon to open the model hyperparameter settings. For today’s exercise, we’ll adjust a few parameters:
+* Click on “Architecture Settings” and set the Model Architecture to **SimpleCNN**.
* Update the Input Dimensions to:
  - Input rows: 48
  - Input cols: 48
  - Channels: 3 (since our images are in RGB format)

(You can also use other sizes, such as 64 or 128.)

-<img src="./img/tutorial_images/Figure7.png" alt="Figure 7" width="600px">
+```{figure} ./img/tutorial_images/Figure7.png
+:width: 600
+:align: center

+**Figure 7**: Classifier model setup.
+```

* Click on the “Dataset Setting” tab and set the Training Percentage to 0.75, which reserves 25% of the labeled data for validation.
* When you click **“Fit Classifier”** in Piximi, two training plots will appear: **“Accuracy vs Epochs”** and **“Loss vs Epochs”**. Each plot shows curves for both **training** and **validation** data.
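
For intuition about the settings above, the sketch below shows a rough Keras analogue of a small CNN classifier with a 48×48×3 input, three output categories, and a 75/25 train/validation split. It is not Piximi's actual SimpleCNN (Piximi trains its models directly in the browser); the layer sizes and training arguments here are assumptions for illustration only.

```python
# Rough, illustrative Keras analogue of the classifier settings above.
# Not Piximi's actual SimpleCNN; layer sizes here are assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 3)),        # Input rows/cols: 48, Channels: 3 (RGB)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # Cytoplasmic_GFP / Nuclear_GFP / No_GFP
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A Training Percentage of 0.75 corresponds to holding out 25% for validation:
# history = model.fit(x_train, y_train, validation_split=0.25, epochs=10)
```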
@@ -139,12 +164,17 @@ These plots help you understand how the model is learning and whether adjustment

🔴 TO DO

-<img src="./img/tutorial_images/Figure8.png" style="float: right;" alt="Figure 8" width="300px">
+```{figure} ./img/tutorial_images/Figure8.png
+:width: 400
+:align: center
+
+**Figure 8**: Classifier training and validation.
+```

-* Click **“Predict Model” (figure 8, arrow 1\)** to apply the model we just trained. This step will generate predictions on the cells we did not annotate.
-* You can review the predictions in the CELLPOSE\_CELLS tab and delete any wrongly assigned categories.
+* Click **“Predict Model” (Figure 8, arrow 1)** to apply the model we just trained. This step will generate predictions for the cells we did not annotate.
+* You can review the predictions in the CELLPOSE_CELLS tab and delete any wrongly assigned categories.
* Optionally, you can continue using the labels to refine the ground truth and improve the classifier. This process is part of **human-in-the-loop classification**, where you iteratively correct and retrain the model based on human input.
-* Click **“Evaluate Model” (figure 8, arrow 2\)** to evaluate the model we just trained. The confusion metrics and evaluation metrics can be compared to the ground truth.
+* Click **“Evaluate Model” (Figure 8, arrow 2)** to evaluate the model we just trained. The confusion matrix and evaluation metrics can be compared to the ground truth.
* Click “Accept Prediction (Hold)” to assign the predicted labels to all the objects.
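
For reference, "comparing to the ground truth" is the standard confusion-matrix computation over annotated versus predicted categories. A generic scikit-learn sketch is shown below; the labels are made-up placeholders, not values exported from Piximi.

```python
# Generic confusion-matrix sketch; the labels below are made-up placeholders,
# not values exported from Piximi.
from sklearn.metrics import confusion_matrix, classification_report

categories = ["Cytoplasmic_GFP", "Nuclear_GFP", "No_GFP"]
y_true = ["Nuclear_GFP", "No_GFP", "Cytoplasmic_GFP", "Nuclear_GFP"]  # annotated ground truth
y_pred = ["Nuclear_GFP", "No_GFP", "Nuclear_GFP", "Nuclear_GFP"]      # classifier predictions

print(confusion_matrix(y_true, y_pred, labels=categories))
print(classification_report(y_true, y_pred, labels=categories, zero_division=0))
```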

8. ##### **Measurement**
@@ -154,27 +184,36 @@ Once you are satisfied with the classification, we will proceed to measure the o
🔴 TO DO

* Click “Measurement” in the top right corner.
-* Click Tables (Arrow 1\) and select Image and click “Confirm” (Arrow 2).
+* Click Tables (Arrow 1), select Image, and click “Confirm” (Arrow 2).
* Choose "MEASUREMENT" in the left panel; note that the measurement step may take some time to process.
* Click on 'Category' to include all categories in the measurement.
-* "Under 'Total', click on 'Channel 1' (Arrow 3\) to select the measurement for GFP. You will see the measurement in the “DATA GRID” tab. Measurements are presented as either mean or median values, and the full dataset is available upon exporting the .csv file.
+* Under 'Total', click on 'Channel 1' (Arrow 3) to select the measurement for GFP. You will see the measurement in the “DATA GRID” tab. Measurements are presented as either mean or median values, and the full dataset is available upon exporting the .csv file.

-<img src="./img/tutorial_images/Figure9.png" alt="Figure 9" width="600px">
+```{figure} ./img/tutorial_images/Figure9.png
+:width: 600
+:align: center

+**Figure 9**: Adding measurements.
+```

9. ##### **Visualization**

After generating the measurements, you can plot them.

🔴 TO DO

-* Click on 'PLOTS' (Arrow 1\) to visualize the measurements.
+* Click on 'PLOTS' (Figure 10, Arrow 1) to visualize the measurements.
* Set the plot type to 'Swarm' and choose a color theme based on your preference.
-* Select 'Y-axis' as 'intensity-total-channel-1' and set 'SwarmGroup' to 'category'; this will generate a curve showing how GFP intensity varies across different categories (Arrow 2).
-* Selecting 'Show Statistics' will display the mean, as well as the upper and lower quality bounds, on the plot.
+* Select 'Y-axis' as 'intensity-total-channel-1' and set 'SwarmGroup' to 'category'; this will generate a plot showing how GFP intensity varies across different categories (Figure 10, Arrow 2).
+* Selecting 'Show Statistics' will display the mean, as well as the upper and lower confidence boundaries, on the plot.
* Optionally, you can experiment with different plot types and axes to see if the data reveals additional insights.

-<img src="./img/tutorial_images/Figure10.png" alt="Figure 10" width="600px">
+```{figure} ./img/tutorial_images/Figure10.png
+:width: 600
+:align: center
+
+**Figure 10**: Plotting results.
+```

10. ##### **Export results and save the project**
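
Once the measurements are exported as a .csv (step 8), the per-cell data can also be analyzed outside Piximi. Below is a minimal sketch with pandas and seaborn; the file name and column names (`category`, `intensity-total-channel-1`) are assumptions that mirror the plot settings described above.

```python
# Illustrative analysis of the exported Piximi measurement table.
# File and column names are assumptions based on the plot settings above.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("piximi_measurements.csv")  # hypothetical export file name

# Reproduce the swarm plot: total channel-1 (GFP) intensity per cell, grouped by category.
sns.swarmplot(data=df, x="category", y="intensity-total-channel-1")
plt.ylabel("Total GFP intensity (channel 1)")
plt.tight_layout()
plt.show()

# Per-category summary (mean and median), similar to the DATA GRID view.
print(df.groupby("category")["intensity-total-channel-1"].agg(["mean", "median"]))
```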
