Johan Edstedt · André Mateus · Alberto Jaenal
Maintainer: Johan Edstedt
Contact: [email protected]
🚧 Note: This repository is currently under active development. Some features and documentation may be incomplete. 🚧
In this initial code release we provide inference and training code for our model, as well as the synthetic trajectory dataset. Please refer to the checklist below for the current status of different components.
We provide a simple API for running inference, located in `colabsfm/api`; a code example is shown below:
```python
import pycolmap
from colabsfm.api import RefineRoITr

sfm_model_A = pycolmap.Reconstruction(...)
sfm_model_B = pycolmap.Reconstruction(...)

sfm_registrator = RefineRoITr(mode="se3")
registration_results = sfm_registrator.register_reconstructions(sfm_model_A, sfm_model_B)
print(registration_results)
```
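The structure of the returned results is not documented here, but as a rough sketch, assuming (hypothetically) that `registration_results` exposes a 4x4 homogeneous transform under a `"transformation"` key mapping model B into model A's frame, model B's sparse points could be brought into model A's frame like this:

```python
import numpy as np

# Hypothetical result layout: a 4x4 homogeneous transform taking model B's
# coordinates into model A's frame. Inspect `registration_results` for the
# actual field names in your version.
T_A_from_B = np.asarray(registration_results["transformation"])

# Collect model B's sparse 3D points and map them into model A's frame.
xyz_B = np.array([p.xyz for p in sfm_model_B.points3D.values()])
xyz_B_h = np.hstack([xyz_B, np.ones((len(xyz_B), 1))])
xyz_B_in_A = (T_A_from_B @ xyz_B_h.T).T[:, :3]
print(f"Mapped {len(xyz_B_in_A)} points from model B into model A's frame")
```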
Current status:
- Trained models: Pre-trained weights available.
- Training code: Code for training the model.
- Inference code: API and inference scripts.
- Datasets: Release the synthetic trajectory dataset.
- Result reproduction: Verify that the released code reproduces the paper results.
- Run eval on MegaDepth: Verify that the results of the release reproduce the paper results.
- Train from scratch: Verify that a model trained with this code performs on par with the released weights.
- Documentation: Complete setup and usage documentation.
Through Conda (click to expand)
- Create a new conda env:
```bash
conda create -n colabsfm python=3.10
conda activate colabsfm
```
- pip install the requirements in development mode:
```bash
uv pip install -e .
```
- Install a cuda-toolkit that matches the one used by PyTorch. You can try something like:
```bash
mamba install cuda
```
(should work for the latest PyTorch). Alternatively:
```bash
mamba install -c "nvidia/label/cuda-12.2.0" cuda
```
- We need to build the pointops library. This can be done by:
```bash
cd third_party/pointops
uv pip install .
```
Note: installing these custom CUDA kernels is quite finicky; editable mode hangs in the current setup.
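After the install, a quick sanity check can save time. The snippet below is a sketch: it assumes the compiled kernels end up importable as `pointops` (the actual import name may differ in your build):

```python
# Quick environment sanity check (a sketch; the `pointops` import name is an
# assumption based on the third_party/pointops package and may differ).
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

try:
    import pointops  # assumed name of the compiled extension
    print("pointops imported successfully")
except ImportError as err:
    print("pointops could not be imported; rebuild the extension:", err)
```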
Through venv (click to expand)
- Create a new venv:
```bash
python3 -m venv colabsfm
. colabsfm/bin/activate
```
- pip install the requirements:
```bash
pip install -e .
```
- Install cuda-toolkit. You can try something like:
```bash
pip install cuda-toolkit
```
- We need to build the pointops library. This can be done by:
```bash
cd third_party/pointops
uv pip install .
```
Note: installing these custom CUDA kernels is quite finicky; editable mode hangs in the current setup.
Note: to install pointops, make sure you have python3-dev installed:
```bash
sudo apt install python3-dev
```
Through Docker (click to expand)
- Build the docker image:
```bash
docker build -t colabsfm .
```
- Run the container with NVIDIA GPU support:
```bash
# For Docker with NVIDIA Container Toolkit (recommended)
docker run --gpus all -it --rm \
-v $(pwd)/workspace:/app/workspace \
-v $(pwd)/data:/app/data \
colabsfm
# Alternative: using nvidia-docker (if available)
nvidia-docker run -it --rm \
-v $(pwd)/workspace:/app/workspace \
-v $(pwd)/data:/app/data \
colabsfm
```
- Optional configurations:
```bash
# Run with specific GPU
docker run --gpus '"device=0"' -it --rm \
-v $(pwd)/workspace:/app/workspace \
-v $(pwd)/data:/app/data \
colabsfm
# Run with Jupyter port mapping
docker run --gpus all -it --rm \
-p 8888:8888 \
-v $(pwd)/workspace:/app/workspace \
-v $(pwd)/data:/app/data \
colabsfm
```
Note: Make sure you have the NVIDIA Container Toolkit installed for GPU support.
Most code is in the `colabsfm` folder. We use RoITr as the backbone and have therefore fully integrated it into the codebase in the `colabsfm/roitr` folder.
The experiments used to produce the tables etc. can be found in `experiments`.
Finetuning on colabsfm (click to expand)
Run:
```bash
python experiments/final_experiments/y23w47_colabsfm_combined_lo.py
```
Evaluating (click to expand)
The evaluation is still quite manual. Running the experiments on colabsfm should just be a matter of running, e.g.:
```bash
# colabsfmer
python experiments/refineroitr_eval_finetuned.py colabsfm/configs/val/colabsfm.yaml
```
At the moment the code is only prepared to train from scratch on our proposed dataset.
To train a ColabSfM model:
- download the released data
- run the script with the train configuration:
```bash
python experiments/refineroitr_eval.py colabsfm/configs/train/colabsfm.yaml --data_root=<path to the data folder>
```
An initial script for running evaluation on our proposed MegaDepth-based dataset can be found in `experiments/refineroitr_eval.py`.
Note: The script can also be used for training if given the proper configuration file.
To run the evaluation on our proposed dataset:
- download the released data and weights
- by default the script will try to write to the `workspace` folder; you may need to create it
- run the evaluation script with the validation configuration:
```bash
python experiments/refineroitr_eval.py colabsfm/configs/val/megadepth.yaml --data_root=<path to the data folder> --pretrain_weights=<path to the model weights> --colabsfm_mode=<se3/sim3>
```
Note: the weights used need to match the chosen mode, i.e., se3 or sim3.
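If you want to evaluate both modes back to back, a small driver along these lines may help (a sketch; the weight and data paths are placeholders that must point to the matching checkpoints and data folder):

```python
# Run the evaluation for both modes, pairing each mode with its own weights.
# The placeholders must be replaced with real paths before running.
import subprocess

weights_per_mode = {
    "se3": "<path to the se3 weights>",
    "sim3": "<path to the sim3 weights>",
}

for mode, weights in weights_per_mode.items():
    subprocess.run(
        [
            "python", "experiments/refineroitr_eval.py",
            "colabsfm/configs/val/megadepth.yaml",
            "--data_root=<path to the data folder>",
            f"--pretrain_weights={weights}",
            f"--colabsfm_mode={mode}",
        ],
        check=True,
    )
```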
A hardcoded test of MegaDepth scene 0015 can be found in `tests/test.py`.
This assumes that `data/megadepth/Undistorted_SfM/0015/sparse` contains the reconstruction.
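Before running it, a small pre-flight check (a sketch using the path above) can confirm that the reconstruction is in place and loads with pycolmap:

```python
# Pre-flight check for tests/test.py: confirm the MegaDepth 0015 reconstruction
# exists and can be loaded with pycolmap.
from pathlib import Path

import pycolmap

sparse_dir = Path("data/megadepth/Undistorted_SfM/0015/sparse")
assert sparse_dir.exists(), f"Missing reconstruction directory: {sparse_dir}"

rec = pycolmap.Reconstruction(str(sparse_dir))
print(f"Loaded {rec.num_images()} images and {rec.num_points3D()} 3D points")
```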
