EricssonResearch/ColabSfM


ColabSfM: Collaborative Structure-from-Motion by Point Cloud Registration

Johan Edstedt · André Mateus · Alberto Jaenal

Figure: The ColabSfM paradigm.

Maintainer: Johan Edstedt

Contact: [email protected]

🚧 Note: This repository is currently under active development. Some features and documentation may be incomplete. 🚧

In this initial code release we provide inference and training code for our model, as well as the synthetic trajectory dataset. Please refer to the checklist below for the current status of different components.

API

We provide a simple API for running inference with the model in the api module; code example below:

import pycolmap
from colabsfm.api import RefineRoITr

sfm_model_A = pycolmap.Reconstruction(...)
sfm_model_B = pycolmap.Reconstruction(...)

sfm_registrator = RefineRoITr(mode = "se3")
registration_results = sfm_registrator.register_reconstructions(sfm_model_A, sfm_model_B)
print(registration_results)

Checklist

  • Trained models: Pre-trained weights available.
  • Training code: Code for training the model.
  • Inference code: API and inference scripts.
  • Datasets: Release of the synthetic trajectory dataset.
  • Result reproduction: Verify that results reproduce the paper.
    • Run eval on MegaDepth: verify that the released weights reproduce the paper results.
    • Train from scratch: verify that a model trained with this code performs on par with the released weights.
  • Documentation: Complete setup and usage documentation.

Install

Through Conda (click to expand)
  1. Create and activate a new conda env:
conda create -n colabsfm python=3.10
conda activate colabsfm
  2. Install the requirements in development mode:
uv pip install -e .
  3. Install a cuda-toolkit that matches the one used by PyTorch. You can try something like:
mamba install cuda

(should work for the latest PyTorch), or alternatively:

mamba install -c "nvidia/label/cuda-12.2.0" cuda
  4. Build the pointops library:
cd third_party/pointops
uv pip install .

Note: installing these custom CUDA kernels is quite finicky; editable mode hangs in the current setup.

Through venv (click to expand)
  1. Create and activate a new venv:
python3 -m venv colabsfm
. colabsfm/bin/activate
  2. Install the requirements:
pip install -e .
  3. Install cuda-toolkit. You can try something like:
pip install cuda-toolkit
  4. Build the pointops library:
cd third_party/pointops
uv pip install .

Note: installing these custom CUDA kernels is quite finicky; editable mode hangs in the current setup.

Note: to build pointops, make sure you have python3-dev installed:

sudo apt install python3-dev
Through Docker (click to expand)
  1. Build the docker image:
docker build -t colabsfm .
  2. Run the container with NVIDIA GPU support:
# For Docker with NVIDIA Container Toolkit (recommended)
docker run --gpus all -it --rm \
  -v $(pwd)/workspace:/app/workspace \
  -v $(pwd)/data:/app/data \
  colabsfm

# Alternative: using nvidia-docker (if available)
nvidia-docker run -it --rm \
  -v $(pwd)/workspace:/app/workspace \
  -v $(pwd)/data:/app/data \
  colabsfm
  3. Optional configurations:
# Run with specific GPU
docker run --gpus '"device=0"' -it --rm \
  -v $(pwd)/workspace:/app/workspace \
  -v $(pwd)/data:/app/data \
  colabsfm

# Run with Jupyter port mapping
docker run --gpus all -it --rm \
  -p 8888:8888 \
  -v $(pwd)/workspace:/app/workspace \
  -v $(pwd)/data:/app/data \
  colabsfm

Note: Make sure you have NVIDIA Container Toolkit installed for GPU support.

Code Structure

Most code is in the colabsfm folder. We use RoITr as the backbone, and have therefore fully integrated it into the codebase in the colabsfm/roitr folder.

The experiments used to produce the tables, etc., can be found in the experiments folder.

Finetuning on colabsfm (click to expand)

Run

python experiments/final_experiments/y23w47_colabsfm_combined_lo.py
Evaluating (click to expand)

The evaluation is still quite manual. Running the evaluation on ColabSfM should just be a matter of running, e.g.

# colabsfmer
python experiments/refineroitr_eval_finetuned.py colabsfm/configs/val/colabsfm.yaml

Training

At the moment, the code only supports training from scratch on our proposed dataset.

To train a ColabSfM model:

  1. Download the released data.

  2. Run the script with the training configuration:

python experiments/refineroitr_eval.py colabsfm/configs/train/colabsfm.yaml --data_root=<path to the data folder>

Running evaluation

An initial script for running evaluation on our proposed MegaDepth-based dataset can be found in experiments/refineroitr_eval.py

Note: The script can also be used for training if given the proper configuration file.

To run the evaluation on our proposed dataset:

  1. Download the released data and weights.

  2. By default the script will try to write to the workspace folder; you may need to create it.

  3. Run the evaluation script with the validation configuration:

  python experiments/refineroitr_eval.py colabsfm/configs/val/megadepth.yaml --data_root=<path to the data folder> --pretrain_weights=<path to the model weights> --colabsfm_mode=<se3/sim3>

Note: the weights need to match the chosen mode, i.e., se3 or sim3.
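For intuition (this is generic geometry, not ColabSfM code): an SE(3) transform applies a rotation and a translation, while a Sim(3) transform additionally applies a global scale, which matters when the two reconstructions live at different, arbitrary SfM scales. A minimal numpy sketch:

```python
import numpy as np

def apply_se3(points, R, t):
    """Apply a rigid SE(3) transform to Nx3 points: x' = R @ x + t."""
    return points @ R.T + t

def apply_sim3(points, s, R, t):
    """Apply a similarity Sim(3) transform to Nx3 points: x' = s * R @ x + t."""
    return s * (points @ R.T) + t

# Example: a 90-degree rotation about z, a unit translation along x, scale 2.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
points = np.array([[1.0, 0.0, 0.0]])

print(apply_se3(points, R, t))        # [[1. 1. 0.]]
print(apply_sim3(points, 2.0, R, t))  # [[1. 2. 0.]]
```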

Tests

A hardcoded test of MegaDepth scene 0015 can be found in tests/test.py. This assumes that data/megadepth/Undistorted_SfM/0015/sparse contains the reconstruction.
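If you want to check that the expected data layout is in place before running the test, a small sketch (the path is taken from the note above; the helper name is ours, not part of the codebase):

```python
from pathlib import Path

def has_test_scene(root: str = ".") -> bool:
    """Return True if the MegaDepth scene 0015 reconstruction
    expected by tests/test.py is present under root."""
    return (Path(root) / "data/megadepth/Undistorted_SfM/0015/sparse").is_dir()

if not has_test_scene():
    print("Scene 0015 reconstruction not found; download it before running tests/test.py")
```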
