
STream3R: Scalable Sequential 3D Reconstruction with Causal Transformer



STream3R presents a novel approach to 3D reconstruction that reformulates pointmap prediction as a decoder-only Transformer problem. It introduces a streaming framework that processes image sequences efficiently using causal attention, inspired by advances in modern language modeling. By learning geometric priors from large-scale 3D datasets, STream3R generalizes well to diverse and challenging scenarios, including dynamic scenes where traditional methods often fail.

STream3R reformulates dense 3D reconstruction into a sequential registration task with causal attention.
⭐ Now supports FlashAttention, KV Cache, Causal Attention, Sliding Window Attention, and Full Attention!

:open_book: See more visual results on our project page.


Paper: STream3R: Scalable Sequential 3D Reconstruction with Causal Transformer
Project Page: https://nirvanalan.github.io/projects/stream3r
Code: https://github.com/NIRVANALAN/STream3R

Abstract

We present STream3R, a novel approach to 3D reconstruction that reformulates pointmap prediction as a decoder-only Transformer problem. Existing state-of-the-art methods for multi-view reconstruction either depend on expensive global optimization or rely on simplistic memory mechanisms that scale poorly with sequence length. In contrast, STream3R introduces a streaming framework that processes image sequences efficiently using causal attention, inspired by advances in modern language modeling. By learning geometric priors from large-scale 3D datasets, STream3R generalizes well to diverse and challenging scenarios, including dynamic scenes where traditional methods often fail. Extensive experiments show that our method consistently outperforms prior work across both static and dynamic scene benchmarks. Moreover, STream3R is inherently compatible with LLM-style training infrastructure, enabling efficient large-scale pretraining and fine-tuning for various downstream 3D tasks. Our results underscore the potential of causal Transformer models for online 3D perception, paving the way for real-time 3D understanding in streaming environments.
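For intuition, the sketch below illustrates the frame-level visibility patterns behind the causal, sliding-window, and full attention modes. This is a toy illustration of the masking idea only, not STream3R's actual attention code; the function name is ours, and the exact window semantics (current frame plus the previous `window - 1` frames) is our assumption.

```python
import torch

def frame_visibility_mask(num_frames: int, mode: str, window: int = 5) -> torch.Tensor:
    """Toy frame-level mask: entry (i, j) is True if frame j is visible to frame i."""
    i = torch.arange(num_frames).unsqueeze(1)  # query frame index
    j = torch.arange(num_frames).unsqueeze(0)  # key frame index
    if mode == "causal":
        return j <= i  # each frame attends to itself and all earlier frames
    if mode == "window":
        return (j <= i) & (j > i - window)  # only the most recent `window` frames
    if mode == "full":
        return torch.ones(num_frames, num_frames, dtype=torch.bool)  # bidirectional
    raise ValueError(f"unknown mode: {mode}")

print(frame_visibility_mask(6, "causal").int())
print(frame_visibility_mask(6, "window", window=3).int())
```

Causal masking is also what makes KV caching possible: the keys and values of past frames never change, so they can be cached and reused rather than recomputed, which the StreamSession described below exploits.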

Installation



1. Clone Repo
```bash
git clone https://github.com/NIRVANALAN/STream3R
cd STream3R
```


2. Create Conda Environment
```bash
conda create -n stream3r python=3.11 cmake=3.14.0 -y
conda activate stream3r
```

3. Install Python Dependencies

Important: Install Torch based on your CUDA version. For example, for *Torch 2.8.0 + CUDA 12.6*:

```bash
# Install Torch
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu126

# Install other dependencies
pip install -r requirements.txt

# Install STream3R as a package
pip install -e .
```


Inference

You can now try STream3R with the following code. The checkpoint will be downloaded automatically from Hugging Face.

You can set the inference mode to causal for causal attention, window for sliding window attention (with a default window size of 5), or full for bidirectional attention.

```python
import os
import torch
from stream3r.models.stream3r import STream3R
from stream3r.models.components.utils.load_fn import load_and_preprocess_images

device = "cuda" if torch.cuda.is_available() else "cpu"

model = STream3R.from_pretrained("yslan/STream3R").to(device)

example_dir = "examples/static_room"
image_names = [os.path.join(example_dir, file) for file in sorted(os.listdir(example_dir))]
images = load_and_preprocess_images(image_names).to(device)

with torch.no_grad():
    # Use one mode "causal", "window", or "full" in a single forward pass
    predictions = model(images, mode="causal")
```
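The other two modes use the same forward call on the same `images` tensor; only the mode string changes (the sliding window keeps its default size of 5):

```python
with torch.no_grad():
    predictions_window = model(images, mode="window")  # sliding window attention
    predictions_full = model(images, mode="full")      # full bidirectional attention
```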


We also support a KV cache version to enable streaming input using StreamSession. The StreamSession takes sequential inputs and processes them one by one, making it suitable for real-time or low-latency applications. This streaming 3D reconstruction pipeline can be applied in various scenarios such as real-time robotics, autonomous navigation, online 3D understanding, and SLAM. An example usage is shown below:

```python
import os
import torch
from stream3r.models.stream3r import STream3R
from stream3r.stream_session import StreamSession
from stream3r.models.components.utils.load_fn import load_and_preprocess_images

device = "cuda" if torch.cuda.is_available() else "cpu"

model = STream3R.from_pretrained("yslan/STream3R").to(device)

example_dir = "examples/static_room"
image_names = [os.path.join(example_dir, file) for file in sorted(os.listdir(example_dir))]
images = load_and_preprocess_images(image_names).to(device)

# StreamSession supports KV cache management for both "causal" and "window" modes.
session = StreamSession(model, mode="causal")

with torch.no_grad():
    # Process images one by one to simulate streaming inference
    for i in range(images.shape[0]):
        image = images[i : i + 1]
        predictions = session.forward_stream(image)

# Release the KV cache once the stream is finished.
session.clear()
```
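For a long-running feed you would typically not materialize all frames up front. The sketch below is a hypothetical variant of the loop above for a live source: `frame_source()` is a placeholder for your own capture and preprocessing code (its dummy tensors only mimic the shape of load_and_preprocess_images output at 518 × 384, which is an assumption), while forward_stream and clear are the same StreamSession calls as above. We pick "window" mode here because its memory stays flat over long streams (see the VRAM table below).

```python
import torch
from stream3r.models.stream3r import STream3R
from stream3r.stream_session import StreamSession

def frame_source(num_frames: int = 10):
    # Placeholder for a camera or video decoder. Real frames should be
    # preprocessed like load_and_preprocess_images output; we fake the
    # shape here with random (1, 3, H, W) tensors at 518 x 384.
    for _ in range(num_frames):
        yield torch.rand(1, 3, 384, 518)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = STream3R.from_pretrained("yslan/STream3R").to(device)

# "window" mode bounds the KV cache, keeping VRAM flat on long streams.
session = StreamSession(model, mode="window")

with torch.no_grad():
    for frame in frame_source():
        predictions = session.forward_stream(frame.to(device))
        # ... consume `predictions` here (e.g., feed a SLAM or navigation stack)

session.clear()  # release the KV cache when the stream ends
```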


Demo

You can run the demo built on VGG-T's code using the script app.py with the following command:

```sh
python app.py
```


Quantitative Results



*3D Reconstruction Comparison on NRGBD.*

| Method | Type | Acc Mean ↓ | Acc Med. ↓ | Comp Mean ↓ | Comp Med. ↓ | NC Mean ↑ | NC Med. ↑ |
|---------------------|----------|------------|------------|-------------|-------------|-----------|-----------|
| VGG-T | FA | 0.073 | 0.018 | 0.077 | 0.021 | 0.910 | 0.990 |
| DUSt3R | Optim | 0.144 | 0.019 | 0.154 | 0.018 | 0.870 | 0.982 |
| MASt3R | Optim | 0.085 | 0.033 | 0.063 | 0.028 | 0.794 | 0.928 |
| MonST3R | Optim | 0.272 | 0.114 | 0.287 | 0.110 | 0.758 | 0.843 |
| Spann3R | Stream | 0.416 | 0.323 | 0.417 | 0.285 | 0.684 | 0.789 |
| CUT3R | Stream | 0.099 | 0.031 | 0.076 | 0.026 | 0.837 | 0.971 |
| StreamVGGT | Stream | 0.084 | 0.044 | 0.074 | 0.041 | 0.861 | 0.986 |
| Ours | Stream | 0.057 | 0.014 | 0.028 | 0.013 | 0.910 | 0.993 |

Read our full paper for more insights.

GPU Memory Usage and Runtime



We report the peak GPU memory usage (VRAM) and runtime of our full model for processing each streaming input using the StreamSession implementation. All experiments were conducted at a common resolution of 518 × 384 on a single H200 GPU. The benchmark includes both *Causal* for causal attention and *Window* for sliding window attention with a window size of 5.

*Run Time (s).*

| Num of Frames | 1 | 20 | 40 | 80 | 100 | 120 | 140 | 180 | 200 |
|-----------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Causal | 0.1164 | 0.2034 | 0.3060 | 0.4986 | 0.5945 | 0.6947 | 0.7916 | 0.9911 | 1.1703 |
| Window | 0.1167 | 0.1528 | 0.1523 | 0.1517 | 0.1515 | 0.1512 | 0.1482 | 0.1443 | 0.1463 |

*VRAM (GB).*

| Num of Frames | 1 | 20 | 40 | 80 | 100 | 120 | 140 | 180 | 200 |
|-----------|------|------|-------|-------|-------|-------|-------|-------|-------|
| Causal | 5.49 | 9.02 | 12.92 | 21.00 | 25.03 | 29.10 | 33.21 | 41.31 | 45.41 |
| Window | 5.49 | 6.53 | 6.53 | 6.53 | 6.53 | 6.53 | 6.53 | 6.53 | 6.53 |
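If you want to reproduce numbers in this spirit on your own GPU, a minimal sketch is below. It assumes dummy 518 × 384 inputs in place of real preprocessed frames and uses only the StreamSession calls shown above plus standard torch.cuda memory counters; exact figures will differ from the H200 tables.

```python
import time
import torch
from stream3r.models.stream3r import STream3R
from stream3r.stream_session import StreamSession

device = "cuda"
model = STream3R.from_pretrained("yslan/STream3R").to(device)
session = StreamSession(model, mode="causal")  # or mode="window"

num_frames = 40
# Dummy frames at 518 x 384 standing in for real preprocessed inputs.
frames = torch.rand(num_frames, 3, 384, 518, device=device)

torch.cuda.reset_peak_memory_stats(device)
with torch.no_grad():
    for i in range(num_frames):
        torch.cuda.synchronize(device)
        t0 = time.perf_counter()
        session.forward_stream(frames[i : i + 1])
        torch.cuda.synchronize(device)  # wait for the GPU before stopping the clock
        print(f"frame {i}: {time.perf_counter() - t0:.4f} s")

print(f"peak VRAM: {torch.cuda.max_memory_allocated(device) / 1024**3:.2f} GB")
session.clear()
```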

Datasets

We follow CUT3R to preprocess the datasets for training. The training configuration can be found at `configs/experiment/stream3r/stream3r.yaml`.

TODO



  • [ ] Release evaluation code.
  • [ ] Release training code.
  • [ ] Release the metric-scale version.


License

This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.

Citation

If you find our code or paper helpful, please consider citing:

```bibtex
@article{stream3r2025,
  title={STream3R: Scalable Sequential 3D Reconstruction with Causal Transformer},
  author={Lan, Yushi and Luo, Yihang and Hong, Fangzhou and Zhou, Shangchen and Chen, Honghua and Lyu, Zhaoyang and Yang, Shuai and Dai, Bo and Loy, Chen Change and Pan, Xingang},
  booktitle={arXiv preprint arXiv:2508.10893},
  year={2025}
}
```

Acknowledgments

We recognize several concurrent works on streaming methods. We encourage you to check them out: StreamVGGT | CUT3R | SLAM3R | Spann3R

STream3R is built on the shoulders of several outstanding open-source projects. Many thanks to the following exceptional projects:

VGG-T | Fast3R | DUSt3R | MonST3R | Viser

Contact

If you have any questions, please feel free to contact us via lanyushi15@gmail.com or GitHub issues.

Files & Weights

| Filename | Size |
|-------------------|---------|
| model.safetensors | 4.44 GB |