ReSplat: Learning Recurrent Gaussian Splatting

Haofei Xu1,2     Daniel Barath1     Andreas Geiger2     Marc Pollefeys1,3
1ETH Zurich     2University of Tübingen, Tübingen AI Center     3Microsoft    

ReSplat enables recurrent Gaussian splatting reconstruction in a feed-forward manner.

As initialization (iteration 0), we introduce a compact and expressive feed-forward model to produce 16x fewer Gaussians than pixel-aligned feed-forward models while still achieving better results.

ReSplat benefits from additional test-time compute: running more refinement iterations further improves rendering quality.

Feed-forward novel view synthesis (540x960) on unseen scenes from 16 input views with ~500K Gaussians (16x fewer than pixel-aligned models).

Abstract

While existing feed-forward Gaussian splatting models offer computational efficiency and can generalize to sparse view settings, their performance is fundamentally constrained by their reliance on a single forward pass at inference. We propose ReSplat, a feed-forward recurrent Gaussian splatting model that iteratively refines 3D Gaussians without explicitly computing gradients. Our key insight is that the Gaussian splatting rendering error serves as a rich feedback signal, guiding the recurrent network to learn effective Gaussian updates. This feedback signal naturally adapts to unseen data distributions at test time, enabling robust generalization across datasets, view counts, and image resolutions. To initialize the recurrent process, we introduce a compact reconstruction model that operates in a 16x subsampled space, producing 16x fewer Gaussians than previous per-pixel Gaussian models. This substantially reduces computational overhead and allows for efficient Gaussian updates. Extensive experiments across varying numbers of input views (2, 8, 16, 32), resolutions (256x256 to 540x960), and datasets (DL3DV, RealEstate10K, and ACID) demonstrate that our method achieves state-of-the-art performance while significantly reducing the number of Gaussians and improving the rendering speed.

Approach

Compact initialization with 16x fewer Gaussians than pixel-aligned models.
Recurrent Gaussian updates using the rendering error as feedback.
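The recurrent refinement described above can be sketched with a toy numeric example. Here `init_model`, `render`, and `update_net` are hypothetical scalar stand-ins (in the actual method they are a feed-forward reconstruction network, a Gaussian splatting renderer, and a learned recurrent update network); only the loop structure, re-render, measure error, apply a predicted update, reflects the paper's pipeline:

```python
import numpy as np

def init_model(images):
    # Stand-in for the compact feed-forward initialization (iteration 0);
    # a real model would predict Gaussian parameters in a 16x-subsampled space.
    return images * 0.5

def render(gaussians):
    # Stand-in for splatting the Gaussians back to the input views.
    return gaussians

def update_net(gaussians, error):
    # Stand-in for the recurrent update network: in ReSplat this mapping
    # from rendering error to Gaussian update is learned, not hand-set.
    return 0.5 * error

def resplat_inference(images, num_iters=3):
    gaussians = init_model(images)            # iteration 0: compact initialization
    for _ in range(num_iters):
        rendered = render(gaussians)          # re-render at the input views
        error = images - rendered             # rendering error as feedback signal
        gaussians = gaussians + update_net(gaussians, error)  # update, no autograd
    return gaussians

images = np.ones((2, 4, 4))                   # two toy "input views"
refined = resplat_inference(images, num_iters=3)
print(np.abs(images - render(refined)).mean())  # residual error after 3 iterations
```

Each iteration consumes the current rendering error, so the loop adapts to whatever residual remains, which is why this feedback transfers to unseen data distributions at test time.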

ReSplat vs. Standard 3DGS Optimization

A conceptual comparison between ReSplat and standard 3DGS optimization [Kerbl et al.]. ReSplat is a purely feed-forward model with a feed-forward initialization and feed-forward recurrent Gaussian updates, while standard 3DGS optimization relies on many slow gradient descent steps to reach convergence.
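The contrast can be made concrete with a toy scalar experiment (all quantities are illustrative, not actual Gaussian parameters): gradient descent on the rendering loss shrinks the error by a small, fixed factor per step, whereas a learned update that directly consumes the error can remove most of it in one shot. The 95%-per-iteration error reduction below is an assumed idealization, not a measured property of ReSplat:

```python
# Toy contrast: gradient-descent refinement (standard 3DGS-style optimization)
# vs. a learned error-driven update (ReSplat-style).
target = 1.0   # "ground-truth" rendering
g = 0.0        # initial parameter value

# (a) Gradient descent on the L2 loss (target - x)^2: many small steps.
lr, steps_gd, x = 0.05, 0, g
while (target - x) ** 2 > 1e-4:
    x += lr * 2 * (target - x)   # negative gradient of (target - x)^2
    steps_gd += 1

# (b) Learned update consuming the error directly: here an idealized
# update that removes 95% of the remaining error per iteration.
steps_learned, y = 0, g
while (target - y) ** 2 > 1e-4:
    y += 0.95 * (target - y)
    steps_learned += 1

print(steps_gd, steps_learned)   # gradient descent needs far more steps
```

This mirrors the figure's message: a learned feed-forward update reaches a given error in a handful of iterations where gradient descent needs dozens, and each learned step is also cheaper because no gradients are computed.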

State-of-the-art Performance

On RealEstate10K, ReSplat achieves performance similar to LVSM while offering 20x faster rendering speed thanks to its efficient 3DGS representation. On DL3DV, ReSplat surpasses previous methods by a clear margin, while using 4x-16x fewer Gaussians than pixel-aligned models.

Robust Generalization

ReSplat leverages the rendering error as a feedback signal, allowing it to adapt to the test data and thereby achieve robust generalization to unseen datasets, view counts, and image resolutions.

Optimization-based vs. Feed-Forward Refinement

Starting from the same ReSplat initialization, we compare our feed-forward refinement against per-scene optimization using 3DGS [Kerbl et al.]. ReSplat improves the rendering quality in far fewer iterations (4 vs. 80) and provides a 53x speedup in reconstruction time. Furthermore, as highlighted in the zoomed-in region of (b), our per-iteration speed is faster than standard optimization since our approach eliminates the need for gradient computation.

Single-step vs. Recurrent Models

Our recurrent ReSplat-Small (77M) outperforms all single-step baselines, including the significantly larger ReSplat-Large (559M) and WorldMirror (1263M). This demonstrates that the benefits of recurrent error correction cannot be matched by simply increasing model parameters.

High-Quality View Synthesis

ReSplat produces significantly better novel view synthesis results than prior methods on DL3DV.

BibTeX

@article{xu2025resplat,
  title={ReSplat: Learning Recurrent Gaussian Splatting},
  author={Xu, Haofei and Barath, Daniel and Geiger, Andreas and Pollefeys, Marc},
  journal={arXiv preprint arXiv:2510.08575},
  year={2025}
}
