TexRecon – 3D Reconstruction Texturing
This website provides material for our 3D reconstruction texturing algorithm.
Abstract: 3D reconstruction pipelines using structure-from-motion and multi-view stereo techniques can now reconstruct impressive, large-scale geometry models from images, but they do not yield textured results. Current texture creation methods are unable to handle the complexity and scale of these models. We therefore present the first comprehensive texturing framework for large-scale, real-world 3D reconstructions.
Our method addresses most challenges occurring in such reconstructions: the large number of input images, their drastically varying properties such as image scale, (out-of-focus) blur, exposure variation, and occluders (e.g., moving plants or pedestrians). Using the proposed technique, we are able to texture datasets that are several orders of magnitude larger and far more challenging than shown in related work.
If you have questions regarding this project (bug reports, etc.) please contact us through the GitHub link below.
The project is described in the following paper:
- Let There Be Color! – Large-Scale Texturing of 3D Reconstructions
Michael Waechter, Nils Moehrle, and Michael Goesele
In: European Conference on Computer Vision (ECCV 2014), Zürich, Switzerland, 6-12 Sept. 2014.
[GitHub] The repository contains a README.md file covering code dependencies, compilation, the input the application expects, etc. Furthermore, it contains a LICENSE.txt with information on how this software is licensed.
If you use our code for research purposes, please cite our paper.
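For convenience, a BibTeX entry along the following lines should work. It is a sketch assembled from the publication details listed above (the citation key is our own choice, and fields such as page numbers are omitted rather than guessed; please verify against the official ECCV 2014 proceedings):

```bibtex
@inproceedings{waechter2014texturing,
  author    = {Waechter, Michael and Moehrle, Nils and Goesele, Michael},
  title     = {Let There Be Color! {L}arge-Scale Texturing of {3D} Reconstructions},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2014}
}
```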
As input, our algorithm requires a triangulated 3D model and images that are registered against this model. One way to obtain this is to
- import images, infer camera parameters and reconstruct depth maps using the Multi-View Environment, and
- fuse these depth maps into a combined 3D model using the Floating Scale Surface Reconstruction algorithm.
The following guide shows how this can be done (including code downloading, compilation and execution):
imdir=<directory where you have your images>
codedir=<directory where you want to do the compilation>
cd $codedir
# Download and compile MVE
git clone https://github.com/simonfuhrmann/mve.git
make -j8 -C mve
# Compile UMVE (the MVE GUI, used below to inspect results)
cd mve/apps/umve && qmake && make -j8 && cd $codedir
# Download and compile texrecon
git clone https://github.com/nmoehrle/mvs-texturing.git texrecon
cd texrecon && mkdir build && cd build && cmake .. && make -j8 && cd $codedir
cd $imdir
# Image import & bundling (images --> camera parameters)
$codedir/mve/apps/makescene/makescene -i . scene
$codedir/mve/apps/sfmrecon/sfmrecon scene
# Multi-view stereo reconstruction (images + camera parameters --> depth maps)
$codedir/mve/apps/dmrecon/dmrecon -s3 scene # -s3 halves the image resolution three times
# Surface reconstruction (depth maps + camera parameters --> 3D model)
$codedir/mve/apps/scene2pset/scene2pset -F3 scene point-set.ply
$codedir/mve/apps/fssrecon/fssrecon point-set.ply surface.ply
$codedir/mve/apps/meshclean/meshclean -t10 -c10000 surface.ply surface-clean.ply
# Texturing (3D model + images + camera parameters --> textured 3D model)
$codedir/texrecon/build/apps/texrecon/texrecon scene::undistorted surface-clean.ply textured
# Inspect results (e.g., open textured.obj in a 3D viewer, or load the scene in UMVE)
$codedir/mve/apps/umve/umve
There are also other ways to obtain 3D models with registered cameras, for example range scanning. Our program only requires a 3D model, images, and camera parameters, and the latter do not need to be in the MVE scene format. For information on the accepted input formats, check the README file in the repository above or the output of the compiled binary when you run it without parameters.
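As a sketch, this is what the two lookups mentioned above might look like in practice. The first command simply runs the binary without arguments to print its usage text, including the accepted input formats; the second is a hypothetical invocation (the directory name my_images/ is made up) assuming per-image camera files alongside the images, as described in the README:

# Print usage, including the accepted input formats
$codedir/texrecon/build/apps/texrecon/texrecon

# Hypothetical example: texture from a directory containing images
# together with per-image camera files instead of an MVE scene
# (see the README for the exact camera file format)
$codedir/texrecon/build/apps/texrecon/texrecon my_images/ surface-clean.ply textured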