Visibility-Consistent Thin Surface Reconstruction (TSR)
TSR is the 3D surface reconstruction software accompanying the publication:
Visibility-Consistent Thin Surface Reconstruction Using Multi-Scale Kernels
Samir Aroudj, Patrick Seemann, Fabian Langguth, Stefan Guthe and Michael Goesele
In: ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), Vol. 36, No. 6, pp. 187:1-187:13, 2017.
[Paper] [Supplemental] [BibTeX] [DOI]
The following images show representative results reconstructed solely from photos using the software of our group. We employed our pipeline MVE (specifically, a GitHub fork of MVE that supports export of visibility information; see below for details about TSR input data). TSR is the final pipeline step: it takes an oriented input point cloud with links to the capture device positions and produces a dense surface represented by a triangle mesh.
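To make the input data concrete, the following minimal sketch models one oriented surface sample with links to the capture devices that observed it. The structure and names are illustrative assumptions, not TSR's actual file format or data layout:

```python
from dataclasses import dataclass, field

@dataclass
class CaptureDevice:
    # World-space position of the camera / sensor that observed samples.
    position: tuple  # (x, y, z)

@dataclass
class OrientedSample:
    # One point of the input cloud: position, normal, and a per-sample
    # scale (e.g., derived from the multi-view stereo footprint).
    position: tuple  # (x, y, z)
    normal: tuple    # unit normal (nx, ny, nz)
    scale: float     # sample size / uncertainty radius
    view_ids: list = field(default_factory=list)  # indices into the device list

# Tiny example: one sample seen by two capture devices.
devices = [CaptureDevice((0.0, 0.0, 0.0)), CaptureDevice((1.0, 0.0, 0.0))]
sample = OrientedSample((0.5, 0.5, 0.5), (0.0, 0.0, 1.0), 0.01, [0, 1])
```

The visibility links (`view_ids`) are what the MVE fork exports in addition to the usual oriented points; TSR uses them to estimate free space between each sample and the devices that saw it.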
First, regarding thin structures, the images show a plastic orchid accurately reconstructed by TSR. The pipeline input consisted of 120 images captured freely in an uncontrolled manner.
Second, we show the Stephansdom in Vienna reconstructed from 3834 casually captured photos to demonstrate the high robustness of TSR and its ability to handle multi-scale surface samples.
Third, we show the reconstruction of a pineapple from 503 input images. This scene is highly challenging due to the uncontrolled environment and capture conditions as well as the thin leaves of the fruit. The reconstruction clearly shows that TSR is very robust to noise and able to reconstruct very thin or small structures all the way to the tips of the leaves.
orchid front view
orchid back view
pineapple front view
Abstract and author version
One of the key properties of many surface reconstruction techniques is that they represent the volume in front of and behind the surface, e.g., using a variant of signed distance functions. This creates significant problems when reconstructing thin areas of an object since the backside interferes with the reconstruction of the front. We present a two-step technique that avoids this interference and thus imposes no constraints on object thickness. Our method first extracts an approximate surface crust and then iteratively refines the crust to yield the final surface mesh. To extract the crust, we use a novel observation-dependent kernel density estimation to robustly estimate the approximate surface location from the samples. Free space is similarly estimated from the samples' visibility information. In the following refinement, we determine the remaining error using a surface-based kernel interpolation that limits the samples' influence to nearby surface regions with similar orientation and iteratively move the surface towards its true location. We demonstrate our results on synthetic as well as real datasets reconstructed using multi-view stereo techniques or consumer depth sensors.
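As a rough illustration of the idea behind the observation-dependent kernel density estimation (an assumed 1D toy version, not the paper's actual estimator), each sample can contribute a Gaussian kernel whose bandwidth is that sample's own scale, so coarse samples spread their influence more widely than fine ones:

```python
import math

def multi_scale_kde(x, samples):
    """Evaluate a toy 1D kernel density estimate at x.

    samples: list of (position, scale) pairs. Each sample contributes a
    normalized Gaussian whose bandwidth equals its own scale, loosely
    mirroring the per-sample (multi-scale) kernels described above.
    """
    density = 0.0
    for p, s in samples:
        density += math.exp(-0.5 * ((x - p) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return density / len(samples)

# Two samples at the same position but with different scales: the
# density peaks at their shared location and falls off away from it.
samples = [(0.0, 0.1), (0.0, 0.5)]
peak = multi_scale_kde(0.0, samples)
off = multi_scale_kde(1.0, samples)
```

In the actual method, such density estimates over the samples (and, analogously, over their visibility rays) yield the approximate surface crust and the free-space estimate that the refinement step then iterates on.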
SIGGRAPH Asia paper (PDF)
Supplemental material (PDF)
TSR GitHub repository
TSR Win64 binary
The source code is available via GitHub under the BSD 3-Clause License. We also provide a snapshot of the code shortly after publication. See the GitHub repository for details about building the application, creating input data and configuring TSR.
Here we provide the synthetic wedge dataset presented in the paper and the MVE scene of the pineapple shown above. Please cite our publication if you use these datasets.
- synthetic wedge ground truth object: wedge
- pineapple: MVE scene (images and SfM data, 8.12 GB)
- example TSR config and input description files are provided with the Windows binary or via GitHub
Example command to reconstruct the synthetic scene:
TSRApp.exe App.cfg Scenes\WedgeComplex\InputDataSynthetic.txt
If you want to reconstruct a real-world scene, see MVE and its visibility-based GitHub fork, which supports reconstruction of input point clouds with links to sensor device positions and export of the corresponding capture device files.
TSR was partially funded by the Seventh Framework Programme (FP7) of the European Commission within the scope of the project Harvest4D.