RealityScan

RealityScan – An End-User Usable 3D Scanning Pipeline

Our RealityScan project is supported by the Intel Visual Computing Institute in Saarbrücken.

Rendering 3D models is nowadays ubiquitous. 3D models can even be used in mobile games on modern smartphones, and through techniques such as XML3D they are finding their way into regular web browsers. In contrast, the ability to easily create 3D content is lagging far behind. Image-based modeling, the process of automatically creating 3D models from photos of real-world scenes, has made significant progress in recent years, but it is still far from simple enough to be used successfully by end-users. Furthermore, many image-based modeling systems have been built with geometric correctness rather than visual appearance in mind. The goal of this project is to create a complete 3D reconstruction pipeline that simplifies this process so that end-users can achieve visually satisfying results.

TexRecon

Within RealityScan, one subproject is our TexRecon project, where we worked on the automatic creation of diffuse texture maps for a given 3D model and images that are registered against this model. Current state-of-the-art multi-view stereo systems only produce models with vertex colors (i.e., exactly one color attached to each vertex of a model). Since the geometric resolution of a 3D model should be kept low for storage, transfer, and fast rendering, the texture resolution must far exceed the mesh resolution to achieve visually pleasing results. A texture reconstruction algorithm is therefore essential for producing realistic 3D content. Our texturing algorithm takes typical properties of multi-view datasets into account: it chooses images for texturing that were shot close to and facing the model and are in focus, and it adjusts luminance differences between texture regions that result from images taken under different lighting conditions, exposure times, or camera settings. Furthermore, it accounts for shortcomings in previous reconstruction steps by discarding pedestrians and plants from the texture and by masking artifacts caused by slight camera registration inaccuracies. The algorithm scales to large datasets and produces almost photo-realistic results. We believe that the achieved realism is key to getting end-users to create, view, modify and share 3D content.
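To make the view-selection idea concrete, below is a minimal sketch (not the TexRecon implementation) of a per-face view-quality score of the kind described above: it prefers views that see a face head-on, from close by, and in focus. All function names, the sharpness proxy, and the way the three terms are combined are illustrative assumptions.

```python
# Illustrative sketch only: score candidate views for texturing a single face.
# Higher scores mean the view is closer, more head-on, and sharper (in focus).
import numpy as np


def sharpness(image_gray: np.ndarray) -> float:
    """Mean gradient magnitude as a simple in-focus proxy (assumption)."""
    gy, gx = np.gradient(image_gray.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))


def view_score(face_center: np.ndarray,
               face_normal: np.ndarray,
               cam_center: np.ndarray,
               image_gray: np.ndarray) -> float:
    """Score one candidate view for texturing one face (higher is better)."""
    to_cam = cam_center - face_center
    dist = np.linalg.norm(to_cam)
    to_cam = to_cam / dist

    # Orthogonality: cosine between the face normal and the direction to the camera.
    ortho = max(0.0, float(np.dot(face_normal, to_cam)))
    if ortho == 0.0:
        return 0.0  # face is back-facing or grazing for this view

    # Closeness: nearer views provide higher texel resolution.
    closeness = 1.0 / (1.0 + dist)

    # In-focus proxy: overall image sharpness.
    return ortho * closeness * sharpness(image_gray)


# Toy usage: pick the better of two hypothetical views for one face.
face_c = np.array([0.0, 0.0, 0.0])
face_n = np.array([0.0, 0.0, 1.0])
views = [
    (np.array([0.0, 0.0, 2.0]), np.random.rand(64, 64)),   # frontal, close
    (np.array([5.0, 0.0, 0.5]), np.random.rand(64, 64)),   # oblique, farther
]
best = max(range(len(views)), key=lambda i: view_score(face_c, face_n, *views[i]))
print("best view index:", best)
```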

A textured reconstruction of Darmstadt's old city wall.

Virtual Rephotography

Another subproject within RealityScan is Virtual Rephotography: since automatically reconstructed 3D content will not be flawless in the near future, it is essential to automatically evaluate the quality of reconstructed scenes and point out errors to users. This is especially true for end-users who have not been trained for 3D reconstruction and are thus more likely to capture unsuitable images that in turn produce flawed reconstructions. When reconstructing previously unknown scenes, a reconstruction must be evaluated without 3D ground truth. What is always available, however, is 2D ground truth, i.e., photographs of the scene: renderings of a perfect reconstruction should be identical to the input images.

In the past year we worked on a project that exploits this idea: given a set of images, we split it into a reconstruction set and an evaluation set. We then feed the reconstruction set into a 3D reconstruction algorithm, render the resulting model from the viewpoints of the evaluation images, and compute the difference between the renderings and the evaluation images. This procedure is illustrated in the figure below. As a result, we obtain an error score for the whole model that can then, for example, be compared to the scores of other reconstruction algorithms.
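The following is a minimal sketch, under our own simplifying assumptions, of this comparison step: a rendering of the reconstruction is compared against the held-out evaluation photo pixel by pixel, skipping pixels the reconstruction does not cover, and the result is aggregated into a single score. The function name, the absolute-difference metric, and the coverage mask handling are illustrative, not the project's exact formulation.

```python
# Illustrative sketch only: per-pixel rephotography difference and a scalar score.
import numpy as np


def rephoto_error(photo: np.ndarray,
                  rendering: np.ndarray,
                  coverage_mask: np.ndarray):
    """Return (difference image, scalar error score).

    photo, rendering : float arrays in [0, 1], shape (H, W, 3)
    coverage_mask    : bool array, shape (H, W); True where the model was rendered
    """
    diff = np.abs(photo.astype(np.float64) - rendering.astype(np.float64)).mean(axis=2)
    diff[~coverage_mask] = 0.0                          # ignore uncovered pixels
    covered = int(coverage_mask.sum())
    score = float(diff[coverage_mask].mean()) if covered else float("nan")
    return diff, score


# Toy usage with random data standing in for a real photo/rendering pair.
h, w = 120, 160
photo = np.random.rand(h, w, 3)
rendering = np.clip(photo + 0.05 * np.random.randn(h, w, 3), 0.0, 1.0)
mask = np.ones((h, w), dtype=bool)
diff_img, score = rephoto_error(photo, rendering, mask)
print("mean rephotography error:", round(score, 4))
```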

Left to right: Input photo, virtual rephoto (rendered reconstruction), and difference image.
Visualization of local reconstruction error.

Additionally, one can take the computed difference images (third image in the figure above) and project them back into the scene to obtain a graphical visualization of the local reconstruction error (as can be seen on the right). This can, for example, be used to show users areas where they need to take more photos in order to improve reconstruction quality. We experimentally analyzed our approach and evaluated how it relates to established error metrics that require 3D ground truth; a simple sketch of the back-projection idea follows below.
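The sketch below illustrates one simple way (not the project's code) to accumulate the difference images into a per-vertex error that can be rendered as a heat map: each vertex is projected into every evaluation view and samples the corresponding difference image. For brevity it assumes 3x4 projection matrices mapping directly to pixel coordinates and ignores occlusion, which a real implementation would have to handle.

```python
# Illustrative sketch only: back-project difference images onto mesh vertices.
import numpy as np


def accumulate_vertex_error(vertices: np.ndarray,
                            cameras: list,
                            diff_images: list) -> np.ndarray:
    """vertices: (N, 3) positions; cameras: list of 3x4 projection matrices;
    diff_images: per-view (H, W) difference images. Returns per-vertex error."""
    n = len(vertices)
    err_sum = np.zeros(n)
    err_cnt = np.zeros(n)
    verts_h = np.hstack([vertices, np.ones((n, 1))])      # homogeneous coordinates

    for P, diff in zip(cameras, diff_images):
        h, w = diff.shape
        proj = verts_h @ P.T                               # (N, 3) projected points
        z = proj[:, 2]
        valid = z > 0                                      # in front of the camera
        x = proj[:, 0] / np.where(valid, z, 1.0)
        y = proj[:, 1] / np.where(valid, z, 1.0)
        inside = valid & (x >= 0) & (x < w) & (y >= 0) & (y < h)
        xi, yi = x[inside].astype(int), y[inside].astype(int)
        err_sum[inside] += diff[yi, xi]                    # sample difference image
        err_cnt[inside] += 1

    # Average over all views that saw each vertex (zero where no view did).
    return np.divide(err_sum, err_cnt, out=np.zeros(n), where=err_cnt > 0)
```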

Moreover, we implemented an online Image-based Modeling and Rendering benchmark based on the same concept. Since our method's only prerequisite is that the reconstruction system under investigation can produce renderings from novel viewpoints, this benchmark makes all image-based reconstruction and rendering systems directly comparable.

Web Reconstruction

We also provide a web reconstruction framework where you can upload pictures, let the reconstruction run conveniently on our machines, and download your 3D model.