Project Lab Visual Computing ((Fortgeschrittenes) Praktikum Visual Computing)

In this project lab, offered both in a regular and in an advanced variant, a student works on a selected topic in capturing reality, i.e., at the boundary between computer vision and computer graphics. Project results are presented in a talk at the end of the module. The specific topics addressed in the project lab change every semester.

Modalities (Credit Points, Prerequisites, …)

For all information regarding the modalities of this project lab, please have a look at TUCaN (Project Lab Visual Computing or Advanced Project Lab Visual Computing).

Topics Summer Term 2018

We offer highly interesting and hands-on projects this term!

Boosting the IBMR Benchmark

Introduction
There is a plethora of image-based 3D reconstruction pipelines. To make them comparable despite the high versatility of existing approaches, we introduced our image-based modeling and rendering (IBMR) benchmark. See the official IBMR benchmark web page for detailed information.
Our benchmark is supposed to support the community for years by continuously receiving submissions created with the most recent state-of-the-art pipelines.

Task
The goal of this project lab is to boost the benchmark. In particular, participating students create various submissions for the benchmark by reconstructing the datasets available for the benchmark with a wide range of state-of-the-art reconstruction pipelines. Further, the students render the reconstruction results appropriately to provide image-based submissions. Quite likely, data converters need to be implemented to manage the data flow between the individual reconstruction software modules employed.
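Such converters are typically small format shims between the file formats of the individual tools. The following Python sketch is purely illustrative and not part of any benchmark tooling: it converts the vertices of an ASCII PLY point cloud into OBJ vertex lines. The function name and the restriction to x, y, z coordinates are assumptions made for this example.

```python
def ply_to_obj(ply_text: str) -> str:
    """Convert an ASCII PLY point cloud to OBJ vertex lines.

    Parses the header for the vertex count, then emits each vertex
    as an OBJ 'v x y z' line.  Properties beyond x, y, z are ignored.
    """
    lines = ply_text.strip().splitlines()
    n_vertices = 0
    body_start = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        if line.strip() == "end_header":
            body_start = i + 1
            break
    obj_lines = []
    for line in lines[body_start:body_start + n_vertices]:
        x, y, z = line.split()[:3]
        obj_lines.append(f"v {x} {y} {z}")
    return "\n".join(obj_lines)
```

A real converter for this lab would additionally have to carry over per-vertex attributes such as normals, colors, or confidence values, depending on what the downstream module expects.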

Highly recommended prerequisites

  • Linear algebra
  • C++
  • Computer Graphics I, Computer Vision I, or Capturing Reality

Point Cloud Noise and Outlier Removal for Image-Based 3D Reconstruction

Introduction
The GCC research members heavily use their in-house reconstruction pipeline, the Multi-View Environment (MVE). Our MVE pipeline usually involves the following steps:

  • Structure from Motion (SfM) to estimate camera parameters and a sparse point cloud of the scene
  • Multi-View Stereo (MVS) to obtain depth maps for each camera and a dense point cloud for scene representation
  • Surface Reconstruction (e.g., FSSR, TSR) to compute surface triangle meshes from dense point clouds
  • Post processing of surface meshes, e.g., texture reconstruction

Motivation
Multi-View Stereo is a highly challenging passive reconstruction approach; its results are often erroneous and leave room for improvement by post filtering.
The goal of MVS is to reconstruct 3D sample points on surfaces seen from multiple captured views. The key challenge is to robustly find correct correspondences between different views of the very same surface part. Due to matching ambiguities, image noise, camera calibration errors, etc., the output point clouds may be noisy and contain a substantial number of outlier sample points that do not lie on the true object surfaces. These erroneous point clouds are then fed into a surface reconstruction algorithm such as FSSR or TSR. The basic idea of this lab is to improve the quality of the MVS sample point clouds to make the subsequent surface reconstruction easier.
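The matching ambiguity can be made concrete with a standard photo-consistency score such as normalized cross-correlation (NCC) between image patches: textureless or repetitive patches yield uninformative scores for many candidate matches, which is one source of the noise and outliers described above. The sketch below is a generic NCC implementation for illustration only, not code from MVE.

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized image patches.

    Returns a score in [-1, 1]; values near 1 indicate that the
    patches likely show the same surface part.
    """
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # constant (textureless) patch: match is ambiguous
    return float(np.dot(a, b) / denom)
```

Because the score is mean- and norm-normalized, it is invariant to affine brightness changes between views, which is why NCC-style scores are a common building block in MVS matching.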

Task
The goal of this project lab is to extend our reconstruction pipeline (MVE) by an additional point cloud filtering step. This step is supposed to improve the quality of the potentially noisy, outlier-afflicted sample point clouds reconstructed by Multi-View Stereo. For this filtering, students are supposed to implement and integrate a novel approach based on geometric and photometric consistency constraints that sorts out erroneous surface samples. The complete filtering technique is described in the following publication:
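For orientation only: a classical, purely geometric baseline for such filtering is statistical outlier removal based on k-nearest-neighbour distances. The sketch below implements that simple baseline; it is not the method of the publication below, which additionally exploits photometric consistency across the input views. The function name and parameters are illustrative.

```python
import numpy as np

def statistical_outlier_mask(points: np.ndarray, k: int = 8,
                             std_ratio: float = 2.0) -> np.ndarray:
    """Classical k-NN statistical outlier filter (brute force).

    For each point, compute the mean distance to its k nearest
    neighbours; flag points whose mean distance exceeds the global
    mean by more than std_ratio standard deviations.
    Returns a boolean mask that is True for inliers.
    """
    # Pairwise distances; O(n^2) memory, fine for a small demo cloud.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude each point itself
    knn = np.sort(dists, axis=1)[:, :k]
    mean_knn = knn.mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return mean_knn <= threshold
```

A production version would use a spatial index (k-d tree) instead of the quadratic brute-force distance matrix; the statistics are the same.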
Point Cloud Noise and Outlier Removal for Image-Based 3D Reconstruction
Katja Wolff, Changil Kim, Henning Zimmer, Christopher Schroers, Mario Botsch, Olga Sorkine-Hornung, Alexander Sorkine-Hornung
In: IEEE International Conference on 3D Vision, 2016.
PDF

Highly recommended prerequisites

  • Capturing Reality or Computer Vision I
  • Computer Graphics I
  • C++

Official website:
http://igl.ethz.ch/projects/noise-rem/

Media:
The following video is from the official website mentioned above.

Topics Winter Term 2016/17

We offer a highly interesting and hands-on project this winter term!

Occluding Contours For Multi-View Stereo

Introduction
The GCC research members heavily use their in-house reconstruction pipeline, the Multi-View Environment (MVE). Our MVE pipeline usually involves the following steps:

  • Structure from Motion (SfM) to register cameras and get a sparse point cloud
  • Multi-View Stereo (MVS) to obtain depth maps for each camera and a dense point cloud for scene representation
  • Floating Scale Surface Reconstruction (FSSR) to compute a surface triangle mesh from the dense point cloud
  • Post processing of the surface mesh, e.g. simplification

Motivation
Reconstruction of edges, corners, and other sharp features is sometimes inaccurate with FSSR. For example, reconstructed meshes of roofs or trees might falsely include parts of the sky or the surrounding scene.

Task
The goal of this project lab is to implement and integrate a novel approach based on so-called occluding contours (visibility constraints) in order to obtain high-quality results for the mentioned difficult cases as well.
The student is required to implement and integrate techniques from the following publication:
Shan, Qi, et al. "Occluding Contours for Multi-View Stereo." CVPR 2014.
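The core idea behind such visibility constraints, in its simplest form, is a free-space check: a reconstructed point that projects clearly in front of the depth observed in some view lies in space that this view has "seen through", and is therefore suspect. The sketch below illustrates only this basic check for a single view with assumed pinhole intrinsics; it is not the contour-based method of the paper, and all names and the margin parameter are illustrative.

```python
import numpy as np

def violates_free_space(point_cam: np.ndarray, depth_map: np.ndarray,
                        fx: float, fy: float, cx: float, cy: float,
                        margin: float = 0.05) -> bool:
    """Simple single-view free-space (visibility) check.

    point_cam: 3D point in the camera frame (z along the view axis).
    If the point projects inside the image and lies clearly in front
    of the depth observed along that pixel ray, the camera has seen
    'through' the point, so it is flagged as inconsistent.
    """
    x, y, z = point_cam
    if z <= 0.0:
        return False  # behind the camera: no statement possible
    u = int(round(fx * x / z + cx))
    v = int(round(fy * y / z + cy))
    h, w = depth_map.shape
    if not (0 <= u < w and 0 <= v < h):
        return False  # projects outside the image
    observed = depth_map[v, u]
    # Flag only points clearly in front of the observed surface.
    return z < observed * (1.0 - margin)
```

Aggregating such checks over many views (and, in the paper, over occluding contours in particular) yields constraints that can carve away the spurious sky or background geometry mentioned above.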


Highly recommended prerequisites

  • Capturing Reality or Computer Vision I
  • Computer Graphics I
  • C++

Official website:
http://grail.cs.washington.edu/projects/sq_rome_g2/
