ICCV 2017 Workshop: 3D Reconstruction meets Semantics

Over the last few decades, we have seen tremendous progress in the area of 3D reconstruction, enabling us to reconstruct large scenes at a high level of detail in little time. However, the resulting 3D representations only describe the scene at a geometric level. They cannot be used directly for more advanced applications, such as a robot interacting with its environment, due to a lack of semantic information. In addition, purely geometric approaches are prone to fail in challenging environments, where appearance information alone is insufficient to reconstruct complete 3D models from multiple views, for instance, in scenes with little texture or with complex and fine-grained structures. At the same time, deep learning has led to a huge boost in recognition performance, but most of this recognition is restricted to outputs in the image plane or, at best, to 3D bounding boxes, which makes it hard for a robot to act based on these outputs. Integrating learned knowledge and semantics with 3D reconstruction is a promising avenue towards a solution to both these problems. For example, the semantic 3D reconstruction techniques proposed in recent years, e.g., by Häne et al., jointly optimize the 3D structure and semantic meaning of a scene, and semantic SLAM methods add semantic annotations to the estimated 3D structure. Learning formulations of depth estimation, such as in Eigen et al., show the promise of integrating single-image cues into multi-view reconstruction and, in principle, allow the integration of depth estimation and recognition in a joint approach.
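To make the semantic-SLAM idea above concrete, the sketch below shows one minimal way to attach semantic annotations to an estimated 3D structure: per-pixel class probabilities from a 2D segmentation network are back-projected through a depth map into a voxel grid, and each voxel's label is chosen by accumulating log-probability votes. This is an illustrative toy, not the method of any particular paper; the function name, parameters, and the simple voting scheme are all assumptions made for the example.

```python
import numpy as np

def fuse_semantic_depth(depth, probs, K, grid_shape, voxel_size):
    """Back-project per-pixel class probabilities into a voxel grid.

    depth:      (H, W) depth map from a multi-view or learned estimator
    probs:      (H, W, C) softmax class scores from a 2D segmentation net
    K:          (3, 3) pinhole camera intrinsics
    grid_shape: (X, Y, Z) voxel grid dimensions
    voxel_size: edge length of one voxel in metric units

    Each observed pixel votes (in log-space, i.e. multiplying
    likelihoods) for the class of the voxel its 3D point falls into;
    the argmax over accumulated votes gives a labelled reconstruction.
    All names here are illustrative, not from any real SLAM system.
    """
    H, W, C = probs.shape
    log_votes = np.zeros(grid_shape + (C,))

    # Back-project every pixel: X_cam = depth * K^-1 [u, v, 1]^T
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(-1)

    # Quantize 3D points to voxel indices and keep in-bounds ones.
    idx = np.floor(pts.T / voxel_size).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)

    flat_probs = probs.reshape(-1, C)
    for (i, j, k), p in zip(idx[valid], flat_probs[valid]):
        log_votes[i, j, k] += np.log(p + 1e-9)  # accumulate evidence

    return log_votes.argmax(axis=-1)  # (X, Y, Z) label volume
```

A full semantic reconstruction pipeline would additionally regularize labels across neighbouring voxels (e.g., via a CRF or a joint optimization as in the semantic 3D reconstruction literature), which is exactly the kind of coupling between recognition and geometry the workshop aims to discuss.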

The goal of this workshop is to explore and discuss new ways for integrating techniques from 3D reconstruction with recognition and learning. How can semantic information be used to improve the dense matching process in 3D reconstruction techniques? How valuable is 3D shape information for the extraction of semantic information? In the age of deep learning, can we formulate parts of 3D reconstruction as a learning problem and benefit from combined networks that estimate both 3D structures and their semantic labels? How do we obtain feedback-loops between semantic segmentation and 3D techniques that improve both components? Will this help recover more detailed 3D structures?

Invited talks by renowned experts will give an overview of the current state of the art. At the same time, the workshop provides authors with a platform to present novel approaches towards answering these questions, and it hosts a new semantic 3D reconstruction challenge.
