ICCV 2017 Workshop: 3D Reconstruction meets Semantics

Over the last decades, we have seen tremendous progress in the area of 3D reconstruction, enabling us to reconstruct large scenes at a high level of detail in little time. However, the resulting 3D representations only describe the scene at a geometric level. They cannot be used directly for more advanced applications, such as a robot interacting with its environment, due to a lack of semantic information. In addition, purely geometric approaches are prone to fail in challenging environments, where appearance information alone is insufficient to reconstruct complete 3D models from multiple views, for instance, in scenes with little texture or with complex and fine-grained structures.

At the same time, deep learning has led to a huge boost in recognition performance, but most of this recognition is restricted to outputs in the image plane or, in the best case, to 3D bounding boxes, which makes it hard for a robot to act on these outputs. Integrating learned knowledge and semantics with 3D reconstruction is a promising avenue towards solving both of these problems. For example, the semantic 3D reconstruction techniques proposed in recent years, e.g., by Häne et al., jointly optimize the 3D structure and the semantic meaning of a scene, and semantic SLAM methods add semantic annotations to the estimated 3D structure. Learning-based formulations of depth estimation, such as in Eigen et al., show the promise of integrating single-image cues into multi-view reconstruction and, in principle, allow depth estimation and recognition to be integrated in a joint approach.
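To make the idea of attaching semantics to reconstructed geometry concrete, here is a minimal, purely illustrative sketch: per-pixel semantic labels from several views are back-projected into a shared voxel grid and fused by a per-voxel majority vote. This is only the simplest "label fusion" flavour of the idea; the joint volumetric optimization of methods such as Häne et al. is considerably more involved. All names and the toy data below are assumptions for illustration.

```python
from collections import defaultdict

def fuse_semantic_votes(observations):
    """Toy semantic label fusion into a voxel grid (illustrative only).

    observations: iterable of (voxel_index, label) pairs, one per
    back-projected pixel from any camera view.
    Returns a dict {voxel_index: fused_label}.
    """
    # Accumulate one vote counter per voxel.
    votes = defaultdict(lambda: defaultdict(int))
    for voxel, label in observations:
        votes[voxel][label] += 1
    # Majority vote per voxel; ties broken by sorted label order
    # so the result is deterministic.
    return {voxel: max(sorted(counts), key=counts.__getitem__)
            for voxel, counts in votes.items()}

# Two views disagree on voxel (1, 0, 0); a third view settles the vote.
obs = [((0, 0, 0), "wall"), ((1, 0, 0), "bush"),
       ((1, 0, 0), "wall"), ((1, 0, 0), "bush")]
print(fuse_semantic_votes(obs))
# {(0, 0, 0): 'wall', (1, 0, 0): 'bush'}
```

A real system would additionally fuse depth (e.g., via a TSDF) and regularize labels and geometry jointly rather than voting independently per voxel.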

The goal of this workshop is to explore and discuss new ways for integrating techniques from 3D reconstruction with recognition and learning. How can semantic information be used to improve the dense matching process in 3D reconstruction techniques? How valuable is 3D shape information for the extraction of semantic information? In the age of deep learning, can we formulate parts of 3D reconstruction as a learning problem and benefit from combined networks that estimate both 3D structures and their semantic labels? How do we obtain feedback-loops between semantic segmentation and 3D techniques that improve both components? Will this help recover more detailed 3D structures?

Invited talks by renowned experts will give an overview of the current state of the art. At the same time, the workshop provides authors with a platform to present novel approaches to these questions, and it proposes a new semantic 3D reconstruction challenge.
