3D Reconstruction meets Semantics 2018 – Challenge

Description

In order to support work on questions related to the integration of 3D reconstruction with semantics, the workshop features a semantic reconstruction challenge. The dataset was rendered from a drive through a semantically rich virtual garden scene with many fine structures. Virtual models of the environment allow us to provide exact ground truth for the 3D structure and semantics of the garden, along with images rendered from a virtual multi-camera rig, enabling the use of both stereo and motion stereo information. Challenge participants will submit their results for benchmarking in one or more categories: the quality of the 3D reconstruction, the quality of the semantic segmentation, and the quality of the semantically annotated 3D model. Additionally, a dataset captured in the real garden from a moving robot is available for validation.

Datasets

Given a set of images and their known camera poses, the goal of the challenge is to create a semantically annotated 3D model of the scene. To this end, it will be necessary to compute depth maps for the images and then fuse them together (potentially while incorporating information from the semantics) into a single 3D model.
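
As a rough illustration of the fusion step, the sketch below integrates per-view depth maps into a triangle mesh using Open3D's TSDF volume. It assumes depth maps have already been computed; the intrinsics, filenames, depth scale, and pose convention are placeholders to be replaced with the values from the challenge's calibration files.

    # Sketch: fuse per-view depth maps into one mesh via TSDF integration
    # (Open3D). Intrinsics, filenames, and the depth scale below are
    # placeholders, not the challenge's actual calibration values.
    import numpy as np
    import open3d as o3d

    intrinsic = o3d.camera.PinholeCameraIntrinsic(752, 480, 455.0, 455.0, 376.0, 240.0)

    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.01,   # 1 cm voxels
        sdf_trunc=0.04,      # SDF truncation distance in metres
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    poses = [np.eye(4)]      # placeholder: camera-to-world poses from calibration
    for i, pose in enumerate(poses):
        color = o3d.io.read_image(f"rgb/{i:05d}.png")    # hypothetical naming
        depth = o3d.io.read_image(f"depth/{i:05d}.png")
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_scale=1000.0, depth_trunc=10.0,
            convert_rgb_to_intensity=False)
        # Open3D expects world-to-camera extrinsics, hence the inverse.
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

    mesh = volume.extract_triangle_mesh()
    o3d.io.write_triangle_mesh("fused_mesh.ply", mesh)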

We provide the following data for the challenge:

  • A synthetic training sequence consisting of
    • 20k calibrated images with their camera poses,
    • ground truth semantic annotations for a subset of these images,
    • a semantically annotated 3D point cloud depicting the area of the training sequence.
  • A synthetic testing sequence consisting of 5k calibrated images with their camera poses.
  • A real-world validation sequence consisting of 268 calibrated images with their camera poses.

Both training and testing data are available here. Please see the git repository for details on the file formats.

This year we accept submissions in several categories covering semantics and geometry, either jointly or separately. For example, if your pipeline first computes semantics and geometry independently and then fuses them, we can compare how the fusion improves accuracy.

A. Semantic Mesh

In order to submit to the main category of the challenge, please create a single semantically annotated 3D triangle mesh using all sequences from the test scene. The mesh should be stored in the PLY text format. For each triangle, the file should store a color corresponding to the triangle’s semantic class (see the calibrations/colors.yaml file for the mapping between semantic classes and colors).
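
To illustrate the expected file layout, here is a minimal pure-Python writer for a per-face colored ASCII PLY mesh; the "grass" class and its color are invented placeholders, and the real mapping must come from calibrations/colors.yaml.

    # Illustrative writer for a semantically annotated ASCII PLY mesh with
    # one RGB color per triangle. The class/color below are made up; use
    # the mapping from calibrations/colors.yaml in a real submission.
    def write_semantic_ply(path, vertices, faces, face_classes, class_colors):
        """vertices: [(x, y, z)]; faces: [(i, j, k)] vertex indices;
        face_classes: one class id per face; class_colors: id -> (r, g, b)."""
        with open(path, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write(f"element vertex {len(vertices)}\n")
            f.write("property float x\nproperty float y\nproperty float z\n")
            f.write(f"element face {len(faces)}\n")
            f.write("property list uchar int vertex_indices\n")
            f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
            f.write("end_header\n")
            for x, y, z in vertices:
                f.write(f"{x} {y} {z}\n")
            for (i, j, k), c in zip(faces, face_classes):
                r, g, b = class_colors[c]
                f.write(f"3 {i} {j} {k} {r} {g} {b}\n")

    # Toy usage: a single triangle labeled with a hypothetical "grass" class.
    write_semantic_ply("smith_method_mesh_A_synthetic.ply",
                       vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                       faces=[(0, 1, 2)],
                       face_classes=["grass"],
                       class_colors={"grass": (0, 255, 0)})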

We will evaluate the quality of the 3D meshes based on:

  • the completeness of the reconstruction, i.e., how much of the ground truth is covered,
  • the accuracy of the reconstruction, i.e., how accurately the 3D mesh models the scene,
  • the semantic quality of the mesh, i.e., how close the semantics of the mesh are to the ground truth.
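
The first two criteria are commonly computed by sampling points from the reconstructed and ground-truth surfaces and comparing nearest-neighbor distances; the sketch below shows that formulation. The function name and the 5 cm inlier threshold are our own illustration, not the official evaluation protocol.

    # Illustration of completeness/accuracy on point samples of two surfaces.
    # The 0.05 m threshold is an assumed value, not the official one.
    import numpy as np
    from scipy.spatial import cKDTree

    def completeness_and_accuracy(rec_pts, gt_pts, thresh=0.05):
        """rec_pts, gt_pts: (N, 3) / (M, 3) point samples of the reconstructed
        and ground-truth surfaces; thresh: inlier distance in metres."""
        d_gt_to_rec = cKDTree(rec_pts).query(gt_pts)[0]  # coverage of the GT
        d_rec_to_gt = cKDTree(gt_pts).query(rec_pts)[0]  # error of the mesh
        completeness = float(np.mean(d_gt_to_rec < thresh))
        accuracy = float(np.mean(d_rec_to_gt))           # mean distance to GT
        return completeness, accuracy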

B. Geometric Mesh

Same as above, but a PLY mesh without semantic annotations.

C. Semantic Image Annotations

Create a set of semantic image annotations for all views in the test sequence, using the same filename convention and PNG format as in the training data. Upload them in a single ZIP archive.
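
As a sketch, per-pixel class-id predictions can be converted into such color-coded PNGs as shown below, assuming colors.yaml maps class names to RGB triplets (check the git repository for the actual file layout):

    # Hypothetical conversion of per-pixel class ids to color-coded PNGs.
    # Assumes colors.yaml is a {class_name: [r, g, b]} mapping -- verify the
    # real layout in the challenge's git repository.
    import numpy as np
    import yaml
    from PIL import Image

    with open("calibrations/colors.yaml") as f:
        colors = yaml.safe_load(f)

    palette = np.array(list(colors.values()), dtype=np.uint8)  # (classes, 3)

    label_map = np.zeros((480, 752), dtype=np.uint8)  # placeholder prediction
    rgb = palette[label_map]                          # (H, W, 3) color image
    Image.fromarray(rgb).save("00001.png")            # match training filenames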

Submission

The deadline for submitting to the challenge has been EXTENDED to July 17th (23:59 GMT), from the original July 10th.

Once you have created the output, please submit it using this link, one file per category and dataset. Please use a unique filename to identify yourself and the result type, e.g. smith_method_mesh_B_synthetic.ply.

In addition, please send a summary email to rtylecek@inf.ed.ac.uk that includes

  • the filename(s) of the file(s) you submitted,
  • the dataset used (synthetic or real),
  • the challenge category (A/B/C),
  • the label for your entry, e.g. method or group name.

For questions, please contact rtylecek@inf.ed.ac.uk.
