An overview paper on the project's activities has been accepted for oral presentation at the 50th International Symposium on Robotics (20-21 June 2018, Munich).
We present an algorithm that exploits both the underlying 3D structure and image entropy to generate an adaptive matching window.
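The entropy side of this idea can be sketched in a few lines: a patch with little texture (low Shannon entropy) gives an ambiguous match, so the window is grown until it carries enough intensity variation. The function names, candidate sizes, and the entropy threshold below are illustrative assumptions, not the paper's actual parameters; the 3D-structure cue is omitted for brevity.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    # Shannon entropy (bits) of grayscale intensities in [0, 1].
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def adaptive_window(image, y, x, sizes=(5, 9, 15), min_entropy=2.0):
    # Grow the matching window until the patch carries enough texture
    # (entropy) to disambiguate correspondence, or the largest size is hit.
    for s in sizes:
        h = s // 2
        patch = image[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
        if patch_entropy(patch) >= min_entropy:
            return s
    return sizes[-1]
```

On a textured region the smallest candidate window already passes the threshold; on a flat region the loop falls through to the largest size.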
In this paper we present an algorithm that recovers the rigid transformation describing the displacement of a binocular stereo rig in a scene. This transformation is then used to incorporate a third image, enabling dense trinocular stereo matching that reduces some of the ambiguities inherent to binocular stereo.
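Once the rig's rigid transformation (R, t) is known, the key operation for bringing in the third view is reprojecting hypothesized 3D points into that image so matches can be checked there as well. The sketch below is a generic pinhole reprojection under an assumed intrinsic matrix K, not the paper's specific formulation.

```python
import numpy as np

def project(K, R, t, X):
    # Project 3D points X (N, 3) into a camera displaced by the rigid
    # transformation (R, t), using the pinhole model: x ~ K (R X + t).
    Xc = X @ R.T + t                # points in the third camera's frame
    uv = Xc @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide
```

A depth hypothesis from the binocular pair that reprojects onto an inconsistent intensity in the third image can then be rejected, which is how the extra view resolves binocular ambiguities.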
FlowNet 2.0 is the first optical flow approach based on deep learning to reach state-of-the-art accuracy. At the same time, it is up to a factor of 100 faster than previous state-of-the-art techniques, which allows for reliable motion estimation at interactive frame rates. For more information, visit the paper page.
DeMoN is the first work to formulate joint egomotion and depth estimation as a pure learning problem. Given two images from a single moving camera, DeMoN can estimate depth and camera motion at interactive frame rates. For more information, please visit the website.