Computer Vision for Robotics

Researchers: Rembert Daems, Peter De Roovere, Maxim Bonnaerens, Victor-Louis De Gusseme, Thomas Lips, Andreas Verleysen, Francis wyffels

We aim to expand the realm of robots beyond assembly lines and other well-structured environments. For robots to leave these large-scale industrial settings, they will need to perceive and understand their surroundings. Cameras can play an important role here, as they provide dense information streams while being relatively cheap. We believe computer vision is an essential component of any general-purpose robot.

Computer vision has made tremendous progress in areas like classification, detection, and scene understanding, often powered by deep learning. However, many open questions remain in computer vision for robotics. At AIRO we tackle some of these questions and apply the resulting techniques to real-world robots.

Spatially-structured Representations

One line of research focuses on determining the appropriate way to represent scenes. Ideally, we want compact representations that accurately capture all state information relevant to the task at hand. At AIRO, we focus on spatially-structured representations such as semantic keypoints.

From left to right: keylines scene representation and semantic keypoint detection on cardboard boxes.
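A common way to detect semantic keypoints is to regress per-keypoint heatmaps and decode coordinates from their peaks. The sketch below illustrates this encoding/decoding idea in plain NumPy; it is a minimal illustration of the general technique, not the group's actual detector (function names and the Gaussian parameterization are our own).

```python
import numpy as np

def keypoints_to_heatmaps(keypoints, shape, sigma=2.0):
    """Encode (x, y) keypoints as Gaussian heatmaps, one channel per keypoint."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    maps = []
    for (kx, ky) in keypoints:
        d2 = (xs - kx) ** 2 + (ys - ky) ** 2  # squared distance to the keypoint
        maps.append(np.exp(-d2 / (2.0 * sigma ** 2)))
    return np.stack(maps)  # shape: (num_keypoints, h, w)

def heatmaps_to_keypoints(heatmaps):
    """Decode keypoints by taking the argmax of each heatmap channel."""
    coords = []
    for m in heatmaps:
        iy, ix = np.unravel_index(np.argmax(m), m.shape)
        coords.append((int(ix), int(iy)))
    return coords
```

In practice a network predicts the heatmaps and the targets above serve as training labels; the argmax decoding then recovers pixel coordinates at inference time.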

Procedural Data Generation

We use synthetic data and procedural data generation to reduce the need for manual data collection in the real world, which is expensive and sometimes not even possible. Procedural data generation enables us to create data covering all desired variations in object configuration (especially important for deformable or articulated objects, which have many degrees of freedom), lighting conditions, backgrounds, distractor objects, and so on. These datasets can then be used to learn representations and transfer the trained networks to the real world.

A Blender pipeline for procedural generation of cloth scenes.
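The core of such a pipeline is domain randomization: sampling scene parameters from defined ranges so the generated dataset spans the desired variation. The sketch below shows this sampling step in plain Python; the parameter names and ranges are hypothetical (the actual pipeline drives Blender, which is omitted here).

```python
import random

# Hypothetical parameter ranges; a real pipeline would feed these into
# Blender's Python API to build and render each scene.
SCENE_PARAMS = {
    "light_energy": (100.0, 1000.0),     # lamp strength
    "light_azimuth": (0.0, 360.0),       # light direction around the scene, degrees
    "cloth_bend_stiffness": (0.1, 1.0),  # deformable-object material property
    "camera_distance": (0.5, 2.0),       # meters from the target object
}

def sample_scene(rng):
    """Draw one random scene configuration from the parameter ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in SCENE_PARAMS.items()}

def sample_dataset(n, seed=0):
    """Sample n scene configurations; a fixed seed makes the dataset reproducible."""
    rng = random.Random(seed)
    return [sample_scene(rng) for _ in range(n)]
```

Seeding the sampler means the exact same synthetic dataset can be regenerated later, which helps when comparing representation-learning experiments.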

Efficient Inference

Finally, we also work on making deep learning inference more efficient, so that our neural networks can run on the embedded hardware that robots have at their disposal.
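One family of techniques for efficient inference is magnitude-based weight pruning: zeroing out the smallest weights so the resulting sparse network needs less compute and memory. The NumPy sketch below illustrates the idea on a single weight matrix; it is a generic example, not the specific method used at AIRO.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights.

    Keeps roughly a (1 - sparsity) fraction of the entries; the pruned
    network can then exploit the sparsity at inference time.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

On real networks this is typically applied layer by layer, often followed by fine-tuning to recover accuracy.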

Application Areas

The techniques we develop can be applied in several domains. We focus on challenging object categories such as deformable objects (cloth in particular), highly reflective industrial objects, and articulated objects. Among other things, we have created various high-quality datasets, including one of humans folding cloth and a dataset of both synthetic and real industrial metal objects. Another application area is learning the dynamics of robots and other mechanical systems.

Example application: learning a progress metric for cloth folding from human demonstrations.
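A progress metric learned from demonstrations should assign higher scores to frames later in a successful folding sequence. A natural self-supervised signal is time itself: the normalized frame index serves as a target, and pairwise ranking measures how well a learned metric respects temporal order. The sketch below illustrates these two ingredients in NumPy; it is a simplified illustration under our own assumptions, not the method from the publication below.

```python
import numpy as np

def progress_labels(num_frames):
    """Self-supervised targets: frame index normalized to [0, 1]."""
    return np.linspace(0.0, 1.0, num_frames)

def ranking_accuracy(scores):
    """Fraction of frame pairs (i < j) where the metric ranks frame j above frame i.

    A perfect progress metric on a monotone demonstration scores 1.0.
    """
    n = len(scores)
    correct = 0
    total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            correct += scores[j] > scores[i]
    return correct / total
```

Such a metric, once learned, can reward a robot for making progress on a folding task without hand-crafted success criteria.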

Publications

  1. Learning self-supervised task progression metrics: a case of cloth folding
    Verleysen, Andreas, Biondina, Matthijs, and wyffels, Francis
    Applied Intelligence, 2023
  2. Learning keypoints from synthetic data for robotic cloth folding
    Lips, Thomas, De Gusseme, Victor-Louis, and wyffels, Francis
    In ICRA 2022 Workshop on Representing and Manipulating Deformable Objects, 2022