We investigate how robots can acquire human-like manipulation skills autonomously and in a data-efficient manner. We focus on supplying rich training signals by exploiting prior knowledge embedded in human task solving and system modeling. We envision structuring robotic learning like human learning, so that the burden of repetitive tasks can be outsourced to robots and people can focus on the skills from which they get true joy.
For robots to leave industrial, structured environments and enter the territory of small, dynamic companies and households, we require learning methods that enable autonomous learning with minimal human intervention. In our research, we look at how we can leverage prior information to accelerate learning. We focus extensively on human knowledge as a primary source of prior information: we employ human task demonstrations, model physical systems for gradient-based learning, co-optimize robot body and brain, and use multi-modal instrumentation to scaffold learning. Consequently, we do not concentrate on a single technique for robot control, but draw on approaches ranging from classic control to differentiable programming to reinforcement learning.
Our research applications are centered on the manipulation of deformable materials such as clothing, biocomposites, fungal foams, and plastics. These materials are more heterogeneous, irregular, and varied than the materials commonly used in robotic applications, and their deformable and fragile nature makes them challenging and interesting for robotics research. We emphasize making our methods deployable in the real world by using virtual, simulated tasks as a tool, not as an end goal.
Publications
Effect of compliance on morphological control of dynamic locomotion with HyQ
AUTONOMOUS ROBOTS
2021
Classic control theory applied to compliant and soft robots generally incurs a computational overhead that has no equivalent in biology. To tackle this, morphological computation describes a theoretical framework that takes advantage of the computational capabilities of physical bodies. However, concrete applications in robotic locomotion control are still rare. Moreover, the trade-off between compliance and the capacity of a physical body to facilitate its own control has not been thoroughly studied in a real locomotion task. In this paper, we address these two problems on the state-of-the-art hydraulic robot HyQ. An end-to-end neural network is trained to control HyQ's joint positions and velocities using only Ground Reaction Forces (GRF). Our simulations and experiments demonstrate better controllability using less memory and fewer computational resources when compliance is increased. However, we show empirically that this effect cannot be attributed to the ability of the body to perform intrinsic computation. These results invite increased emphasis on compliance and on co-design of controller and robot to facilitate machine learning approaches to locomotion.
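As a rough illustration of the control setup described in this abstract, the sketch below maps ground reaction forces to joint position and velocity targets through a small feed-forward network. The dimensions, architecture, and random weights are hypothetical placeholders; the paper's actual network is trained end-to-end on the robot.

```python
import numpy as np

# Hypothetical dimensions: one vertical GRF per foot (4 legs) and
# 12 actuated joints; the real HyQ setup may differ.
N_GRF, N_JOINTS, HIDDEN = 4, 12, 32

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for the trained parameters.
W1 = rng.standard_normal((N_GRF, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 2 * N_JOINTS)) * 0.1
b2 = np.zeros(2 * N_JOINTS)

def controller(grf):
    """Map ground reaction forces to joint position/velocity targets."""
    h = np.tanh(grf @ W1 + b1)             # single hidden layer
    out = h @ W2 + b2
    return out[:N_JOINTS], out[N_JOINTS:]  # positions, velocities

q_target, dq_target = controller(np.array([400.0, 410.0, 395.0, 405.0]))
```

The point of the sketch is the interface, not the weights: the controller sees only the GRF signal, so any improvement in controllability with higher compliance must come from the body-controller interaction rather than from richer sensing.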
Learning keypoints from synthetic data for robotic cloth folding
In ICRA 2022 Workshop on Representing and Manipulating Deformable Objects
2022
Robotic cloth manipulation is challenging due to the deformability of cloth, which makes determining its full state infeasible. However, for cloth folding it suffices to know the positions of a few semantic keypoints. Convolutional neural networks (CNN) can be used to detect these keypoints, but require large amounts of annotated data, which is expensive to collect. To overcome this, we propose to learn these keypoint detectors purely from synthetic data, enabling low-cost data collection. In this paper, we procedurally generate images of towels and use them to train a CNN. We evaluate the performance of this detector for folding towels on a unimanual robot setup and find that the grasp and fold success rates are 77% and 53%, respectively. We conclude that learning keypoint detectors from synthetic data for cloth folding and related tasks is a promising research direction, discuss some failures, and relate them to future work. A video of the system, the codebase, and more details on the CNN architecture and the training setup can be found at
https://github.com/tlpss/workshop-icra-2022-cloth-keypoints.git.
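To make the synthetic-data idea concrete, here is a minimal sketch of one common recipe for such pipelines: sample procedural towel corner keypoints and render per-keypoint Gaussian heatmaps as CNN regression targets. The sizes, the corner sampling, and the heatmap formulation are illustrative assumptions, not the paper's actual rendering pipeline (which produces textured towel images).

```python
import numpy as np

def gaussian_heatmap(size, center, sigma=4.0):
    """Render a 2D Gaussian around a keypoint, a common regression
    target when training keypoint-detection CNNs."""
    ys, xs = np.mgrid[0:size, 0:size]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def synthetic_towel_keypoints(size=128, rng=None):
    """Sample four corner keypoints of a procedurally placed towel.
    (Illustrative stand-in for rendering full synthetic images.)"""
    rng = np.random.default_rng() if rng is None else rng
    cx, cy = rng.uniform(0.3, 0.7, 2) * size
    w, h = rng.uniform(0.2, 0.4, 2) * size
    corners = np.array([[cx - w, cy - h], [cx + w, cy - h],
                        [cx + w, cy + h], [cx - w, cy + h]])
    return np.clip(corners, 0, size - 1)

corners = synthetic_towel_keypoints(rng=np.random.default_rng(1))
targets = np.stack([gaussian_heatmap(128, c) for c in corners])
```

Because the labels come for free with the procedural generator, the training set can be made arbitrarily large without any manual annotation, which is the core appeal of the approach.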
Simpler learning of robotic manipulation of clothing by utilizing DIY smart textile technology
Verleysen, Andreas, Holvoet, Thomas, Proesmans, Remko, Den Haese, Cedric, and wyffels, Francis
APPLIED SCIENCES-BASEL
2020
Deformable objects such as ropes, wires, and clothing are omnipresent in society and industry but remain under-researched in robotics. This is due to the infinite number of possible state configurations that deformation can produce. Engineered approaches try to cope with this by implementing highly complex operations to estimate the object's state. This complexity can be circumvented by learning-based approaches, such as reinforcement learning, which can deal with the intrinsically high-dimensional state space of deformable objects. However, the reward function in reinforcement learning needs to measure the state configuration of the highly deformable object, and vision-based reward functions are difficult to implement given the high dimensionality of the state and the complex dynamic behavior. In this work, we propose to look beyond vision and incorporate other modalities that can be extracted from deformable objects. By integrating tactile sensor cells into a piece of textile, we gain proprioceptive capabilities that provide a reward function to a reinforcement learning agent. We demonstrate on a low-cost dual robotic arm setup that a physical agent can learn, on a single CPU core, to fold a rectangular patch of textile in the real world based on a reward function learned from tactile information.
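The sketch below illustrates the core idea of a non-visual reward signal with a deliberately simplified proxy: a grid of tactile cells sewn into the patch, with reward equal to the fraction of cells reporting contact. Note that the paper learns its reward function from the tactile data; this fixed threshold rule is only an illustration, and the grid size and readings are invented.

```python
import numpy as np

def fold_reward(tactile_grid, threshold=0.5):
    """Illustrative reward from tactile cells in a textile patch:
    the fraction of cells whose normalized pressure exceeds a
    threshold. (A hand-crafted proxy, not the paper's learned reward.)"""
    active = tactile_grid > threshold
    return active.mean()

# A 4x4 grid of normalized pressure readings: the folded half of the
# patch presses onto the upper two rows of cells.
readings = np.array([[0.9, 0.8, 0.85, 0.9],
                     [0.7, 0.95, 0.8, 0.75],
                     [0.1, 0.0, 0.05, 0.1],
                     [0.0, 0.1, 0.0, 0.05]])
reward = fold_reward(readings)  # -> 0.5: half the cells detect contact
```

Such a signal is cheap to evaluate on every step of a real-world rollout, which is what makes training on a single CPU core with a physical robot feasible in the first place.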
Stance control inspired by cerebellum stabilizes reflex-based locomotion on HyQ robot
In 2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)
2020
Advances in legged robotics are strongly rooted in animal observations. A clear illustration of this claim is the generalization of Central Pattern Generators (CPG), first identified in the cat spinal cord, to generate cyclic motion in robotic locomotion. Despite a global endorsement of this model, physiological and functional experiments in mammals have also indicated the presence of descending signals from the cerebellum, and reflex feedback from the lower limb sensory cells, that closely interact with CPGs. To this day, these interactions are not fully understood. Some studies demonstrated that pure reflex-based locomotion in the absence of oscillatory signals could be achieved in realistic musculoskeletal simulation models or small compliant quadruped robots. At the same time, biological evidence has attested to the functional role of the cerebellum for predictive control of balance and stance in mammals. In this paper, we combine both approaches and successfully apply reflex-based dynamic locomotion, coupled with a balance and gravity compensation mechanism, on the state-of-the-art HyQ robot. We discuss the importance of this stability module to ensure a correct foot lift-off and maintain a reliable gait. The robotic platform is further used to test two different architectural hypotheses inspired by the cerebellum. An analysis of experimental results demonstrates that the most biologically plausible alternative also leads to better results for robust locomotion.
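To give a flavor of the two ingredients combined in this abstract, the sketch below pairs a per-leg reflex rule (swing on unloading, stance on touchdown, driven purely by ground reaction force) with a stance module that distributes gravity compensation over the loaded legs. The thresholds, force shares, and Jacobian are hypothetical; the paper's controllers are considerably richer.

```python
import numpy as np

# Hypothetical GRF thresholds (in newtons) for the reflex rule.
UNLOAD_THRESHOLD = 30.0     # leg considered unloaded below this
TOUCHDOWN_THRESHOLD = 80.0  # leg considered loaded above this

def reflex_phase(grf, phase):
    """Pure reflex phase transition driven by ground reaction force,
    with no oscillatory (CPG) signal involved."""
    if phase == "stance" and grf < UNLOAD_THRESHOLD:
        return "swing"
    if phase == "swing" and grf > TOUCHDOWN_THRESHOLD:
        return "stance"
    return phase

def stance_torque(jacobian_t, body_weight, n_stance_legs):
    """Cerebellum-inspired stance module: map an equal share of the
    body weight into joint torques via the leg's transposed Jacobian."""
    share = body_weight / max(n_stance_legs, 1)
    return jacobian_t @ np.array([0.0, 0.0, share])

next_phase = reflex_phase(grf=10.0, phase="stance")   # -> "swing"
tau = stance_torque(np.eye(3), body_weight=800.0, n_stance_legs=4)
```

The separation mirrors the paper's argument: the reflex loop alone can generate a gait, while the stance module stabilizes it by guaranteeing clean foot lift-off under gravity.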