
Eugene Valassakis

Eugene was the first PhD student to join the Robot Learning Lab, and he passed his viva in December 2022. Eugene's PhD explored the practicalities of sim-to-real transfer for robot manipulation, and applied sim-to-real methods to topics such as camera calibration and imitation learning. His thesis is titled Exploring Sim-to-Real Transfer for Learning-Based Robot Manipulation; a summary is below, together with the key publications from Eugene's PhD.

Thesis Abstract


The ability of deep learning-based methods to perceive, reason about, and react to complex sensory signals has the potential to give robots the capability to manipulate the world around them, and to interact with unstructured and unpredictable environments such as human homes. To use these methods in practice for robotics, however, the question of data availability needs to be addressed. A natural way of doing so is to use simulation and synthetic data to generate any datasets required for learning. Yet this is a challenging task: models trained naively on simulated data will fail when deployed in the real world due to the reality gap, and it is also not clear which parts of a robotics pipeline can or should be addressed using learning and sim-to-real transfer.

As such, in this thesis we explore how best to use simulation and sim-to-real transfer to enable real-world, learning-based robot manipulation without using real-world data. We start by conducting an in-depth study of sim-to-real transfer for dynamics with end-to-end control, benchmarking several alternative approaches. We then investigate methods and frameworks that incorporate simulation-trained, learning-based components into otherwise well-structured robotics pipelines. Specifically, (1) we develop a framework that achieves sub-millimetre control precision while generalising across wide task spaces, (2) we show how sim-to-real transfer can be used for eye-in-hand camera calibration, an often necessary step in robotics pipelines, and (3) we present a framework for one-shot imitation learning that can perform a task immediately after a single demonstration, without the need for further real-world data collection or training.

Key Publications


Demonstrate Once, Imitate Immediately (DOME): Learning Visual Servoing for One-Shot Imitation Learning

Eugene Valassakis, Georgios Papagiannis, Norman Di Palo, and Edward Johns

Published at IROS 2022

Mini abstract. We introduce an imitation learning method called DOME, which enables tasks on novel objects to be learned from a single demonstration, without requiring any further training or data collection. This is made possible by training in advance, purely in simulation, an object segmentation network and a visual servoing network.
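
As a rough illustration of this pipeline, a minimal Python sketch of a DOME-style control loop is below. The SegmentationNet and ServoingNet classes and the robot interface are hypothetical stand-ins for the simulation-trained networks and robot controller, not the released implementation.

import numpy as np

class SegmentationNet:
    # Hypothetical stand-in for the simulation-trained segmentation network.
    def predict_mask(self, rgb):
        # Returns a binary mask of the task-relevant object in the image.
        return (rgb.mean(axis=-1) > 128).astype(np.uint8)

class ServoingNet:
    # Hypothetical stand-in for the simulation-trained visual servoing network.
    def predict_velocity(self, live_mask, demo_mask):
        # Maps the two masks to an end-effector velocity that reduces their
        # visual discrepancy, plus a flag indicating convergence.
        diff = float(live_mask.mean()) - float(demo_mask.mean())
        return np.array([diff, 0.0, 0.0]), abs(diff) < 1e-3

def one_shot_imitation(robot, seg_net, servo_net, demo_first_frame, demo_actions):
    # Servo until the live object mask matches the mask from the first frame
    # of the demonstration, i.e. until the end effector has reached the pose
    # from which the demonstration started.
    demo_mask = seg_net.predict_mask(demo_first_frame)
    converged = False
    while not converged:
        live_mask = seg_net.predict_mask(robot.get_wrist_image())
        velocity, converged = servo_net.predict_velocity(live_mask, demo_mask)
        robot.apply_ee_velocity(velocity)
    # Then replay the demonstrated end-effector actions open-loop.
    for action in demo_actions:
        robot.apply_ee_velocity(action)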

Learning Eye-in-Hand Camera Calibration from a Single Image

Eugene Valassakis, Kamil Dreczkowski, and Edward Johns

Published at CoRL 2021

Mini abstract. We study a range of learning-based methods for extrinsic calibration of a wrist-mounted RGB camera, given only a single RGB image from that camera. We find that simple direct regression of the calibration parameters performs best, outperforming classical marker-based calibration methods.
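
As a rough illustration of the direct-regression approach, the PyTorch sketch below maps a single RGB image to six extrinsic parameters (translation plus an axis-angle rotation). The architecture and the random tensors standing in for simulated training data are purely illustrative assumptions, not the paper's model.

import torch
import torch.nn as nn

class CalibrationRegressor(nn.Module):
    # A small CNN that regresses the camera-to-end-effector transform
    # directly from one RGB image.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)  # 3 translation + 3 rotation parameters

    def forward(self, rgb):
        return self.head(self.backbone(rgb))

model = CalibrationRegressor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a batch of images with ground-truth extrinsics;
# random tensors stand in for a synthetic, simulation-rendered dataset.
images = torch.rand(8, 3, 128, 128)
poses = torch.rand(8, 6)
optimiser.zero_grad()
loss = nn.functional.mse_loss(model(images), poses)
loss.backward()
optimiser.step()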


Coarse-to-Fine for Sim-to-Real: Sub-Millimetre Precision Across Wide Task Spaces

Eugene Valassakis, Norman Di Palo, and Edward Johns

Published at IROS 2021

Mini abstract. We develop a framework that achieves sub-millimetre control with zero-shot sim-to-real transfer, whilst also enabling interaction across a wide range of object poses. Each trajectory begins with a coarse, ICP-based planning stage, followed by a fine, end-to-end visuomotor control stage.
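
A minimal sketch of one such trajectory is below, assuming a hypothetical robot interface, a simulation-trained policy for the fine stage, and an icp_align placeholder (in practice, a point-cloud registration routine such as ICP would go here).

import numpy as np

def icp_align(observed_cloud, object_model):
    # Placeholder: estimate the object pose as a 4x4 transform by
    # registering the observed point cloud against the object model.
    return np.eye(4)

def coarse_to_fine_episode(robot, policy, object_model):
    # Coarse stage: estimate the object pose with ICP, then move to a
    # pre-defined approach pose relative to the object.
    object_pose = icp_align(robot.get_point_cloud(), object_model)
    robot.move_to(object_pose @ robot.approach_offset)
    # Fine stage: hand over to the end-to-end visuomotor policy for the
    # last few centimetres, where sub-millimetre precision is required.
    done = False
    while not done:
        action, done = policy.act(robot.get_wrist_image())
        robot.apply_ee_velocity(action)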


Crossing the Gap: A Deep Dive into Zero-Shot Sim-to-Real Transfer for Dynamics

Eugene Valassakis, Zihan Ding, and Edward Johns

Published at IROS 2020

Mini abstract. We benchmark sim-to-real transfer for tasks with complex dynamics, where no real-world training data is available. We show that previous works require significant simulator tuning to achieve transfer, and that a simple method which just injects random forces outperforms domain randomisation whilst being significantly easier to tune.
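
A minimal sketch of the random-force idea is below, written as a wrapper around a hypothetical gym-style simulation environment; the apply_external_force hook and the force scale are assumptions for illustration (in PyBullet, for example, applyExternalForce could play this role).

import numpy as np

class RandomForceInjection:
    # Perturbs the simulated robot with random forces at every step, as a
    # simpler alternative to randomising many individual dynamics parameters.
    def __init__(self, env, force_scale=1.0):
        self.env = env
        self.force_scale = force_scale  # the main parameter to tune

    def reset(self):
        return self.env.reset()

    def step(self, action):
        # Sample a random 3D force and apply it to the end effector before
        # stepping the simulation with the policy's action.
        force = np.random.normal(scale=self.force_scale, size=3)
        self.env.apply_external_force(force)
        return self.env.step(action)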
