
Edward Johns


At The Robot Learning Lab, we are developing advanced robots empowered by artificial intelligence to assist us all in everyday environments. Our research lies at the intersection of robotics, computer vision, and machine learning, and we primarily study robot manipulation: robots that can physically interact with objects using their arms and hands. We are currently investigating new strategies based around Imitation Learning, Reinforcement Learning, and Vision-Language Models, to enable efficient and general learning capabilities. Applications include domestic robots (e.g. tidying the home), manufacturing robots (e.g. assembling products in a factory), and warehouse robots (e.g. picking and placing from/into storage). The lab is led by Dr Edward Johns in the Department of Computing at Imperial College London. Welcome!

Latest News

Post-Doc, PhD, and Research Assistant positions available!
Click here for further information.

January 2024

We achieve in-context imitation learning in robotics, enabling tasks to be learned instantly from one or more demonstrations. A learned diffusion process predicts actions when conditioned on the demonstrations and the current observation, all jointly expressed in a graph. The only training data needed is simulated "pseudo-demonstrations".
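To make the idea concrete, here is a minimal, illustrative sketch of in-context imitation with a conditional diffusion process: an action is denoised step by step, conditioned on a context built from the demonstrations and the current observation. This is not the lab's implementation; all names (build_context, NoisePredictor, sample_action) and the flat context vector standing in for the joint graph representation are hypothetical.

```python
# Sketch only: a conditional (DDPM-style) diffusion sampler for actions,
# conditioned on demonstrations + current observation. Placeholder network.
import numpy as np

rng = np.random.default_rng(0)

def build_context(demos, observation):
    """Flatten demonstrations and the current observation into one
    conditioning vector (a stand-in for the joint graph representation)."""
    return np.concatenate([d.ravel() for d in demos] + [observation.ravel()])

class NoisePredictor:
    """Random-weight MLP standing in for the learned denoising network."""
    def __init__(self, ctx_dim, act_dim, hidden=64):
        self.w1 = rng.normal(0, 0.1, (ctx_dim + act_dim + 1, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, act_dim))

    def __call__(self, noisy_action, t, context):
        x = np.concatenate([noisy_action, [t], context])
        return np.tanh(x @ self.w1) @ self.w2

def sample_action(model, context, act_dim, steps=50):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise."""
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    a = rng.normal(size=act_dim)                        # pure noise
    for t in reversed(range(steps)):
        eps = model(a, t / steps, context)              # predicted noise
        a = (a - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            a += np.sqrt(betas[t]) * rng.normal(size=act_dim)
    return a

# Usage: two demonstrations (5 waypoints of a 7-DoF action each) plus the
# current observation condition the sampler; no task-specific training here.
demos = [rng.normal(size=(5, 7)) for _ in range(2)]
observation = rng.normal(size=10)
context = build_context(demos, observation)
model = NoisePredictor(ctx_dim=context.size, act_dim=7)
print(sample_action(model, context, act_dim=7))
```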

R+X accepted at ICRA 2025!
R+X: Retrieval and Execution from Everyday Human Videos

January 2025

R+X enables robots to learn skills from long, unlabelled first-person videos of humans performing everyday tasks. Given a language command from a human, R+X first retrieves short video clips containing relevant behaviour, and then conditions an in-context imitation learning technique (KAT) on this behaviour to execute the skill.
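The retrieve-then-execute pattern described above can be sketched in a few lines. This is a rough illustration, not the R+X code: the toy hashed bag-of-words embedder stands in for whatever vision-language model performs retrieval, and execute_in_context is a hypothetical placeholder for conditioning an in-context imitation learner such as KAT.

```python
# Sketch only: retrieve relevant clips for a language command, then hand
# them to an in-context imitation learner. All components are stand-ins.
import zlib
import numpy as np

def embed(text, dim=64):
    """Toy deterministic text embedding (hashed bag of words)."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[zlib.crc32(word.encode()) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def retrieve(command, clips, k=3):
    """Return the k clips whose captions best match the language command."""
    q = embed(command)
    scores = [float(q @ embed(c["caption"])) for c in clips]
    top = np.argsort(scores)[::-1][:k]
    return [clips[i] for i in top]

def execute_in_context(command, retrieved_clips):
    """Placeholder for conditioning an in-context imitation learner (e.g.
    KAT) on the retrieved behaviour; here we only report what would run."""
    print(f"Command: {command}")
    for clip in retrieved_clips:
        print(f"  conditioning on clip: {clip['caption']}")

# Usage with a toy library of captioned first-person video clips.
clips = [
    {"caption": "open the kitchen drawer"},
    {"caption": "place the mug on the shelf"},
    {"caption": "wipe the table with a cloth"},
]
command = "put the mug on the shelf"
execute_in_context(command, retrieve(command, clips, k=1))
```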

One-Shot Dual-Arm Imitation Learning accepted at ICRA 2025!

January 2025

We develop a framework that enables robots to learn dual-arm tasks from just a single demonstration. This uses a three-stage visual servoing method for precise alignment between the end-effector and target object, followed by replay of the demonstration trajectory.
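The align-then-replay structure can be illustrated with a short sketch. This is an assumed simplification, not the paper's method: a single proportional servoing loop replaces the three-stage visual servoing, and poses are plain 6-vectors; servo_to_target and replay are hypothetical names.

```python
# Sketch only: each arm servos its end-effector towards the pose seen in
# the demonstration, then the recorded trajectory is replayed open-loop.
import numpy as np

def servo_to_target(pose, target, gain=0.3, tol=1e-3, max_iters=200):
    """Simple proportional servoing loop driving pose towards target."""
    for _ in range(max_iters):
        error = target - pose
        if np.linalg.norm(error) < tol:
            break
        pose = pose + gain * error          # move a fraction of the error
    return pose

def replay(start_pose, demo_trajectory):
    """Replay the demonstrated trajectory as offsets from the aligned pose."""
    return [start_pose + delta for delta in demo_trajectory]

# Usage: two arms, each with its own observed pose, alignment target, and
# demonstrated trajectory (here, small random end-effector displacements).
rng = np.random.default_rng(0)
for arm in ("left", "right"):
    current = rng.normal(size=6)            # [x, y, z, roll, pitch, yaw]
    target = rng.normal(size=6)             # pose seen in the demonstration
    demo = [rng.normal(scale=0.01, size=6) for _ in range(5)]
    aligned = servo_to_target(current, target)
    waypoints = replay(aligned, demo)
    print(arm, "arm executes", len(waypoints), "waypoints after alignment")
```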
