
Edward Johns

At The Robot Learning Lab, we are developing advanced robots empowered by artificial intelligence, to assist us all in everyday environments. Our research lies at the intersection of robotics, computer vision, and machine learning, and we primarily study robot manipulation: robots that can physically interact with objects using their arms and hands. We are currently investigating new strategies based around Imitation Learning, Reinforcement Learning, and Vision-Language Models, to enable efficient and general learning capabilities. Applications include domestic robots (e.g. tidying the home), manufacturing robots (e.g. assembling products in a factory), and warehouse robots (e.g. picking and placing from/into storage). The lab is led by Dr Edward Johns in the Department of Computing at Imperial College London. Welcome!

Latest News

November 2024

We achieve in-context imitation learning in robotics, enabling tasks to be learned instantly from one or more demonstrations. A learned diffusion process predicts actions conditioned on the demonstrations and the current observation, all jointly expressed in a graph. The only training data needed is simulated "pseudo-demonstrations".
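As a rough sketch of the conditioning structure described above (our illustration, not the lab's implementation: the `denoiser` function below is a stand-in for the learned graph-based diffusion model, and the update rule is simplified):

```python
# Sketch only: sample an action sequence by iterative denoising, conditioned
# on demonstrations and the current observation packed into one context.
import numpy as np

rng = np.random.default_rng(0)

def denoiser(noisy_actions, context, t):
    """Stand-in for the learned graph-based diffusion model (hypothetical):
    it would predict the noise to remove at step t, given the context."""
    return np.zeros_like(noisy_actions)  # placeholder so the loop runs

def sample_actions(demos, observation, horizon=8, action_dim=7, steps=50):
    # Demonstrations and the current observation jointly form the context
    # (in the actual method these are nodes of a single graph).
    context = np.concatenate([d.ravel() for d in demos] + [observation.ravel()])
    actions = rng.normal(size=(horizon, action_dim))  # start from pure noise
    for t in reversed(range(steps)):                  # reverse diffusion loop
        actions = actions - denoiser(actions, context, t) / steps  # simplified update
    return actions

demos = [rng.normal(size=(10, 3)) for _ in range(2)]  # e.g. pseudo-demonstrations
observation = rng.normal(size=(10, 3))
print(sample_actions(demos, observation).shape)       # (8, 7)
```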

October 2024

We show that self-supervised learning enables robots to learn vision-based policies for precise, complex tasks, such as locking a lock with a key, from just a single demonstration and one environment reset. The self-supervised data collection generates augmentation trajectories which show the robot how to return to, and then follow, the single demonstration.
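A minimal sketch of the augmentation idea, under our own simplifying assumptions (straight-line returns in state space; the names `augmentation_trajectory` and `join_index` are illustrative, and the real method collects these trajectories on the robot):

```python
# Sketch only: generate an "augmentation trajectory" that returns to a point
# on a single demonstration and then follows the remainder of it.
import numpy as np

rng = np.random.default_rng(0)

def augmentation_trajectory(demo, join_index, offset, return_steps=10):
    """demo: (T, D) array of end-effector states along the demonstration."""
    start = demo[join_index] + offset                      # perturbed start state
    alphas = np.linspace(0.0, 1.0, return_steps)[:, None]
    # Straight-line "return" segment back onto the demonstration...
    return_segment = (1.0 - alphas) * start + alphas * demo[join_index]
    # ...followed by the rest of the demonstration itself.
    return np.vstack([return_segment, demo[join_index + 1:]])

demo = np.linspace([0.0, 0.0, 0.0], [0.3, 0.0, 0.2], 50)   # toy 3D demo path
aug = augmentation_trajectory(demo, join_index=20,
                              offset=rng.normal(scale=0.02, size=3))
print(aug.shape)                                           # (39, 3)
```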

September 2024

If a robot learns a skill with a grasped object (e.g. a tool), that skill will usually fail if the robot later grasps the object differently from how it was grasped when the skill was learned. In this work, we introduce a self-supervised data collection method that enables a robot to adapt a skill to a novel grasp, even though the skill was learned with a different grasp.

R+X accepted at RSS 2024 Workshops!
(Data Generation and Lifelong Robot Learning Workshops)

July 2024

R+X enables robots to learn skills from long, unlabelled first-person videos of humans performing everyday tasks. Given a language command from a human, R+X first retrieves short video clips containing relevant behaviour, and then conditions an in-context imitation learning technique (KAT) on this behaviour to execute the skill.
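A toy sketch of the retrieve-then-condition pipeline (our illustration, not R+X itself: `embed` is a bag-of-words stand-in for the vision-language features used for retrieval, and `in_context_policy` stands in for KAT):

```python
# Toy sketch: retrieve clips relevant to a language command, then condition
# an in-context policy on them.
import numpy as np

def embed(text):
    """Bag-of-words stand-in for the vision-language features used by R+X."""
    vocab = ["open", "close", "drawer", "cup", "pick", "place", "wipe", "table"]
    return np.array([text.lower().count(word) for word in vocab], dtype=float)

def retrieve(command, clips, k=2):
    """Rank clips by similarity between the command and each clip description."""
    query = embed(command)
    scores = [float(query @ embed(clip["description"])) for clip in clips]
    best = np.argsort(scores)[::-1][:k]
    return [clips[i] for i in best]

def in_context_policy(retrieved_clips, observation):
    """Stand-in for KAT: would predict actions conditioned on the clips."""
    return {"conditioned_on": [clip["description"] for clip in retrieved_clips]}

clips = [{"description": "pick up the cup"},
         {"description": "open the drawer"},
         {"description": "wipe the table"}]
print(in_context_policy(retrieve("please open the drawer", clips), observation=None))
```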

Keypoint Action Tokens accepted at RSS 2024!

May 2024

By representing observations and actions as 3D keypoints, we can feed demonstrations directly into an LLM for in-context imitation learning, exploiting the LLM's inherent pattern-recognition ability. This is a very different "LLMs + Robotics" idea from the usual one: rather than using LLMs for high-level reasoning with natural language, we use them for low-level reasoning with numerical keypoints.
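A minimal sketch of how such a prompt might be assembled (our illustration, not the released Keypoint Action Tokens code; the actual keypoint extraction, the LLM call, and the parsing of its reply are omitted):

```python
# Sketch only: serialise keypoint/action demonstrations as text and build a
# pattern-completion prompt for an off-the-shelf LLM.
import numpy as np

def tokens(points, decimals=2):
    """Flatten 3D keypoints into a compact numeric text sequence."""
    return " ".join(f"{value:.{decimals}f}" for value in np.asarray(points).ravel())

def build_prompt(demos, new_observation):
    lines = ["Continue the pattern. Each line maps observed keypoints to action keypoints."]
    for obs_keypoints, action_keypoints in demos:
        lines.append(f"observation: {tokens(obs_keypoints)} -> action: {tokens(action_keypoints)}")
    lines.append(f"observation: {tokens(new_observation)} -> action:")
    return "\n".join(lines)

rng = np.random.default_rng(0)
demos = [(rng.uniform(size=(4, 3)), rng.uniform(size=(4, 3))) for _ in range(2)]
prompt = build_prompt(demos, rng.uniform(size=(4, 3)))
print(prompt)  # in practice, the prompt is sent to an LLM and the returned
               # numbers are parsed back into end-effector actions
```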

May 2024

Can LLMs predict dense robot trajectories using only internal reasoning? We study whether a single, task-agnostic prompt can enable an LLM to solve a range of tasks when given access to an object detector, without requiring any action primitives, in-context examples, or external trajectory optimisers.
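A toy sketch of the idea (illustrative only: the prompt below is not the paper's task-agnostic prompt, the detections are hard-coded, and the LLM reply is mocked rather than generated):

```python
# Sketch only: assemble a prompt from object-detector output and parse a dense
# trajectory from an LLM reply (mocked here as a hard-coded string).
import re

def build_prompt(task, detections):
    objects = "; ".join(f"{name} at {position}" for name, position in detections.items())
    return (f"Task: {task}\n"
            f"Detected objects: {objects}\n"
            "Output a dense end-effector trajectory, one waypoint per line, as x, y, z, gripper.")

detections = {"cup": (0.42, -0.10, 0.05), "sponge": (0.30, 0.15, 0.02)}
prompt = build_prompt("wipe the table next to the cup", detections)

# In practice the reply comes back from the LLM; here it is mocked.
reply = "0.30, 0.15, 0.10, 1\n0.30, 0.15, 0.02, 0\n0.35, 0.15, 0.02, 0"
trajectory = [[float(value) for value in re.split(r",\s*", line)]
              for line in reply.splitlines()]
print(len(trajectory), trajectory[0])  # 3 waypoints; [0.3, 0.15, 0.1, 1.0]
```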
