Edward Johns
Email: e.johns@imperial.ac.uk
At The Robot Learning Lab, we are developing advanced robots, empowered by artificial intelligence, to assist us all in everyday environments. Our research lies at the intersection of robotics, computer vision, and machine learning, and we primarily study robot manipulation: robots that can physically interact with objects using their arms and hands. We are currently investigating new strategies based on Imitation Learning, Reinforcement Learning, and Vision-Language Models, to enable efficient and general learning capabilities. Applications include domestic robots (e.g. tidying the home), manufacturing robots (e.g. assembling products in a factory), and warehouse robots (e.g. picking and placing items from/into storage). The lab is led by Dr Edward Johns in the Department of Computing at Imperial College London. Welcome!
Latest News
Instant Policy accepted at CoRL 2024 X-Embodiment Workshop!
Instant Policy: In-Context Imitation Learning via Graph Diffusion
November 2024
We achieve in-context imitation learning in robotics, enabling tasks to be learned instantly from one or more demonstrations. A learned diffusion process predicts actions when conditioned on the demonstrations and the current observation, all jointly expressed in a graph. The only training data needed is simulated "pseudo-demonstrations".
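As a rough, hypothetical illustration of the sampling structure only (not the trained model itself), the sketch below runs a denoising loop over action keypoints conditioned on a graph of demonstration and observation nodes; `build_graph` and `toy_denoiser` are illustrative stand-ins for the learned graph-diffusion network.

```python
import numpy as np

def build_graph(demo_keypoints, obs_keypoints, action_keypoints):
    """Stack all node positions into one array; a real model would also
    construct edges between demo, observation, and action nodes."""
    return np.concatenate([demo_keypoints, obs_keypoints, action_keypoints], axis=0)

def toy_denoiser(graph, noisy_actions, t):
    """Toy stand-in for the learned graph denoiser: nudge the noisy action
    nodes towards the mean of the conditioning (demo + observation) nodes."""
    context = graph[: len(graph) - len(noisy_actions)]
    return (noisy_actions - context.mean(axis=0)) / max(t, 1)

def sample_actions(demo_keypoints, obs_keypoints, n_actions=8, n_steps=50):
    """Reverse-diffusion loop: start the action nodes from Gaussian noise and
    iteratively denoise them, conditioned on the demo/observation graph."""
    actions = np.random.randn(n_actions, 3)
    for t in range(n_steps, 0, -1):
        graph = build_graph(demo_keypoints, obs_keypoints, actions)
        actions = actions - 0.1 * toy_denoiser(graph, actions, t)
    return actions

demo_kps = np.random.rand(16, 3)  # keypoints from one demonstration
obs_kps = np.random.rand(16, 3)   # keypoints from the current observation
print(sample_actions(demo_kps, obs_kps).shape)  # -> (8, 3) predicted action keypoints
```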
MILES accepted at CoRL 2024!
Making Imitation Learning Easy with Self-Supervision
October 2024
We show that self-supervised learning enables robots to learn vision-based policies for precise, complex tasks, such as locking a lock with a key, from just a single demonstration and one environment reset. The self-supervised data collection generates augmentation trajectories which show the robot how to return to, and then follow, the single demonstration.
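As a simplified, hypothetical sketch of the "return to the demonstration, then follow it" idea (treating poses as plain 3D waypoints and ignoring orientation and contact; this is not the paper's full data-collection procedure), one augmentation trajectory could look like this:

```python
import numpy as np

def augmentation_trajectory(demo, start_pose, n_return_steps=10):
    """Build one augmentation trajectory: move from a perturbed start pose
    back onto the single demonstration, then replay its remainder.
    `demo` is an (N, 3) array of end-effector waypoints."""
    # Find the demo waypoint closest to the perturbed start pose.
    nearest = int(np.argmin(np.linalg.norm(demo - start_pose, axis=1)))
    # Linearly interpolate from the start pose back onto the demonstration.
    alphas = np.linspace(0.0, 1.0, n_return_steps)[:, None]
    return_segment = (1 - alphas) * start_pose + alphas * demo[nearest]
    # Follow the rest of the demonstration from that point onwards.
    return np.concatenate([return_segment, demo[nearest:]], axis=0)

demo = np.linspace([0.0, 0.0, 0.0], [0.3, 0.0, 0.2], 50)   # the single demo
perturbed_start = demo[20] + np.random.uniform(-0.02, 0.02, 3)
print(augmentation_trajectory(demo, perturbed_start).shape)
```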
Adapting Skills to Novel Grasps: A Self-Supervised Approach accepted at IROS 2024!
September 2024
If a robot learns a skill with a grasped object (e.g. a tool), then that skill will usually fail if the robot later grasps the object differently from how it was grasped when the skill was learned. In this work, we introduce a self-supervised data collection method which enables a robot to adapt a skill to a novel grasp, even though the skill was learned with a different grasp.
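To illustrate the underlying geometry of the problem (this is background only, not the paper's self-supervised approach): if the pose of the object within the gripper changes between learning and execution, an end-effector trajectory can in principle be re-targeted so that the object itself still traces the learned path. The 4x4 homogeneous transforms below are purely illustrative.

```python
import numpy as np

def retarget_trajectory(ee_traj, T_obj_in_gripper_old, T_obj_in_gripper_new):
    """Re-express an end-effector trajectory so the grasped object follows
    the same world-frame path under a new grasp. All inputs are 4x4
    homogeneous transforms (object pose in world = T_ee @ T_obj_in_gripper)."""
    correction = T_obj_in_gripper_old @ np.linalg.inv(T_obj_in_gripper_new)
    return [T_ee @ correction for T_ee in ee_traj]

def translation(x, y, z):
    """Helper: a pure-translation homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

old_grasp = translation(0.0, 0.0, 0.10)    # object 10 cm below the gripper
new_grasp = translation(0.0, 0.02, 0.12)   # object now grasped slightly differently
skill = [translation(0.1 * i, 0.0, 0.3) for i in range(5)]
adapted = retarget_trajectory(skill, old_grasp, new_grasp)
```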
R+X accepted at RSS 2024 Workshops!
(Data Generation and Lifelong Robot Learning Workshops)
July 2024
R+X enables robots to learn skills from long, unlabelled first-person videos of humans performing everyday tasks. Given a language command from a human, R+X first retrieves short video clips containing relevant behaviour, and then conditions an in-context imitation learning technique (KAT) on this behaviour to execute the skill.
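A toy sketch of the retrieve-then-condition structure, assuming the long video has already been split into clips with short text descriptions (R+X itself retrieves from the videos directly; TF-IDF over captions is used here only to keep the example self-contained):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical clip descriptions extracted from a long first-person video.
clip_descriptions = [
    "opening the dishwasher door",
    "placing a mug on the shelf",
    "wiping the kitchen counter",
    "closing a drawer with the left hand",
]

def retrieve_clips(command, descriptions, top_k=2):
    """Rank clips by textual similarity to the language command and return
    the indices of the most relevant ones."""
    vectoriser = TfidfVectorizer().fit(descriptions + [command])
    clip_vecs = vectoriser.transform(descriptions)
    cmd_vec = vectoriser.transform([command])
    scores = cosine_similarity(cmd_vec, clip_vecs)[0]
    return scores.argsort()[::-1][:top_k]

indices = retrieve_clips("put the mug on the shelf", clip_descriptions)
# The retrieved clips would then condition the in-context imitation step (KAT).
print(indices)
```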
Keypoint Action Tokens accepted at RSS 2024!
May 2024
By representing observations and actions as 3D keypoints, we can feed demonstrations directly into an LLM for in-context imitation learning, exploiting the LLM's inherent pattern-recognition ability. This is a very different take on "LLMs + Robotics" from the usual one: rather than using LLMs for high-level reasoning with natural language, we use them for low-level reasoning with numerical keypoints.
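A hedged sketch of the core idea of serialising demonstrations as numerical tokens for an LLM; the token format and prompt layout below are illustrative placeholders, not the paper's exact interface.

```python
import numpy as np

def serialise(keypoints):
    """Render a set of 3D keypoints as a compact numerical string the LLM can
    pattern-match over, e.g. '[120,340,560][78,90,11]' in millimetres."""
    return "".join(f"[{int(x)},{int(y)},{int(z)}]" for x, y, z in keypoints * 1000)

def build_prompt(demos, current_obs):
    """Each demonstration is an (observation, action) pair of keypoint sets;
    the LLM is asked to complete the pattern for the current observation."""
    lines = []
    for obs_kps, act_kps in demos:
        lines.append(f"Observation: {serialise(obs_kps)}")
        lines.append(f"Action: {serialise(act_kps)}")
    lines.append(f"Observation: {serialise(current_obs)}")
    lines.append("Action:")
    return "\n".join(lines)

demos = [(np.random.rand(4, 3), np.random.rand(4, 3)) for _ in range(2)]
prompt = build_prompt(demos, np.random.rand(4, 3))
# The prompt is sent to an off-the-shelf LLM; its completion is parsed back
# into 3D keypoints and executed as the robot action.
print(prompt)
```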
Language Models as Zero-Shot Trajectory Generators accepted in RA-Letters!
May 2024
Can LLMs predict dense robot trajectories using only internal reasoning? We study whether a single, task-agnostic prompt can enable an LLM to solve a range of tasks when given access to an object detector, without requiring any action primitives, in-context examples, or external trajectory optimisers.
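A hedged sketch of this setup, assuming an OpenAI-style chat API, a hypothetical `detect_objects` helper, and an LLM that returns valid JSON; the actual task-agnostic prompt used in the paper is far more detailed than the one shown here.

```python
import json
from openai import OpenAI  # assumes the `openai` package and an API key are configured

TASK_AGNOSTIC_PROMPT = (
    "You control a robot gripper. Given the 3D positions of detected objects, "
    "output a dense trajectory of end-effector waypoints as a JSON list of "
    "[x, y, z, gripper] values (metres; gripper 0=open, 1=closed) that completes "
    "the task. Output only the JSON list."
)

def detect_objects():
    """Hypothetical object detector output: object name -> [x, y, z] in metres."""
    return {"cup": [0.42, -0.10, 0.03], "saucer": [0.55, 0.08, 0.02]}

def generate_trajectory(task, client, model="gpt-4o"):
    """Ask the LLM for a dense trajectory, conditioned only on the task string
    and the detected object positions, then parse the JSON it returns."""
    user_msg = f"Task: {task}\nDetections: {json.dumps(detect_objects())}"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": TASK_AGNOSTIC_PROMPT},
                  {"role": "user", "content": user_msg}],
    )
    return json.loads(response.choices[0].message.content)

# client = OpenAI()
# waypoints = generate_trajectory("place the cup on the saucer", client)
```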