Edward Johns
Email: e.johns@imperial.ac.uk
At The Robot Learning Lab, we are developing advanced robots, empowered by artificial intelligence, to assist us all in everyday environments. Our research lies at the intersection of robotics, computer vision, and machine learning, and we primarily study robot manipulation: robots that can physically interact with objects using their arms and hands. We are currently investigating new strategies based on Imitation Learning, Reinforcement Learning, and Vision-Language Models, to enable efficient and general learning capabilities. Applications include domestic robots (e.g. tidying the home), manufacturing robots (e.g. assembling products in a factory), and warehouse robots (e.g. picking and placing from/into storage). The lab is led by Dr Edward Johns in the Department of Computing at Imperial College London. Welcome!
Latest News
Adapting Skills to Novel Grasps: A Self-Supervised Approach accepted at IROS 2024!
September 2024
If a robot learns a skill with a grasped object (e.g. a tool), that skill will usually fail if the robot later grasps the object differently from how it was grasped when the skill was learned. In this work, we introduce a self-supervised data collection method, which enables a robot to adapt a skill to a novel grasp even though the skill was learned using a different grasp.
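The underlying geometry can be sketched in a few lines: if the change between the demonstration grasp and the novel grasp is known (estimating it is where self-supervised data collection comes in), the end-effector trajectory can be remapped so that the grasped object still follows the demonstrated path. The function and variable names below are illustrative assumptions, not the paper's actual code.

```python
# A minimal sketch of trajectory remapping under a grasp change.
# All names here are illustrative, not the paper's API.
import numpy as np

def remap_trajectory(ee_poses_demo, G_demo, G_new):
    """Remap demonstrated end-effector poses to a novel grasp.

    ee_poses_demo: list of 4x4 homogeneous end-effector poses from the demo.
    G_demo, G_new: 4x4 object-in-gripper transforms for the old/new grasp.
    """
    # Object pose during the demo: O = E_demo @ G_demo.
    # We want the same object pose under the new grasp: O = E_new @ G_new,
    # hence E_new = E_demo @ G_demo @ inv(G_new).
    correction = G_demo @ np.linalg.inv(G_new)
    return [E @ correction for E in ee_poses_demo]
```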
R+X accepted at RSS 2024 Workshops!
(Data Generation and Lifelong Robot Learning Workshops)
July 2024
R+X enables robots to learn skills from long, unlabelled first-person videos of humans performing everyday tasks. Given a language command from a human, R+X first retrieves short video clips containing relevant behaviour, and then conditions an in-context imitation learning technique (KAT) on this behaviour to execute the skill.
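As a rough illustration of the retrieval step, one could embed the language command and each candidate clip in a shared space and keep the closest matches as context for the imitation learner. The embedding functions, similarity measure, and top-k choice below are assumptions for illustration, not R+X's actual implementation.

```python
# A hedged sketch of language-conditioned clip retrieval via cosine
# similarity in a shared text-video embedding space.
import numpy as np

def retrieve_clips(command, clips, embed_text, embed_clip, top_k=5):
    """Return the top_k clips whose embeddings best match the command."""
    q = embed_text(command)                       # (d,) text embedding
    q = q / np.linalg.norm(q)
    scores = []
    for clip in clips:
        v = embed_clip(clip)                      # (d,) video-clip embedding
        scores.append(v @ q / np.linalg.norm(v))  # cosine similarity
    best = np.argsort(scores)[::-1][:top_k]
    return [clips[i] for i in best]

# The retrieved clips would then be converted into demonstrations for an
# in-context imitation learner such as KAT (see the news item below).
```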
Keypoint Action Tokens accepted at RSS 2024!
May 2024
By representing observations and actions as 3D keypoints, we can feed demonstrations directly into an LLM for in-context imitation learning, exploiting the LLM's inherent pattern-recognition ability. This is a very different "LLMs + Robotics" idea from the usual one: rather than using LLMs for high-level reasoning with natural language, we use them for low-level reasoning with numerical keypoints.
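A minimal sketch of this pattern, under assumed helper names: each demonstration becomes an (observation keypoints → action keypoints) text pair in a single prompt, and the LLM is asked to complete the pair for a new observation. KAT's actual tokenisation and prompt design are more careful than this.

```python
# Sketch of in-context imitation learning with keypoints serialised as text.
def format_keypoints(kps):
    # kps: list of (x, y, z) tuples, e.g. in millimetres, written as numbers.
    return "; ".join(f"{x:.0f},{y:.0f},{z:.0f}" for x, y, z in kps)

def build_prompt(demos, new_obs):
    lines = ["Continue the pattern."]
    for obs_kps, act_kps in demos:
        lines.append(f"Input: {format_keypoints(obs_kps)}")
        lines.append(f"Output: {format_keypoints(act_kps)}")
    lines.append(f"Input: {format_keypoints(new_obs)}")
    lines.append("Output:")
    return "\n".join(lines)

# Usage (query_llm and parse_keypoints are assumed helpers, not real APIs):
# action_keypoints = parse_keypoints(query_llm(build_prompt(demos, new_obs)))
```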
Language Models as Zero-Shot Trajectory Generators accepted in RA-Letters!
May 2024
Can LLMs predict dense robot trajectories using only internal reasoning? We study whether a single, task-agnostic prompt can enable an LLM to solve a range of tasks when given access to an object detector, without requiring any action primitives, in-context examples, or external trajectory optimisers.
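To convey the structure of the setup being studied: one fixed prompt, filled with the task command and the detector's output, asks the LLM to reason and then emit a dense trajectory directly. The exact prompt and output format in the paper differ; query_llm below is an assumed helper, not a real API.

```python
# A hedged sketch of a single, task-agnostic prompt for trajectory generation.
TASK_AGNOSTIC_PROMPT = """You control a robot arm. The gripper pose is
[x, y, z, roll, pitch, yaw, gripper_open]. Detected objects: {objects}.
Task: {command}.
First reason step by step about how to move, then output the trajectory as a
list of poses, one per line, densely sampled (~100 steps), with no code."""

def generate_trajectory(command, detections, query_llm):
    objects = ", ".join(f"{name} at {pose}" for name, pose in detections.items())
    prompt = TASK_AGNOSTIC_PROMPT.format(objects=objects, command=command)
    return query_llm(prompt)  # raw text trajectory, to be parsed and executed
```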
Further Information
To discover more about our research, click here.
To find out about our team members, click here.
To follow our seminar series, click here.
For PhD applications, click here.
For internships and visiting positions, click here.
Or for all other enquiries, please contact me at e.johns@imperial.ac.uk.