Edward Johns
Email:
e.johns@imperial.ac.uk
At The Robot Learning Lab, we are developing advanced robots empowered by artificial intelligence to assist us all in everyday environments. Our research lies at the intersection of robotics, computer vision, and machine learning, and we primarily study robot manipulation: robots that can physically interact with objects using their arms and hands. We are currently investigating new strategies based on Imitation Learning, Reinforcement Learning, and Vision-Language Models, to enable efficient and general learning capabilities. Applications include domestic robots (e.g. tidying the home), manufacturing robots (e.g. assembling products in a factory), and warehouse robots (e.g. picking and placing objects from/into storage). The lab is led by Dr Edward Johns in the Department of Computing at Imperial College London. Welcome!
Latest News
Instant Policy accepted at ICLR 2025!
Instant Policy: In-Context Imitation Learning via Graph Diffusion
January 2025
We achieve in-context imitation learning in robotics, enabling tasks to be learned instantly from one or more demonstrations. A learned diffusion process predicts actions when conditioned on the demonstrations and the current observation, all jointly expressed in a graph. The only training data needed is simulated "pseudo-demonstrations".
R+X accepted at ICRA 2025!
R+X: Retrieval and Execution from Everyday Human Videos
January 2025
R+X enables robots to learn skills from long, unlabelled, first-person videos of humans performing everyday tasks. Given a language command from a human, R+X first retrieves short video clips containing the relevant behaviour, and then conditions an in-context imitation learning method (KAT) on this behaviour to execute the skill.
MILES accepted at CoRL 2024!
MILES: Making Imitation Learning Easy with Self-Supervision
October 2024
We show that self-supervised learning enables robots to learn vision-based policies for precise, complex tasks, such as locking a lock with a key, from just a single demonstration and one environment reset. Self-supervised data collection generates augmentation trajectories that show the robot how to return to, and then follow, the single demonstration.