Cornell University researchers have introduced an artificial intelligence framework that could significantly speed up how robots learn new tasks.
The new system, called RHyME, short for Retrieval for Hybrid Imitation under Mismatched Execution, allows robots to learn how to perform complex actions by watching just a single how-to video.
Smarter robots, less training
Usually, teaching robots even basic tasks requires hours of programming and large amounts of data. Robots typically need precise instructions and can become confused or fail when something unexpected happens. Even slight differences in how a task is demonstrated can impede a robot’s learning process.
Developed by a team of computer science researchers at Cornell, this new AI framework makes robot learning faster, more flexible, and more human-like.
Instead of depending on slow, step-by-step instruction, RHyME allows robots to draw inspiration from a single video demonstration and complete a sequence of actions, even if they haven’t seen that exact task before.
Learning by watching
RHyME works by tapping into a memory bank of previously seen videos and related experiences. When a robot is shown a new task, such as placing a mug in a sink, it searches through its stored examples to find similar actions like picking up a cup or placing an item down.
By stitching these fragments of memory together, the robot figures out how to perform the new task independently.
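The retrieve-and-stitch idea described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual RHyME implementation: the bag-of-words embedding, the memory bank contents, and the cosine-similarity retrieval are all assumptions made for the sake of the example.

```python
import numpy as np

# Toy vocabulary for embedding short action descriptions (an assumption
# for this sketch; a real system would embed video frames, not text).
VOCAB = ["pick", "up", "cup", "mug", "place", "item", "down", "sink"]

def embed(text):
    """Map an action description to a normalised bag-of-words vector."""
    words = text.lower().split()
    v = np.array([1.0 if w in words else 0.0 for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

# Memory bank of previously seen robot experience snippets.
MEMORY = ["pick up cup", "place item down", "open drawer", "wipe table"]
MEMORY_VECS = {clip: embed(clip) for clip in MEMORY}

def retrieve(segment):
    """Return the stored snippet most similar to one demo segment."""
    scores = {clip: float(embed(segment) @ vec)
              for clip, vec in MEMORY_VECS.items()}
    return max(scores, key=scores.get)

def plan_from_demo(demo_segments):
    """Stitch retrieved snippets into a plan, one per demo segment."""
    return [retrieve(seg) for seg in demo_segments]

# A single human demo of "place mug in sink", split into two segments:
# neither segment appears verbatim in memory, yet similar snippets are found.
plan = plan_from_demo(["pick up mug", "place mug in sink"])
print(plan)  # -> ['pick up cup', 'place item down']
```

Even this simplified version shows the key property: the robot never needs to have seen the exact task, only experiences similar enough to be retrieved and recombined.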
This new method makes robot learning faster and requires far less data. While older techniques might need thousands of hours of demonstrations, RHyME needs only about 30 minutes of robot training data. In testing, robots equipped with RHyME were over 50% more successful at completing tasks than those using earlier approaches.
Real-world impact
Unlike past systems that required the demonstrations to be flawless and carefully synchronised with robot movement, RHyME is designed to handle mismatches and adapt accordingly. This means the robot can still learn, even if the human moves differently or if a mistake occurs during the demonstration.
The research team behind RHyME includes doctoral and master’s students in computer science, and the project has received support from Google, OpenAI, the U.S. Office of Naval Research, and the National Science Foundation. Their work will be presented at the upcoming IEEE International Conference on Robotics and Automation in Atlanta.
Though home robots that perform daily chores remain a work in progress, innovations like RHyME bring that vision closer to reality. By reducing the time and cost of training, the framework could help pave the way for more practical, intelligent robotic assistants in homes, workplaces, and beyond.
With RHyME, robots are learning not just how to complete tasks, but how to think more like humans, using memory, creativity, and adaptability to navigate an unpredictable world.