To effectively tackle a variety of real-world tasks, robots should be able to reliably grasp objects of different shapes, textures and sizes, without dropping them in undesired locations. Conventional approaches to enhancing the ability of robots to grasp objects work by tightening the grip of a robotic hand to prevent objects from slipping.
Researchers at the University of Lincoln, Toshiba Europe’s Cambridge Research Laboratory, the University of Surrey, Arizona State University and KAIST recently introduced an alternative computational strategy for preventing grasped objects from slipping, which works by modulating the trajectories that a robotic hand follows while performing manipulation movements. Their approach, consisting of a robotic controller and a new bio-inspired predictive trajectory modulation strategy, was presented in a paper published in Nature Machine Intelligence.
“The inspiration for this paper came from a very human experience,” Amir Ghalamzan, senior author of the paper, told Tech Xplore.
“When you carry a fragile or slippery object and feel it beginning to slip, you don’t just squeeze harder. Instead, you subtly adjust your movements—slowing down, tilting, or repositioning your hand—to keep hold of it. Robots, however, have historically just relied on increasing grip force to prevent slipping, which doesn’t always work and can even damage delicate objects. We aimed to investigate whether we could train robots to behave more like humans in these scenarios.”
The main objective of the recent study by Ghalamzan and his colleagues was to develop a controller that can predict when an object might slip from a robot’s grasp and adjust its movements accordingly to prevent it from slipping, similarly to how humans might adjust their movements when handling objects. The controller they developed relies on a bio-inspired trajectory modulation strategy that complements conventional techniques to modulate the force of a robot’s grip, enabling more dexterous manipulation strategies.

“Our approach mimics how humans use internal models to interact with the world,” explained Ghalamzan. “Just as the human brain continuously predicts the outcomes of our actions—like whether a glass might slip if we move too fast—we built a data-driven internal model, or ‘world model,’ that allows a robot to predict the future tactile sensations it will experience. These predictions are then used to detect slip instances and adjust movements in such a way that no slip instance will occur.”
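The paper's implementation is not reproduced here, but the idea of a learned tactile "world model" can be sketched in rough terms. The snippet below is a minimal illustration only: the class name TactileForwardModel, the tensor dimensions and the simple MLP architecture are assumptions for readability, not the authors' actual network, which is described in the Nature Machine Intelligence paper.

```python
# Minimal sketch (not the authors' code): a learned tactile forward model that
# predicts the next tactile reading from the current reading and a planned
# end-effector action, plus a simple slip classifier on top of the prediction.
# Names, dimensions and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class TactileForwardModel(nn.Module):
    def __init__(self, tactile_dim=16, action_dim=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(tactile_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, tactile_dim),   # predicted next tactile frame
        )
        self.slip_head = nn.Linear(tactile_dim, 1)  # slip / no-slip logit

    def forward(self, tactile, action):
        next_tactile = self.net(torch.cat([tactile, action], dim=-1))
        p_slip = torch.sigmoid(self.slip_head(next_tactile))
        return next_tactile, p_slip

def predict_slip_profile(model, tactile_now, planned_actions):
    """Roll the model forward over a planned action sequence and return the
    predicted slip probability at each future control step."""
    tactile = tactile_now
    slip_probs = []
    for action in planned_actions:
        tactile, p_slip = model(tactile, action)
        slip_probs.append(p_slip)
    return torch.stack(slip_probs)
```

Training such a model would use logged tactile and action data, so that at run time the robot can "imagine" the tactile consequences of a planned motion before executing it.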
The team’s controller allows robots to slow down, change direction and adapt the position and orientation of their hands in real time, instead of simply squeezing objects harder to prevent them from slipping. This alternative strategy of securing objects by altering a robot’s movements could reduce the risk of fragile objects breaking while a robot handles them. The trajectory modulation approach also works in cases where the force of a robot’s grip cannot be altered, enabling more fluid and smarter interactions with a broad range of objects.
“Our study presents two key breakthroughs,” said Ghalamzan. “The first is a motion-based slip controller that is the first of its kind. This strategy complements grip-force-based control and is especially valuable when increasing grip force isn’t feasible—such as with fragile objects, wet or slippery surfaces, or hardware that doesn’t support dynamic grip control.
“The second is a predictive controller powered by a learned tactile forward model (i.e., world model), which enables robots to forecast slip based on their planned actions.”
The newly developed controller was used to plan the motions of a robotic gripper and tested in dynamic, unstructured environments. Notably, it was found to significantly improve the stability of a robot’s grasp in some cases, outperforming conventional controllers that work solely by adapting the force of a robot’s grip.
“Embedding such a model into a predictive control loop has traditionally been too computationally demanding,” said Ghalamzan. “Our study shows that it’s not only feasible, but also effective.”
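One way to read the "predictive control loop" Ghalamzan describes is as a receding-horizon search over candidate trajectory modulations, scored by the learned forward model. The sketch below is an assumption about that general scheme, not the paper's exact controller: it only slows the nominal motion down (the actual approach can also change direction and hand pose), and it reuses the hypothetical predict_slip_profile function from the previous snippet.

```python
# Minimal sketch (an assumption, not the paper's controller): at each control
# step, search over slow-down factors for the remaining reference trajectory
# and keep the least-modified candidate whose predicted slip risk stays low.
import torch

def modulate_trajectory(model, tactile_now, reference_actions,
                        scales=(1.0, 0.75, 0.5, 0.25), slip_threshold=0.2):
    """reference_actions: planned end-effector velocity commands (tensors)."""
    for scale in scales:                       # prefer the least modulation
        candidate = [a * scale for a in reference_actions]
        slip_probs = predict_slip_profile(model, tactile_now, candidate)
        if slip_probs.max() < slip_threshold:  # predicted to stay grasp-stable
            return candidate
    # If even the slowest candidate risks slip, fall back to (near) stopping.
    return [a * 0.0 for a in reference_actions]

# In a receding-horizon loop, only the first command of the chosen candidate
# is executed before re-planning at the next tactile measurement.
```

Searching and re-planning like this at every control step is what makes the approach computationally demanding, which is the feasibility question the study addresses.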
The recent work by this team of researchers could contribute to the advancement of robotic systems, enabling them to safely handle a variety of physical, and potentially also social, interactions using a world model. This might allow robots, for instance, to handle different objects across a wide range of real-world settings, including household environments, manufacturing sites and health care facilities.
“We are actively working to make our predictive controller faster and more efficient, so it can be deployed in even more demanding real-time settings,” added Ghalamzan. “This includes exploring different architectural and algorithmic techniques to reduce computational overhead.”
As part of their next studies, the researchers are also expanding their system to support more advanced and complex object manipulation tasks, including the handling of deformable objects or items that need to be manipulated with two hands. Eventually, they also plan to combine their approach with computer vision algorithms, which would allow the system to plan robot trajectories based on both tactile and visual information.
“Another important direction is improving the verifiability and explainability of these learned models,” added Ghalamzan. “As we move toward more intelligent and autonomous systems, it’s critical that humans can understand and trust how robots make decisions. Our long-term vision is to develop predictive controllers that are not only effective but also transparent and safe for deployment in the real world.”
More information: Kiyanoush Nazari et al., Bioinspired trajectory modulation for effective slip control in robot manipulation, Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01062-2