To be successfully deployed at scale and across a wide range of real-world settings, robots should be able to rapidly adjust their movements while interacting with humans and their surroundings, responding to changes in their environment. Many robots developed so far, however, perform well in controlled environments but often struggle in unstructured settings.
Researchers at the University of Granada in Spain and at EPFL in Switzerland recently developed a new control solution inspired by neuromechanics, specifically by the integrative action of the central nervous system and the biomechanics of the human body.
Their proposed control system, outlined in a paper published in Science Robotics, was found to modulate the stiffness of robots, improving the accuracy of their movements and boosting their adaptability to changes in their surroundings.
“Our recent article emerged from an exciting collaboration during the final phase of the flagship EU project, the Human Brain Project (HBP),” Niceto R. Luque, senior author of the paper, told Tech Xplore.
“We had the opportunity to work closely with the Biorobotics Lab at the EPFL (Switzerland), led by Professor Auke Ijspeert, whose cutting-edge work in muscle simulation frameworks influenced our research. Inspired by how human muscles operate in pairs (the so-called agonist–antagonist relationship), we focused on how muscle co-contraction dynamically adjusts stiffness.”
The main objective of the recent study by Luque and his colleagues was to develop a new biomechanics-inspired control solution that overcomes the limitations of the conventional impedance/admittance control paradigms underpinning the movements of industrial robots. The solution they developed draws inspiration from the natural mechanisms via which humans learn to adapt their movements to changes in complex and unpredictable environments.
“Traditional control approaches often depend on highly complex mathematical formulations to manage force exchanges between a human and a robot (or between robots),” said Luque. “By contrast, our strategy mimics human muscle co-contraction to modulate stiffness directly, eliminating the need for expensive hardware solutions to determine the exchanged force and obviating the necessity for intricate dynamic formulations.
“This bio-inspired method aims to enable collaborative robots (or cobots) to display a broad spectrum of adaptable motor behaviors, thereby enhancing their performance and robustness across a variety of tasks.”
The neuromechanics-inspired robot control solution developed by these researchers has two key components, which mimic the systems that allow humans to control and adapt their movements. The first of these components is a muscle model, while the second is a so-called cerebellar network.
As suggested by its name, the muscle model is designed to replicate the mechanisms underpinning the movement of human muscles. This model specifically mirrors the fact that human muscles work in pairs, using a process known as “co-contraction.”
“In simple terms, when opposing muscles contract together, they adjust the stiffness of a joint,” explained Luque. “This enables the robot to alter the rigidity or flexibility of its movements according to the task at hand—much like how you might tighten your muscles when you need precision or relax them to move more freely. This ability to modulate stiffness is crucial for handling delicate tasks and absorbing unexpected forces.”
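To make the co-contraction idea concrete, the following minimal Python sketch shows how the same net joint torque can be produced at very different stiffness levels simply by raising the activations of an antagonist muscle pair together. It is an illustrative simplification, not the muscle model used in the paper; the function name, the linear mapping, and the gain values are assumptions made for this example.

```python
def antagonist_pair(u_flexor, u_extensor, tau_max=10.0, k_per_activation=40.0):
    """Toy agonist-antagonist mapping (illustrative simplification, not the
    paper's muscle model): net joint torque follows the DIFFERENCE of the two
    activations, while joint stiffness grows with their SUM (co-contraction)."""
    net_torque = tau_max * (u_flexor - u_extensor)            # drives the joint
    joint_stiffness = k_per_activation * (u_flexor + u_extensor)  # resists perturbations
    return net_torque, joint_stiffness

# Same net torque, very different stiffness (illustrative units):
print(antagonist_pair(0.30, 0.10))   # relaxed: torque 2.0 N*m, stiffness 16 N*m/rad
print(antagonist_pair(0.70, 0.50))   # co-contracted: torque 2.0 N*m, stiffness 48 N*m/rad
```

In this toy mapping, tightening both "muscles" leaves the commanded motion unchanged while making the joint far more resistant to external pushes, which is the behavior the quote above describes.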
The second component of the team’s control solution, complementing the muscle model, is the so-called cerebellar network. This is a system designed to mimic the function of the human cerebellum, a brain region responsible for fine-tuning people’s movements and adapting them based on feedback originating from both the body and the environment.
“By including this adaptive network, the robot can learn from its experiences and adjust its actions—and, more importantly, its co-contraction and stiffness—when faced with new tasks or unpredictable situations,” said Luque. “This means it does not rely solely on pre-programmed instructions or complex mathematical equations to operate. All in all, our solution provides the cobot with a form of ‘muscle memory’ and the ability to learn and adapt much like a human.”
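The following Python sketch conveys the general flavor of such trial-by-trial adaptation. It is a simplified, rate-based stand-in inspired by classical feedback-error learning, not the spiking cerebellar network described in the paper; the point-mass plant, the gains, the learning rate, and all function names are assumptions made purely for illustration.

```python
import numpy as np

def point_mass_joint(q, dq, torque, dt=0.01, mass=1.0):
    """Minimal single-joint plant, included only to make the sketch runnable."""
    ddq = torque / mass
    dq = dq + dt * ddq
    q = q + dt * dq
    return q, dq

def cerebellar_like_learning(q_des, dt=0.01, n_trials=20, lr=0.5,
                             stiffness=5.0, damping=2.0):
    """Trial-by-trial adaptation in the feedback-error-learning spirit: a linear
    feedforward correction is trained so that the spring-like feedback term
    (its gain standing in for co-contraction) does less and less of the work.
    A rate-based toy model, not the paper's spiking cerebellar network."""
    dq_des = np.gradient(q_des, dt)
    ddq_des = np.gradient(dq_des, dt)
    w = np.zeros(3)                                  # the learned "muscle memory"
    for trial in range(n_trials):
        q, dq, total_err = q_des[0], 0.0, 0.0
        for k in range(len(q_des)):
            phi = np.array([q_des[k], dq_des[k], ddq_des[k]])
            err = q_des[k] - q
            fb = stiffness * err + damping * (dq_des[k] - dq)
            q, dq = point_mass_joint(q, dq, fb + w @ phi, dt=dt)
            w += lr * fb * phi * dt                  # feedback acts as teaching signal
            total_err += abs(err)
        print(f"trial {trial:2d}  mean |error| = {total_err / len(q_des):.4f}")
    return w

# Smooth 2-second reach from 0 to 1 rad; errors should shrink across trials
# in this toy setting as the feedforward correction is learned.
t = np.arange(0.0, 2.0, 0.01)
q_des = 0.5 * (1.0 - np.cos(np.pi * t / 2.0))
w_learned = cerebellar_like_learning(q_des)
```

The key structural point the sketch tries to capture is the one Luque describes: the corrective command is not pre-programmed but accumulated from experience, using the feedback signal itself as the teacher.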
Luque and his colleagues evaluated their control solution in a series of tests, with highly promising results. Specifically, they showed that the co-contraction mechanism modulated the robots' stiffness and the accuracy of their movements, while increasing their resilience to external disturbances.
“We discovered that, similar to human learning, training under low co-contraction conditions results in lower stiffness,” explained Luque. “Although learning in these conditions is more challenging for the cerebellum, it enables effective operation under higher co-contraction without additional training. This indicates a clear preference for motor learning under low co-contraction conditions, which reduces the training time and helps prevent wear and tear.”
Because corrections learned under low co-contraction carry over to higher co-contraction without additional training, the controller can be trained in the compliant regime and then switched to stiffer, higher co-contraction behaviors whenever a task demands greater rigidity.
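Continuing the toy example above (and reusing numpy, its `point_mass_joint` helper, the learned weights `w_learned`, and the reference trajectory `q_des`, all of which are assumptions of that sketch rather than elements of the paper's implementation), one way to picture this transfer is to replay the correction learned at a low feedback gain with a much higher gain and compare the tracking error.

```python
def replay(q_des, w, stiffness, damping=2.0, dt=0.01):
    """Replay a previously learned feedforward correction at a chosen
    co-contraction (stiffness) level, with no further learning."""
    dq_des = np.gradient(q_des, dt)
    ddq_des = np.gradient(dq_des, dt)
    q, dq, total_err = q_des[0], 0.0, 0.0
    for k in range(len(q_des)):
        phi = np.array([q_des[k], dq_des[k], ddq_des[k]])
        err = q_des[k] - q
        torque = stiffness * err + damping * (dq_des[k] - dq) + w @ phi
        q, dq = point_mass_joint(q, dq, torque, dt=dt)
        total_err += abs(err)
    return total_err / len(q_des)

# Correction learned under LOW co-contraction, deployed stiff without retraining;
# in this toy setting the stiffer replay typically tracks at least as well.
print(replay(q_des, w_learned, stiffness=5.0))    # compliant regime
print(replay(q_des, w_learned, stiffness=25.0))   # stiff regime, same learned weights
```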
“The fact that we don’t need to train the cerebellum for all possible co-contraction scenarios significantly reduces the time required for training, thus minimizing wear and tear,” said Luque and Ignacio Abadía.
“We also provide variable stiffness in software, without adding specific hardware to the robot: we require neither contact force sensors nor torque sensors, which simplifies our neuromechanical implementation across diverse robots. This capability is crucial for robots that need to operate in unpredictable environments and interact safely with humans.”
![Neuromechanics-inspired control solution boosts robot adaptability](https://scx1.b-cdn.net/csz/news/800a/2025/a-new-neuromechanics-i.jpg)
The recent work by Luque and his colleagues opens new possibilities for the development of versatile and reliable robotic systems for a wide range of applications, ranging from industrial robots to health care and service robots. In their next papers, the researchers plan to improve their controller, upgrading both its software and mechanical components.
“We are currently enhancing the learning capability of our cerebellar controller, increasing its adaptability and versatility,” said Luque. “To achieve this, we are adopting more traditional AI methods based on analog signals in conventional artificial neural networks, and we are integrating these with spiking neural networks that use event-based signals.”
The integration of conventional AI techniques could allow the team’s controller to fully leverage the computational power of the most advanced GPUs on the market, boosting its real-time performance. To advance the application of their control solution, the researchers are also working on a new robotic system that integrates a mechanical co-contraction mechanism.
“Current cobots, which typically have a single motor at the end of the actuator, require co-contraction to be implemented before the final actuator,” added Luque. “With this new development, we will make available a built-in co-contraction. This innovative arrangement aims to transform how cobots are built to better facilitate human–robot interaction.”
More information:
Ignacio Abadía et al, A neuromechanics solution for adjustable robot compliance and accuracy, Science Robotics (2025). DOI: 10.1126/scirobotics.adp2356
© 2025 Science X Network