
Current hand prostheses already work with the help of an app or sensors attached to the forearm. New research at the Technical University of Munich (TUM) shows that a better understanding of muscle activity patterns enables more intuitive and natural control of these prostheses. This requires a network of 128 sensors and the use of artificial intelligence.
Different gripper types and bionic designs: technological advances have continuously improved hand prostheses in recent decades. Anyone who has lost a hand through an accident or illness can at least perform some everyday movements again. Some modern prostheses already allow the fingers to be moved or the wrist to be rotated. This requires either a smartphone app or muscle signals from the forearm, which are usually detected by two sensors. For example, activating the flexor muscle in the wrist closes the fingers of the artificial hand to grip a pen, while contracting the extensor muscle in the wrist releases the pen again. If both muscles are contracted at the same time, certain fingers can be moved. "A patient has to learn these movements during rehabilitation," says Cristina Piazza, Professor of Healthcare and Rehabilitation Robotics at TUM. Her research team has now shown that artificial intelligence can help patients use a prosthetic hand more intuitively than before. The secret lies in the "synergy principle" and the support of 128 sensors on the forearm.
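To illustrate the conventional two-sensor scheme described above, here is a minimal sketch of how flexor and extensor readings could be mapped to hand commands. The threshold value, signal names and command set are assumptions made for illustration, not the interface of any particular prosthesis.

```python
# Minimal sketch of a conventional two-sensor control scheme.
# Threshold, names and the command set are illustrative assumptions.

THRESHOLD = 0.3  # assumed normalized EMG activation level

def classify_command(flexor: float, extensor: float) -> str:
    """Map two normalized EMG readings (0..1) to a prosthesis command."""
    flexor_on = flexor > THRESHOLD
    extensor_on = extensor > THRESHOLD
    if flexor_on and extensor_on:
        return "switch_grip_mode"   # co-contraction: move certain fingers
    if flexor_on:
        return "close_hand"         # flexor activity closes the fingers
    if extensor_on:
        return "open_hand"          # extensor activity releases the grip
    return "hold"                   # no command, keep current posture

print(classify_command(flexor=0.6, extensor=0.1))  # -> "close_hand"
```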
The synergy principle: the brain activates a group of muscle cells
What is the synergy principle? "We know from neuroscientific studies that certain patterns appear again and again in experiments, both in kinematics and in muscle activation," says Piazza. These patterns provide clues as to how the human brain deals with the complexity of biological systems: it activates a group of cells simultaneously. This is also the case in the forearm. Piazza: "When we want to grasp an object with our hand, such as a ball, we move our fingers synchronously, and they immediately adapt to the shape of the object as soon as it is touched." The researchers are now using this principle to design and control artificial hands with new learning algorithms. Intuitive movement comes naturally to humans; from a robot’s perspective, things look different. When an artificial hand grasps a pen, it performs many individual steps one after the other: first it determines where to grasp the object, then it slowly brings its fingers together before finally closing around the pen. The researchers’ aim is to turn this into a single flowing movement. "With the help of machine learning, we can make the control system more adaptable," says Patricia Capsi Morales, Senior Scientist in Prof. Piazza’s team.
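In the muscle-synergy literature, such shared activation patterns are often extracted with non-negative matrix factorization (NMF). The article does not specify which learning algorithm the TUM team uses, so the following sketch only illustrates the general idea, with simulated data standing in for real recordings.

```python
# Illustrative sketch of the synergy idea: decompose multichannel muscle
# activity into a few shared activation patterns ("synergies") using NMF.
# Simulated data; not the TUM team's actual algorithm or recordings.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_channels, n_synergies = 500, 128, 4  # 128 channels, as in the study

# Simulated, non-negative EMG envelopes: a few synergies mixed into 128 channels
true_weights = rng.random((n_synergies, n_channels))
true_activations = rng.random((n_samples, n_synergies))
emg_envelopes = true_activations @ true_weights + 0.01 * rng.random((n_samples, n_channels))

model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500, random_state=0)
activations = model.fit_transform(emg_envelopes)  # how strongly each synergy is recruited over time
weights = model.components_                       # how each synergy spreads across the 128 channels

print(activations.shape, weights.shape)  # (500, 4) (4, 128)
```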
Analyzing patterns from 128 channels

Experiments with the new approach already indicate that conventional control methods could soon be supported by more advanced strategies. To investigate what happens at the level of the central nervous system, the researchers are working with two sensor foils: one for the inside and one for the outside of the forearm. Each foil contains up to 64 sensors. The method also estimates the electrical signals transmitted by the motor neurons in the spinal cord. "The more sensors we use, the better we can record information from different muscle groups and find out which muscle activations are responsible for which hand movements," explains Prof. Piazza. According to researcher Capsi Morales, "characteristic features of the muscle signals" emerge depending on whether a person is making a fist, gripping a pen or opening a jam jar. Recognizing these features is a prerequisite for intuitive movements.
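As a rough illustration of how activation patterns across 128 channels could be linked to hand movements, the sketch below computes a simple per-channel feature in short time windows and trains a standard classifier on it. The window length, sampling rate, movement labels and choice of linear discriminant analysis are assumptions drawn from common practice in myoelectric control research, not details reported here, and the data are simulated.

```python
# Hedged sketch: windowed root-mean-square (RMS) features per channel, then a
# simple classifier mapping the 128-channel pattern to a hand movement.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

WINDOW = 200  # assumed samples per analysis window

def rms_features(emg: np.ndarray) -> np.ndarray:
    """emg: (n_windows * WINDOW, 128) -> one RMS value per channel per window."""
    windows = emg.reshape(-1, WINDOW, emg.shape[1])
    return np.sqrt((windows ** 2).mean(axis=1))

rng = np.random.default_rng(1)
movements = ["fist", "pen_grip", "jar_open"]  # hypothetical movement classes
X, y = [], []
for label, movement in enumerate(movements):
    raw = rng.normal(scale=1.0 + label, size=(50 * WINDOW, 128))  # stand-in for recorded EMG
    X.append(rms_features(raw))
    y.append(np.full(50, label))
X, y = np.vstack(X), np.concatenate(y)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(movements[clf.predict(X[:1])[0]])  # predicted movement for one window
```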
Wrist and hand movement: Eight out of ten people prefer the intuitive method
The current research focuses on the movement of the wrist and the entire hand. It shows that most people (eight out of ten) prefer the intuitive movement of the wrist and hand, which is also the more efficient way. Nevertheless, two out of ten people learn to use the less intuitive method and even become more precise in the end. "Our aim is to investigate the learning effect and find the right solution for each patient," explains Capsi Morales. "This is a step in the right direction," says Piazza, who emphasizes that each system combines the individual mechanics and characteristics of the hand, dedicated training with patients, interpretation and analysis of the signals, and machine learning.
Current challenges in the advanced control of artificial hands
There are still some challenges ahead: the learning algorithm, which relies on the information from the sensors, has to be retrained every time the foil slips or is removed. In addition, the sensors have to be prepared with a gel to ensure the necessary conductivity; only then can the signals from the muscles be recorded precisely. "We use signal processing techniques to filter out the noise and obtain usable signals," continues Capsi Morales. Each time a new patient puts on the sensor cuff, the algorithm must first identify the activation patterns for each movement sequence so that it can later recognize the user’s intention and translate it into commands for the artificial hand.
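The noise filtering Capsi Morales mentions typically involves band-pass and notch filtering followed by envelope extraction. The sketch below shows one such pipeline with textbook cut-off frequencies; the sampling rate and filter parameters are assumptions, not values reported by the researchers.

```python
# Minimal sketch of a typical EMG cleaning pipeline: band-pass the raw signal,
# remove power-line interference, then rectify and smooth to get a usable
# activation envelope. Parameters are assumed textbook values.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 2000  # assumed sampling rate in Hz

def emg_envelope(raw: np.ndarray) -> np.ndarray:
    # 20-450 Hz band-pass keeps the main surface-EMG frequency content
    b, a = butter(4, [20, 450], btype="bandpass", fs=FS)
    x = filtfilt(b, a, raw)
    # Notch filter against 50 Hz power-line noise
    b_n, a_n = iirnotch(w0=50, Q=30, fs=FS)
    x = filtfilt(b_n, a_n, x)
    # Rectify and low-pass at 5 Hz to obtain a smooth activation envelope
    b_env, a_env = butter(2, 5, btype="lowpass", fs=FS)
    return filtfilt(b_env, a_env, np.abs(x))

signal = np.random.default_rng(2).normal(size=FS * 2)  # two seconds of fake EMG
print(emg_envelope(signal).shape)  # (4000,)
```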
