Multi-Grasp Classification for the Control of Robot Hands Employing Transformers and Lightmyography Signals

The increasing use of smart devices in our everyday lives has motivated the development of muscle-machine interfaces (MuMIs) that are intuitive and that facilitate immersive interaction with these devices. The most common approach to developing MuMIs relies on Electromyography (EMG) signals. However, due to several drawbacks of EMG-based interfaces, alternative methods for developing MuMIs are being explored. In our previous work, we presented a new MuMI called Lightmyography (LMG), which achieved outstanding results compared to a classic EMG-based interface in a five-gesture classification task. In this study, we extend our previous work by experimentally validating the effectiveness of the LMG armband in classifying 32 different gestures from several participants using a deep learning model called the Temporal Multi-Channel Vision Transformer (TMC-ViT). Moreover, two different undersampling techniques are compared. The proposed 32-gesture classifiers achieve accuracies as high as 92%. Finally, we employ the LMG interface in the real-time control of a robotic hand using 10 different gestures, successfully reproducing several grasp types from grasp taxonomies presented in the literature.
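To make the classification pipeline concrete, the sketch below shows a minimal transformer-based gesture classifier for windowed multi-channel LMG signals. It is not the authors' TMC-ViT implementation: the channel count (8), window length (200 samples), temporal patch size, and all hyperparameters are illustrative assumptions, and the tokenization (one token per temporal patch spanning all channels) is only one plausible reading of a temporal multi-channel design.

```python
# Minimal sketch of a transformer classifier for windowed LMG signals.
# NOT the paper's TMC-ViT: channel count, window length, patch size, and
# all hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn

class LMGTransformerClassifier(nn.Module):
    def __init__(self, channels=8, window=200, patch=20,
                 d_model=64, n_heads=4, n_layers=2, n_classes=32):
        super().__init__()
        assert window % patch == 0
        self.n_patches = window // patch
        # Each temporal patch (patch x channels samples) becomes one token.
        self.embed = nn.Linear(patch * channels, d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, self.n_patches + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, window, channels) raw LMG window
        b = x.shape[0]
        x = x.reshape(b, self.n_patches, -1)       # tokenize into temporal patches
        x = self.embed(x)
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos  # prepend [CLS], add positions
        x = self.encoder(x)
        return self.head(x[:, 0])                  # classify from the [CLS] token

# Example: a batch of 4 signal windows -> logits over 32 gesture classes.
model = LMGTransformerClassifier()
logits = model(torch.randn(4, 200, 8))
print(logits.shape)  # torch.Size([4, 32])
```

In a real-time control loop, the predicted class for each sliding window would be mapped to one of the 10 hand gestures commanded to the robotic hand; smoothing across consecutive windows (e.g., majority voting) is a common way to stabilize such predictions.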