Comparing Machine Learning Methods and Feature Extraction Techniques for the EMG-Based Decoding of Human Intention

With an increasing number of robotic and prosthetic devices, there is a need for intuitive Muscle-Machine Interfaces (MuMIs) that allow the user to have an embodied interaction with the devices they control. Such MuMIs can be developed using machine learning-based methods that utilize the myoelectric activations of the user's muscles to decode their intention. However, the choice of learning method is subjective and depends on the features extracted from the raw electromyography (EMG) signals as well as on the intended application. In this work, we compare the performance of five machine learning methods and eight time-domain feature extraction techniques in discriminating between different gestures executed by the user of an EMG-based MuMI. The results show that the Willison Amplitude performs consistently best across all the machine learning methods compared in this study, while the Zero Crossings feature yields the worst results for the Decision Trees and the Random Forests, and the Variance yields the worst performance for all the other learning methods. The Random Forests method achieves the best classification accuracies and the lowest variance across subjects. To experimentally validate the efficacy of the Random Forest classifier and the Willison Amplitude feature extraction technique, a series of gestures were decoded in real time from a user's myoelectric activity and used to control a robot hand.
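
As a rough illustration of the pipeline the abstract describes, the sketch below computes the Willison Amplitude feature (the count of consecutive-sample differences exceeding a threshold) over sliding windows of multi-channel EMG and feeds the resulting feature vectors to a scikit-learn Random Forest. The window length, step size, threshold, channel count, and the synthetic stand-in data are illustrative assumptions, not the authors' experimental settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def willison_amplitude(window, threshold=0.02):
    """Willison Amplitude (WAMP): number of times the magnitude of the
    difference between consecutive EMG samples exceeds the threshold."""
    return int(np.sum(np.abs(np.diff(window)) > threshold))

def wamp_features(emg, window_len=200, step=50, threshold=0.02):
    """Per-channel WAMP over sliding windows.
    emg: (n_samples, n_channels) raw EMG; returns (n_windows, n_channels)."""
    starts = range(0, emg.shape[0] - window_len + 1, step)
    return np.array([[willison_amplitude(emg[s:s + window_len, ch], threshold)
                      for ch in range(emg.shape[1])]
                     for s in starts])

# Synthetic stand-in for recorded EMG: 8 channels, one recording per gesture
# (real data would come from the armband/electrodes of the MuMI).
rng = np.random.default_rng(0)
gestures = {g: rng.normal(scale=0.01 * (g + 1), size=(2000, 8))
            for g in range(5)}

X_parts, y_parts = [], []
for label, emg in gestures.items():
    feats = wamp_features(emg)          # one feature vector per window
    X_parts.append(feats)
    y_parts.append(np.full(len(feats), label))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

In a real-time setting, the same `wamp_features` computation would run on each incoming EMG window and `clf.predict` would map it to a gesture label for driving the robot hand.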