An Adaptive Prosthetic Training Gripper with a Variable Stiffness Differential and a Vision-Based Shared Control Scheme

This work presents an adaptive prosthetic training gripper with a compact, variable stiffness differential mechanism and a vision-based shared control scheme that relies on a Lightmyography (LMG) interface to trigger the selected grasps. The gripper incorporates three monolithic adaptive fingers manufactured using the concept of Hybrid Deposition Manufacturing (HDM) and includes a gear drive system that allows two finger bases to rotate, implementing abduction/adduction and thereby increasing the available grasping workspace. The fingers are actuated through a compact, series-elastic differential mechanism that reduces the total number of required actuators to two. The developed gripper is operated using a vision-based myoelectric control framework that utilizes an RGB camera and a Convolutional Neural Network (CNN) for object detection and classification as well as for grasp selection, together with an LMG muscle-machine interface for grasp triggering. The efficiency of the proposed gripper and control framework has been experimentally validated through a series of grasping experiments executed with everyday objects.