A Video Dataset of Everyday Life Grasps for the Training of Shared Control Operation Models for Myoelectric Prosthetic Hands
The control of high-degree-of-freedom prosthetic hands requires high cognitive effort and has proven difficult for users to manage, particularly as prosthetic hands become more advanced. To reduce control complexity, vision-based shared control methods have been proposed. Such methods rely on image processing and/or machine learning to identify object features and classes, introducing a level of autonomy into the grasping process. However, currently available image datasets lack focus on prosthetic grasping. Thus, in this paper, we present a new dataset capturing user interactions with a wide variety of everyday objects using a fully actuated, human-like robot hand and an onboard camera. The dataset includes videos of over 100 grasps of 50 objects from 35 classes. A video analysis was conducted using established grasp taxonomies to compile a list of grasp types based on the user's interactions with the objects. The resulting dataset can be used to develop more efficient prosthetic hand operation systems based on shared control frameworks.
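To illustrate the intended use of such a dataset, the following is a minimal, hypothetical sketch of training a grasp-type classifier from frames extracted out of the grasp videos. The directory layout (a frames/ folder), the labels.csv file with filename and grasp_type columns, and the choice of a ResNet-18 per-frame classifier are all illustrative assumptions and are not part of the dataset release described in the paper.

# Hypothetical sketch: train a grasp-type classifier from extracted video frames.
# Assumptions (not specified by the paper): frames/ holds extracted frames,
# labels.csv maps each frame file to a grasp-type label.
import csv
from pathlib import Path

import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from torchvision.io import ImageReadMode, read_image
from torchvision.models import resnet18


class GraspFrameDataset(Dataset):
    """Frames extracted from the grasp videos, labelled by grasp type."""

    def __init__(self, root: str, labels_csv: str):
        self.root = Path(root)
        with open(labels_csv, newline="") as f:
            rows = list(csv.DictReader(f))  # expected columns: filename, grasp_type
        self.classes = sorted({r["grasp_type"] for r in rows})
        self.samples = [(r["filename"], self.classes.index(r["grasp_type"])) for r in rows]
        self.resize = transforms.Resize((224, 224), antialias=True)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        filename, label = self.samples[idx]
        # Read as RGB uint8 tensor (C x H x W), resize, and scale to [0, 1].
        image = read_image(str(self.root / filename), mode=ImageReadMode.RGB)
        image = self.resize(image).float() / 255.0
        return image, label


def train(root: str = "frames", labels_csv: str = "labels.csv", epochs: int = 5):
    dataset = GraspFrameDataset(root, labels_csv)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    model = resnet18(weights=None, num_classes=len(dataset.classes))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
    return model


if __name__ == "__main__":
    train()

A per-frame classifier is only the simplest baseline; the same grasp-type labels could equally drive a video-level or sequence model within a shared control pipeline.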