On Human to Robot Skill Transfer for the Execution of Complex Tactile American Sign Language Tasks with a Bimanual Robot Platform

An estimated 0.2% of the world's population lives with severe deafblindness, and approximately 1.5 million Americans use tactile American Sign Language (t-ASL) as their primary form of communication. To enable these individuals to communicate without an in-person interpreter, a bimanual robotic manipulation platform has been employed to mimic humans and execute complex bimanual t-ASL signs. In particular, a human-to-robot motion mapping framework has been developed for the employed platform that extracts signing trajectories from vision-based motion capture data of a human demonstrator. The collected data have also been used to generate programmed poses for text-prompt playbacks on the robot system. The human-to-robot motion mapping problem has been formulated as a constrained optimization problem that maps the human trajectories to corresponding robot trajectories that are as humanlike as possible, following the functional anthropomorphism approach. The efficacy of the proposed system has been experimentally validated through the execution of complex, bimanual t-ASL tasks.
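The constrained-optimization view of the motion mapping can be sketched in miniature as follows. This is a hypothetical illustration only, not the paper's implementation: a planar 2-link arm stands in for the bimanual platform, the link lengths, joint limits, and human-likeness weight `lam` are assumed values, and staying close to the demonstrated joint configuration serves as a simple proxy for the functional-anthropomorphism term.

```python
import numpy as np

# Assumed toy parameters (not from the paper): a planar 2-link arm.
L1, L2 = 0.3, 0.25            # link lengths [m]
Q_MIN, Q_MAX = -np.pi, np.pi  # joint limits (the constraint set)

def fk(q):
    """Forward kinematics: wrist position of the 2-link arm."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

def jacobian(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12],
        [ L1 * c1 + L2 * c12,  L2 * c12],
    ])

def map_pose(x_human, q_human, lam=0.05, iters=200, step=0.5):
    """Projected gradient descent on the mapping objective
         ||fk(q) - x_human||^2 + lam * ||q - q_human||^2,
    i.e. track the human wrist while staying near the human's joint
    configuration (a stand-in for human-likeness); joint limits are
    enforced by projection (clipping)."""
    q = q_human.copy()
    for _ in range(iters):
        grad = 2 * jacobian(q).T @ (fk(q) - x_human) \
             + 2 * lam * (q - q_human)
        q = np.clip(q - step * grad, Q_MIN, Q_MAX)
    return q

# One demonstration frame: human joint angles and a nearby target wrist pose.
q_h = np.array([0.4, 0.9])
x_h = fk(q_h) + np.array([0.02, 0.0])
q_r = map_pose(x_h, q_h)
print(np.linalg.norm(fk(q_r) - x_h))  # tracking error shrinks below the 2 cm offset
```

In the actual framework this per-frame mapping would be solved along the whole demonstrated trajectory, with the human-likeness term given by the functional anthropomorphism formulation rather than this toy joint-space penalty.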