Sensors (Basel). 2018 Feb 5;18(2):467. doi: 10.3390/s18020467.
ABSTRACT
Upper-extremity exoskeletons have demonstrated potential as augmentative, assistive, and rehabilitative devices. Typical control of upper-extremity exoskeletons has relied on switches, force/torque sensors, and surface electromyography (sEMG), but these systems are usually reactive and/or rely entirely on hand-tuned parameters. sEMG-based systems may be able to provide anticipatory control, since they interface directly with muscle signals, but they typically require expert placement of sensors on muscle bodies. We present an implementation of an adaptive sEMG-based exoskeleton controller that learns a mapping between muscle activation and the desired system state during interaction with a user, generating a personalized sEMG feature classifier to allow for anticipatory control. This system is robust to novice placement of sEMG sensors, as well as to subdermal muscle shifts. We validate this method with 18 subjects using a thumb exoskeleton to complete a book-placement task. This learning-from-demonstration system for exoskeleton control allows for very short training times, as well as the potential for improvement in intent recognition over time and adaptation to physiological changes in the user, such as those due to fatigue.
PMID:29401754 | PMC:PMC5856190 | DOI:10.3390/s18020467
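The abstract describes learning a personalized mapping from sEMG features to intended system state during interaction, with online adaptation to changes such as fatigue. The paper does not reproduce its algorithm here, so the following is only a minimal illustrative sketch of that general idea: classic time-domain sEMG features (mean absolute value, waveform length, zero crossings) fed into a nearest-centroid classifier whose per-class centroids are updated incrementally. All names and the choice of features and classifier are assumptions for illustration, not the authors' method.

```python
import numpy as np

def semg_features(window):
    """Classic time-domain sEMG features for one channel window (illustrative choice)."""
    mav = np.mean(np.abs(window))               # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))        # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)  # zero-crossing count
    return np.array([mav, wl, zc], dtype=float)

class CentroidIntentClassifier:
    """Hypothetical personalized mapping from sEMG features to intended state.

    Centroids are updated with a running mean, so the mapping can keep
    adapting during use (e.g., as signals drift with fatigue).
    """
    def __init__(self):
        self.centroids = {}  # label -> running mean feature vector
        self.counts = {}

    def update(self, feats, label):
        # Incremental mean update: centroid += (x - centroid) / n
        if label not in self.centroids:
            self.centroids[label] = feats.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.centroids[label] += (feats - self.centroids[label]) / self.counts[label]

    def predict(self, feats):
        # Nearest centroid in feature space
        return min(self.centroids,
                   key=lambda k: np.linalg.norm(feats - self.centroids[k]))
```

A short "demonstration" phase would call `update` with labeled windows (e.g., "rest" vs. "flex"), after which `predict` classifies new windows; continuing to call `update` during use gives the adaptive behavior the abstract refers to.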