EMG-Based Intention Detection Using Deep Learning For Shared Control In Upper-Limb Assistive Exoskeletons

In the field of human-robot interaction, surface electromyography (sEMG) provides a valuable tool for measuring active muscular effort. While numerous studies have investigated real-time control of upper-extremity exoskeletons based on user intention and task-specific movements, the prediction of body joint positions from EMG features has remained largely unexplored. In this paper, we address this gap by proposing a novel approach that leverages combined Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) models to generate exoskeleton joint trajectories. Our methodology involves collecting data from three EMG channels and three degrees-of-freedom (DoF) joint angles, and it enables position control of a pneumatic cable-driven upper-limb exoskeleton, thereby assisting users in various tasks. Through extensive experimentation, our intention-based model demonstrates robust performance across different movement speeds and is capable of detecting variations in payload and electrode placement. Our empirical results underscore the efficacy of the approach, particularly in reducing the user's EMG levels during different tasks by providing exoskeleton assistance as needed.
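To make the regression setup concrete, the sketch below shows one plausible CNN-LSTM architecture mapping windowed multi-channel sEMG to joint angles, written in PyTorch. All layer sizes, window length, and hyperparameters here are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of a CNN-LSTM regressor: windowed sEMG -> joint angles.

    Assumed shapes: 3 EMG channels, 200-sample windows, 3 output joints.
    These choices are hypothetical, for illustration only.
    """
    def __init__(self, n_channels=3, n_joints=3, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features along the time axis
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models temporal dependencies in the convolutional features
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        # Linear head regresses the joint angles from the last LSTM state
        self.head = nn.Linear(hidden, n_joints)

    def forward(self, emg):
        # emg: (batch, channels, time)
        feats = self.cnn(emg)          # (batch, 32, time // 4)
        feats = feats.transpose(1, 2)  # (batch, time // 4, 32)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])   # joint angles at window end: (batch, n_joints)

model = CNNLSTM()
x = torch.randn(8, 3, 200)  # batch of 8 windows, 3 EMG channels, 200 samples
angles = model(x)
print(angles.shape)  # torch.Size([8, 3])
```

In a shared-control loop, the predicted angles would serve as position references for the exoskeleton's low-level controller; training would fit the network on synchronized EMG and joint-angle recordings.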