IEEE Transactions on Biomedical Engineering

Deep Learning Movement Intent Decoders Trained With Dataset Aggregation for Prosthetic Limb Control



Abstract

Significance: The performance of traditional approaches to decoding movement intent from electromyograms (EMGs) and other biological signals commonly degrades over time. Furthermore, conventional algorithms for training neural-network-based decoders may not perform well outside the domain of the state transitions observed during training. The work presented in this paper mitigates both of these problems, resulting in an approach that has the potential to substantially improve the quality of life of people with limb loss. Objective: This paper presents and evaluates the performance of four methods for decoding volitional movement intent from intramuscular EMG signals. Methods: The decoders are trained using the dataset aggregation (DAgger) algorithm, in which the training dataset is augmented during each training iteration based on the decoded estimates from previous iterations. Four competing decoding methods, namely polynomial Kalman filters (KFs), multilayer perceptron (MLP) networks, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks, were developed. The performance of the four decoding methods was evaluated using EMG datasets recorded from two human volunteers with transradial amputation. Short-term analyses, in which the training and cross-validation data came from the same dataset, and long-term analyses, in which training and testing were performed on different datasets, were conducted. Results: Short-term analyses demonstrated that the CNN and MLP decoders performed significantly better than the KF and LSTM decoders, showing an improvement of up to 60% in normalized mean-square decoding error in cross-validation tests. Long-term analyses indicated that the CNN, MLP, and LSTM decoders performed significantly better than a KF-based decoder in most analyzed cases of temporal separation (0-150 days) between the acquisition of the training and testing datasets.
Conclusion: The short-term and long-term performance of the MLP- and CNN-based decoders trained with DAgger demonstrated their potential to provide more accurate and naturalistic control of prosthetic hands than alternative approaches.
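The DAgger scheme described in the Methods can be illustrated with a minimal sketch. This is not the paper's implementation: it substitutes a linear least-squares decoder for the neural networks, and a synthetic perturbation of the inputs (scaled by the decoder's own estimation error) stands in for the states actually visited when the learner's decoder drives the prosthesis. The function names and the `noise` parameter are illustrative assumptions; the key idea shown is that each iteration's new inputs come from the current decoder's behavior while their labels come from the known (expert) intent, and the dataset grows by aggregation.

```python
import numpy as np

def fit_decoder(X, y):
    """Fit a linear decoder by least squares (stand-in for MLP/CNN/LSTM training)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def dagger_train(X, y, n_iters=3, noise=0.1, seed=0):
    """Simplified DAgger-style training loop.

    X : (n_samples, n_features) EMG feature matrix (synthetic here)
    y : (n_samples,) expert movement-intent labels
    """
    rng = np.random.default_rng(seed)
    X_agg, y_agg = X.copy(), y.copy()   # aggregated dataset, grows each iteration
    w = None
    for _ in range(n_iters):
        w = fit_decoder(X_agg, y_agg)            # train on everything seen so far
        est = X @ w                              # current decoder's own estimates
        err = est - y                            # deviation from expert intent
        # Inputs drawn from the state distribution induced by the current
        # decoder's (possibly erroneous) outputs, relabeled with expert intent.
        X_new = X + noise * rng.standard_normal(X.shape) * err[:, None]
        X_agg = np.vstack([X_agg, X_new])
        y_agg = np.concatenate([y_agg, y])       # expert labels for the new states
    return w
```

On clean synthetic data the aggregated set simply reinforces the correct mapping; the benefit DAgger targets appears when the decoder's rollouts visit states absent from the original training distribution, which the aggregation step then covers.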
