Latin American Robotic Symposium; Brazilian Symposium on Robotics; Workshop on Robotics in Education

User-Prosthesis Interface for Upper Limb Prosthesis Based on Object Classification

Abstract

The complexity of User-Prosthesis Interfaces (UPIs) used to control and select the different grip modes and gestures of active upper-limb prostheses, the issues associated with electromyography (EMG), and the long periods of training and adaptation all lead amputees to stop using the device. Moreover, high development costs and challenging research make the final product too expensive for the vast majority of transradial amputees, and often leave them with an interface that does not satisfy their needs. EMG-controlled multi-grasp prostheses typically map the difficult-to-detect contraction of a specific muscle group to a single grasp type, limiting the number of possible grasps to the number of distinguishable muscular contractions. To reduce costs and to facilitate customized interaction between the user and the system, we propose a hybrid UPI based on object classification from images and on EMG, integrated with a 3D-printed upper-limb prosthesis and controlled by an Android smartphone application. This approach allows easy system updates and lowers the cognitive effort required from the user, satisfying a trade-off between functionality and low cost. The user can therefore achieve an unlimited number of predefined grip types, gestures, and action sequences by photographing the object to interact with, using only four muscle contractions to validate and actuate a suggested type of interaction. Experimental results showed strong mechanical performance of the prosthesis when interacting with everyday objects, and high accuracy and responsiveness of the controller and classifier.
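The control flow described above, where the image classifier suggests a predefined grip and a small number of EMG contractions validate and actuate it, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all class names, object labels, grip names, and the EMG threshold are assumptions introduced for illustration.

```python
# Hypothetical sketch of the hybrid UPI control loop: an object
# classifier's label selects a predefined grip, and an EMG contraction
# above a threshold confirms and actuates it. Labels, grip names, and
# the threshold value are illustrative assumptions.

from dataclasses import dataclass

# Assumed mapping from recognized object classes to predefined grips.
GRIP_FOR_OBJECT = {
    "bottle": "cylindrical_grasp",
    "key": "lateral_pinch",
    "pen": "tripod_pinch",
    "phone": "flat_hand",
}

@dataclass
class ProsthesisController:
    """Suggests a grip for a photographed object and actuates it only
    after the user validates the suggestion with a muscle contraction."""
    emg_threshold: float = 0.6  # normalized EMG activation level (assumed)

    def suggest_grip(self, object_label: str) -> str:
        # Fall back to a generic power grasp for unrecognized objects.
        return GRIP_FOR_OBJECT.get(object_label, "power_grasp")

    def confirm(self, emg_sample: float) -> bool:
        # A contraction above threshold validates the suggested grip.
        return emg_sample >= self.emg_threshold

    def actuate(self, object_label: str, emg_sample: float) -> str:
        grip = self.suggest_grip(object_label)
        if self.confirm(emg_sample):
            return f"actuating {grip}"
        return "awaiting confirmation"
```

In this sketch the classifier, not the EMG signal, carries the information about *which* grip to use, so the EMG channel only has to distinguish a confirmation contraction, which matches the paper's point that the number of grasps is no longer bounded by the number of distinguishable contractions.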