Robotics and Computer-Integrated Manufacturing

A real-time human-robot interaction framework with robust background invariant hand gesture detection



Abstract

In light of the factories of the future, ensuring productive and safe interaction between robots and human coworkers requires the robot to extract essential information about its coworker. We address this by designing a reliable framework for real-time, safe human-robot collaboration using static hand gestures and 3D skeleton extraction. The OpenPose library is integrated with the Microsoft Kinect V2 to obtain a 3D estimate of the human skeleton. With the help of 10 volunteers, we recorded an image dataset of alphanumeric static hand gestures taken from American Sign Language. We named our dataset OpenSign and released it to the community for benchmarking. An Inception V3 convolutional neural network is adapted and trained to detect the hand gestures. To augment the data for training the hand gesture detector, we use OpenPose to localize the hands in the dataset images and segment the backgrounds of the hand images by exploiting the Kinect V2 depth map. The backgrounds are then substituted with random patterns and indoor architecture templates. Fine-tuning of Inception V3 is performed in three phases, achieving a validation accuracy of 99.1% and a test accuracy of 98.9%. Image acquisition and hand gesture detection are integrated asynchronously to ensure real-time detection of hand gestures. Finally, the proposed framework is integrated into our physical human-robot interaction library, OpenPHRI. This integration complements OpenPHRI by providing a successful implementation of the ISO/TS 15066 safety standard's "safety-rated monitored stop" and "speed and separation monitoring" collaborative modes. We validate the performance of the proposed framework through a complete teaching-by-demonstration experiment with a robotic manipulator.
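The depth-based background substitution described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the fixed depth band around the hand, and the tolerance value are all assumptions introduced here; the paper's pipeline additionally uses OpenPose hand keypoints to localize the hand before segmentation.

```python
import numpy as np

def substitute_background(rgb, depth, background, hand_depth_mm, tolerance_mm=150):
    """Replace pixels outside the hand's depth band with a new background.

    rgb:           (H, W, 3) color image registered to the depth map
    depth:         (H, W) depth map in millimetres (Kinect V2 convention)
    background:    (H, W, 3) random pattern or indoor-architecture template
    hand_depth_mm: estimated distance of the hand from the sensor
    """
    near = hand_depth_mm - tolerance_mm
    far = hand_depth_mm + tolerance_mm
    # Boolean mask: True where the depth reading falls inside the hand's band.
    mask = (depth >= near) & (depth <= far)
    # Keep the original pixels on the hand, paste the template everywhere else.
    out = background.copy()
    out[mask] = rgb[mask]
    return out
```

Repeating this over many background templates per source image multiplies the training set while keeping the hand pixels identical, which is what makes the resulting detector background-invariant.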
