IEEE International Conference on Wearable and Implantable Body Sensor Networks

A robust user interface for IoT using context-aware Bayesian fusion

Abstract

As the Internet of Things (IoT) continues to expand into our daily lives, consumers are finding a growing catalogue of smart devices to boost the intelligence of their homes. Currently, the user must manage a proprietary user interface (UI) for each device, and each application comes with its own UI, creating a cumbersome app environment. Clearly, a single UI that can control all of these devices would be preferable. This interface should be accessible through forms of communication that feel natural, such as speech, body language, and facial expressions. In this paper, we propose a framework for a multimodal UI using a flexible, slotted command ontology and decision-level Bayesian fusion. Our case study explores command recognition for device control with a wearable system accessed via speech and gestures, using a wrist-mounted inertial measurement unit (IMU) for hand gesture recognition. We achieve an accuracy of 94.82% on a set of 17 commands.
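The abstract names decision-level Bayesian fusion of the speech and gesture modalities but gives no implementation detail. Below is a minimal illustrative sketch, not the authors' method, of how per-command posteriors from a speech recognizer and an IMU gesture classifier might be combined under a naive conditional-independence assumption. The function name, command set, and probability values are all hypothetical.

```python
import numpy as np

def bayesian_fusion(posteriors_speech, posteriors_gesture, prior=None):
    """Decision-level Bayesian fusion of two modalities.

    Assuming the modalities are conditionally independent given the command:
        P(c | s, g) proportional to P(c | s) * P(c | g) / P(c)
    Inputs are length-N posterior vectors over the N commands in the ontology.
    """
    posteriors_speech = np.asarray(posteriors_speech, dtype=float)
    posteriors_gesture = np.asarray(posteriors_gesture, dtype=float)
    if prior is None:
        # Uniform prior over commands when none is supplied.
        prior = np.full_like(posteriors_speech, 1.0 / len(posteriors_speech))
    fused = posteriors_speech * posteriors_gesture / prior
    return fused / fused.sum()  # renormalize to a proper distribution

# Hypothetical example with 3 commands ("lights on", "lights off", "lock door"):
p_speech = [0.70, 0.20, 0.10]    # speech recognizer is fairly confident
p_gesture = [0.50, 0.40, 0.10]   # IMU gesture classifier is less certain
print(bayesian_fusion(p_speech, p_gesture))  # fused posterior over commands
```

In such a decision-level scheme, each modality's classifier is trained and run independently, and only their output distributions over the command set are combined, which is what distinguishes it from feature-level fusion.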
