You-Do, I-Learn: Egocentric Unsupervised Discovery of Objects and their Modes of Interaction Towards Video-Based Guidance

Abstract

This paper presents an unsupervised approach towards automatically extracting video-based guidance on object usage from egocentric video and wearable gaze tracking, collected from multiple users while performing tasks. The approach i) discovers task-relevant objects, ii) builds a model for each, iii) distinguishes different ways in which each discovered object has been used, and iv) discovers the dependencies between object interactions. The work investigates using appearance, position, motion and attention, and presents results using each of these features as well as a combination of the relevant ones. Moreover, an online scalable approach is presented and compared to offline results. The paper proposes a method for selecting a suitable video guide to be displayed to a novice user indicating how to use an object, triggered purely by the user's gaze. The potential assistive mode can also recommend an object to be used next, based on the learnt sequence of object interactions. The approach was tested on a variety of daily tasks, such as initialising a printer, preparing a coffee and setting up a gym machine.
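The abstract outlines a concrete pipeline: cluster gaze-attended regions into task-relevant objects, then learn the order in which objects are used so the system can recommend what to use next. As a rough illustration only, the Python sketch below approximates those two steps with stand-in components; the clustering choice, the synthetic feature vectors, and every function name here are hypothetical assumptions, not taken from the paper, which does not prescribe this implementation.

import numpy as np
from collections import defaultdict
from sklearn.cluster import DBSCAN

def discover_objects(features, eps=0.5, min_samples=5):
    # Cluster fixed-length descriptors of gaze-attended regions
    # (e.g. appearance + position) into candidate task-relevant
    # objects; label -1 marks attention matching no object.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

def learn_transitions(sequences):
    # Estimate P(next object | current object) from the observed
    # per-user sequences of object interactions.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def recommend_next(transitions, current):
    # Suggest the most likely next object for the assistive mode.
    nxt = transitions.get(current)
    return max(nxt, key=nxt.get) if nxt else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 4-D descriptors drawn around three latent objects.
    feats = np.vstack([rng.normal(c, 0.1, (40, 4)) for c in (0.0, 1.0, 2.0)])
    print("objects:", sorted(set(discover_objects(feats)) - {-1}))
    # Synthetic usage orders, e.g. printer lid -> tray -> button.
    T = learn_transitions([[0, 1, 2], [0, 1, 2], [0, 2]])
    print("after object 0, suggest object", recommend_next(T, 0))

Replacing the synthetic descriptors with real per-fixation appearance and position features would be the minimum needed to make the sketch meaningful in practice.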