IEEE International Conference on Robotics & Automation

Crowdsourced saliency for mining robotically gathered 3D maps using multitouch interaction on smartphones and tablets



Abstract

This paper presents a system for crowdsourcing salient interest points for robotically gathered 3D maps rendered on smartphones and tablets. An app was created that interactively renders 3D reconstructions gathered with an Autonomous Underwater Vehicle. Through hundreds of thousands of logged user interactions with the models, we attempt to data-mine salient interest points. To this end we propose two models for calculating saliency from human interaction with the data. The first uses the camera's view frustum to track how long each point is on screen. The second treats the camera's path as a time series and uses a Hidden Markov Model to learn a classification of salient and non-salient points. To provide a comparison with existing techniques, several traditional visual saliency approaches are applied to orthographic views of the models' photo-texturing. The results of all approaches are validated against human-attention ground truth gathered with a remote gaze-tracking system that recorded where people's attention fell while they explored the models.
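The first model is only described at a high level in the abstract; the sketch below illustrates the idea of accumulating per-point screen time with a view-frustum test. It assumes the interaction logs provide a 4x4 view-projection matrix and a display duration for each rendered frame; the data layout and function name are illustrative, not the authors' implementation.

```python
import numpy as np

def accumulate_screen_time(points, frames):
    """Accumulate per-point on-screen time as a crude saliency score.

    points : (N, 3) array of 3D map points.
    frames : iterable of (view_proj, dt) pairs, where view_proj is the 4x4
             view-projection matrix logged for one rendered frame and dt is
             how long that frame was displayed, in seconds.
    Returns an (N,) array of accumulated visible seconds per point.
    """
    homog = np.hstack([points, np.ones((len(points), 1))])   # (N, 4) homogeneous
    screen_time = np.zeros(len(points))

    for view_proj, dt in frames:
        clip = homog @ view_proj.T                            # (N, 4) clip coordinates
        w = clip[:, 3]
        # A point lies inside the view frustum if w > 0 and -w <= x, y, z <= w.
        inside = w > 0
        for axis in range(3):
            inside &= np.abs(clip[:, axis]) <= w
        screen_time[inside] += dt

    return screen_time
```

Normalizing or thresholding the accumulated times then gives a per-point saliency estimate.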
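The second model treats the camera path as a time series classified by a Hidden Markov Model. A rough sketch of that style of analysis using hmmlearn's GaussianHMM is shown below; the single speed feature and the reading of the slower hidden state as "dwelling" are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def label_dwell_frames(camera_positions, n_iter=100, seed=0):
    """Fit a two-state HMM to a camera trajectory and label each step.

    camera_positions : (T, 3) array of logged camera positions for one
                       user session, in temporal order.
    Returns a (T-1,) binary array; 1 marks steps assigned to the hidden
    state with the lower mean speed, interpreted as "dwelling" on a region.
    """
    # Per-step motion feature: camera speed between consecutive frames.
    speed = np.linalg.norm(np.diff(camera_positions, axis=0), axis=1)
    features = speed.reshape(-1, 1)

    hmm = GaussianHMM(n_components=2, covariance_type="diag",
                      n_iter=n_iter, random_state=seed)
    hmm.fit(features)
    states = hmm.predict(features)

    # The slower of the two hidden states is taken to indicate dwelling.
    dwell_state = int(np.argmin(hmm.means_[:, 0]))
    return (states == dwell_state).astype(int)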

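For the comparison with existing techniques, 2D visual saliency is computed on orthographic views of the photo-textured models. The abstract does not name the specific methods, so the example below uses one common baseline, spectral residual saliency (Hou & Zhang), as exposed by opencv-contrib-python; it is an illustrative stand-in, not necessarily one of the approaches evaluated in the paper.

```python
import cv2

def spectral_residual_saliency(image_path):
    """Compute a spectral-residual saliency map for an orthographic
    rendering of a model's photo-texture (requires opencv-contrib-python)."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")
    # saliency_map is float32 in [0, 1]; rescale for viewing or thresholding.
    return (saliency_map * 255).astype("uint8")
```

The returned map can be thresholded to extract candidate interest regions for comparison against the crowdsourced and gaze-derived scores.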