Interactive object detection


Abstract

In recent years, the growing amount of digital image and video data has led to an increasing demand for image annotation. In this paper, we propose an interactive object annotation method that incrementally trains an object detector while the user provides annotations. In the design of the system, we have focused on minimizing human annotation time rather than pure algorithm learning performance. To this end, we optimize the detector with respect to a realistic annotation cost model derived from a user study. Since our system gives live feedback to the user by detecting objects on the fly and predicts the potential annotation cost of unseen images, data can be annotated efficiently by a single user without excessive waiting time. In contrast to popular tracking-based methods for video annotation, our method is suitable for both still images and video. We have evaluated our interactive annotation approach on three datasets, ranging from surveillance and television to cell microscopy.
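The workflow the abstract describes — annotate an image, update the detector on the fly, and use a predicted annotation cost to decide which image to present next — can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration, not the authors' implementation: it stands in a generic incremental linear classifier (scikit-learn's SGDClassifier) for the detector, simulates images as sets of candidate windows with hidden ground-truth labels, and uses a simple uncertainty-based proxy for the predicted annotation cost.

```python
# Minimal human-in-the-loop annotation sketch. All names, the SGD classifier,
# and the cost proxy are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy "images": each image is a set of candidate windows, each window a
# feature vector with a hidden ground-truth label (1 = object, 0 = background).
def make_image(n_windows=30, dim=16):
    labels = rng.integers(0, 2, size=n_windows)
    feats = rng.normal(size=(n_windows, dim)) + 1.5 * labels[:, None]
    return feats, labels

images = [make_image() for _ in range(40)]

clf = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])
annotated = set()

def predicted_cost(feats):
    """Hypothetical cost proxy: expected number of corrections, estimated
    from how uncertain the detector is about each candidate window."""
    probs = clf.predict_proba(feats)[:, 1]
    return float(np.sum(np.minimum(probs, 1.0 - probs)))

# Bootstrap: annotate one image fully to initialize the detector.
feats, labels = images[0]
clf.partial_fit(feats, labels, classes=classes)
annotated.add(0)

for _ in range(10):
    # Live feedback: rank the remaining images by predicted annotation cost
    # and present the cheapest one to the (simulated) user.
    remaining = [i for i in range(len(images)) if i not in annotated]
    costs = {i: predicted_cost(images[i][0]) for i in remaining}
    idx = min(costs, key=costs.get)

    feats, labels = images[idx]
    preds = clf.predict(feats)

    # The "user" only fixes the detector's mistakes; the number of
    # corrections stands in for the annotation effort to be minimized.
    corrections = int(np.sum(preds != labels))
    print(f"image {idx}: {corrections} corrections needed")

    # Incremental update with the verified labels keeps the feedback live.
    clf.partial_fit(feats, labels)
    annotated.add(idx)
```

The point of the loop is that the user only corrects the detector's mistakes, so as the detector improves, the per-image annotation effort shrinks; that effort is the quantity an annotation cost model of the kind described above is meant to minimize.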
