IEEE/CVF Conference on Computer Vision and Pattern Recognition

Towards dense object tracking in a 2D honeybee hive



Abstract

From human crowds to cells in tissue, the detection and efficient tracking of multiple objects in dense configurations is an important and unsolved problem. In the past, limitations of image analysis have restricted studies of dense groups to tracking a single or subset of marked individuals, or to coarse-grained group-level dynamics, all of which yield incomplete information. Here, we combine convolutional neural networks (CNNs) with the model environment of a honeybee hive to automatically recognize all individuals in a dense group from raw image data. We create new, adapted individual labeling and use the segmentation architecture U-Net with a loss function dependent on both object identity and orientation. We additionally exploit temporal regularities of the video recording in a recurrent manner and achieve near human-level performance while reducing the network size by 94% compared to the original U-Net architecture. Given our novel application of CNNs, we generate extensive problem-specific image data in which labeled examples are produced through a custom interface with Amazon Mechanical Turk. This dataset contains over 375,000 labeled bee instances across 720 video frames at 2 FPS, representing an extensive resource for the development and testing of tracking methods. We correctly detect 96% of individuals with a location error of ~7% of a typical body dimension, and an orientation error of 12°, approximating the variability of human raters. Our results provide an important step towards efficient image-based dense object tracking by allowing the accurate determination of object location and orientation across time-series image data within a single network architecture.
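The abstract describes a U-Net trained with a loss that depends on both object identity and orientation. The paper itself does not give the loss in this page, so the following is only a minimal NumPy sketch of one plausible form: per-pixel cross-entropy on an identity (segmentation) map combined with a wrap-around-safe angular error on foreground pixels. The function name, the sin/cos angle handling, and the `w_orient` weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def combined_loss(class_probs, class_target, angle_pred, angle_target, w_orient=1.0):
    """Hypothetical identity + orientation loss (not the authors' code).

    class_probs:  (H, W, C) softmax output per pixel.
    class_target: (H, W) integer labels; background is class 0.
    angle_pred, angle_target: (H, W) orientations in radians.
    """
    eps = 1e-9
    h, w, _ = class_probs.shape
    # Per-pixel cross-entropy on object identity.
    ce = -np.log(class_probs[np.arange(h)[:, None],
                             np.arange(w)[None, :],
                             class_target] + eps)
    # Angular difference via atan2(sin, cos) so 0 and 2*pi agree.
    diff = np.arctan2(np.sin(angle_pred - angle_target),
                      np.cos(angle_pred - angle_target))
    # Orientation error only counts on foreground (bee) pixels.
    fg = class_target > 0
    orient = np.where(fg, diff ** 2, 0.0)
    return ce.mean() + w_orient * orient.sum() / max(fg.sum(), 1)
```

In practice such a loss would be written in a deep-learning framework and backpropagated through the network; the sketch only shows how identity and orientation terms can be combined into one scalar objective, as the abstract's single-architecture claim suggests.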


