IEEE Applied Imagery Pattern Recognition Workshop

Automated generation of convolutional neural network training data using video sources



Abstract

One of the challenges of using techniques such as convolutional neural networks and deep learning for automated object recognition in images and video is generating sufficient quantities of labeled training image data in a cost-effective way. It is generally preferred to tag hundreds of thousands of frames for each category or label, and a human tagging images frame by frame might expect to spend hundreds of hours creating such a training set. One alternative is to use video as a source of training images. A human tagger notes the start and stop time in each clip for the appearance of objects of interest. The video is broken down into component frames using software such as ffmpeg. The frames that fall within the time intervals for objects of interest are labeled as “targets,” and the remaining frames are labeled as “non-targets.” This separation of categories can be automated. The time required by a human viewer using this method would be around ten hours, at least 1-2 orders of magnitude lower than that of a human tagger labeling frame by frame. The false alarm rate and target detection rate can be optimized by providing the system with unambiguous training examples.
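The pipeline the abstract describes — split the video into component frames with ffmpeg, then label each frame by whether its timestamp falls inside a human-annotated (start, stop) interval — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the one-frame-per-second sampling rate, and the assumption that frame i corresponds to timestamp i/fps are all illustrative.

```python
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, fps: float = 1.0) -> None:
    """Split a video into still images with ffmpeg, sampling `fps` frames
    per second (illustrative wrapper; requires ffmpeg on the PATH)."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(video_path),
         "-vf", f"fps={fps}",
         str(Path(out_dir) / "frame_%06d.png")],
        check=True,
    )

def label_frames(frame_count: int, fps: float, target_intervals) -> list:
    """Assign 'target' to each frame whose (approximate) timestamp falls
    inside any annotated (start, stop) interval in seconds, and
    'non-target' to the rest — the automated separation step."""
    labels = []
    for i in range(frame_count):
        t = i / fps  # approximate timestamp of frame i
        in_interval = any(start <= t <= stop for start, stop in target_intervals)
        labels.append("target" if in_interval else "non-target")
    return labels
```

With a single annotated interval of 2–5 seconds in a ten-second clip sampled at 1 fps, `label_frames(10, 1.0, [(2.0, 5.0)])` marks frames 2–5 as targets and the rest as non-targets; only the interval endpoints need to come from the human viewer.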

