
Exploring Semi-Supervised Methods for Labeling Support in Multimodal Datasets

Abstract

Working with multimodal datasets is a challenging task, as it requires annotations that are often time consuming and difficult to acquire. This applies in particular to video recordings, which often must be watched in full before they can be labeled. Additionally, other modalities such as acceleration data are often recorded alongside a video. For that purpose, we created an annotation tool that enables the annotation of datasets containing video and inertial sensor data. In contrast to most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. This means that after labeling a small set of instances, our system is able to provide labeling recommendations. We aim to rely on the acceleration data of a wrist-worn sensor to support the labeling of a video recording. For that purpose, we apply template matching to identify time intervals of certain activities. We test our approach on three datasets: one containing warehouse picking activities, one consisting of activities of daily living, and one covering meal preparation. Our results show that the presented method is able to provide annotators with hints about possible label candidates.
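The abstract does not spell out how the template matching is implemented, but the general idea can be illustrated with a minimal sketch: a short, already-labeled acceleration segment (the template) is slid across the full wrist-sensor signal, and windows whose normalized cross-correlation with the template exceeds a threshold are proposed as label candidates. All function names, the threshold value, and the simple non-maximum suppression below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of template matching on acceleration data (not the paper's code).
import numpy as np

def match_template(signal: np.ndarray, template: np.ndarray, threshold: float = 0.8):
    """Return (start, end) index pairs of windows similar to the template.

    signal   -- 1-D acceleration magnitude of the whole recording
    template -- 1-D acceleration magnitude of one hand-labeled activity instance
    """
    m = len(template)
    t = (template - template.mean()) / (template.std() + 1e-8)
    candidates = []
    for start in range(len(signal) - m + 1):
        window = signal[start:start + m]
        w = (window - window.mean()) / (window.std() + 1e-8)
        score = float(np.dot(w, t)) / m          # normalized cross-correlation in [-1, 1]
        if score >= threshold:
            candidates.append((start, start + m, score))
    # Keep only the best match among overlapping candidates (simple non-max suppression).
    candidates.sort(key=lambda c: -c[2])
    kept = []
    for s, e, sc in candidates:
        if all(e <= ks or s >= ke for ks, ke, _ in kept):
            kept.append((s, e, sc))
    return [(s, e) for s, e, _ in sorted(kept)]

# Hypothetical usage: propose intervals in a recording from one labeled instance.
# recording = np.load("wrist_acceleration.npy")   # assumed file name
# template = recording[1200:1300]                 # span labeled by the annotator
# print(match_template(recording, template))      # candidate intervals to suggest
```

The returned intervals would then be surfaced in the annotation tool as labeling recommendations, leaving the final decision to the annotator.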

