Source: International Journal of Communication Systems

Web video classification with visual and contextual semantics


Abstract

On the social Web, the amount of video content, whether originating from wireless devices or retrieved from media servers, has grown enormously in recent years. This astounding growth of Web videos has prompted researchers to propose new strategies for organizing them into their respective categories. Because of the complex ontology and the large variation in the content and quality of Web videos, it is difficult to obtain sufficient, precisely labeled training data, which hinders automatic video classification. In this paper, we propose a novel content- and context-based Web video classification framework that draws on external support through category discriminative terms (CDTs) and a semantic relatedness measure (SRM). A three-step framework is proposed. First, content-based video classification is performed, leveraging a twofold use of high-level concept detectors: category classifiers induced from the VIREO-374 detectors are trained to classify Web videos, and the concept detectors with the highest confidence for each video are then mapped to CDTs through an SRM-assisted semantic content fusion function, further boosting the category classifiers and yielding a more robust measure for Web video classification. Second, context-based video classification is performed, likewise harnessing contextual information in two ways: cosine similarity and semantic similarity are measured between the text features of each video and the CDTs through the vector space model (VSM) and an SRM-assisted semantic context fusion function, respectively. Finally, the classification results from content and context are fused so that each compensates for the shortcomings of the other, enhancing overall classification performance. Experiments on a large-scale video dataset validate the effectiveness of the proposed solution.
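The abstract does not give the paper's exact fusion function or CDT vocabularies, but two of the building blocks it names are standard. A minimal sketch of the VSM cosine-similarity step (between a video's text features and a category's discriminative terms) and a simple linear late fusion of content and context scores might look as follows; the bag-of-words tokenization and the `alpha` fusion weight are illustrative assumptions, not details from the paper:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts under a bag-of-words VSM.

    Each text is turned into a term-frequency vector; the similarity is
    the dot product of the vectors divided by the product of their norms.
    """
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def fuse_scores(content_score: float, context_score: float,
                alpha: float = 0.5) -> float:
    """Late fusion: weighted sum of content- and context-based scores.

    `alpha` is a hypothetical mixing weight (not specified in the paper);
    alpha=0.5 gives equal influence to both modalities.
    """
    return alpha * content_score + (1.0 - alpha) * context_score

# Example: compare a video's tags against a category's discriminative terms,
# then fuse with a (made-up) content-based classifier score.
ctx = cosine_similarity("soccer match goal highlights", "soccer goal stadium")
final = fuse_scores(content_score=0.8, context_score=ctx)
```

The SRM component (semantic relatedness between terms, e.g. via a lexical resource or corpus statistics) would replace or complement the purely lexical overlap above for terms that do not match exactly.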
