
Ontology-driven annotation and access of educational video data.



Abstract

The tremendous growth in multimedia data calls for efficient and flexible access mechanisms. In this dissertation, we propose an ontology-driven framework for video annotation and access. The goal is to integrate ontology into video systems to improve users' video access experience.

To realize ontology-driven video annotation, the first and foremost step is video segmentation. Current research in video segmentation has mainly focused on the visual and/or auditory modalities. In this dissertation, we investigate how to combine visual, auditory, and textual information in the segmentation of educational video data. Experiments show that text-based segmentation generally decomposes videos into semantic segments, which facilitates video content understanding and video annotation data extraction.

To extract annotation data from videos and video segments, and to organize them in a way that facilitates video access, we propose a multi-ontology based multimedia annotation model. In this model, a domain-independent multimedia ontology is integrated with multiple domain-dependent ontologies. Preliminary evaluation suggests that multi-ontology based multimedia annotation provides multiple, domain-specific views of the same multimedia content and, thus, better meets different users' information needs.

With extracted annotation data, ontology-driven video access exploits domain knowledge embedded in the domain ontology and tailors video access to the specific needs of individual users from different domains. Our experience shows that ontology-driven video access can improve video retrieval relevancy and, thus, enhance users' video access experience.
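The multi-ontology annotation idea described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the dissertation's actual model: the `Segment` class, the ontology contents, and the `retrieve` function are all illustrative assumptions showing how one video segment can carry domain-specific annotation views and how retrieval can be filtered through a user's domain.

```python
# Hypothetical sketch of multi-ontology annotation and access.
# The class names and ontology concepts are illustrative assumptions,
# not the model defined in the dissertation.

from dataclasses import dataclass, field


@dataclass
class Segment:
    """A video segment annotated against several domain ontologies."""
    video_id: str
    start: float  # segment boundaries in seconds
    end: float
    # domain name -> concepts drawn from that domain's ontology
    annotations: dict = field(default_factory=dict)

    def annotate(self, domain: str, concepts: list) -> None:
        """Attach concepts from one domain ontology to this segment."""
        self.annotations.setdefault(domain, []).extend(concepts)

    def view(self, domain: str) -> list:
        """Domain-specific view of the same multimedia content."""
        return self.annotations.get(domain, [])


def retrieve(segments: list, domain: str, concept: str) -> list:
    """Ontology-driven access: keep segments whose annotations,
    seen through the user's domain view, contain the concept."""
    return [s for s in segments if concept in s.view(domain)]


# One lecture segment, annotated from two domain ontologies: the same
# content yields different views for users from different domains.
seg = Segment("lecture01", 0.0, 95.0)
seg.annotate("biology", ["cell", "mitosis"])
seg.annotate("education", ["lecture", "lab-demo"])

hits = retrieve([seg], "biology", "mitosis")   # matched via the biology view
misses = retrieve([seg], "biology", "lecture")  # "lecture" is not a biology concept
print(len(hits), len(misses))
```

The design point the sketch makes is that the annotation store is keyed by domain, so adding a new domain ontology adds a new view of existing segments without re-annotating the shared, domain-independent metadata.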

Bibliographic details

  • Author: Dong, Aijuan
  • Affiliation: North Dakota State University
  • Degree grantor: North Dakota State University
  • Subject: Computer Science
  • Degree: Ph.D.
  • Year: 2006
  • Pagination: 154 p.
  • Total pages: 154
  • Format: PDF
  • Language: English
  • CLC classification: Automation and computer technology
  • Keywords:
