International Conference on User Modeling, Adaptation, and Personalization

From Artifact to Content Source: Using Multimodality in Video to Support Personalized Recomposition



Abstract

Video content is being produced in ever-increasing quantities. It is practically impossible for any user to see every piece of video that could be useful to them. We need to look at video content differently. Videos are composed of a set of features, namely the moving video track, the audio track, and other derived features, such as a transcription of the spoken words. These different features have the potential to be recomposed to create new video offerings. However, a key step in achieving such recomposition is the appropriate decomposition of those features into useful assets. Video artifacts can therefore be considered a type of multimodal source which may be used to support personalized and contextually aware recomposition. This work aims to propose and validate an approach that will convert a video from a single artifact into a diverse, queryable content source.
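The decomposition step the abstract describes can be pictured with a short sketch. This is not the authors' implementation: it assumes the ffmpeg CLI and the openai-whisper library are available, and the file names and the returned asset structure are hypothetical illustrations of splitting one video artifact into separately queryable modality assets.

```python
# Illustrative sketch only: decompose a video artifact into a visual track,
# an audio track, and a derived transcript feature. Paths and the asset
# dictionary layout are hypothetical, not taken from the paper.
import json
import subprocess

import whisper  # pip install openai-whisper


def decompose(video_path: str) -> dict:
    """Split a video into independently addressable modality assets."""
    # Visual track only: drop the audio (-an), copy the video stream as-is.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-an", "-c:v", "copy",
         "video_only.mp4"],
        check=True,
    )
    # Audio track only: drop the video (-vn), decode to 16-bit PCM WAV.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-acodec", "pcm_s16le",
         "audio.wav"],
        check=True,
    )
    # Derived feature: a time-aligned transcription of the spoken words.
    model = whisper.load_model("base")
    result = model.transcribe("audio.wav")
    transcript = [
        {"start": seg["start"], "end": seg["end"], "text": seg["text"]}
        for seg in result["segments"]
    ]
    # Each asset can now be indexed and queried on its own, which is the
    # precondition for recomposing segments into new, personalized offerings.
    return {
        "video": "video_only.mp4",
        "audio": "audio.wav",
        "transcript": transcript,
    }


if __name__ == "__main__":
    assets = decompose("lecture.mp4")
    print(json.dumps(assets["transcript"][:3], indent=2, ensure_ascii=False))
```

Once the assets exist, a query such as "find the segments where a given term is spoken" reduces to a search over the transcript entries, whose timestamps then select the matching spans of the video and audio tracks for recomposition.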


