IEEE International Conference on Multimedia and Expo Workshops

LET THE DEAF UNDERSTAND: MAINSTREAMING THE MARGINALIZED IN CONTEXT WITH PERSONALIZED DIGITAL MEDIA SERVICES AND SOCIAL NEEDS

Abstract

This paper presents a pilot study for a personalized media service which aims at creating intelligent, sentiment-aware, and language-independent access to large archives of audiovisual documents, providing equal services to both mainstream and marginalized users. The proposed multi-modal framework analyzes aural, visual, and human descriptions, integrating them into an automatic content analyzer. First, text is extracted from the aural stream and mapped to American Sign Language (ASL), translating conventional video into content suitable for the deaf. Next, sentiment is estimated from the textual, aural, and visual contents using two deep convolutional neural networks (CNNs), which extract discriminative features from each modality. This yields output predictions for two broad classes: positive and negative sentiment. Preliminary results indicate that the proposed approach can accurately estimate the sentiment of multimedia content, which is an important step toward personalized and intelligent media services.
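
The abstract does not specify the network architectures or the fusion strategy, so the sketch below is only an illustrative guess at how two modality-specific CNNs could feed a binary (positive/negative) sentiment classifier. It uses PyTorch; all layer sizes, input shapes, and class names (ModalityCNN, MultimodalSentiment) are assumptions for illustration, not the authors' actual design.

```python
# Hypothetical sketch of the two-stream CNN fusion described in the abstract.
# Architectures, input shapes, and the late-fusion strategy are assumptions.
import torch
import torch.nn as nn


class ModalityCNN(nn.Module):
    """Small CNN mapping one modality (e.g. a spectrogram or a video frame)
    to a fixed-length feature vector."""

    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling -> (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))


class MultimodalSentiment(nn.Module):
    """Late fusion of aural and visual CNN features into a binary
    (positive / negative) sentiment prediction."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.aural_cnn = ModalityCNN(in_channels=1, feat_dim=feat_dim)   # e.g. log-mel spectrogram
        self.visual_cnn = ModalityCNN(in_channels=3, feat_dim=feat_dim)  # e.g. RGB frame
        self.classifier = nn.Linear(2 * feat_dim, 2)  # positive vs. negative

    def forward(self, spectrogram: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.aural_cnn(spectrogram), self.visual_cnn(frame)], dim=1)
        return self.classifier(fused)  # raw logits for the two sentiment classes


if __name__ == "__main__":
    model = MultimodalSentiment()
    spec = torch.randn(4, 1, 128, 128)   # dummy batch of spectrograms
    frame = torch.randn(4, 3, 128, 128)  # dummy batch of video frames
    print(model(spec, frame).shape)      # torch.Size([4, 2])
```

In this sketch each modality keeps its own feature extractor and the features are concatenated only at the classifier, which matches the abstract's description of per-modality discriminative features feeding a two-class (positive/negative) prediction; how the text modality is combined with the aural and visual streams is left open here because the abstract does not detail it.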
