Sensors (Basel, Switzerland)

AVaTER: Fusing Audio, Visual, and Textual Modalities Using Cross-Modal Attention for Emotion Recognition



Abstract

Multimodal emotion classification (MEC) involves analyzing and identifying human emotions by integrating data from multiple sources, such as audio, video, and text. This approach leverages the complementary strengths of each modality to enhance the accuracy and robustness of emotion recognition systems. However, one significant challenge is effectively integrating these diverse data sources, each with unique characteristics and levels of noise. Additionally, the scarcity of large, annotated multimodal datasets in Bangla limits the training and evaluation of models. In this work, we unveiled a pioneering multimodal Bangla dataset, MAViT-Bangla (Multimodal Audio Video Text Bangla dataset). This dataset, comprising 1002 samples across audio, video, and text modalities, is a unique resource for emotion recognition studies in the Bangla language. It features emotional categories such as anger, fear, joy, and sadness, providing a comprehensive platform for research. Additionally, we developed a framework for audio, video, and textual emotion recognition (i.e., AVaTER) that employs a cross-modal attention mechanism among unimodal features. This mechanism fosters the interaction and fusion of features from different modalities, enhancing the model's ability to capture nuanced emotional cues. The effectiveness of this approach was demonstrated by achieving an F1-score of 0.64, a significant improvement over unimodal methods.
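The cross-modal attention the abstract describes can be understood as scaled dot-product attention in which queries come from one modality (e.g., text) while keys and values come from another (e.g., audio). The projection matrices, head count, and final fusion strategy AVaTER actually uses are not given in the abstract, so the following is only a minimal, hypothetical sketch of the mechanism in plain Python:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_modal_attention(queries, keys, values):
    """Scaled dot-product attention across modalities.

    queries: feature vectors from modality A (e.g., text tokens)
    keys, values: feature vectors from modality B (e.g., audio frames)
    Returns one attended vector per query, letting modality A
    selectively pool information from modality B.
    """
    d = len(queries[0])  # feature dimension, used for scaling
    attended_rows = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted sum of value vectors under the attention weights
        attended = [sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))]
        attended_rows.append(attended)
    return attended_rows

# Toy example: one text-token query attends over two audio-frame vectors.
text_q = [[1.0, 0.0, 0.0, 0.0]]
audio_kv = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
fused = cross_modal_attention(text_q, audio_kv, audio_kv)
```

Because the query aligns with the first audio frame, the attention weights favor that frame, and the fused vector leans toward it. In a full model, such attended features from each modality pair would be combined (e.g., concatenated) before classification.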
