Conference on Sound and Music Technology

Multimodal Music Emotion Recognition Using Unsupervised Deep Neural Networks



Abstract

In most studies on multimodal music emotion recognition, the different modalities are combined in a simple way and used for supervised training. The resulting improvements in recognition accuracy illustrate that the modalities are correlated, yet few studies focus on explicitly modeling the relationships between different modal data. In this paper, we propose to model the relationships between modalities (i.e., lyric and audio data) with deep learning methods for multimodal music emotion recognition. Several deep networks are first applied to perform unsupervised feature learning over the multiple modalities. We then design a series of music emotion recognition experiments to evaluate the learned features. The experimental results show that the deep networks perform well at unsupervised feature learning for multimodal data and can model the cross-modal relationships effectively. In addition, we demonstrate a unimodal enhancement experiment, in which better features for one modality (e.g., lyrics) can be learned by the proposed deep network if the other modality (e.g., audio) is also present at unsupervised feature learning time.
