Soft Computing: A Fusion of Foundations, Methodologies and Applications

Integration of nonparametric fuzzy classification with an evolutionary-developmental framework to perform music sentiment-based analysis and composition


Abstract

Over the past years, several approaches have been developed to create algorithmic music composers. Most existing solutions focus on composing music that appears theoretically correct or interesting to the listener. However, few methods have targeted sentiment-based music composition: generating music that expresses human emotions. The few existing methods are restricted both in the spectrum of emotions they can express (usually limited to two dimensions: valence and arousal) and in the level of sophistication of the music they compose (usually monophonic, following translation-based, predefined templates or heuristic textures). In this paper, we introduce a new algorithmic framework for autonomous music sentiment-based expression and composition, titled MUSEC, that perceives an extensible set of six primary human emotions (anger, fear, joy, love, sadness, and surprise) expressed by a MIDI musical file and then composes (creates) new polyphonic, (pseudo-)thematic, and diversified musical pieces that express these emotions. Unlike existing solutions, MUSEC is: (i) a hybrid crossover between supervised learning (SL, to learn sentiments from music) and evolutionary computation for music composition (MC), where SL serves as the fitness function of MC to compose music that expresses target sentiments; (ii) extensible in the panel of emotions it can convey, producing pieces that reflect a target crisp sentiment (e.g., love) or a collection of fuzzy sentiments (e.g., 65% happy, 20% sad, and 15% angry), compared with the crisp-only or two-dimensional (valence/arousal) sentiment models used in existing solutions; and (iii) built on the evolutionary-developmental model, using an extensive set of specially designed music-theoretic mutation operators (trill, staccato, repeat, compress, etc.), stochastically orchestrated to add atomic (individual chord-level) and thematic (chord pattern-level) variability to the composed polyphonic pieces, compared with traditional evolutionary solutions producing monophonic and non-thematic music. We conducted a large battery of tests to evaluate MUSEC's effectiveness and efficiency in both sentiment analysis and composition. It was trained on a specially constructed set of 120 MIDI pieces, including 70 sentiment-annotated pieces: the first significant dataset of sentiment-labeled MIDI music made available online as a benchmark for future research in this area. Results are encouraging and highlight the potential of our approach in different application domains, ranging over music information retrieval, music composition, assistive music therapy, and emotional intelligence.
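To make the hybrid SL/MC architecture concrete, the following Python sketch illustrates the core loop the abstract describes: a trained sentiment classifier serves as the fitness function of an evolutionary composer, steering candidate pieces toward a target fuzzy sentiment vector. This is a minimal illustration under stated assumptions, not the authors' implementation; all identifiers (fitness, evolve, classify, mutations) are hypothetical.

import random
from typing import Callable, List, Sequence

# The six primary emotions MUSEC perceives (per the abstract).
EMOTIONS = ["anger", "fear", "joy", "love", "sadness", "surprise"]

def fitness(piece: object,
            classify: Callable[[object], Sequence[float]],
            target: Sequence[float]) -> float:
    """Negative squared distance between the classifier's fuzzy sentiment
    vector (one membership per emotion) and the target, e.g.
    [0.15, 0.0, 0.65, 0.0, 0.20, 0.0] for 65% joy, 20% sadness, 15% anger."""
    predicted = classify(piece)  # SL model's fuzzy memberships over EMOTIONS
    return -sum((p - t) ** 2 for p, t in zip(predicted, target))

def evolve(population: List[object],
           classify: Callable[[object], Sequence[float]],
           target: Sequence[float],
           mutations: Sequence[Callable[[object], object]],
           generations: int = 100) -> object:
    """Evolutionary-developmental loop: rank pieces by sentiment fitness,
    keep the best half, and breed offspring by stochastically applying
    music-theoretic mutation operators (trill, staccato, repeat, compress...)."""
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, classify, target), reverse=True)
        survivors = population[: max(1, len(population) // 2)]
        offspring = [random.choice(mutations)(random.choice(survivors))
                     for _ in range(len(population) - len(survivors))]
        population = survivors + offspring
    return max(population, key=lambda p: fitness(p, classify, target))

Because fitness only compares the classifier's output against a target membership vector, the same loop serves both a crisp sentiment (a one-hot target such as pure love) and a fuzzy blend, which is how the framework stays extensible in the panel of emotions it conveys.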
