Frontiers in Psychology

Phrase-Level Modeling of Expression in Violin Performances


Abstract

Background: Expression is a key skill in music performance, and one that is difficult to address in music lessons. Computational models that learn from expert performances can help provide suggestions and feedback to students. Aim: We propose and analyze an approach to modeling variations in dynamics and note onset timing for solo violin pieces, with the purpose of facilitating the learning of expressive performance in new pieces for which no reference performance is available. Method: The method generates phrase-level predictions from musical score information on the assumption that expressiveness is idiomatic, and thus influenced by similar-sounding melodies. Predictions were evaluated numerically on three different datasets and against note-level machine-learning models, and also perceptually by listeners, who were presented with synthesized versions of musical excerpts and asked to choose the most human-sounding one. Some of the presented excerpts were synthesized to reflect the variations in dynamics and timing predicted by the model, others were shaped to reflect the dynamics and timing of an actual expert performance, and a third group was presented with no expressive variations. Results: Surprisingly, none of the three synthesized versions was consistently selected as human-like, nor preferred with statistical significance, by listeners. Possible interpretations of these results include that the melodies might have been impossible to interpret outside their musical context, or that expressive features left out of the modeling, such as note articulation and vibrato, are in fact essential to the perception of expression in violin performance. Positive feedback from some listeners toward the modeled melodies in a blind setting indicates that the modeling approach was capable of generating appropriate renditions at least for a subset of the data. Numerically, phrase-level performance suffers a small degradation compared to note-level, but produces predictions that are easier to interpret visually, and thus more useful in a pedagogical setting.
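
The abstract does not describe the model internals, so the following is only a minimal illustrative sketch of one way a phrase-level, similarity-based predictor of this kind could be organized; the feature set, the k-nearest-neighbour lookup, and all names (`PhraseLevelExpressionModel`, `phrase_features`) are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def phrase_features(phrase_notes):
    """Hypothetical phrase-level descriptor computed from the score alone:
    mean pitch, pitch range, phrase span in beats, and note count."""
    pitches = np.array([n["pitch"] for n in phrase_notes], dtype=float)
    onsets = np.array([n["onset_beats"] for n in phrase_notes], dtype=float)
    return np.array([
        pitches.mean(),                 # average pitch
        pitches.max() - pitches.min(),  # pitch range
        onsets[-1] - onsets[0],         # phrase span in beats
        len(phrase_notes),              # number of notes
    ])

class PhraseLevelExpressionModel:
    """Predicts a per-phrase dynamics curve and onset-timing deviations by
    averaging the curves of the most similar phrases in a corpus of expert
    performances (the 'expressiveness is idiomatic' assumption).
    Curves are assumed to be resampled to a fixed number of points per phrase."""

    def __init__(self, n_neighbors=3):
        self.nn = NearestNeighbors(n_neighbors=n_neighbors)
        self.targets = []  # one (dynamics_curve, timing_curve) pair per training phrase

    def fit(self, score_phrases, expressive_curves):
        X = np.stack([phrase_features(p) for p in score_phrases])
        self.nn.fit(X)
        self.targets = expressive_curves
        return self

    def predict(self, phrase_notes):
        x = phrase_features(phrase_notes).reshape(1, -1)
        _, idx = self.nn.kneighbors(x)
        dynamics = np.mean([self.targets[i][0] for i in idx[0]], axis=0)
        timing = np.mean([self.targets[i][1] for i in idx[0]], axis=0)
        return dynamics, timing
```

Under this reading, the phrase-level granularity is what makes the output easy to inspect visually: each prediction is a single dynamics and timing curve per phrase rather than one value per note.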
