Journal on multimodal user interfaces

Expressive non-verbal interaction in a string quartet: an analysis through head movements

Abstract

The present study investigates expressive non-verbal interaction in a musical context, starting from behavioral features extracted at the individual and group levels. Four groups of features are defined, all related to head movement and direction, which may help gain insight into the expressivity and cohesion of a performance and discriminate between different performance conditions. The features are then evaluated at both a global and a local scale. The findings obtained from the analysis of a string quartet recorded in an ecological setting show that these features, alone or in combination, may help distinguish between two types of performance: (a) a concert-like condition, in which all musicians aim at performing at their best, and (b) a perturbed one, in which the first violinist devises alternative interpretations of the music score without discussing them with the other musicians. In the global data analysis, the discriminative power of the features is investigated through statistical tests. In the local data analysis, a larger amount of data is used to exploit more sophisticated machine learning techniques that select suitable subsets of the features, which are then used to train an SVM classifier to perform binary classification. Interestingly, the features whose discriminative power is evaluated as large (respectively, small) in the global analysis are evaluated similarly in the local analysis. When used together, the 22 features defined in the paper prove effective for classification, leading to about 90% correctly classified examples among those not used in the training phase. Similar results are obtained with only a subset of 15 features.
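The local-scale pipeline described above (select a subset of discriminative head-movement features, then train an SVM for binary classification of the performance condition) can be illustrated with a short sketch. This is not the authors' code: the use of scikit-learn, the synthetic data, and the univariate F-test as the selection criterion are assumptions made purely for illustration, with the paper's 22 features and recorded performances replaced by random placeholders.

```python
# Minimal sketch, assuming scikit-learn and synthetic placeholder data;
# the paper does not specify its implementation or preprocessing.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one row per analysis window, 22 head-movement features,
# label 0 = concert-like condition, 1 = perturbed condition.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 22))
y = rng.integers(0, 2, size=400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Keep the 15 highest-scoring features (univariate F-test), mirroring the
# subset-of-15 result mentioned in the abstract, then classify with an SVM.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=15),
    SVC(kernel="rbf", C=1.0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Standardizing the features before an RBF-kernel SVM is a common default; which preprocessing, kernel, and feature-selection criterion were actually used is not stated in the abstract.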