A multimodal framework for music inputs (poster session)

ACM International Conference on Multimedia

Abstract

The growth of digital music databases calls for new content-based methods of interfacing with stored data; although indexing and retrieval techniques have been deeply investigated, an integrated view of the querying mechanism has never been established before. Moreover, the multimodal nature of music should be exploited to match users' expectations as well as their skills. In this paper, we propose a hierarchy of music interfaces that is suitable for existing prototypes of music information retrieval systems; according to this framework, human/computer interaction can be improved by singing, playing, or notating music. Dealing with multiple inputs poses many challenging problems, both in combining them and in the low-level translation needed to transform an acoustic signal into a symbolic representation. This paper addresses the latter problem in some detail, aiming to develop music interfaces accessible not only to trained musicians.
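The low-level translation mentioned above, from an acoustic signal to a symbolic representation, can be illustrated with a common baseline technique: autocorrelation-based pitch estimation followed by mapping the detected frequency to a MIDI note number. This is a minimal sketch of that general idea, not the method proposed in the paper; the function names and parameters are illustrative.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of a monophonic signal
    via autocorrelation — a standard baseline for the acoustic-to-symbolic
    step, not the specific algorithm used by the authors."""
    sig = signal - np.mean(signal)                      # remove DC offset
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)             # plausible lag range
    lag = lo + int(np.argmax(corr[lo:hi]))              # strongest periodicity
    return sr / lag

def freq_to_midi(freq):
    """Map a frequency in Hz to the nearest MIDI note number
    (symbolic representation), with A4 = 440 Hz = note 69."""
    return int(round(69 + 12 * np.log2(freq / 440.0)))

# Example: a synthetic 440 Hz tone (A4) sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
f0 = estimate_pitch(tone, sr)
note = freq_to_midi(f0)   # → 69 (A4)
```

A sung or played query would of course require framing the signal, voicing detection, and note segmentation on top of this, which is where much of the difficulty for untrained users lies.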
