Journal: IEEE Transactions on Audio, Speech, and Language Processing

A Generative Context Model for Semantic Music Annotation and Retrieval

Abstract

While a listener may derive semantic associations for audio clips from direct auditory cues (e.g., hearing “bass guitar”) as well as from “context” (e.g., inferring “bass guitar” in the context of a “rock” song), most state-of-the-art systems for automatic music annotation ignore this context. Indeed, although contextual relationships correlate tags, many auto-taggers model tags independently. This paper presents a novel, generative approach to improve automatic music annotation by modeling contextual relationships between tags. A Dirichlet mixture model (DMM) is proposed as a second, additional stage in the modeling process, to supplement any auto-tagging system that generates a semantic multinomial (SMN) over a vocabulary of tags when annotating a song. For each tag in the vocabulary, a DMM captures the broader context the tag defines by modeling tag co-occurrence patterns in the SMNs of songs associated with the tag. When annotating songs, the DMMs refine SMN annotations by leveraging contextual evidence. Experimental results demonstrate the benefits of combining a variety of auto-taggers with this generative context model. It generally outperforms other approaches to modeling context as well.
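The abstract does not give the model's exact formulation, but a Dirichlet-mixture context model of this kind can be sketched as follows (a minimal sketch, assuming the first-stage auto-tagger outputs a semantic multinomial \pi over the tag vocabulary, and that \beta_{w,k}, \alpha_{w,k} are hypothetical mixture weights and Dirichlet parameters fitted, for each tag w, to the SMNs of songs associated with w):

p(\pi \mid w) \;=\; \sum_{k=1}^{K} \beta_{w,k}\, \mathrm{Dir}(\pi \mid \alpha_{w,k}), \qquad \sum_{k=1}^{K} \beta_{w,k} = 1,

\hat{\pi}(w) \;\propto\; p(\pi \mid w)\, p(w).

Under this sketch, each tag's DMM scores how well a song's first-stage SMN matches that tag's typical co-occurrence context, and the refined, context-aware annotation \hat{\pi} is obtained by normalizing these scores over the vocabulary.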
