
A framework for intelligent metadata adaptation in object-based audio


Abstract

Object-based audio can be used to customize, personalize, and optimize audio reproduction depending on the specific listening scenario. To investigate and exploit the benefits of object-based audio, a framework for intelligent metadata adaptation was developed. The framework uses detailed semantic metadata that describes the audio objects, the loudspeakers, and the room. It features an extensible software tool for real-time metadata adaptation that can incorporate knowledge derived from perceptual tests and/or feedback from perceptual meters to drive adaptation and facilitate optimal rendering. One use case for the system is demonstrated through a rule-set (derived from perceptual tests with experienced mix engineers) for automatic adaptation of object levels and positions when rendering 3D content to two- and five-channel systems.
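The abstract does not give the rule-set itself, only that rules derived from perceptual tests adapt object levels and positions when rendering 3D content to two- and five-channel layouts. The following is a minimal sketch, under assumed rule structures and thresholds, of how such metadata adaptation for a two-channel target might look; the object fields, the azimuth fold-in, and the height attenuation value are all illustrative, not the paper's perceptually derived rules.

```python
# Minimal sketch of rule-driven metadata adaptation for rendering
# object-based audio to a two-channel layout. Rule structure and
# numbers are illustrative assumptions, not the paper's rule-set.

from dataclasses import dataclass, replace


@dataclass(frozen=True)
class ObjectMetadata:
    name: str
    azimuth: float    # degrees; 0 = front, positive = left
    elevation: float  # degrees; 0 = ear height
    gain_db: float    # object level relative to unity


def adapt_for_stereo(obj: ObjectMetadata) -> ObjectMetadata:
    """Apply simple adaptation rules for a two-channel target."""
    # Fold wide or rear objects into the +/-30 degree stereo arc (assumed rule).
    azimuth = max(-30.0, min(30.0, obj.azimuth))
    gain_db = obj.gain_db
    if obj.elevation > 0.0:
        # Attenuate height objects collapsed to ear level (assumed value).
        gain_db -= 1.5
    return replace(obj, azimuth=azimuth, elevation=0.0, gain_db=gain_db)


if __name__ == "__main__":
    height_effect = ObjectMetadata("rain", azimuth=110.0, elevation=45.0, gain_db=0.0)
    print(adapt_for_stereo(height_effect))
```

In the framework described, such rules would be applied in real time by the metadata adaptation tool, optionally informed by feedback from perceptual meters, before the adapted metadata reaches the renderer.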
