ACM Transactions on Graphics

Motion-driven Concatenative Synthesis of Cloth Sounds



Abstract

We present a practical data-driven method for automatically synthesizing plausible soundtracks for physics-based cloth animations running at graphics rates. Given a cloth animation, we analyze the deformations and use motion events to drive crumpling and friction sound models estimated from cloth measurements. We synthesize a low-quality sound signal, which is then used as a target signal for a concatenative sound synthesis (CSS) process. CSS selects a sequence of microsound units (very short segments) from a database of recorded cloth sounds, which best match the synthesized target sound in a low-dimensional feature space after applying a hand-tuned warping function. The selected microsound units are concatenated together to produce the final cloth sound with minimal filtering. Our approach avoids expensive physics-based synthesis of cloth sound, instead relying on cloth recordings and our motion-driven CSS approach for realism. We demonstrate its effectiveness on a variety of cloth animations involving various materials and character motions, including first-person virtual clothing with binaural sound.
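The core CSS step described in the abstract — matching target frames to recorded microsound units in a low-dimensional feature space and concatenating the winners — can be sketched as follows. This is not the authors' implementation: it is a minimal greedy nearest-neighbor selector with a linear crossfade, and it omits the paper's hand-tuned warping function and any continuity cost between consecutive units. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def select_units(target_feats, db_feats):
    """For each target feature frame, pick the index of the database unit
    whose feature vector is nearest in Euclidean distance.
    Greedy per-frame choice; no warping or unit-to-unit continuity cost."""
    # Broadcast to a (num_targets, num_db_units) distance matrix.
    dists = np.linalg.norm(db_feats[None, :, :] - target_feats[:, None, :], axis=2)
    return np.argmin(dists, axis=1)

def concatenate(units, fade=32):
    """Concatenate selected microsound units (1-D sample arrays),
    overlap-adding consecutive units with a linear crossfade of `fade` samples."""
    out = units[0].astype(float).copy()
    ramp = np.linspace(0.0, 1.0, fade)
    for u in units[1:]:
        u = u.astype(float)
        # Fade out the tail of the signal so far, fade in the head of the next unit.
        out[-fade:] = out[-fade:] * (1.0 - ramp) + u[:fade] * ramp
        out = np.concatenate([out, u[fade:]])
    return out
```

A real system would replace the Euclidean match with the warped feature-space distance and select units at audio rate, but the select-then-concatenate structure is the same.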

