
Music Generation using Deep Generative Modelling

Abstract

Efficient synthesis of musical sequences is a challenging task from a machine learning perspective, because human perception is sensitive both to the global context of longer sequences and to the fine structure of audio waveforms at smaller scales. Autoregressive models such as WaveNet use iterative subsampling to generate short sequences; this localized modelling process captures fine detail but lacks overall global structure. In contrast, Generative Adversarial Networks (GANs) are effective at modelling globally coherent sequence structure, but struggle to generate convincing local sequences. Through this project, we aim to propose a system that combines the random subsampling approach of GANs with a recurrent autoregressive model. Such a model should capture coherent musical structure effectively at both the global and local levels.
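As a minimal sketch of the autoregressive, sample-by-sample generation style the abstract attributes to models like WaveNet (all names here are hypothetical illustrations; a toy next-step predictor stands in for a trained network):

```python
import numpy as np

def autoregressive_generate(predict_next, seed, length, rng):
    """Generate a sequence one token at a time, feeding each prediction
    back in as context -- the localized, step-by-step modelling process
    that captures fine detail but no long-range structure."""
    seq = list(seed)
    while len(seq) < length:
        probs = predict_next(np.array(seq))  # distribution over next token
        seq.append(int(rng.choice(len(probs), p=probs)))
    return seq

def toy_predictor(context, vocab=4):
    """Hypothetical stand-in model: the next token tends to follow
    the previous one cyclically, with a little noise."""
    probs = np.full(vocab, 0.1 / (vocab - 1))
    probs[(context[-1] + 1) % vocab] = 0.9
    return probs / probs.sum()

rng = np.random.default_rng(0)
sample = autoregressive_generate(toy_predictor, seed=[0], length=8, rng=rng)
```

In the proposed hybrid system, a generation loop of this shape would supply locally coherent segments, while a GAN-style adversarial objective would be responsible for enforcing global structure across them.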
