Home > Foreign Journals > Neurocomputing > Music auto-tagging using deep Recurrent Neural Networks
Music auto-tagging using deep Recurrent Neural Networks

Abstract

Musical tags are used to describe music and are central to music information retrieval. Existing methods for music auto-tagging usually consist of a preprocessing phase (feature extraction) and a machine learning phase. However, the preprocessing phase of most existing methods suffers from either information loss or insufficient features, while the machine learning phase depends heavily on the features extracted in the preprocessing phase and lacks the ability to exploit the remaining information. To solve this problem, we propose a content-based automatic tagging algorithm using a deep Recurrent Neural Network (RNN) with scattering-transformed inputs. Acting as the first phase, the scattering transform extracts features from the raw data while retaining much more information than traditional representations such as mel-frequency cepstral coefficients (MFCCs) and the mel-frequency spectrogram. A five-layer RNN with Gated Recurrent Units (GRUs) and a sigmoid output layer serves as the second phase of our algorithm; such networks are powerful machine learning tools capable of making full use of the data fed to them. To evaluate the performance of the architecture, we experiment on the MagnaTagATune dataset using the area under the ROC curve (AUC-ROC) as the measurement. Experimental results show that the proposed method boosts tagging performance compared with state-of-the-art models. Additionally, our architecture yields faster training and lower memory usage. (c) 2018 Elsevier B.V. All rights reserved.
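The core of the second phase described above is a GRU-based recurrent network whose final state is mapped through a sigmoid layer to independent per-tag probabilities (tagging is multi-label, so sigmoid rather than softmax is appropriate). A minimal single-layer sketch of this idea in NumPy follows; all dimensions, the random weights, and the single-layer depth are illustrative assumptions, not the paper's actual five-layer configuration or trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU time step. W: (3, n_h, n_in), U: (3, n_h, n_h), b: (3, n_h)."""
    z = sigmoid(W[0] @ x + U[0] @ h + b[0])              # update gate
    r = sigmoid(W[1] @ x + U[1] @ h + b[1])              # reset gate
    h_cand = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])   # candidate state
    return (1.0 - z) * h + z * h_cand                    # interpolated new state

rng = np.random.default_rng(0)
n_in, n_h, n_tags, T = 40, 32, 50, 10   # hypothetical sizes (features, hidden, tags, frames)
W = rng.normal(0.0, 0.1, (3, n_h, n_in))
U = rng.normal(0.0, 0.1, (3, n_h, n_h))
b = np.zeros((3, n_h))

# Run T feature frames (stand-ins for scattering coefficients) through the GRU.
h = np.zeros(n_h)
for _ in range(T):
    x = rng.normal(size=n_in)
    h = gru_step(x, h, W, U, b)

# Sigmoid output layer: one independent probability per tag.
V = rng.normal(0.0, 0.1, (n_tags, n_h))
probs = sigmoid(V @ h)
print(probs.shape)  # one probability in (0, 1) per tag
```

With random weights the probabilities are meaningless; in the paper's setting the network would be trained with a per-tag binary cross-entropy loss and evaluated by AUC-ROC, as the abstract states.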
