Representation Learning Using Artist Labels for Audio Classification Tasks

In this work, we use a deep convolutional neural network (DCNN) trained on a public dataset, the Million Song Dataset, as a feature extractor. We trained the network on audio mel-spectrograms using artist labels in a discriminative manner. In particular, we used a large number of neurons in the output layer, where each neuron represents an artist label. The output of the last hidden layer of the DCNN is regarded as an identity feature of the input data. The DCNN extracts feature vectors from 3-second audio segments and summarizes them into a single musical feature for the input audio. These extracted features are then used to train a Support Vector Machine (SVM) classifier for MIREX audio classification tasks such as genre or mood classification. The results show that the proposed approach effectively captures general music audio features.
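The pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trained DCNN is replaced by a fixed random projection with a ReLU nonlinearity standing in for the last hidden layer, and all dimensions, names, and the toy data are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_MELS, N_FRAMES, FEAT_DIM = 128, 130, 256  # ~3 s of mel-spectrogram (assumed sizes)
# Stand-in for the trained DCNN weights; in the paper this network is
# trained discriminatively with artist labels on the Million Song Dataset.
W = rng.standard_normal((N_MELS * N_FRAMES, FEAT_DIM)) * 0.01

def segment_features(segments):
    """Stand-in for the DCNN's last hidden layer: map each 3-second
    mel-spectrogram segment to a fixed-length identity feature."""
    flat = segments.reshape(len(segments), -1)
    return np.maximum(flat @ W, 0.0)

def track_feature(segments):
    """Summarize segment-level features into one track-level vector
    by averaging, as the abstract describes."""
    return segment_features(segments).mean(axis=0)

# Toy data: two hypothetical classes (e.g. genres), three tracks each,
# ten 3-second segments per track.
def make_track(shift):
    return rng.standard_normal((10, N_MELS, N_FRAMES)) + shift

X = np.stack([track_feature(make_track(s)) for s in [0, 0, 0, 3, 3, 3]])
y = np.array([0, 0, 0, 1, 1, 1])

# Train the downstream SVM classifier on the summarized track features.
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X))
```

In the actual system the `segment_features` step would be the forward pass of the trained DCNN up to its last hidden layer; only the mean-summarization and SVM stages are taken directly from the abstract.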
Publisher
ISMIR
Issue Date
2017-10-27
Language
English
Citation

Music Information Retrieval Evaluation eXchange (MIREX) in the 18th International Society for Music Information Retrieval Conference (ISMIR)

URI
http://hdl.handle.net/10203/237345
Appears in Collection
GCT-Conference Papers (학술회의논문: Conference Papers)
Files in This Item
There are no files associated with this item.