DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Nam, Juhan | - |
dc.contributor.advisor | 남주한 | - |
dc.contributor.author | Park, Jiyoung | - |
dc.date.accessioned | 2019-08-28T02:46:02Z | - |
dc.date.available | 2019-08-28T02:46:02Z | - |
dc.date.issued | 2018 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=733783&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/266014 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : Graduate School of Culture Technology, 2018.2, [iv, 34 p.] | - |
dc.description.abstract | While music has become easily accessible through the growing number of online music services, finding songs that match a user's taste has become more difficult. It is therefore important to gather information about the large number of released songs and to build retrieval and recommendation systems on top of that information. The most popular approach relies on text-based metadata or user data. However, new or unknown songs with little such data are rarely retrieved, and constructing the data is difficult and time-consuming. Content-based methods, which extract features directly from the audio and use them to train the system, have therefore become important because they can mitigate these problems. Recently, representation learning (feature learning) has drawn great attention across many machine learning tasks. In the music domain, feature learning is either unsupervised or supervised by semantic labels such as music genre. However, finding discriminative features in an unsupervised way is challenging, and supervised feature learning with semantic labels may involve noisy or expensive annotation. In this thesis, we present a feature learning approach that uses the artist labels attached to every music track as objective metadata. We train a deep convolutional neural network to classify audio tracks into a large number of artists, regard the trained model as a general feature extractor, and apply it to other tasks such as artist recognition, genre classification, and music auto-tagging in transfer learning settings. The results show that our approach outperforms or is comparable to previous state-of-the-art methods, indicating that it effectively captures general music audio features. Finally, we apply the proposed approach to a music retrieval system. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Representation learning; artist recognition; transfer learning; genre classification; music auto-tagging; music information retrieval | - |
dc.subject | 표현 학습; 아티스트 인식; 전이 학습; 장르 분류; 음악 오토태깅; 음악 정보 검색 | - |
dc.title | Representation learning of music using artist labels | - |
dc.title.alternative | 아티스트 레이블을 이용한 음악의 표현 학습 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 문화기술대학원 | - |
dc.contributor.alternativeauthor | 박지영 | - |