In this thesis, we propose various deep learning (DL) based methods for vocal melody extraction. Vocal melody extraction is the task of identifying the pitch contour of the singing-voice melody in music with multiple sound sources. Previous studies have approached the task either by computing pitch salience from a spectrogram or by isolating the melody source from the mixture. However, these methods struggle to produce reliable outputs across diverse music. Although the performance of melody extraction has improved with recent advances in DL, limitations remain in overall accuracy, in incorporating music-related knowledge into the models, and in the scarcity of labeled data.
Here we present effective methods for estimating melody pitch and detecting singing voice by introducing novel DL models and loss functions. We also propose a multi-task network in which pitch estimation and voice detection are tightly coupled. To address the lack of labeled data, we apply semi-supervised learning that leverages large amounts of unlabeled data. We explore the effects of three teacher-student model setups, data augmentation, and unlabeled data, and propose the most effective learning method for vocal melody extraction.
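The teacher-student idea mentioned above can be illustrated with a minimal sketch: a teacher trained on a small labeled set pseudo-labels a large unlabeled set, and a student is then trained on both. The classifier, data shapes, and synthetic two-class "frames" below are hypothetical stand-ins for the thesis's actual DL models and melody data, not the proposed method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, label):
    """Synthetic 2-D features for one of two well-separated classes."""
    center = np.array([0.0, 0.0]) if label == 0 else np.array([3.0, 3.0])
    return center + rng.normal(size=(n, 2))

# Small labeled set, large unlabeled set (the semi-supervised setting).
X_lab = np.vstack([make_data(10, 0), make_data(10, 1)])
y_lab = np.array([0] * 10 + [1] * 10)
X_unlab = np.vstack([make_data(200, 0), make_data(200, 1)])

class CentroidClassifier:
    """Toy stand-in for a DL model: predicts the nearest class centroid."""
    def fit(self, X, y):
        self.centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
        return self
    def predict(self, X):
        dists = np.linalg.norm(X[:, None] - self.centroids[None], axis=2)
        return dists.argmin(axis=1)

# 1) Train the teacher on the small labeled set only.
teacher = CentroidClassifier().fit(X_lab, y_lab)

# 2) Teacher pseudo-labels the unlabeled data.
pseudo = teacher.predict(X_unlab)

# 3) Student trains on labeled + pseudo-labeled data (in practice, noise or
#    data augmentation would be applied to the student's inputs here).
X_all = np.vstack([X_lab, X_unlab])
y_all = np.concatenate([y_lab, pseudo])
student = CentroidClassifier().fit(X_all, y_all)
```

The different teacher-student setups explored in the thesis vary how the pseudo-labels are generated and consumed; this sketch shows only the simplest one-round pseudo-labeling variant.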
In addition, we apply semi-supervised learning to singing voice detection and show that it can be extended to other MIR tasks that suffer from a lack of labeled data.