Realistic acoustic guitar synthesis with diffusion inpainting and transfer learning

Abstract

Neural MIDI-to-audio synthesis is the task of synthesizing, from the note sequence of a specific instrument, realistic audio that contains appropriate musical expression. The acoustic guitar admits a wide variety of performing techniques, which gives rise to a rich space of musical expression. In this work, we propose an end-to-end neural synthesizer built on a diffusion-based generative model that closes the gap between MIDI and realistic guitar sound. We exploit the strongly conditional nature of the MIDI-to-audio task and propose an effective autoregressive continuation algorithm based on the inpainting methods that have emerged for diffusion models. Furthermore, because paired MIDI and audio datasets for acoustic guitar are scarce, we construct a large dataset in which the audio is synthesized with virtual instruments, and pre-train the model on this dataset in a transfer-learning setting.
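The autoregressive continuation described above can be sketched as a RePaint-style inpainting loop: at every reverse diffusion step, the already-generated context is forward-noised to the current noise level and clamped back into the sample, so only the new region is denoised freely. The sketch below is illustrative only — `continuation_via_inpainting`, the linear noise schedule, and the placeholder `denoise_step` are assumptions for demonstration, not the thesis's actual model or schedule.

```python
import numpy as np

def continuation_via_inpainting(known, gen_len, denoise_step, num_steps=50, rng=None):
    """Continue a signal past `known` context by diffusion inpainting.

    At each reverse step, the known region is replaced with a copy of the
    context noised to the current level, which conditions the freely
    denoised continuation region on it (RePaint-style clamping).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    total = len(known) + gen_len
    mask = np.zeros(total)
    mask[: len(known)] = 1.0                      # 1 = known context, 0 = to generate
    x_full = np.concatenate([known, np.zeros(gen_len)])

    # Toy schedule: alpha_bar rises from ~0 (pure noise) to ~1 (clean signal).
    alpha_bars = np.linspace(1e-3, 0.999, num_steps)

    x = rng.standard_normal(total)                # start from pure noise
    for ab in alpha_bars:
        # Forward-noise the known context to the current noise level.
        known_t = np.sqrt(ab) * x_full + np.sqrt(1.0 - ab) * rng.standard_normal(total)
        # One denoising update (placeholder for a trained score model).
        x = denoise_step(x, ab)
        # Inpainting step: clamp the noised context back into the sample.
        x = mask * known_t + (1.0 - mask) * x
    return x
```

With a real trained denoiser in place of `denoise_step`, the returned tail of the signal would be a continuation consistent with the clamped context; chaining such calls yields the autoregressive generation of arbitrarily long audio.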
Advisors
Juhan Nam
Description
Korea Advanced Institute of Science and Technology (KAIST): Graduate School of Culture Technology
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): Graduate School of Culture Technology, 2023.8, [iii, 25 p.]

Keywords

Neural audio synthesis; Acoustic guitar sound synthesis; Diffusion-based generative model

URI
http://hdl.handle.net/10203/320582
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045770&flag=dissertation
Appears in Collection
GCT-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
