VirtuosoNet: A Hierarchical RNN-based System for Modeling Expressive Piano Performance

In this paper, we present our application of a deep neural network to modeling piano performance, which imitates a pianist's expressive control of tempo, dynamics, articulation, and pedaling. Our model consists of recurrent neural networks with hierarchical attention and a conditional variational autoencoder. The model takes a sequence of note-level score features extracted from MusicXML as input and predicts piano performance features for the corresponding notes. To render musical expression consistently over long sections, we first predict tempo and dynamics at the measure level and, based on the result, refine them at the note level. Evaluation through a listening test shows that our model achieves more human-like expressiveness than previous models. We also share the dataset we used for the experiment.
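The measure-then-note refinement described in the abstract can be sketched as a two-level recurrent model: a note-level RNN encodes score features, its states are pooled per measure, a measure-level RNN predicts coarse tempo and dynamics, and these are broadcast back to the notes for a final prediction. The module names, dimensions, and pooling scheme below are illustrative assumptions, not the authors' actual VirtuosoNet implementation (which also includes hierarchical attention and a conditional VAE).

```python
# Hypothetical sketch of two-level (measure -> note) expressive prediction;
# sizes and layer choices are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class HierarchicalPerformanceModel(nn.Module):
    def __init__(self, score_dim=16, hidden=32, perf_dim=4):
        super().__init__()
        # note-level encoder over score features extracted from MusicXML
        self.note_rnn = nn.GRU(score_dim, hidden, batch_first=True)
        # measure-level RNN predicting coarse tempo and dynamics per measure
        self.measure_rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.measure_out = nn.Linear(hidden, 2)       # (tempo, dynamics)
        # note-level head refines output conditioned on measure prediction
        self.note_out = nn.Linear(hidden + 2, perf_dim)

    def forward(self, score, measure_ids):
        # score: (batch, n_notes, score_dim); measure_ids: (n_notes,) ints
        h, _ = self.note_rnn(score)
        # pool note states into one vector per measure (mean pooling)
        n_measures = int(measure_ids.max()) + 1
        pooled = torch.stack(
            [h[:, measure_ids == m].mean(dim=1) for m in range(n_measures)],
            dim=1)
        mh, _ = self.measure_rnn(pooled)
        coarse = self.measure_out(mh)                 # (batch, n_measures, 2)
        # broadcast each measure's coarse prediction back to its notes
        coarse_per_note = coarse[:, measure_ids]      # (batch, n_notes, 2)
        return self.note_out(torch.cat([h, coarse_per_note], dim=-1))

model = HierarchicalPerformanceModel()
score = torch.randn(1, 8, 16)                         # 8 notes, 2 measures
measure_ids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
perf = model(score, measure_ids)
print(perf.shape)  # torch.Size([1, 8, 4])
```

The key design point the abstract highlights is that the note-level output is conditioned on the measure-level result, so expression stays coherent across a whole measure rather than fluctuating note by note.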
Publisher
International Society for Music Information Retrieval Conference (ISMIR)
Issue Date
2019-11-04
Language
English
Citation

The 20th International Society for Music Information Retrieval Conference (ISMIR), pp. 908-915

URI
http://hdl.handle.net/10203/269876
Appears in Collection
GCT-Conference Papers
Files in This Item
There are no files associated with this item.
