V2SFlow: Video-to-Speech Generation with Speech Decomposition and Rectified Flow

In this paper, we introduce V2SFlow, a novel Video-to-Speech (V2S) framework designed to generate natural and intelligible speech directly from silent talking face videos. While recent V2S systems have shown promising results on constrained datasets with limited speakers and vocabularies, their performance often degrades on real-world, unconstrained datasets due to the inherent variability and complexity of speech signals. To address these challenges, we decompose the speech signal into manageable subspaces (content, pitch, and speaker information), each representing distinct speech attributes, and predict them directly from the visual input. To generate coherent and realistic speech from these predicted attributes, we employ a rectified flow matching decoder built on a Transformer architecture, which models efficient probabilistic pathways from random noise to the target speech distribution. Extensive experiments demonstrate that V2SFlow significantly outperforms state-of-the-art methods, even surpassing the naturalness of ground truth utterances.
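The rectified flow matching decoder described above learns a straight-line probability path from random noise to the target speech distribution and samples by integrating the learned velocity field. The paper conditions a Transformer on the predicted content, pitch, and speaker attributes; the sketch below is only a minimal, framework-free illustration of the rectified flow idea itself (straight-line interpolant, its constant target velocity, and Euler sampling), with all function names chosen for illustration rather than taken from V2SFlow.

```python
import numpy as np

def rectified_flow_interpolant(x0, x1, t):
    # Straight-line path between noise x0 and data x1: x_t = (1 - t) x0 + t x1.
    return (1.0 - t) * x0 + t * x1

def target_velocity(x0, x1):
    # Along the straight path, dx_t/dt = x1 - x0 (the regression target
    # a rectified-flow model is trained to predict from x_t and t).
    return x1 - x0

def euler_sample(x0, velocity_fn, n_steps=10):
    # Generate by integrating dx/dt = v(x, t) from t = 0 to t = 1.
    x, dt = x0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)
    return x
```

Because the target velocity is constant along each straight path, Euler integration with the oracle velocity recovers the data point exactly, which is what makes rectified flow attractive for few-step sampling.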
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2025-04-10
Language
English
Citation
2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025
DOI
10.1109/ICASSP49660.2025.10889780
URI
http://hdl.handle.net/10203/336089
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
