Demos from "ZERO-SHOT PERSONALIZED LIP-TO-SPEECH SYNTHESIS WITH FACE IMAGE BASED VOICE CONTROL"

Paper: TO DO

Authors: Zheng-Yan Sheng, Yang Ai, Zhen-Hua Ling

Abstract: Lip-to-Speech (Lip2Speech) synthesis, which predicts corresponding speech from talking face images, has witnessed significant progress with various models and training strategies in a series of independent studies. However, existing studies cannot achieve voice control under the zero-shot condition, because extra speaker embeddings need to be extracted from natural reference speech, which is unavailable when only the silent video of an unseen speaker is given. In this paper, we propose a zero-shot personalized Lip2Speech synthesis method in which face images control speaker identities. A variational autoencoder is adopted to disentangle the speaker identity and linguistic content representations, which enables speaker embeddings to control the voice characteristics of synthetic speech for unseen speakers. Furthermore, we propose associated cross-modal representation learning to improve the voice-control ability of face-based speaker embeddings (FSE). Extensive experiments verify the effectiveness of the proposed method, whose synthetic utterances are more natural and better match the personality of the input video than those of the compared methods. To the best of our knowledge, this paper makes the first attempt at zero-shot personalized Lip2Speech synthesis with a face image, rather than reference audio, to control voice characteristics.
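
The sketch below illustrates the pipeline described in the abstract: a face-based speaker embedding (FSE) extracted from a single face image, a VAE that encodes lip-movement features into a speaker-independent content representation, and a decoder that conditions the content on the FSE to produce personalized speech. This is a minimal PyTorch sketch under our own assumptions; all module names, dimensions, and interfaces are illustrative and do not reproduce the authors' implementation or the associated cross-modal representation learning objective.

import torch
import torch.nn as nn

class FaceSpeakerEncoder(nn.Module):
    """Hypothetical FSE extractor: maps one face image to a speaker embedding."""
    def __init__(self, emb_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, face):               # face: (B, 3, H, W)
        return self.backbone(face)         # (B, emb_dim)

class ContentVAE(nn.Module):
    """Hypothetical VAE encoder that turns lip-movement features into a
    speaker-independent linguistic content representation."""
    def __init__(self, lip_dim=512, latent_dim=128):
        super().__init__()
        self.enc = nn.Linear(lip_dim, 2 * latent_dim)   # predicts mean and log-variance

    def forward(self, lip_feats):           # lip_feats: (B, T, lip_dim)
        mu, logvar = self.enc(lip_feats).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

class Lip2SpeechDecoder(nn.Module):
    """Hypothetical decoder: conditions content frames on the FSE and predicts
    mel-spectrogram frames of the personalized speech."""
    def __init__(self, latent_dim=128, emb_dim=256, n_mels=80):
        super().__init__()
        self.proj = nn.Linear(latent_dim + emb_dim, n_mels)

    def forward(self, content, spk_emb):    # content: (B, T, latent_dim)
        spk = spk_emb.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.proj(torch.cat([content, spk], dim=-1))    # (B, T, n_mels)

# Usage sketch: at inference only the silent video is available, so the speaker
# embedding comes from a face image rather than reference audio (zero-shot voice control).
face_enc, vae, dec = FaceSpeakerEncoder(), ContentVAE(), Lip2SpeechDecoder()
face = torch.randn(1, 3, 96, 96)            # one face image from the silent video
lip_feats = torch.randn(1, 75, 512)         # per-frame lip-movement features
spk_emb = face_enc(face)
content, kl = vae(lip_feats)
mel = dec(content, spk_emb)                 # (1, 75, 80) mel-spectrogram, to be vocoded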


1. Evaluation Results of Unseen Speakers

Ground Truth | Proposed_Speech | Proposed | Proposed-VAE | Proposed-CML | Text
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | set white at b seven soon
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | bin white with n one please
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | lay white at j five please
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | set red with m five please
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | bin white by e three soon

2. Evaluation Results of Seen Speakers

Ground Truth | Proposed_Speech | Proposed | Proposed-VAE | Proposed-CML | Text
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | bin blue at l six please
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | place red at c one now
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | lay red by x six again
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | set blue at y eight now
[audio]      | [audio]         | [audio]  | [audio]      | [audio]      | lay green with q two now