Face-Driven Zero-Shot Voice Conversion with Memory-based Face-Voice Alignment

Zhengyan Sheng, Yang Ai, Yannian Chen, Zhenhua Ling*

Abstract

This paper presents a novel task, zero-shot voice conversion based on face images (zero-shot FaceVC), which aims to convert the voice characteristics of an utterance from any source speaker to those of a previously unseen target speaker, relying solely on a single face image of that speaker. To address this task, we propose a face-voice memory-based zero-shot FaceVC method. This method leverages a memory-based face-voice alignment module, in which learnable slots act as a bridge between the two modalities, allowing voice characteristics to be captured from face images. A mixed supervision strategy is also introduced to mitigate the long-standing inconsistency between the training and inference phases of voice conversion. To obtain speaker-independent, content-related representations, we transfer knowledge from a pretrained zero-shot voice conversion model to our zero-shot FaceVC model. Considering the differences between FaceVC and traditional voice conversion tasks, systematic subjective and objective metrics are designed to thoroughly evaluate the homogeneity, diversity, and consistency of the voice characteristics controlled by face images. Extensive experiments demonstrate the superiority of the proposed method on the zero-shot FaceVC task.

Released Code: https://github.com/Levent9/Zero-shot-FaceVC
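To make the memory-based alignment concrete, below is a minimal PyTorch sketch of the idea, not the released implementation: a bank of learnable slots holds paired face keys and voice values, and a face embedding recalls a speaker embedding by attending over the slots. The slot count, embedding sizes, and module name are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceVoiceMemory(nn.Module):
    """Hypothetical sketch of a memory-based face-voice alignment module.

    Learnable slots bridge the two modalities: face-key slots are matched
    against an input face embedding, and the resulting attention weights
    recall a speaker embedding from voice-value slots. All sizes below
    are assumptions for illustration.
    """

    def __init__(self, num_slots=48, face_dim=512, voice_dim=256):
        super().__init__()
        self.face_keys = nn.Parameter(torch.randn(num_slots, face_dim))
        self.voice_values = nn.Parameter(torch.randn(num_slots, voice_dim))

    def forward(self, face_emb):
        # face_emb: (batch, face_dim), e.g. from a pretrained face encoder
        attn = F.softmax(face_emb @ self.face_keys.t(), dim=-1)  # (batch, num_slots)
        return attn @ self.voice_values                          # (batch, voice_dim)

# Usage: recall a speaker embedding from a (dummy) face embedding.
memory = FaceVoiceMemory()
speaker_emb = memory(torch.randn(2, 512))  # shape (2, 256)
```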



Converted Utterance Examples

1. Converted utterances produced by different systems. Each sample shows the source and reference utterances, followed by one converted utterance per system.

Sample 1

Source speaker | Target speaker
Source utterance | Reference utterance
SpeechVC | Auto-FaceVC | attentionCVAE | FVMVC
Converted utterance

Sample 2

Source speaker | Target speaker
Source utterance | Reference utterance
SpeechVC | Auto-FaceVC | attentionCVAE | FVMVC
Converted utterance

Sample 3

Source speaker | Target speaker
Source utterance | Reference utterance
SpeechVC | Auto-FaceVC | attentionCVAE | FVMVC
Converted utterance

Sample 4

Source speaker | Target speaker
Source utterance | Reference utterance
SpeechVC | Auto-FaceVC | attentionCVAE | FVMVC
Converted utterance

2. Converted utterances from different face images of the same target speaker.

Sample 1

Source speaker | Target speaker image 1 | Target speaker image 2 | Target speaker image 3

Sample 2

Source speaker | Target speaker image 1 | Target speaker image 2 | Target speaker image 3


3. Converted utterances for different target speakers.


Voice interpolation

Each column below interpolates between a pair of speakers (A and B, C and D, E and F), sweeping the mixing weight from one voice to the other; a minimal sketch of the interpolation follows the table.

Speaker A | Speaker B | Speaker C | Speaker D | Speaker E | Speaker F

A | C | E
0.6A + 0.4B | 0.6C + 0.4D | 0.6E + 0.4F
0.4A + 0.6B | 0.4C + 0.6D | 0.4E + 0.6F
B | D | F
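The mixing weights in the table suggest plain linear interpolation between recalled speaker embeddings. A minimal sketch under that assumption (the embedding size and how the result is consumed by the conversion decoder are hypothetical):

```python
import torch

def interpolate_speakers(emb_a: torch.Tensor, emb_b: torch.Tensor,
                         weight_a: float) -> torch.Tensor:
    """Blend two speaker embeddings; weight_a = 0.6 gives '0.6A + 0.4B'."""
    return weight_a * emb_a + (1.0 - weight_a) * emb_b

# Sweep from speaker A to speaker B, matching the rows of the table above.
emb_a, emb_b = torch.randn(256), torch.randn(256)
blends = [interpolate_speakers(emb_a, emb_b, w) for w in (1.0, 0.6, 0.4, 0.0)]
```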

Converted Utterance Examples on Other Datasets

Converted utterances generated by the proposed FVMVC, pre-trained on the LRS3 dataset, with target face images from VGGFace2 and VoxCeleb2.

Sample 1: Source speaker | Target speaker image 1 | Target speaker image 2 | Target speaker image 3
Sample 2: Source speaker | Target speaker image 1 | Target speaker image 2 | Target speaker image 3
Sample 3: Source speaker | Target speaker image 1 | Target speaker image 2 | Target speaker image 3
Sample 4: Source speaker | Target speaker image 1 | Target speaker image 2 | Target speaker image 3