Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/137296
Type: Conference paper
Title: V2C: Visual Voice Cloning
Author: Chen, Q.
Tan, M.
Qi, Y.
Zhou, J.
Li, Y.
Wu, Q.
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2022, vol. 2022-June, pp. 21210-21219
Publisher: IEEE
Publisher Place: Online
Issue Date: 2022
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
ISBN: 9781665469463
ISSN: 1063-6919
Conference Name: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (18 Jun 2022 - 24 Jun 2022 : New Orleans, Louisiana)
Statement of Responsibility: Qi Chen, Mingkui Tan, Yuankai Qi, Jiaqiu Zhou, Yuanqing Li, Qi Wu
Abstract: Existing Voice Cloning (VC) tasks aim to convert a paragraph of text to speech with a desired voice specified by a reference audio. This has significantly boosted the development of artificial speech applications. However, many scenarios are not well reflected by these VC tasks, such as movie dubbing, which requires the speech to carry emotions consistent with the movie plot. To fill this gap, in this work we propose a new task named Visual Voice Cloning (V2C), which seeks to convert a paragraph of text to speech with both a desired voice specified by a reference audio and a desired emotion specified by a reference video. To facilitate research in this field, we construct a dataset, V2C-Animation, and propose a strong baseline based on existing state-of-the-art (SoTA) VC techniques. Our dataset contains 10,217 animated movie clips covering a large variety of genres (e.g., Comedy, Fantasy) and emotions (e.g., happy, sad). We further design an evaluation metric, named MCD-DTW-SL, which helps evaluate the similarity between ground-truth speeches and synthesised ones. Extensive experimental results show that even SoTA VC methods cannot generate satisfying speeches for our V2C task. We hope the proposed new task, together with the constructed dataset and evaluation metric, will facilitate research in the field of voice cloning and the broader vision-and-language community. Source code and dataset will be released at https://github.com/chenqi008/V2C.
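For readers curious how a metric like MCD-DTW-SL might be computed, the following is a minimal Python sketch, not the authors' implementation: it combines the standard mel cepstral distortion (MCD) with dynamic time warping (DTW) and scales the result by a speech-length (SL) mismatch penalty. The function name, the length-penalty form, and the path-length normalisation are assumptions for illustration; consult the released code at the linked GitHub repository for the official definition.

    import numpy as np

    def mcd_dtw_sl(mc_ref: np.ndarray, mc_syn: np.ndarray) -> float:
        """Sketch of an MCD-DTW-SL-style metric (assumed form, not the official one).

        mc_ref, mc_syn: (T, D) mel-cepstral coefficient matrices for the
        ground-truth and synthesised speech (0th coefficient excluded).
        """
        n, m = len(mc_ref), len(mc_syn)
        # Frame-pairwise mel cepstral distortion in dB: (10 / ln 10) * sqrt(2 * ||c1 - c2||^2).
        k = 10.0 / np.log(10.0) * np.sqrt(2.0)
        cost = k * np.sqrt(((mc_ref[:, None, :] - mc_syn[None, :, :]) ** 2).sum(-1))
        # Standard DTW accumulation over the frame-distance matrix.
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = cost[i - 1, j - 1] + min(
                    acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]
                )
        mcd_dtw = acc[n, m] / max(n, m)  # normalise by path length (assumption)
        # Speech-length penalty: scale by the duration mismatch ratio
        # (assumed form; the paper may define the coefficient differently).
        return mcd_dtw * (max(n, m) / min(n, m))

    # Example with dummy 13-dimensional mel cepstra of different lengths:
    # ref, syn = np.random.randn(120, 13), np.random.randn(100, 13)
    # print(mcd_dtw_sl(ref, syn))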
Keywords: Datasets and evaluation; Vision + language; Vision + X
Rights: ©2022 IEEE
DOI: 10.1109/CVPR52688.2022.02056
Grant ID: http://purl.org/au-research/grants/arc/DE190100539
Published version: https://ieeexplore.ieee.org/xpl/conhome/9878378/proceeding
Appears in Collections:Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.

