Learning Audio-Driven Viseme Dynamics for 3D Face Animation

  • Linchao Bao, Haoxian Zhang, Yue Qian, Tangli Xue, Changhai Chen, Xuefei Zhe, Di Kang


Abstract

We present a novel audio-driven facial animation approach that can generate realistic, lip-synchronized 3D facial animations from input audio. Our approach learns viseme dynamics from speech videos, produces animator-friendly viseme curves, and supports multilingual speech inputs. The core of our approach is a novel parametric viseme fitting algorithm that utilizes phoneme priors to extract viseme parameters from speech videos. Guided by phonemes, the extracted viseme curves correlate better with phonemes and are thus more controllable and friendlier to animators. To support multilingual speech inputs and generalize to unseen voices, we take advantage of deep audio feature models pretrained on multiple languages to learn the mapping from audio to viseme curves. Our audio-to-curves mapping achieves state-of-the-art performance even when the input audio is distorted in volume, pitch, or speed, or corrupted by noise. Lastly, we present a viseme scanning approach for acquiring high-fidelity viseme assets, enabling efficient speech animation production. We show that the predicted viseme curves can be applied to different viseme-rigged characters to yield a variety of personalized animations with realistic and natural facial motions. Our approach is artist-friendly and can be easily integrated into typical animation production workflows, including blendshape- or bone-based animation.
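To make the downstream use of the predicted curves concrete, below is a minimal sketch (not the paper's implementation) of how per-frame viseme curves could drive a blendshape rig. It assumes the curves arrive as a (frames x visemes) weight matrix and that each scanned viseme is a full-mesh blendshape target; all names and the data layout are hypothetical.

```python
# Minimal sketch: driving a blendshape rig with viseme curves.
# Assumed (hypothetical) data layout:
#   neutral:  (V, 3)    neutral-pose vertex positions
#   visemes:  (K, V, 3) viseme blendshape targets (e.g., from viseme scanning)
#   curves:   (T, K)    per-frame viseme weights predicted from audio
import numpy as np

def animate_blendshapes(neutral, visemes, curves):
    """Return (T, V, 3) animated vertices via linear blendshape mixing:
    frame_t = neutral + sum_k w_tk * (viseme_k - neutral)."""
    deltas = visemes - neutral[None]                       # (K, V, 3) offsets
    return neutral[None] + np.einsum("tk,kvd->tvd", curves, deltas)

# Toy usage: 10 frames, 3 visemes, 100 vertices.
rng = np.random.default_rng(0)
neutral = rng.standard_normal((100, 3))
visemes = neutral[None] + 0.1 * rng.standard_normal((3, 100, 3))
curves = np.clip(rng.standard_normal((10, 3)), 0.0, 1.0)   # weights in [0, 1]
frames = animate_blendshapes(neutral, visemes, curves)
print(frames.shape)  # (10, 100, 3)
```

For a bone-based rig, the same per-frame weights could instead interpolate per-viseme bone poses, which is one way the curves remain portable across differently rigged characters.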

Video (containing audio)


Contact

If you have any questions, please contact Linchao Bao.

Citation

@article{bao2023viseme,
  title={Learning Audio-Driven Viseme Dynamics for 3D Face Animation},
  author={Bao, Linchao and Zhang, Haoxian and Qian, Yue and Xue, Tangli and Chen, Changhai and Zhe, Xuefei and Kang, Di},
  journal={arXiv preprint arXiv:2301.06059},
  year={2023}
}