Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation

Shuling Zhao1 Fa-Ting Hong1 Xiaoshui Huang2 Dan Xu1
1The Hong Kong University of Science and Technology   2Shanghai Jiao Tong University

TL;DR: We perform motion and appearance compensation with jointly learned compensatory codebooks for high-quality talking head video generation.

[Demo videos: driving video alongside the generated talking head videos]

Abstract

Talking head video generation aims to synthesize a realistic talking head video that preserves the person's identity from a source image and the motion from a driving video. Despite promising progress in the field, simultaneously generating videos with accurate poses and fine-grained facial details remains a challenging and critical problem. Essentially, facial motion is highly complex to model precisely, and the one-shot source face image cannot provide sufficient appearance guidance during generation due to dynamic pose changes. To tackle this problem, we propose to jointly learn motion and appearance codebooks and perform multi-scale codebook compensation, effectively refining both the facial motion conditions and the appearance features for talking face image decoding. Specifically, the designed multi-scale motion and appearance codebooks are learned simultaneously in a unified framework to store representative global facial motion flow and appearance patterns. We then present a novel multi-scale motion and appearance compensation module, which uses a transformer-based codebook retrieval strategy to query complementary information from the two codebooks for joint motion and appearance compensation. The entire process produces motion flows of greater flexibility and appearance features with fewer distortions across scales, resulting in a high-quality talking head video generation framework. Extensive experiments on various benchmarks validate the effectiveness of our approach and demonstrate superior generation results, both qualitatively and quantitatively, compared to state-of-the-art methods.
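The transformer-based codebook retrieval mentioned above can be pictured as cross-attention from feature locations to learned codebook entries. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the module name CodebookRetrieval and all sizes are illustrative assumptions.

import torch
import torch.nn as nn

class CodebookRetrieval(nn.Module):
    """Illustrative cross-attention retrieval from a learned codebook (assumed design)."""
    def __init__(self, dim=256, num_entries=512, num_heads=4):
        super().__init__()
        # Learned codebook: num_entries compensatory patterns of width dim.
        self.codebook = nn.Parameter(torch.randn(num_entries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat):
        # feat: (B, C, H, W) motion or appearance feature at one scale.
        b, c, h, w = feat.shape
        q = feat.flatten(2).transpose(1, 2)                 # (B, H*W, C) queries
        kv = self.codebook.unsqueeze(0).expand(b, -1, -1)   # (B, N, C) keys/values
        retrieved, _ = self.attn(q, kv, kv)                 # query the codebook
        out = self.norm(q + retrieved)                      # residual compensation
        return out.transpose(1, 2).reshape(b, c, h, w)

# Usage: compensate a coarse 32x32 feature map.
feat = torch.randn(2, 256, 32, 32)
print(CodebookRetrieval()(feat).shape)  # torch.Size([2, 256, 32, 32])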

Method

[Figure: framework overview]

Overview of the framework. At each scale, multi-scale motion and appearance codebook compensation consists of two submodules: (i) Motion Codebook Compensation (MCC) refines the motion flow using the motion codebook; (ii) Appearance Codebook Compensation (ACC) refines the source facial feature warped by the compensated motion flow, producing the compensated appearance feature with the appearance codebook for image decoding. Both submodules are applied at every scale, and the motion and appearance codebooks are learned jointly with the whole framework; a minimal sketch of this per-scale pipeline follows.
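To make the per-scale data flow concrete, here is a hedged sketch of one compensation step, assuming flows are expressed as normalized offsets in [-1, 1] as expected by grid_sample; the function names and the pass-through compensators in the demo are placeholders, not the released code.

import torch
import torch.nn.functional as F

def identity_grid(b, h, w, device):
    # Base sampling grid in [-1, 1], as expected by F.grid_sample.
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack((gx, gy), dim=-1).expand(b, h, w, 2)

def compensate_one_scale(src_feat, flow, mcc, acc):
    # src_feat: (B, C, H, W) source appearance feature at this scale.
    # flow:     (B, 2, H, W) coarse motion flow as normalized offsets.
    flow = mcc(flow)                                   # (i) motion compensation
    b, _, h, w = flow.shape
    grid = identity_grid(b, h, w, flow.device) + flow.permute(0, 2, 3, 1)
    warped = F.grid_sample(src_feat, grid, align_corners=True)
    return acc(warped)                                 # (ii) appearance compensation

# Demo with pass-through compensators; in the full model, mcc/acc would be
# codebook-retrieval modules like the sketch above, one pair per scale.
src = torch.randn(1, 256, 32, 32)
flow = 0.05 * torch.randn(1, 2, 32, 32)
print(compensate_one_scale(src, flow, mcc=lambda f: f, acc=lambda f: f).shape)
# torch.Size([1, 256, 32, 32])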

Results

Effect of Joint Motion and Appearance Codebook Learning

Motion Flow Reconstruction with Motion Codebook

Driving Appearance Feature Reconstruction with Appearance Codebook

Effect of Multi-scale Codebook Compensation

Multi-scale Motion Codebook Compensation

Multi-scale Appearance Codebook Compensation

Comparison with State-of-the-art Methods

[Figure: qualitative comparisons with state-of-the-art methods]

Same-identity Reconstruction

Cross-identity Reenactment

BibTeX

@misc{zhao2024synergizingmotionappearancemultiscale,
  title={Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation}, 
  author={Shuling Zhao and Fa-Ting Hong and Xiaoshui Huang and Dan Xu},
  year={2024},
  eprint={2412.00719},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.00719}, 
}