The generation and editing of audio-conditioned talking portraits guided by multimodal inputs, including text, images, and videos, remains underexplored. In this paper, we present SkyReels-Audio, a unified framework for synthesizing high-fidelity and temporally coherent talking portrait videos. Built upon pretrained video diffusion transformers, our framework supports infinite-length generation and editing, while enabling diverse and controllable conditioning through multimodal inputs. We employ a hybrid curriculum learning strategy to progressively align audio with facial motion, enabling fine-grained multimodal control over long video sequences. To enhance local facial coherence, we introduce a facial mask loss and an audio-guided classifier-free guidance mechanism. A sliding-window denoising approach further fuses latent representations across temporal segments, ensuring visual fidelity and temporal consistency across extended durations and diverse identities. More importantly, we construct a dedicated data pipeline for curating high-quality triplets consisting of synchronized audio, video, and textual descriptions. Comprehensive benchmark evaluations show that SkyReels-Audio achieves superior performance in lip-sync accuracy, identity consistency, and realistic facial dynamics, particularly under complex and challenging conditions.
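To make the sliding-window fusion concrete, the sketch below illustrates one plausible way to blend denoised latents from overlapping temporal windows using fade-in/fade-out weights. The function name, tensor layout, and ramp weighting are assumptions for illustration only, not the exact scheme used in SkyReels-Audio.

```python
import torch

def fuse_window_latents(window_latents, window_len, stride, total_len):
    """Blend denoised latents from overlapping temporal windows.

    `window_latents` is a list of (C, window_len, H, W) tensors, assumed to
    come from a per-window denoising pass; window i starts at frame
    i * stride. Overlapping frames are averaged with fade-in/fade-out
    weights so adjacent segments transition smoothly.
    """
    C, _, H, W = window_latents[0].shape
    fused = torch.zeros(C, total_len, H, W)
    weight = torch.zeros(1, total_len, 1, 1)

    overlap = window_len - stride
    ramp = torch.ones(window_len)
    if overlap > 0:
        # Strictly positive ramp so the final normalization stays well defined.
        up = torch.linspace(0.0, 1.0, overlap + 2)[1:-1]
        ramp[:overlap] = up            # fade in at the window start
        ramp[-overlap:] = up.flip(0)   # fade out at the window end

    for i, lat in enumerate(window_latents):
        start = i * stride
        end = min(start + window_len, total_len)
        n = end - start
        r = ramp[:n].view(1, n, 1, 1)
        fused[:, start:end] += lat[:, :n] * r
        weight[:, start:end] += r

    # Weighted average in overlap regions; exact values elsewhere.
    return fused / weight.clamp(min=1e-6)
```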
Overview of SkyReels-Audio. Whisper encodes the resampled audio, and the resulting audio features are fused with video tokens through cross-attention layers. Image and video controls are jointly encoded with a VAE before being combined with the input noise, providing identity and environment priors for the video.
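A minimal sketch of these two conditioning paths is given below: audio features (standing in for Whisper encoder outputs) are injected into video tokens via cross-attention, and VAE-encoded image/video control latents are concatenated with the noisy video latents. Dimensions, module names, and the channel-concatenation reading of "combined with input noise" are assumptions, not the confirmed implementation.

```python
import torch
import torch.nn as nn

class AudioCrossAttention(nn.Module):
    """Fuse audio features into video tokens via cross-attention.

    Dimensions and the residual wiring are illustrative; `audio_feats`
    stands for Whisper encoder outputs computed on resampled audio.
    """
    def __init__(self, dim=1024, audio_dim=768, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(
            dim, heads, kdim=audio_dim, vdim=audio_dim, batch_first=True
        )

    def forward(self, video_tokens, audio_feats):
        # video_tokens: (B, N_video, dim); audio_feats: (B, N_audio, audio_dim)
        q = self.norm(video_tokens)
        fused, _ = self.attn(q, audio_feats, audio_feats)
        return video_tokens + fused  # residual update of the video tokens


def combine_priors_with_noise(noisy_latents, ref_latents):
    """Concatenate VAE-encoded image/video control latents with the noisy
    video latents along the channel axis; both are (B, C, T, H, W)."""
    return torch.cat([noisy_latents, ref_latents], dim=1)
```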
Given a portrait image, video, or text, along with audio input, SkyReels-Audio can generate and edit portrait videos with strong identity consistency, expressive facial motion, and natural body dynamics.
SkyReels-Audio also supports lip-movement alignment given reference videos and audio clips.
SkyReels-Audio can handle reference images of different subjects, sizes, and styles, and produces naturally consistent video results.