WANANIMATE – BACKGROUND ADD ComfyUI workflow

Hi my friends.

Today I’m presenting a cutting-edge ComfyUI workflow that addresses a frequent request from the community: adding a dynamic background to the final video output of a WanAnimate generation using the Phantom-Wan model. This setup is a potent demonstration of how modular tools like ComfyUI enable complex, multi-stage creative processes.

This workflow is a beast, meticulously engineered to combine character animation with a distinct, independently generated background video.

The input image and videos I’m using are sourced from Pexels and Pixabay.

It starts by loading a Reference Image and a Driving Video, which dictates the motion.

The ResolutionMaster is key here, setting the base output resolution (832×480 in this example) and linking these dimensions to all subsequent resizing and generation steps, ensuring coherence.
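Conceptually, this dimension linking boils down to snapping a requested size to values the video model accepts and then reusing that one pair everywhere downstream. Here is a minimal Python sketch of the idea; the function name and the multiple-of-16 constraint are my own illustrative assumptions, not ResolutionMaster’s actual implementation.

```python
def snap_resolution(width: int, height: int, multiple: int = 16):
    """Round dimensions down to the nearest multiple the model accepts,
    so every downstream resize and generation step agrees on one size."""
    return (width // multiple) * multiple, (height // multiple) * multiple

# The base output resolution used in this workflow:
print(snap_resolution(832, 480))    # (832, 480) - already aligned
print(snap_resolution(1920, 1080))  # (1920, 1072)
```

Once snapped, the same (width, height) pair is wired into every resize and generation node, which is exactly the coherence the workflow relies on.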

The “Image Resize node” resizes the driving video frames to the target dimensions and feeds them forward. A separate Background Video is also loaded and sized.

Next comes the pose, face, and character mask extraction.

The Driving Video frames are first processed by the DWPose Preprocessor to extract Pose Keypoints.

The Pose Keypoints are used by “Face Mask From Pose Keypoints” and “Image Crop By Mask And Resize” to isolate the face and provide a dedicated Face Video input for the main model.
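To give a feel for what this face-isolation pair of nodes does, here is a hedged sketch: compute a padded square box around the face keypoints, then crop each frame to it. This is only an illustration of the idea; the real nodes handle per-frame tracking, resizing, and edge cases, and the function name is my own.

```python
def face_bbox(keypoints, pad: float = 0.25):
    """Padded square bounding box around face keypoints.

    keypoints: list of (x, y) pixel coordinates for one frame's face.
    Returns (left, top, right, bottom).
    """
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    side = max(max(xs) - min(xs), max(ys) - min(ys))  # square side length
    cx = (min(xs) + max(xs)) / 2                      # box center
    cy = (min(ys) + max(ys)) / 2
    half = side / 2 + side * pad                      # grow by `pad` on every side
    return (cx - half, cy - half, cx + half, cy + half)

print(face_bbox([(100, 100), (200, 200)]))  # (75.0, 75.0, 225.0, 225.0)
```

Cropping each frame to this box is what yields the dedicated Face Video stream that the main model consumes.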

A frame from the Driving Video is sent to the “SAM2 Segmentation node” to generate a precise Character Mask based on user-defined positive and negative points; this mask is critical for separating the subject from the background.

The mask is then grown and blockified to optimize it for the generation process, a clever detail that keeps subject isolation clean.
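The grow and blockify steps can be pictured as binary dilation followed by snapping the mask to a coarse grid, so the model receives a slightly generous, block-aligned region rather than a pixel-ragged edge. A minimal pure-Python sketch of both operations, illustrative only and not the nodes’ actual code:

```python
def grow_mask(mask, pixels: int = 1):
    """Binary dilation: expand the on-region outward by `pixels`."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for _ in range(pixels):
        prev = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                # Turn a pixel on if any of its 8 neighbors was on.
                if not prev[y][x] and any(
                    prev[ny][nx]
                    for ny in (y - 1, y, y + 1)
                    for nx in (x - 1, x, x + 1)
                    if 0 <= ny < h and 0 <= nx < w
                ):
                    out[y][x] = 1
    return out

def blockify_mask(mask, block: int = 2):
    """Snap the mask to a block grid: a whole block turns on
    if any pixel inside it is on."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            if any(mask[y][x] for y in ys for x in xs):
                for y in ys:
                    for x in xs:
                        out[y][x] = 1
    return out

# A single on-pixel grows into its full 3x3 neighborhood:
print(grow_mask([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))
```

The dilation gives the generator a little slack around the subject’s silhouette, and the block alignment plays nicely with the latent grid the model actually works on.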

The “WanAnimateToVideo node” is the core engine, taking in the prepared elements:

The Character Mask and the Background Video are finally fed in, allowing the model to focus on the character and seamlessly integrate the new background.
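Roughly speaking, the blend the mask enables is a per-pixel choice: inside the mask, keep the generated character; outside it, take the new background. The model does this far more gracefully (in latent space, with temporal consistency), but the underlying picture is the same. A toy sketch, with illustrative names:

```python
def composite(character, background, mask):
    """Where the mask is on, keep the character pixel;
    otherwise take the background pixel.
    All inputs are 2D grids of equal size."""
    h, w = len(mask), len(mask[0])
    return [[character[y][x] if mask[y][x] else background[y][x]
             for x in range(w)] for y in range(h)]

char = [[1, 1], [1, 1]]   # character frame (toy pixel values)
bg   = [[9, 9], [9, 9]]   # background frame
mask = [[1, 0], [0, 1]]   # character mask
print(composite(char, bg, mask))  # [[1, 9], [9, 1]]
```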

The final “KSampler (Advanced)” uses the outputs from the Phantom step and a separate Phantom-Wan-14B model (loaded in the subgraph) to perform a focused, multi-step refinement of the character based on the generated latent.

This workflow is attached below, along with the necessary video and photographic assets sourced from Pexels and Pixabay, copyright-free under their respective licenses for both personal and commercial use.

Thank you for the support, and I’ll see you in the next post—right here on Patreon and across my web and social channels!

www.carminecristalloscalzi.com

www.faidenblass.com

www.instagram.com/carminecristalloscalzi
