MVOC: a training-free multiple video object composition method with diffusion models

Wei Wang, Yaosen Chen, Yuegen Liu1, Qi Yuan1, Shubin Yang1,2, Yanru Zhang2
1Sobey Media Intelligence Laboratory
2University of Electronic Science and Technology of China

*Equal Contribution. #Corresponding Author.


Given multiple video objects (e.g., Background, Object1, Object2), our method renders the interaction effects between the objects while maintaining the motion and identity consistency of each object in the composited video.

Abstract


Video composition is a core task of video editing. Although diffusion-based image composition has been highly successful, extending it to video object composition is not straightforward: the composited video must not only exhibit the interaction effects between objects but also preserve the motion and identity consistency of each object, which is necessary for a physically harmonious result. To address this challenge, we propose a Multiple Video Object Composition (MVOC) method based on diffusion models. Specifically, we first perform DDIM inversion on each video object to obtain its noise features. Secondly, we combine and edit the objects with image editing methods to obtain the first frame of the composited video. Finally, we use an image-to-video generation model to composite the video with feature and attention injections through the Video Object Dependence Module, a training-free conditional guidance operation for video generation that coordinates the features and attention maps of the various, possibly non-independent, objects in the composited video. The resulting generative process not only constrains the objects in the generated video to be consistent with the original object motions and identities, but also introduces interaction effects between objects. Extensive experiments demonstrate that the proposed method outperforms existing state-of-the-art approaches.
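To make the first step concrete, the snippet below sketches deterministic DDIM inversion for one video object's latents. It assumes a standard epsilon-prediction UNet and a cumulative-alpha noise schedule; the names unet, alphas_cumprod, timesteps, and cond are placeholders for illustration, not the released MVOC code.

import torch

@torch.no_grad()
def ddim_invert(latents, unet, alphas_cumprod, timesteps, cond):
    # Run the DDIM update rule backwards (clean -> noisy) so that the
    # video-object latents are mapped to noise features for later injection.
    x = latents
    trajectory = [x]
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        eps = unet(x, t_cur, cond)                            # predicted noise at t_cur
        a_cur, a_next = alphas_cumprod[t_cur], alphas_cumprod[t_next]
        x0 = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()    # predicted clean latent
        x = a_next.sqrt() * x0 + (1 - a_next).sqrt() * eps    # re-noise to t_next
        trajectory.append(x)
    return trajectory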

Approach


Multiple video object composition framework. Our method is a two-stage approach: video object preprocessing and generative video editing. In the preprocessing stage, we perform DDIM inversion, object extraction and pasting, mask extraction, and first-frame editing. In the editing stage, we edit the first frame with an image editing model and then use video object dependence as conditional guidance for video generation, as sketched below.
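As a rough illustration of how the two stages fit together, here is a minimal sketch. The helpers models.invert, models.edit_image, and models.i2v stand in for a DDIM-invertible diffusion backbone, an image editing model, and an image-to-video generator with a guidance hook; these names are assumptions for this sketch, not the released implementation.

def paste_objects(bg_frame, obj_frames, masks):
    # Alpha-composite each object's first frame onto the background frame.
    out = bg_frame.clone()
    for obj, m in zip(obj_frames, masks):
        out = m * obj + (1 - m) * out
    return out

def compose_videos(background, objects, masks, models):
    # Stage 1: preprocessing -- invert every video object to noise features
    # and build/edit the first frame of the composited video.
    noise_feats = [models.invert(v) for v in (background, *objects)]
    pasted = paste_objects(background[0], [o[0] for o in objects], masks)
    first_frame = models.edit_image(pasted)

    # Stage 2: generative editing -- image-to-video generation conditioned on
    # the edited first frame, guided by feature/attention injection from the
    # inverted objects (the video object dependence step).
    return models.i2v(first_frame, guidance={"features": noise_feats, "masks": masks})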

Comparison


Comparison on BoatSurf.

Comparison on BirdSeal.

Comparison on MonkeySwan.

Comparison on DuckCrane.

Comparison on CraneSeal.

Comparison on RiderDeer.

Comparison on RobotCat.


Quantitative Comparison

The comparisons of short- and long-range consistency are shown in Table 1 and Table 2, respectively. CutPaste, Poisson, and Harmonizer are non-generative methods; they inherently have better temporal consistency, but they cannot produce interaction effects and their results are not harmonious. The other methods, including ours, are generative, and their composited videos are more harmonious. Nonetheless, the average of our temporal consistency metrics is still superior to that of all compared methods.
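For reference, temporal consistency is commonly measured as the cosine similarity of CLIP image features between video frames. The sketch below (using Hugging Face transformers) computes such a score between frames that are stride apart; stride = 1 approximates short-range consistency and a larger stride approximates long-range consistency. This is a generic illustration of the metric family, not necessarily the exact protocol behind Table 1 and Table 2.

import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def temporal_consistency(frames, stride=1):
    # frames: list of PIL images from the composited video.
    inputs = processor(images=frames, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)        # unit-normalize
    sims = (feats[:-stride] * feats[stride:]).sum(dim=-1)   # cosine similarity per frame pair
    return sims.mean().item()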


Citation



@inproceedings{wang2024mvoc,
  title     = {MVOC: a training-free multiple video object composition method with diffusion models},
  author    = {Wei Wang and Yaosen Chen and Yuegen Liu and Qi Yuan and Shubin Yang and Yanru Zhang},
  year      = {2024},
  booktitle = {arXiv}
}