AnimateDiff: A Little Helper for Anime Creation

AnimateDiff is an open-source project that can animate images generated by text-to-image diffusion models without any model-specific fine-tuning.

Using off-the-shelf models from Civitai alone, it can generate a whole series of animations.

Paper link: https://arxiv.org/abs/2307.04725

How it works:

The core of the framework is to insert a newly initialized motion modeling module into a frozen text-to-image base model and train it on video clips so that it distills reasonable motion priors. Once training is complete, simply injecting this motion module turns every personalized model derived from the same base model into a text-driven animation generator, producing diverse, personalized animated clips with no further fine-tuning.
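To make the idea concrete, here is a minimal NumPy sketch of the injection pattern described above: a frozen per-frame layer standing in for a Stable Diffusion block, plus a newly initialized motion module that mixes information only across the frame axis through a residual branch. All names (`image_layer`, `motion_module`, `W_motion`) are illustrative assumptions, not AnimateDiff's actual code; the key property it demonstrates is that a zero-initialized motion module leaves the frozen model's output untouched at the start of training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base text-to-image layer (stands in for a Stable Diffusion block):
# a fixed per-frame linear map whose weights are never updated.
W_frozen = rng.standard_normal((64, 64))

def image_layer(x):
    # x: (frames, tokens, dim) -- each frame is processed independently
    return x @ W_frozen

# Newly initialized motion module: operates ONLY along the frame axis.
# Its frame-mixing weights start at zero, so the residual branch is the
# identity and injection does not disturb the frozen model's output.
W_motion = np.zeros((8, 8))  # (trainable in practice)

def motion_module(x):
    # x: (frames, tokens, dim); mix features across the frame dimension
    mixed = np.einsum("fg,gtd->ftd", W_motion, x)
    return x + mixed  # residual connection

x = rng.standard_normal((8, 16, 64))  # 8 frames, 16 tokens, 64 channels
h = image_layer(x)       # frozen spatial processing, frames independent
out = motion_module(h)   # injected temporal layer links the frames

# With zero-initialized motion weights, the output equals the frozen model's:
print(np.allclose(out, h))  # True
```

In the real model the motion module is temporal self-attention rather than a linear mix, but the injection mechanics, frozen spatial weights, trainable temporal weights, and a residual connection, follow this same shape.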


You can generate animations with the official GUI.

You can also use it inside the Stable Diffusion WebUI via this extension: https://github.com/continue-revolution/sd-webui-animatediff