Hello HN! We have released a major update of our image-to-video diffusion model, DynamiCrafter, with better dynamics, higher resolution, and stronger coherence.
DynamiCrafter can animate open-domain still images based on text prompts by leveraging pre-trained video diffusion priors. Please check our project page and paper for more information. We will continue to improve the model's performance.
Comparisons with Stable Video Diffusion and PikaLabs can be found at https://www.youtube.com/watch?v=0NfmIsNAg-g
Online demo: https://huggingface.co/spaces/Doubiiu/DynamiCrafter
Our project page: https://doubiiu.github.io/projects/DynamiCrafter/
Arxiv link: https://arxiv.org/abs/2310.12190
Things lifted wholesale from training data but plastered together into new works will leave us in an uncanny state of semi-permanent déjà vu, where our little pattern-matching blobs constantly chirp out subtle connections.