In this session, you'll learn how to use Warpfusion for video-to-video generation. Warpfusion runs Stable Diffusion on each frame of the source video to generate a customized image, and Stable Diffusion itself can be controlled and customized with several tools, including ControlNet.
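To make the per-frame idea concrete, here is a minimal, standalone sketch of ControlNet-guided image-to-image generation using the diffusers library. This is not Warpfusion's actual pipeline (which adds frame warping and consistency logic on top); the model IDs, file names, prompt, and strength value are illustrative assumptions.

```python
# Illustrative sketch only: re-rendering a single video frame with Stable
# Diffusion img2img guided by a ControlNet, via the diffusers library.
# Warpfusion's own pipeline differs; model IDs and parameters are assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

frame = load_image("frame_0001.png")          # hypothetical extracted video frame
control = load_image("frame_0001_canny.png")  # hypothetical pre-computed edge map

styled = pipe(
    prompt="a watercolor painting",  # example style prompt
    image=frame,                     # source frame to re-render
    control_image=control,           # ControlNet conditioning image
    strength=0.5,                    # how far the output may deviate from the frame
    num_inference_steps=20,
).images[0]
styled.save("styled_0001.png")
```

Repeating this for every frame, with the previous output warped forward as the next input, is the core of what Warpfusion automates.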
Requirements:
- At least 2GB available on your Google Drive
- Google Colab Pro or Colab Pro+
- At least 16GB of Nvidia GPU memory (you can verify this with the check shown after this list)
- Warpfusion script v15
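Before opening the notebook, you can optionally confirm the GPU and Drive requirements above from a Colab cell. This check is not part of the Warpfusion notebook; the /content/drive mount path is the standard Colab default.

```python
# Optional sanity check (not part of the Warpfusion notebook): confirm GPU
# memory and free Google Drive space from a Colab cell.
import shutil
import torch
from google.colab import drive

assert torch.cuda.is_available(), "No CUDA GPU detected - change the runtime type"
gpu_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
print(f"GPU: {torch.cuda.get_device_name(0)} with {gpu_gb:.1f} GB")  # want >= 16 GB

drive.mount("/content/drive")  # standard Colab mount point
free_gb = shutil.disk_usage("/content/drive/MyDrive").free / 1e9
print(f"Free Drive space: {free_gb:.1f} GB")  # want >= 2 GB
```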
Directions:
- Open the Notebook in Colab
- Change the runtime type to T4 or a higher GPU
- Run parts 1.1-1.3 to install the Stable Diffusion dependencies (~6 min)
- Check the "Skip Install" box in part 1.3
- Restart the kernel and run all of the cells from the beginning
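The restart can be done from Colab's Runtime menu; if you prefer to trigger it from code, a commonly used Colab trick is to kill the Python process so the runtime restarts itself. This is optional and not something the Warpfusion notebook requires.

```python
# Optional Colab trick: force a runtime restart from code instead of using the
# Runtime > Restart runtime menu item. Killing the Python process makes Colab
# restart the kernel; once it comes back up, run all cells from the top.
import os

os.kill(os.getpid(), 9)
```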