---
pipeline_tag: video-to-video
license: cc-by-nc-4.0
---
![model example](https://i.imgur.com/ze1DGOJ.png)
[example outputs](https://www.youtube.com/watch?v=HO3APT_0UA4) (courtesy of [dotsimulate](https://www.instagram.com/dotsimulate/))

# zeroscope_v2 1111 models
A collection of watermark-free, Modelscope-based video models capable of generating high-quality video at [448x256](https://huggingface.co/cerspense/zeroscope_v2_dark_30x448x256), [576x320](https://huggingface.co/cerspense/zeroscope_v2_576w) and [1024x576](https://huggingface.co/cerspense/zeroscope_v2_XL). These models were trained from the [original weights](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis) with offset noise, using 9,923 clips and 29,769 tagged frames.<br />
This collection makes it easy to switch between models with the new dropdown menu in the 1111 extension.
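Outside the 1111 extension, the per-resolution repos linked above (for example zeroscope_v2_576w) can also be loaded with the `diffusers` library. Below is a minimal sketch, assuming a recent `diffusers` install; the prompt and generation arguments are only examples, and output handling differs between diffusers versions:

```python
# Sketch: text-to-video generation with the 576x320 checkpoint via diffusers.
# Assumes diffusers, transformers, accelerate and torch are installed and a CUDA GPU is available.
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "cerspense/zeroscope_v2_576w", torch_dtype=torch.float16
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()  # keeps VRAM usage down

# Resolution should match the checkpoint (576x320 here).
result = pipe(
    "a red panda eating bamboo",  # example prompt
    num_inference_steps=40,
    height=320,
    width=576,
    num_frames=24,
)
frames = result.frames[0]  # on older diffusers versions, result.frames is already the frame list
export_to_video(frames, "output.mp4")
```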

### Using it with the 1111 text2video extension
Simply download the contents of this repo to `stable-diffusion-webui\models\text2video`.
Or, manually download the model folders you want, along with `VQGAN_autoencoder.pth`.
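If you prefer to script the download, here is a minimal sketch using `huggingface_hub`; the repo id and target path are assumptions, so adjust them to this repo's actual id and your webui install location:

```python
# Sketch: fetch the whole repo straight into the 1111 text2video models folder.
# Assumes huggingface_hub is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="cerspense/zeroscope_v2",  # assumed repo id; replace with this repo's actual id
    local_dir="stable-diffusion-webui/models/text2video",  # path inside your webui install
    local_dir_use_symlinks=False,  # copy real files so the extension can read them in place
)
```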

Thanks to [dotsimulate](https://www.instagram.com/dotsimulate/) for the config files.

Thanks to [camenduru](https://github.com/camenduru), [kabachuha](https://github.com/kabachuha), [ExponentialML](https://github.com/ExponentialML), [VANYA](https://twitter.com/veryVANYA), [polyware](https://twitter.com/polyware_ai) and [tin2tin](https://github.com/tin2tin).