clementchadebec committed on
Commit 84e9360 • 1 Parent(s): 9ded571

PR: Comfy and LoRA (#5)


- add safetensor for comfy (c6ecd8341ca1daaaed27d5babb2dbd234e20fec3)
- add workflow for comfy (d115e170adf904da51663829ee92902cd409f042)
- update Readme with comfy instructions (ebec059af191bae6e3d7f904d277b816235822c9)
- readme re-order (c331c5526b5e4d54681f875050195f49090cbc06)

Files changed (3)
  1. README.md +14 -0
  2. comfy/FlashSDXL.safetensors +3 -0
  3. comfy/workflow.json +519 -0
README.md CHANGED
@@ -55,6 +55,18 @@ image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
 <img style="width:400px;" src="images/raccoon.png">
 </p>
 
+# How to use in ComfyUI?
+
+To use FlashSDXL locally with ComfyUI you need to:
+
+1. Make sure your ComfyUI install is up to date.
+2. Download the checkpoint from [huggingface](https://huggingface.co/jasperai/flash-sdxl).
+If you wonder how, go to "Files and versions", open the `comfy/` folder and hit the download button next to `FlashSDXL.safetensors`.
+3. Move the new checkpoint file to your local `ComfyUI/models/loras/` folder.
+4. Use it as a LoRA on top of `sd_xl_base_1.0_0.9vae.safetensors`; a simple ComfyUI `workflow.json` is provided in this repo (available in the same `comfy/` folder).
+
+> Disclaimer: the model has been trained to work with a CFG scale of 1 and an LCM scheduler, but these parameters can be tweaked a bit.
+
 # Combining Flash Diffusion with Existing LoRAs 🎨
 
 FlashSDXL can also be combined with existing LoRAs to unlock few steps generation in a **training free** manner. It can be integrated straight to Hugging Face pipelines. See an example below.
@@ -98,6 +110,8 @@ image = pipe(
 <img style="width:400px;" src="images/corgi.jpg">
 </p>
 
+> Hint 💡: You can also add an extra LoRA to the provided comfy workflow and test it on your machine.
+
 # Combining Flash Diffusion with Existing ControlNets 🎨
 
 FlashSDXL can also be combined with existing ControlNets to unlock few steps generation in a **training free** manner. It can be integrated straight to Hugging Face pipelines. See an example below.
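The new section covers the ComfyUI route, while the hunk context above shows the README's own diffusers call (`pipe(prompt, num_inference_steps=4, guidance_scale=0)`). For readers staying in Python, here is a minimal sketch of that diffusers usage, consistent with the disclaimer (4 steps, LCM scheduler, low CFG); the base checkpoint id and exact arguments are assumptions of this sketch, not part of the commit.

```python
# Minimal sketch (not the repo's official snippet): load FlashSDXL as a LoRA
# on top of SDXL base and sample in 4 steps with an LCM scheduler.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# FlashSDXL distilled LoRA, the repo referenced in the download step above.
pipe.load_lora_weights("jasperai/flash-sdxl")
pipe.fuse_lora()

# The disclaimer recommends an LCM scheduler and a CFG scale around 1.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "a raccoon reading a book in a lush forest"
image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
image.save("raccoon.png")
```

In ComfyUI the same settings map onto the KSampler widgets of the provided workflow: 4 steps, CFG 1 and the `lcm` sampler.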
comfy/FlashSDXL.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd925177abbcf0a8e290f00e0264e1badbc292ef25cf99f455a5edb9c6b43c1b
+size 371841776
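These three lines are only a Git LFS pointer; the actual ~372 MB checkpoint is stored out of band. Below is a small sketch of how one could fetch the file with `huggingface_hub` and check it against the pointer's sha256 and size; the verification step is an editor's suggestion, not something the commit prescribes.

```python
# Sketch: download comfy/FlashSDXL.safetensors and verify it against the
# oid/size recorded in the LFS pointer above.
import hashlib
import os

from huggingface_hub import hf_hub_download

EXPECTED_SHA256 = "fd925177abbcf0a8e290f00e0264e1badbc292ef25cf99f455a5edb9c6b43c1b"
EXPECTED_SIZE = 371841776  # bytes

path = hf_hub_download(
    repo_id="jasperai/flash-sdxl",
    filename="comfy/FlashSDXL.safetensors",
)

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "unexpected file size"
assert digest.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("FlashSDXL.safetensors verified:", path)
```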
comfy/workflow.json ADDED
@@ -0,0 +1,519 @@
+{
+  "last_node_id": 16,
+  "last_link_id": 24,
+  "nodes": [
+    {
+      "id": 14,
+      "type": "CheckpointLoaderSimple",
+      "pos": [
+        -125,
+        278
+      ],
+      "size": {
+        "0": 315,
+        "1": 98
+      },
+      "flags": {},
+      "order": 0,
+      "mode": 0,
+      "outputs": [
+        {
+          "name": "MODEL",
+          "type": "MODEL",
+          "links": [
+            16
+          ],
+          "shape": 3,
+          "slot_index": 0
+        },
+        {
+          "name": "CLIP",
+          "type": "CLIP",
+          "links": [
+            17
+          ],
+          "shape": 3,
+          "slot_index": 1
+        },
+        {
+          "name": "VAE",
+          "type": "VAE",
+          "links": [
+            18
+          ],
+          "shape": 3,
+          "slot_index": 2
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "CheckpointLoaderSimple"
+      },
+      "widgets_values": [
+        "sd_xl_base_1.0_0.9vae.safetensors"
+      ]
+    },
+    {
+      "id": 16,
+      "type": "LoraLoader",
+      "pos": [
+        -126,
+        -66
+      ],
+      "size": {
+        "0": 315,
+        "1": 126
+      },
+      "flags": {},
+      "order": 3,
+      "mode": 0,
+      "inputs": [
+        {
+          "name": "model",
+          "type": "MODEL",
+          "link": 20
+        },
+        {
+          "name": "clip",
+          "type": "CLIP",
+          "link": 21
+        }
+      ],
+      "outputs": [
+        {
+          "name": "MODEL",
+          "type": "MODEL",
+          "links": [
+            24
+          ],
+          "shape": 3,
+          "slot_index": 0
+        },
+        {
+          "name": "CLIP",
+          "type": "CLIP",
+          "links": [
+            22,
+            23
+          ],
+          "shape": 3,
+          "slot_index": 1
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "LoraLoader"
+      },
+      "widgets_values": [
+        "Little_Tinies.safetensors",
+        1,
+        1
+      ]
+    },
+    {
+      "id": 12,
+      "type": "LoraLoader",
+      "pos": [
+        -123,
+        104
+      ],
+      "size": {
+        "0": 315,
+        "1": 126
+      },
+      "flags": {},
+      "order": 2,
+      "mode": 0,
+      "inputs": [
+        {
+          "name": "model",
+          "type": "MODEL",
+          "link": 16,
+          "slot_index": 0
+        },
+        {
+          "name": "clip",
+          "type": "CLIP",
+          "link": 17
+        }
+      ],
+      "outputs": [
+        {
+          "name": "MODEL",
+          "type": "MODEL",
+          "links": [
+            20
+          ],
+          "shape": 3,
+          "slot_index": 0
+        },
+        {
+          "name": "CLIP",
+          "type": "CLIP",
+          "links": [
+            21
+          ],
+          "shape": 3,
+          "slot_index": 1
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "LoraLoader"
+      },
+      "widgets_values": [
+        "FlashSDXL.safetensors",
+        1,
+        1
+      ]
+    },
+    {
+      "id": 6,
+      "type": "CLIPTextEncode",
+      "pos": [
+        328,
+        -81
+      ],
+      "size": {
+        "0": 422.84503173828125,
+        "1": 164.31304931640625
+      },
+      "flags": {},
+      "order": 4,
+      "mode": 0,
+      "inputs": [
+        {
+          "name": "clip",
+          "type": "CLIP",
+          "link": 22
+        }
+      ],
+      "outputs": [
+        {
+          "name": "CONDITIONING",
+          "type": "CONDITIONING",
+          "links": [
+            4
+          ],
+          "slot_index": 0
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "CLIPTextEncode"
+      },
+      "widgets_values": [
+        "a raccoon reading a book in a lush forest\n"
+      ]
+    },
+    {
+      "id": 3,
+      "type": "KSampler",
+      "pos": [
+        834,
+        -44
+      ],
+      "size": {
+        "0": 315,
+        "1": 262
+      },
+      "flags": {},
+      "order": 6,
+      "mode": 0,
+      "inputs": [
+        {
+          "name": "model",
+          "type": "MODEL",
+          "link": 24
+        },
+        {
+          "name": "positive",
+          "type": "CONDITIONING",
+          "link": 4
+        },
+        {
+          "name": "negative",
+          "type": "CONDITIONING",
+          "link": 6
+        },
+        {
+          "name": "latent_image",
+          "type": "LATENT",
+          "link": 2
+        }
+      ],
+      "outputs": [
+        {
+          "name": "LATENT",
+          "type": "LATENT",
+          "links": [
+            7
+          ],
+          "slot_index": 0
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "KSampler"
+      },
+      "widgets_values": [
+        273106919086700,
+        "randomize",
+        4,
+        1,
+        "lcm",
+        "normal",
+        1
+      ]
+    },
+    {
+      "id": 7,
+      "type": "CLIPTextEncode",
+      "pos": [
+        331,
+        139
+      ],
+      "size": {
+        "0": 425.27801513671875,
+        "1": 180.6060791015625
+      },
+      "flags": {},
+      "order": 5,
+      "mode": 0,
+      "inputs": [
+        {
+          "name": "clip",
+          "type": "CLIP",
+          "link": 23
+        }
+      ],
+      "outputs": [
+        {
+          "name": "CONDITIONING",
+          "type": "CONDITIONING",
+          "links": [
+            6
+          ],
+          "slot_index": 0
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "CLIPTextEncode"
+      },
+      "widgets_values": [
+        ""
+      ]
+    },
+    {
+      "id": 5,
+      "type": "EmptyLatentImage",
+      "pos": [
+        384,
+        393
+      ],
+      "size": {
+        "0": 315,
+        "1": 106
+      },
+      "flags": {},
+      "order": 1,
+      "mode": 0,
+      "outputs": [
+        {
+          "name": "LATENT",
+          "type": "LATENT",
+          "links": [
+            2
+          ],
+          "slot_index": 0
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "EmptyLatentImage"
+      },
+      "widgets_values": [
+        1024,
+        1024,
+        1
+      ]
+    },
+    {
+      "id": 8,
+      "type": "VAEDecode",
+      "pos": [
+        1218,
+        -1
+      ],
+      "size": {
+        "0": 210,
+        "1": 46
+      },
+      "flags": {},
+      "order": 7,
+      "mode": 0,
+      "inputs": [
+        {
+          "name": "samples",
+          "type": "LATENT",
+          "link": 7
+        },
+        {
+          "name": "vae",
+          "type": "VAE",
+          "link": 18
+        }
+      ],
+      "outputs": [
+        {
+          "name": "IMAGE",
+          "type": "IMAGE",
+          "links": [
+            19
+          ],
+          "slot_index": 0
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "VAEDecode"
+      }
+    },
+    {
+      "id": 15,
+      "type": "PreviewImage",
+      "pos": [
+        1471,
+        -1
+      ],
+      "size": {
+        "0": 210,
+        "1": 246
+      },
+      "flags": {},
+      "order": 8,
+      "mode": 0,
+      "inputs": [
+        {
+          "name": "images",
+          "type": "IMAGE",
+          "link": 19
+        }
+      ],
+      "properties": {
+        "Node name for S&R": "PreviewImage"
+      }
+    }
+  ],
+  "links": [
+    [
+      2,
+      5,
+      0,
+      3,
+      3,
+      "LATENT"
+    ],
+    [
+      4,
+      6,
+      0,
+      3,
+      1,
+      "CONDITIONING"
+    ],
+    [
+      6,
+      7,
+      0,
+      3,
+      2,
+      "CONDITIONING"
+    ],
+    [
+      7,
+      3,
+      0,
+      8,
+      0,
+      "LATENT"
+    ],
+    [
+      16,
+      14,
+      0,
+      12,
+      0,
+      "MODEL"
+    ],
+    [
+      17,
+      14,
+      1,
+      12,
+      1,
+      "CLIP"
+    ],
+    [
+      18,
+      14,
+      2,
+      8,
+      1,
+      "VAE"
+    ],
+    [
+      19,
+      8,
+      0,
+      15,
+      0,
+      "IMAGE"
+    ],
+    [
+      20,
+      12,
+      0,
+      16,
+      0,
+      "MODEL"
+    ],
+    [
+      21,
+      12,
+      1,
+      16,
+      1,
+      "CLIP"
+    ],
+    [
+      22,
+      16,
+      1,
+      6,
+      0,
+      "CLIP"
+    ],
+    [
+      23,
+      16,
+      1,
+      7,
+      0,
+      "CLIP"
+    ],
+    [
+      24,
+      16,
+      0,
+      3,
+      0,
+      "MODEL"
+    ]
+  ],
+  "groups": [],
+  "config": {},
+  "extra": {
+    "ds": {
+      "scale": 0.9090909090909092,
+      "offset": {
+        "0": 453.4622885983967,
+        "1": 178.34795716069297
+      }
+    }
+  },
+  "version": 0.4
+}
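To sanity-check the workflow above after downloading it, one can parse the JSON and confirm that the KSampler widgets carry the settings mentioned in the README disclaimer (4 steps, CFG 1, the `lcm` sampler). The sketch below assumes the file has been saved locally as `comfy/workflow.json`.

```python
# Sketch: inspect comfy/workflow.json and print each node type plus the
# KSampler settings, which should match the README disclaimer.
import json

with open("comfy/workflow.json") as f:
    workflow = json.load(f)

for node in workflow["nodes"]:
    print(f'node {node["id"]:>2}: {node["type"]}')

ksampler = next(n for n in workflow["nodes"] if n["type"] == "KSampler")
seed, seed_mode, steps, cfg, sampler, scheduler, denoise = ksampler["widgets_values"]
print(f"steps={steps}, cfg={cfg}, sampler={sampler}, scheduler={scheduler}, denoise={denoise}")
```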