---
license: apache-2.0
pipeline_tag: unconditional-image-generation
tags:
- biology
library_name: diffusers
---
Diffusion model trained on a public dataset of images from the [Image Data Resource](https://idr.openmicroscopy.org/cell/) to generate highly detailed, accurate depictions of fluorescent and super-resolution cell images.

# Usage

```py
from diffusers import DDPMPipeline

model_id = "nakajimayoshi/ddpm-iris-256"

# load model and scheduler; DDPMPipeline can be swapped for DDIMPipeline or
# PNDMPipeline for faster inference
ddpm = DDPMPipeline.from_pretrained(model_id)

# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]

# save image
image.save("ddpm_generated_image.png")
```
The role of generative AI in science is a new discussion, and its merits have yet to be evaluated. While current image-to-image and text-to-image models make it easier than ever to create stunning images, they lack the specialized training data needed to reproduce the accurate, detailed structures found in fluorescent cell microscopy.
We propose ddpm-IRIS, a diffusion network leveraging the [DDPM architecture](https://arxiv.org/abs/2006.11239) to generate visual depictions of cell features with more detail than traditional models.
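The DDPM forward (noising) process underlying this architecture can be sketched in a few lines. This is an illustrative, dependency-free sketch using the paper's default linear beta schedule (1e-4 to 0.02 over 1000 steps); the variable and function names are ours, not from this repository's training code.

```python
import math

# Linear beta schedule, as in Ho et al. (2020): 1000 steps from 1e-4 to 0.02.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar_t = product over s <= t of (1 - beta_s)
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

def noise_value(x0, t, eps):
    """Sample q(x_t | x_0) for a single value:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    a = alpha_bars[t]
    return math.sqrt(a) * x0 + math.sqrt(1.0 - a) * eps
```

As `t` grows, `alpha_bars[t]` shrinks toward zero, so the signal term vanishes and `x_t` approaches pure noise; the network is trained to predict `eps` and invert this process.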
Hyperparameters:
- image_size = 256
- train_batch_size = 16
- eval_batch_size = 16
- num_epochs = 50
- gradient_accumulation_steps = 1
- learning_rate = 1e-4
- lr_warmup_steps = 500
- save_image_epochs = 10
- save_model_epochs = 30
- mixed_precision = 'fp16'
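The learning-rate warmup listed above can be sketched as follows. This is a simplified illustration assuming a linear ramp from 0 to the peak rate over `lr_warmup_steps`, held constant afterward; the actual training run may have used a decaying schedule after warmup, and the function name is ours.

```python
# Hyperparameters from the list above
LEARNING_RATE = 1e-4
LR_WARMUP_STEPS = 500

def lr_at_step(step):
    """Linear warmup to LEARNING_RATE over LR_WARMUP_STEPS, then constant
    (simplified sketch; a real schedule would typically decay afterward)."""
    if step < LR_WARMUP_STEPS:
        return LEARNING_RATE * step / LR_WARMUP_STEPS
    return LEARNING_RATE
```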
Trained on a single NVIDIA A100 40GB GPU for 50 epochs (~2.5 hours).