---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
extra_gated_prompt: >-
  By clicking "Agree", you agree to the [FluxDev Non-Commercial License
  Agreement](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev/blob/main/LICENSE.md)
  and acknowledge the [Acceptable Use
  Policy](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev/blob/main/POLICY.md).
tags:
- image-generation
- flux
- diffusion-single-file
---

`FLUX.1 Canny [dev]` is a 12 billion parameter rectified flow transformer capable of generating an image based on a text description while following the structure of a given input image. For more information, please read our blog post.
# Key Features
- Cutting-edge output quality.
- Impressive prompt adherence while maintaining the structure of source images based on Canny edges.
- Trained using guidance distillation, making `FLUX.1 Canny [dev]` more efficient.
- Open weights to drive new scientific research and to empower artists to develop innovative workflows.
- Generated outputs can be used for personal, scientific, and commercial purposes as described in the `FLUX.1 [dev]` Non-Commercial License.
# Usage

We provide a reference implementation of `FLUX.1 Canny [dev]`, as well as sampling code, in a dedicated GitHub repository. Developers and creatives looking to build on top of `FLUX.1 Canny [dev]` are encouraged to use this as a starting point.
## API Endpoints

`FLUX.1 Canny [pro]` is available via our API at bfl.ml.
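For programmatic access, a request to the BFL API might look like the sketch below. Note that the endpoint path, payload fields, header name, and `BFL_API_KEY` environment variable are assumptions based on the public BFL API documentation, not part of this card; consult the API docs for the authoritative schema.

```python
import os
import requests

# Sketch only: the endpoint path and payload fields below are assumptions
# based on the public BFL API docs, not part of this model card.
response = requests.post(
    "https://api.bfl.ml/v1/flux-pro-1.0-canny",
    headers={"x-key": os.environ["BFL_API_KEY"]},
    json={
        "prompt": "A robot made of exotic candies and chocolates.",
        "control_image": "<base64-encoded input image>",
    },
)
response.raise_for_status()
# The API is asynchronous: the response contains a request id that is then
# polled (e.g. via a get_result endpoint) until the generated image is ready.
print(response.json())
```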
## Diffusers

To use `FLUX.1-Canny-dev` with the 🧨 diffusers python library, first install or upgrade `diffusers` and `controlnet_aux`:

```shell
pip install -U diffusers controlnet_aux
```

Then you can use `FluxControlPipeline` to run the model:
```python
import torch
from controlnet_aux import CannyDetector
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16).to("cuda")

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

# Extract Canny edges from the source image; the edge map conditions the generation.
processor = CannyDetector()
control_image = processor(control_image, low_threshold=50, high_threshold=200, detect_resolution=1024, image_resolution=1024)

image = pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=50,
    guidance_scale=30.0,
).images[0]
image.save("output.png")
```
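On GPUs where the 12B transformer does not fit in memory even in bf16, diffusers' model CPU offloading can be enabled instead of moving the whole pipeline to `"cuda"`. A minimal sketch:

```python
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
)
# Keep submodules on the CPU and move each one to the GPU only while it runs,
# trading inference speed for a much lower peak VRAM footprint (requires accelerate).
pipe.enable_model_cpu_offload()
```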
# Limitations
- This model is not intended or able to provide factual information.
- As a statistical model this checkpoint might amplify existing societal biases.
- The model may fail to generate output that matches the prompts.
- Prompt following is heavily influenced by the prompting style.
# Out-of-Scope Use

The model and its derivatives may not be used:
- In any way that violates any applicable national, federal, state, local or international law or regulation.
- For the purpose of exploiting, harming, or attempting to exploit or harm minors in any way, including but not limited to the solicitation, creation, acquisition, or dissemination of child exploitative content.
- To generate or disseminate verifiably false information and/or content with the purpose of harming others.
- To generate or disseminate personal identifiable information that can be used to harm an individual.
- To harass, abuse, threaten, stalk, or bully individuals or groups of individuals.
- To create non-consensual nudity or illegal pornographic content.
- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation.
- To generate or facilitate large-scale disinformation campaigns.
# License

This model falls under the `FLUX.1 [dev]` Non-Commercial License.