Tensor Diffusion

community

AI & ML interests

Stable Diffusion, Computer Vision, NLP

Recent Activity

DamarJati  updated a model 9 days ago
tensor-diffusion/EveryLoRA_v1.0
DamarJati  published a model 9 days ago
tensor-diffusion/EveryLoRA_v1.0
DamarJati  updated a Space 16 days ago
tensor-diffusion/README

tensor-diffusion's activity

DamarJati 
updated a Space 16 days ago
eienmojiki 
posted an update 30 days ago
not-lain 
posted an update about 1 month ago
not-lain 
posted an update about 2 months ago
We now have more than 2000 public AI models using ModelHubMixin 🤗
not-lain 
posted an update about 2 months ago
DamarJati 
posted an update 2 months ago
Happy New Year 2025 🤗 to the Hugging Face community.
1aurent 
posted an update 2 months ago
not-lain 
posted an update 4 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load an image from a local path, URL, Pillow image, or numpy array
my_b64_img = load_img(img_path_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
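
For context, the base64 string loadimg produces is essentially a data URI wrapping the raw image bytes. Here is a minimal stdlib-only sketch of that conversion (the helper name to_base64_data_uri and the default MIME type are my own illustration, not part of loadimg's API):

```python
import base64

def to_base64_data_uri(path: str, mime: str = "image/png") -> str:
    """Read an image file and return it as a base64 data URI string.

    Note: helper name and default MIME type are hypothetical, for illustration.
    """
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    # a data URI embeds the bytes directly, so no upload is needed
    return f"data:{mime};base64,{encoded}"
```

The resulting string can be used in the image_url field above in place of a web URL.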
1aurent 
posted an update 6 months ago
Hey everyone 🤗!
We (finegrain) have created some custom ComfyUI nodes to use our refiners micro-framework inside comfy! 🎉

We only support our new Box Segmenter at the moment, but we're thinking of adding more nodes since there seems to be demand for them. We leverage the new (beta) Comfy Registry to host our nodes. They are available at: https://registry.comfy.org/publishers/finegrain/nodes/comfyui-refiners. You can install them by running:
comfy node registry-install comfyui-refiners

Alternatively, download the archive via "Download Latest" and unzip it into your comfy custom_nodes folder.
We are eager to hear your feedback and suggestions for new nodes and how you'll use them! 🙏
1aurent 
posted an update 6 months ago
Hey everyone 🤗!
Check out this awesome new model for object segmentation!
finegrain/finegrain-object-cutter.

We (finegrain) trained this new model in partnership with Nfinite, using some of their synthetic data, and the resulting model is incredibly accurate 🚀.
It's all open source under the MIT license (finegrain/finegrain-box-segmenter), complete with a test set tailored for e-commerce (finegrain/finegrain-product-masks-lite). Have fun experimenting with it!
DamarJati 
posted an update 7 months ago
Improved ControlNet!
Now supports dynamic resolution for perfect landscape and portrait outputs. Generate stunning images without distortion—optimized for any aspect ratio!
...
DamarJati/FLUX.1-DEV-Canny
not-lain 
posted an update 7 months ago