AI & ML interests

Dark Data Makes Stronger Outcomes

Recent Activity

DataTonic's activity

Tonic
posted an update 3 days ago
πŸ™‹πŸ»β€β™‚οΈHey there folks,

Did you know that you can use ModernBERT to detect model hallucinations ?

Check out the Demo : Tonic/hallucination-test

See here for Medical Context Demo : MultiTransformer/tonic-discharge-guard

check out the model from KRLabs : KRLabsOrg/lettucedect-large-modernbert-en-v1

and the library they kindly open sourced for it : https://github.com/KRLabsOrg/LettuceDetect

πŸ‘†πŸ»if you like this topic please contribute code upstream πŸš€

Tonic
posted an update 4 days ago
Powered by KRLabsOrg/lettucedect-large-modernbert-en-v1 from KRLabsOrg.

Detect hallucinations in answers based on context and questions using ModernBERT with 8192-token context support!

### Model Details
- **Model Name**: [lettucedect-large-modernbert-en-v1](https://huggingface.co/KRLabsOrg/lettucedect-large-modernbert-en-v1)
- **Organization**: [KRLabsOrg](https://huggingface.co/KRLabsOrg)
- **Github**: [https://github.com/KRLabsOrg/LettuceDetect](https://github.com/KRLabsOrg/LettuceDetect)
- **Architecture**: ModernBERT (Large) with extended context support up to 8192 tokens
- **Task**: Token Classification / Hallucination Detection
- **Training Dataset**: [RAGTruth](https://huggingface.co/datasets/wandb/RAGTruth-processed)
- **Language**: English
- **Capabilities**: Detects hallucinated spans in answers, provides confidence scores, and calculates an average confidence across detected spans (see the sketch below).

LettuceDetect excels at processing long documents to determine if an answer aligns with the provided context, making it a powerful tool for ensuring factual accuracy.
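
As a small illustration of the "average confidence across detected spans" capability above, here is a tiny sketch of that step. It assumes the span output is a list of dicts with a `confidence` key (the shape shown in the LettuceDetect README's example output); verify this against the version of the library you install.

```python
# Hypothetical span output: a list of dicts with start/end offsets, the flagged
# text, and a confidence score (shape assumed from the LettuceDetect README).
spans = [
    {"start": 31, "end": 71, "confidence": 0.994, "text": "The population of France is 69 million."},
]

# Average confidence across detected spans; 0.0 when nothing was flagged.
avg_confidence = sum(s["confidence"] for s in spans) / len(spans) if spans else 0.0
print(f"{len(spans)} hallucinated span(s), average confidence {avg_confidence:.3f}")
```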
Locutusque
posted an update 12 days ago
🎉 Exciting news, everyone! I've just released **Thespis-Llama-3.1-8B**, a new language model designed for enhanced roleplaying! ✨

It's built on Llama-3.1 and fine-tuned with a focus on Theory of Mind reasoning to create more believable and engaging characters. It even learned a few tricks on its own, like adding in-character thought processes! 🧠

Check it out here: Locutusque/Thespis-Llama-3.1-8B

Give it a try and let me know what you think! I'm especially interested in feedback on how well the characters stay in role and if the responses feel natural. Looking forward to seeing what amazing stories you create! ✍️
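
If you'd rather poke at it programmatically than through a UI, a minimal sketch with the standard transformers API is below. The chat-template call assumes the repo ships one (typical for Llama-3.1 fine-tunes), and the roleplay system prompt and generation settings are illustrative, not from the model card.

```python
# Sketch only: loads the model with the generic transformers API.
# The roleplay prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Thespis-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are Captain Mara, a weary starship pilot who hides worry behind dry humor."},
    {"role": "user", "content": "Mara, the engine just died and we're drifting. What do we do?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```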
Tonic
posted an update about 1 month ago
πŸ™‹πŸ»β€β™‚οΈhey there folks ,

Goedel's Theorem Prover is now being demo'ed on huggingface : Tonic/Math

give it a try !
not-lain
posted an update about 1 month ago
Tonic
posted an update about 1 month ago
πŸ™‹πŸ»β€β™‚οΈ Hey there folks ,

our team made a game during the @mistral-game-jam and we're trying to win the community award !

try our game out and drop us a ❀️ like basically to vote for us !

Mistral-AI-Game-Jam/TextToSurvive

hope you like it !
not-lain
posted an update about 2 months ago
We now have more than 2000 public AI models using ModelHubMixin 🤗
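
For anyone who hasn't used it, ModelHubMixin (via `PyTorchModelHubMixin` in `huggingface_hub`) adds `save_pretrained`, `from_pretrained`, and `push_to_hub` to your own model class. A minimal sketch, with a made-up toy model and repo name:

```python
# Minimal sketch of PyTorchModelHubMixin; the toy model and repo name are
# illustrative placeholders, not from the post.
import torch
from huggingface_hub import PyTorchModelHubMixin

class TinyRegressor(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.linear = torch.nn.Linear(hidden_size, 1)

    def forward(self, x):
        return self.linear(x)

model = TinyRegressor(hidden_size=16)
model.save_pretrained("tiny-regressor")              # saves config + weights locally
# model.push_to_hub("your-username/tiny-regressor")  # optionally share it on the Hub

reloaded = TinyRegressor.from_pretrained("tiny-regressor")
```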
Tonic
posted an update about 2 months ago
πŸ™‹πŸ»β€β™‚οΈ Hey there folks ,

Facebook AI just released JASCO models that make music stems .

you can try it out here : Tonic/audiocraft

hope you like it
Tonic
posted an update about 2 months ago
πŸ™‹πŸ»β€β™‚οΈHey there folks , Open LLM Europe just released Lucie 7B-Instruct model , a billingual instruct model trained on open data ! You can check out my unofficial demo here while we wait for the official inference api from the group : Tonic/Lucie-7B hope you like it πŸš€
not-lain
posted an update about 2 months ago
Tonic
posted an update 2 months ago
Microsoft just released Phi-4. Check it out here: Tonic/Phi-4

Hope you like it :-)
not-lain
posted an update 4 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load_img accepts a local path, URL, PIL image, or numpy array
# and converts it to a base64 string
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    # base64 allows using images without uploading them to the web
                    "url": my_b64_img
                }
            }
        ]
    }
]

# stream the model's answer chunk by chunk
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")