Trillama-8B is an 8B-parameter LLM built on Llama-3-8B, the latest model from Meta. It is a fine-tune focused on improving the model's already strong logic and reasoning.

Use it with the Transformers text-generation pipeline:

```python
import transformers
import torch

model_id = "senseable/Trillama-8B"

# Load the model in bfloat16 and spread it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline("Explain the meaning of life.")
```
Model size: 8.03B parameters (Safetensors, FP16)
