mlx-community/Pearl-7B

This model was converted to MLX format from louisbrulenaudet/Pearl-7B-0211-ties using mlx-lm version 0.15.2. Refer to the original model card for more details on the model.

Use with mlx

```shell
pip install -U mlx-lm
python -m mlx_lm.generate --model mlx-community/Pearl-7B --max-tokens 100 --temp 0.0
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Pearl-7B")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
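If the tokenizer ships a chat template, it should be applied before generation so the prompt matches the format the model was tuned on. A minimal sketch following the standard mlx-lm usage pattern (the prompt text is illustrative); it assumes mlx-lm is installed on an Apple silicon machine:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Pearl-7B")

prompt = "hello"  # illustrative prompt

# Wrap the prompt with the model's chat template when one is defined.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```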

Citing & Authors

If you use this code in your research, please use the following BibTeX entry.

```bibtex
@misc{louisbrulenaudet2024,
  author =       {Louis Brulé Naudet},
  title =        {Pearl-7B-0211-ties, an extraordinary 7B model},
  year =         {2024},
  howpublished = {\url{https://huggingface.co./louisbrulenaudet/Pearl-7B-0211-ties}},
}
```

Feedback

If you have any feedback, please reach out at [email protected].
