Mathstral 7B v0.1 - llamafile

Mathstral is an instruction-tuned large language model with strong mathematical reasoning skills. It scores 56.6% on the MATH benchmark and 63.47% on MMLU. Mathstral was released on July 16th, 2024.

The model is packaged into executable weights, which we call llamafiles. This makes it easy to use the model on Linux, macOS, Windows, FreeBSD, OpenBSD, and NetBSD, on both AMD64 and ARM64.

Quickstart

Running the following commands on a desktop OS will launch a tab in your web browser with a chatbot interface.

wget https://huggingface.co./Mozilla/mathstral-7B-v0.1-llamafile/resolve/main/mathstral-7B-v0.1.Q6_K.llamafile
chmod +x mathstral-7B-v0.1.Q6_K.llamafile
./mathstral-7B-v0.1.Q6_K.llamafile

You then need to fill out the prompt / history template (see the Prompting section below).

This model has a maximum context window size of 32k tokens. By default, a context window of 512 tokens is used; you can increase it to the maximum by passing the -c 0 flag. The temperature defaults to zero for this model; you can change it by passing, e.g., --temp 0.8.
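
For example, to launch the chatbot with the full 32k context window and a sampling temperature of 0.8:

./mathstral-7B-v0.1.Q6_K.llamafile -c 0 --temp 0.8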

On GPUs with sufficient VRAM, the -ngl 999 flag may be passed to offload the model onto the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card driver needs to be installed. If the prebuilt DSOs fail to load, the CUDA or ROCm SDK may need to be installed, in which case llamafile builds a native module just for your system.
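
For example, to offload the entire model onto the GPU:

./mathstral-7B-v0.1.Q6_K.llamafile -ngl 999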

For further information, please see the llamafile README.

Having trouble? See the "Gotchas" section of the README.

Prompting

Here's an example of how to prompt Mathstral from the command line:

./mathstral-7B-v0.1.Q6_K.llamafile --log-disable --no-display-prompt -p '[INST]What is 2.11 + 3.9?[/INST]'
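
For reference, the correct sum is 2.11 + 3.9 = 6.01.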

Model Card for Mathstral-7B-v0.1

Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B. You can read more in the official blog post.

Installation

It is recommended to use mistralai/mathstral-7B-v0.1 with mistral-inference:

pip install 'mistral_inference>=1.2.0'

Download

from huggingface_hub import snapshot_download
from pathlib import Path

# Create a local directory for the model weights.
mistral_models_path = Path.home().joinpath('mistral_models', 'mathstral-7B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

# Download only the files needed by mistral-inference.
snapshot_download(
    repo_id="mistralai/mathstral-7B-v0.1",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=mistral_models_path,
)

Chat

After installing mistral_inference, a mistral-chat CLI command should be available in your environment.

mistral-chat $HOME/mistral_models/mathstral-7B-v0.1 --instruct --max_tokens 256

You can then start chatting with the model, e.g. prompt it with something like:

"Albert likes to surf every week. Each surfing session lasts for 4 hours and costs $20 per hour. How much would Albert spend in 5 weeks?"

Usage in transformers

To use this model within the transformers library, install the latest release with pip install --upgrade transformers and run, for instance:

from transformers import AutoTokenizer, MistralForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained('mistralai/mathstral-7B-v0.1')
model = MistralForCausalLM.from_pretrained('mistralai/mathstral-7B-v0.1')

prompt = "What are the roots of unity?"
tokenized_prompts = tokenizer(prompt, return_tensors="pt")

# Generate up to 512 new tokens and decode the result back into text.
generation = model.generate(**tokenized_prompts, max_new_tokens=512)
print(tokenizer.decode(generation[0]))
>>> """<s>What are the roots of unity?

The roots of unity are the solutions to the equation $z^n = 1$, where $n$ is a positive integer.
These roots are complex numbers and they form a regular $n$-gon in the complex plane.

For example, the roots of unity for $n=1$ are just $1$,
and for $n=2$ they are $1$ and $-1$. For $n=3$, they are $1$, $\\frac{-1+\\sqrt{3}i}{2}$, and $\\frac{-1-\\sqrt{3}i}{2}$.

The roots of unity have many interesting properties and they are used in many areas of mathematics, including number theory, algebra, and geometry.</s>"""

Evaluation

We evaluate Mathstral 7B against open-weight models of similar size on industry-standard benchmarks.

Benchmarks          MATH   GSM8K (8-shot)   Odyssey Math maj@16   GRE Math maj@16   AMC 2023 maj@16   AIME 2024 maj@16
Mathstral 7B        56.6   77.1             37.2                  56.9              42.4              2/30
DeepSeek Math 7B    44.4   80.6             27.6                  44.6              28.0              0/30
Llama3 8B           28.4   75.4             24.0                  26.2              34.4              0/30
GLM4 9B             50.2   48.8             18.9                  46.2              36.0              1/30
QWen2 7B            56.8   32.7             24.8                  58.5              35.2              2/30
Gemma2 9B           48.3   69.5             18.6                  52.3              31.2              1/30

The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
