GemMath

GemMath is a domain-specific fine-tuned LLM based on the Gemma 2 model, specialized in mathematical inference and deduction.

Model Description

  • Developed by: Nathan Kim (NK590)
  • Model type: Causal language model (LLM)
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: google/gemma-2-2b
  • Model size: 2.61B parameters
  • Quantization: 8-bit GGUF (Q8_0)

Uses

You can run this model with Ollama by importing the .gguf file. For example, create the Modelfile below in the same directory as the .gguf file:

FROM unsloth.Q8_0.gguf

TEMPLATE """{{- if .System }}
<s>{{ .System }}</s>
{{- end }}
<s>Human:
{{ .Prompt }}</s>
<s>Assistant:
"""

SYSTEM """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."""

# Deterministic decoding; generate up to 3000 tokens within a 4096-token context.
PARAMETER temperature 0
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
# Stop generation at the delimiters used in the template above.
PARAMETER stop <s>
PARAMETER stop </s>

then run the following command to create the model in Ollama:

ollama create {model_name} -f Modelfile

Finally, you can run this model with Ollama. Enjoy!

ollama run {model_name}
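
If you prefer to call the model from code instead of the interactive CLI, the sketch below sends a prompt to Ollama's local REST API. It assumes Ollama is running at its default endpoint (http://localhost:11434) and that the model was created under the placeholder name gemmath; replace that with whatever name you passed to ollama create.

import requests

# Ask the model a math word problem through Ollama's /api/generate endpoint.
# "gemmath" is a placeholder model name; use the name from `ollama create`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemmath",
        "prompt": "A train travels 60 km in 45 minutes. What is its average speed in km/h?",
        "stream": False,  # return the full completion as a single JSON object
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])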

Training Data

This model was fine-tuned on the orca-math-word-problems-200k dataset.
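
If you want to inspect the training data, the short sketch below loads it with the Hugging Face datasets library. It assumes the public microsoft/orca-math-word-problems-200k repository on the Hugging Face Hub and its question/answer columns; the exact copy used for fine-tuning is not specified here.

from datasets import load_dataset

# Assumed repository ID for the orca-math-word-problems-200k dataset.
ds = load_dataset("microsoft/orca-math-word-problems-200k", split="train")

# Each example pairs a word problem with a worked solution.
example = ds[0]
print(example["question"])
print(example["answer"])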
