---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Mistral-NeMo-Minitron-8B-Instruct
---

Quantizations of https://huggingface.co./nvidia/Mistral-NeMo-Minitron-8B-Instruct

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [jan](https://github.com/janhq/jan)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)

---

# From original readme

Mistral-NeMo-Minitron-8B-Instruct is a model for generating responses for various text-generation tasks including roleplaying, retrieval-augmented generation, and function calling. It is a fine-tuned version of [nvidia/Mistral-NeMo-Minitron-8B-Base](https://huggingface.co./nvidia/Mistral-NeMo-Minitron-8B-Base), which was pruned and distilled from [Mistral-NeMo 12B](https://huggingface.co./nvidia/Mistral-NeMo-12B-Base) using [our LLM compression technique](https://arxiv.org/abs/2407.14679). The model was trained using a multi-stage SFT and preference-based alignment technique with [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner). For details on the alignment technique, please refer to the [Nemotron-4 340B Technical Report](https://arxiv.org/abs/2406.11704). The model supports a context length of 8,192 tokens.

Try this model on [build.nvidia.com](https://build.nvidia.com/nvidia/mistral-nemo-minitron-8b-8k-instruct).

**Model Developer:** NVIDIA

**Model Dates:** Mistral-NeMo-Minitron-8B-Instruct was trained between August 2024 and September 2024.

## License

[NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

## Model Architecture

Mistral-NeMo-Minitron-8B-Instruct uses a model embedding size of 4096, 32 attention heads, an MLP intermediate dimension of 11520, and 40 layers in total. Additionally, it uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).

**Architecture Type:** Transformer Decoder (Auto-regressive Language Model)

**Network Architecture:** Mistral-NeMo

## Prompt Format

We recommend using the following prompt template, which was used to fine-tune the model. The model may not perform optimally without it.

```
<extra_id_0>System
{system prompt}
<extra_id_1>User
{prompt}
<extra_id_1>Assistant
```

- Note that a newline character `\n` should be added at the end of the prompt.
- We recommend using `<extra_id_1>` as a stop token.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

# Use the prompt template
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(tokenized_chat, stop_strings=["<extra_id_1>"], tokenizer=tokenizer)
print(tokenizer.decode(outputs[0]))
```
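If you are not going through `apply_chat_template`, the same prompt string can be assembled by hand. Below is a minimal sketch; the `build_prompt` helper is hypothetical and simply spells out the template from the Prompt Format section, including the trailing newline:

```python
# Hypothetical helper that reproduces the prompt template by hand;
# tokenizer.apply_chat_template (shown above) is the canonical way to do this.
def build_prompt(system_prompt: str, user_prompt: str) -> str:
    return (
        f"<extra_id_0>System\n{system_prompt}\n"
        f"<extra_id_1>User\n{user_prompt}\n"
        "<extra_id_1>Assistant\n"  # trailing \n, as the readme notes
    )

print(build_prompt("You are a helpful assistant.", "What is Grouped-Query Attention?"))
```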
You can also use `pipeline`, but you need to create a tokenizer object and assign it to the pipeline manually.

```python
from transformers import AutoTokenizer
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]

pipe = pipeline("text-generation", model="nvidia/Mistral-NeMo-Minitron-8B-Instruct")
pipe(messages, max_new_tokens=64, stop_strings=["<extra_id_1>"], tokenizer=tokenizer)
```
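To sanity-check the architecture numbers listed in the Model Architecture section, you can read them off the model config without downloading any weights. A minimal sketch; the field names follow the standard Hugging Face Mistral config and are an assumption here:

```python
from transformers import AutoConfig

# Fetches only the small config.json, not the model weights.
cfg = AutoConfig.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

print(cfg.hidden_size)          # expected 4096 (embedding size)
print(cfg.num_attention_heads)  # expected 32
print(cfg.intermediate_size)    # expected 11520 (MLP intermediate dimension)
print(cfg.num_hidden_layers)    # expected 40
print(cfg.num_key_value_heads)  # fewer KV heads than attention heads => GQA
```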
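Since this repository hosts GGUF quantizations, the files can also be loaded programmatically rather than through the clients listed above. Below is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), the Python bindings for llama.cpp; the quantization filename is an assumption, so substitute whichever file you downloaded:

```python
from llama_cpp import Llama

# Assumed filename -- replace with the quant you actually downloaded from this repo.
llm = Llama(model_path="Mistral-NeMo-Minitron-8B-Instruct.Q4_K_M.gguf", n_ctx=8192)

# Raw completion using the prompt template from the original readme.
prompt = (
    "<extra_id_0>System\nYou are a helpful assistant.\n"
    "<extra_id_1>User\nWho are you?\n"
    "<extra_id_1>Assistant\n"
)
out = llm(prompt, max_tokens=64, stop=["<extra_id_1>"])
print(out["choices"][0]["text"])
```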