Nemotron Mini 4B Instruct ONNX INT4

Model Developer: NVIDIA

Model Description

Nemotron-Mini-4B Instruct is a model for generating responses for roleplaying, retrieval augmented generation, and function calling. It is a small language model (SLM) optimized through distillation, pruning and quantization for speed and on-device deployment. VRAM usage has been minimized to approximately 2 GB, providing significantly faster time to first token compared to LLMs. The NVIDIA Nemotron-Mini-4B Instruct ONNX INT4 model is quantized with TensorRT Model Optimizer.
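The ~2 GB figure follows from simple arithmetic: a ~4B-parameter model stored at 4 bits per weight needs roughly a quarter of the memory of FP16. A back-of-envelope sketch (weight storage only, ignoring KV cache and activations):

```python
# Back-of-envelope VRAM estimate for a ~4B-parameter model at different precisions.
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB (excludes KV cache and activations)."""
    return n_params * bits_per_weight / 8 / (1024 ** 3)

if __name__ == "__main__":
    for bits, name in [(16, "FP16"), (4, "INT4")]:
        print(f"{name}: ~{weight_memory_gb(4.0e9, bits):.1f} GiB")
```

This gives roughly 7.5 GiB at FP16 versus under 2 GiB at INT4, which is why the quantized model fits comfortably on consumer GPUs.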

Steps followed to generate this quantized model:

    1. Download the Nemotron-Mini-4B Instruct model in PyTorch bfloat16 format from Hugging Face.
    2. Convert the PyTorch model to ONNX FP16 using the onnxruntime-genai model builder.
    3. Quantize the Nemotron-Mini-4B Instruct ONNX FP16 model to an ONNX INT4 AWQ model using TensorRT Model Optimizer – Windows.
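The steps above can be sketched as command lines. Treat this as an illustrative outline, not the authoritative recipe: exact flags depend on tool versions, and the ModelOpt invocation in step 3 is an assumption (consult the TensorRT Model Optimizer – Windows documentation for the supported API).

```shell
# 1. Download the BF16 PyTorch checkpoint from Hugging Face.
huggingface-cli download nvidia/Nemotron-Mini-4B-Instruct --local-dir ./nemotron-mini-4b

# 2. Convert to ONNX FP16 with the onnxruntime-genai model builder
#    (DML execution provider for Windows).
python -m onnxruntime_genai.models.builder \
  -i ./nemotron-mini-4b -o ./nemotron-mini-4b-onnx-fp16 -p fp16 -e dml

# 3. Quantize the FP16 ONNX model to INT4 AWQ (hypothetical entry point;
#    see the ModelOpt - Windows docs for the exact interface).
python -m modelopt.onnx.quantization \
  --onnx_path ./nemotron-mini-4b-onnx-fp16/model.onnx \
  --output_path ./nemotron-mini-4b-onnx-int4/model.onnx \
  --quantize_mode int4_awq
```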

This model is ready for commercial/non-commercial use. 

License/Terms of Use:

GOVERNING TERMS: Use of this model is governed by the NVIDIA Open Model License Agreement (found at https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf ). ADDITIONAL INFORMATION: Apache License, Version 2.0 (found at https://huggingface.co./datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md).

Reference:

Nemotron Mini 4B Model

Model Architecture:

Architecture Type: Transformer

Network Architecture: Decoder-only

Input

  • Input Type: Text

  • Input Format: String

  • Input Parameters: Sequence (1D)

  • Other Properties Related to Input: The model has a maximum of 4096 input tokens.

Output

  • Output Type: Text

  • Output Format: String

  • Output Parameters: Sequence (1D)

  • Other Properties Related to Output: The model supports a maximum context of 4096 tokens. The maximum output length for both versions can be configured independently of the input length.
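Because the output length is configured separately while the context window is fixed, a caller has to reserve output tokens out of the 4096-token budget. A minimal sketch (the function name and split are illustrative, not part of the model's API):

```python
def input_budget(max_context: int, max_new_tokens: int) -> int:
    """Tokens left for the prompt once the desired output length is reserved
    out of a fixed context window."""
    if max_new_tokens >= max_context:
        raise ValueError("output budget must be smaller than the context window")
    return max_context - max_new_tokens

# e.g. reserving 512 output tokens in a 4096-token window leaves 3584 for input
```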

Software Integration:

  • Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere and newer GPUs. GPUs with 6 GB or more VRAM are recommended; higher VRAM may be required for larger context lengths.

  • Supported Operating System(s):  Windows

Model Version(s):  1.0

Training, Testing, and Evaluation Datasets:

Refer to the Nemotron-Mini-4B Model Card for details.

Calibration Dataset: cnn_dailymail was used for calibration.

Link: https://huggingface.co./datasets/abisee/cnn_dailymail

  • Data Collection Method by dataset: Automated

  • Labeling Method by dataset: [Unknown]

Evaluation Dataset:

Link: https://people.eecs.berkeley.edu/~hendrycks/data.tar

  • Data Collection Method by dataset: Unknown

  • Labeling Method by dataset: Not Applicable

Evaluation Results:

MMLU (5-shot):

With the GenAI ORT->DML backend, we measured the following accuracy on a desktop RTX 4090 GPU system.

"overall_accuracy": 56.01

Test configuration:

  • GPU: RTX 4090  

  • OS: Windows 11, version 23H2

  • NVIDIA Graphics driver: R565 or higher

Inference:

We used the GenAI ORT->DML backend for inference. Instructions for using this backend are given in the readme.txt file available under the Files section.
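A minimal generation-loop sketch using the onnxruntime-genai Python API (DirectML build). The model path is a placeholder, and the Nemotron chat template shown in `format_prompt` is an assumption based on the base model's documentation; the bundled readme.txt remains the authoritative reference.

```python
def format_prompt(user_msg: str, system_msg: str = "") -> str:
    """Nemotron-style single-turn chat template (assumed)."""
    return (f"<extra_id_0>System\n{system_msg}\n\n"
            f"<extra_id_1>User\n{user_msg}\n<extra_id_1>Assistant\n")

def generate(model_dir: str = "./nemotron-mini-4b-onnx-int4") -> None:
    # pip install onnxruntime-genai-directml (Windows, DML backend)
    import onnxruntime_genai as og

    model = og.Model(model_dir)
    tokenizer = og.Tokenizer(model)
    params = og.GeneratorParams(model)
    params.set_search_options(max_length=512)

    generator = og.Generator(model, params)
    generator.append_tokens(tokenizer.encode(format_prompt("Hi, who are you?")))
    while not generator.is_done():
        generator.generate_next_token()
        print(tokenizer.decode(generator.get_next_tokens()), end="", flush=True)

if __name__ == "__main__":
    pass  # call generate() once the quantized model directory is in place
```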

Bias

  • Participation considerations from adversely impacted groups (protected classes) in model design and testing: None

  • Measures taken to mitigate against unwanted bias: None

Explainability

  • Intended Application & Domain: Game NPC Development

  • Model Type: Generative Pre-Trained Transformer (GPT)

  • Intended User: Enterprise developers building game NPCs.

  • Output: Text String(s)

  • Describe how the model works: Generates a response using the input text and context such as NPC background information.

  • Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: Not Applicable

  • Verified to have met prescribed NVIDIA quality standards: Yes

  • Performance Metrics: Accuracy, Latency, and Throughput

  • Potential Known Risks: The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself is not explicitly offensive. This issue could be exacerbated without the use of the recommended prompt template. If you are going to use this model in an agentic workflow, validate that the imported packages are from a trusted source to ensure end-to-end security.

  • Technical Limitation: The model was trained on data containing toxic language and societal biases crawled from the internet, so it may amplify those biases and return toxic responses, especially when prompted with toxic prompts, and may produce inaccurate, incomplete, or irrelevant text even for inoffensive prompts.

  • Licensing: NVIDIA Open Model License Agreement

Privacy

  • Generatable or reverse engineerable personal data? None

  • Was consent obtained for any personal data used? Not Applicable

  • Protected class data used to create this model? Datasets used for fine-tuning did not introduce any personal data that did not exist in the base model.

  • How often is the dataset reviewed? Before Release

  • Is a mechanism in place to honor data subject rights of access or deletion of personal data? Not Applicable

  • If personal data was collected for the development of the model, was it collected directly by NVIDIA? Not Applicable

  • If personal data was collected for the development of the model by NVIDIA, do you maintain or have access to disclosures made to data subjects? Not Applicable

  • If personal data was collected for the development of this AI model, was it minimized to only what was required? Not Applicable

  • Is there provenance for all datasets used in training? Yes

  • Does data labeling (annotation, metadata) comply with privacy laws? Yes

  • Is data compliant with data subject requests for data correction or removal, if such a request was made? Not Applicable

Safety

  • Model Application(s): NPC Conversation

  • Describe the life-critical impact (if present): None Known

  • Use Case Restrictions: Abide by the NVIDIA Open Model License Agreement

  • Model and dataset restrictions: The principle of least privilege (PoLP) was applied to limit access for dataset generation and model development. Dataset access was restricted during training, and dataset license constraints were adhered to.

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications.  When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. 

Please report security vulnerabilities or NVIDIA AI Concerns here.
