NeMo Megatron-GPT 5B
Model Description
Megatron-GPT 5B is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and GPT-3, while 5B refers to the total trainable parameter count (5 billion) [1, 2].
This model was trained with NeMo Megatron.
Getting started
Step 1: Install NeMo and dependencies
You will need to install NVIDIA Apex and NeMo.
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
pip install nemo_toolkit['nlp']==1.11.0
Alternatively, you can use the NeMo Megatron training Docker container, which has all dependencies pre-installed.
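Either way, a quick way to confirm the environment is to import the packages installed above. This is just a convenience check, not part of the official instructions:
# Sanity check: both packages should import cleanly after the steps above
import apex
import nemo

print(nemo.__version__)  # expect 1.11.0 with the pinned install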
Step 2: Launch eval server
Note: the example below launches a model variant with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1 on two GPUs.
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_gpt_eval.py gpt_model_file=nemo_gpt5B_fp16_tp2.nemo server=True tensor_model_parallel_size=2 trainer.devices=2
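Loading the 5B checkpoint across two GPUs can take a while. The snippet below is a small convenience sketch (not part of NeMo itself) that polls the server until it starts answering; it targets the same PUT /generate endpoint and default port (5555) used by the client in Step 3:
import time
import requests

def wait_for_server(port=5555, timeout_s=900, poll_s=15):
    # Probe payload mirrors the request format shown in Step 3, but asks for a single token
    probe = {
        "sentences": ["Hello"],
        "tokens_to_generate": 1,
        "temperature": 1.0,
        "add_BOS": True,
        "top_k": 0,
        "top_p": 0.9,
        "greedy": True,
        "all_probs": False,
        "repetition_penalty": 1.0,
        "min_tokens_to_generate": 1,
    }
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            resp = requests.put("http://localhost:{}/generate".format(port), json=probe, timeout=60)
            if resp.ok:
                return True
        except requests.exceptions.ConnectionError:
            pass  # server not accepting connections yet
        time.sleep(poll_s)
    return False

print("server ready:", wait_for_server())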
Step 3: Send prompts to your model!
import json
import requests
port_num = 5555
headers = {"Content-Type": "application/json"}
def request_data(data):
    # The eval server exposes a PUT /generate endpoint on the port configured above
    resp = requests.put('http://localhost:{}/generate'.format(port_num),
                        data=json.dumps(data),
                        headers=headers)
    sentences = resp.json()['sentences']
    return sentences

data = {
    "sentences": ["Tell me an interesting fact about space travel."]*1,
    "tokens_to_generate": 50,
    "temperature": 1.0,
    "add_BOS": True,
    "top_k": 0,
    "top_p": 0.9,
    "greedy": False,
    "all_probs": False,
    "repetition_penalty": 1.2,
    "min_tokens_to_generate": 2,
}
sentences = request_data(data)
print(sentences)
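Since "sentences" is a list, several prompts can be batched into one request, and setting "greedy" to True switches to deterministic decoding. Below is a small illustrative variation (the second prompt is made up for the example) that reuses the request_data helper defined above:
# A batch of prompts in a single request, decoded greedily (deterministic)
batch = {
    "sentences": [
        "Tell me an interesting fact about space travel.",
        "Summarize the history of the telescope in two sentences.",
    ],
    "tokens_to_generate": 64,
    "temperature": 1.0,
    "add_BOS": True,
    "top_k": 0,
    "top_p": 0.9,
    "greedy": True,  # pick the most likely token at each step instead of sampling
    "all_probs": False,
    "repetition_penalty": 1.2,
    "min_tokens_to_generate": 2,
}

for sentence in request_data(batch):
    print(sentence)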
Training Data
The model was trained on "The Piles" dataset prepared by Eleuther.AI. [4]
Evaluation results
Zero-shot performance, evaluated using the LM Evaluation Test Suite from AI21.
| ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQA | HellaSwag | PiQA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.3976 | 0.5566 | 0.5007 | 0.4171 | 0.6133 | 0.5812 | 0.6356 | 0.6298 | 0.7492 |
Limitations
The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases, so the model may amplify those biases and return toxic responses, especially when given toxic prompts.
References
[1] Improving Language Understanding by Generative Pre-Training
[2] Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
[4] The Pile: An 800GB Dataset of Diverse Text for Language Modeling
License
License to use this model is covered by the CC-BY-4.0 license. By downloading the publicly released version of the model, you accept the terms and conditions of the CC-BY-4.0 license.