Tags: Text Generation · Transformers · PyTorch · mpt · Composer · MosaicML · llm-foundry · custom_code · text-generation-inference
jacobfulano committed
Commit 9580a8e
1 Parent(s): 8b737ec

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -35,21 +35,21 @@ Apache-2.0 (commercial use permitted)
 ### Example Dialogue
 
 **SamIAm85**:
-I want you to come up with a tweet based on this summary of the article:
+>I want you to come up with a tweet based on this summary of the article:
 
-"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
-MPT-7B is a transformer trained from scratch on 1T tokens of text and code.
-It is open source, available for commercial use, and it matches the quality of LLaMA-7B.
-MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k.
-Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch.
-For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!
+>"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
+>MPT-7B is a transformer trained from scratch on 1T tokens of text and code.
+>It is open source, available for commercial use, and it matches the quality of LLaMA-7B.
+>MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k.
+>Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch.
+>For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!
 
 **MPT-7B-Instruct**:
 >Here's my suggestion for a tweet:
 >"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source, commercially usable LLMs. With training on 1T tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here: https://bit.ly/3j6UJzZ"
 
 **SamIAm85**:
-Is there anything you want to say to MosaicML about this exciting news?
+>Is there anything you want to say to MosaicML about this exciting news?
 
 **MPT-7B-Instruct**:
 > As an AI language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs.
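For readers arriving at this commit from the model card, the tags above (Transformers, PyTorch, custom_code) indicate the checkpoint is loaded through the Transformers library using the repo's custom model code. Below is a minimal, non-authoritative sketch of what that typically looks like; the repo id `mosaicml/mpt-7b-chat`, the dtype, and the generation settings are illustrative assumptions and are not part of this commit.

```python
# Minimal sketch of loading an MPT checkpoint via Transformers custom code.
# Assumptions: repo id "mosaicml/mpt-7b-chat" (illustrative), transformers and torch installed.
import torch
import transformers

model_id = "mosaicml/mpt-7b-chat"  # hypothetical repo id, shown for illustration only

tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True lets Transformers run the MPT model classes shipped in the repo.
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

prompt = "Is there anything you want to say to MosaicML about this exciting news?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```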