LLM Foundry Updates 06-01-2023

#41

**Duplicate of https://huggingface.co./mosaicml/mpt-7b/discussions/47**

This PR adds updates from the LLM Foundry repo as of 06/01/2023.

These include:

  • device_map support for multiple GPUs (see the loading sketch after this list)
  • faster inference thanks to a refactor of the KV caching
  • a bugfix for returning the last hidden_state
  • support for output_attentions when using attn_impl: torch
  • a requirements.txt file listing what you need to install for MPT
  • updated README instructions for fast GPU initialization
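
To illustrate the device_map and output_attentions changes, here is a minimal loading sketch. It assumes transformers and accelerate are installed (see the requirements.txt added in this PR); the attn_config attribute and output format follow the pattern documented in the MPT README and may differ between revisions.

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-instruct'

# Select the torch attention implementation, which this PR extends to
# support returning attention weights via output_attentions.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'torch'
# The updated README also describes config.init_device = 'cuda:0' for fast
# initialization directly on GPU, as an alternative to device_map sharding.

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map='auto',  # new in this PR: shard the model across available GPUs
)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)

inputs = tokenizer('Here is a short poem about MosaicML:\n', return_tensors='pt').to(model.device)
with torch.no_grad():
    out = model(**inputs, output_attentions=True)
print(len(out.attentions), out.attentions[0].shape)  # per-layer attention weights
```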
abhi-mosaic changed pull request status to open

Tested with LLM Foundry via `python scripts/inference/hf_generate.py -n mosaicml/mpt-7b-instruct --revision pr/41 --device_map auto`
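
For reference, a generation call made directly in Python (a sketch reusing the model and tokenizer from the loading example above) goes through the refactored KV caching when use_cache=True:

```python
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        temperature=0.8,
        use_cache=True,  # reuse cached keys/values instead of recomputing attention over the full prefix
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```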

abhi-mosaic changed pull request status to merged
