---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
- togethercomputer/RedPajama-Data-Instruct
- EleutherAI/pile
language:
- en
library_name: transformers
---
# Llama-2-7B-32KCtx
## Install Flash Attention for Inference with 32K Context
```bash
export CUDA_HOME=/usr/local/cuda-11.8
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
Please adjust the `CUDA_HOME` path to match your local CUDA installation. `ninja` is needed to speed up compilation.
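Optionally, you can check that the packages import cleanly before loading the model. This is a minimal sanity check; `rotary_emb` is the package name the `csrc/rotary` build is assumed to install:

```python
# Quick sanity check that the CUDA extensions built correctly.
import flash_attn   # main flash-attention package
import rotary_emb   # rotary-embedding CUDA kernel from csrc/rotary (assumed package name)

print(flash_attn.__version__)
```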
And then:
```python
import torch
from transformers import AutoModelForCausalLM

# trust_remote_code=True loads the repo's custom modeling code (with flash attention) from the Hub.
model = AutoModelForCausalLM.from_pretrained('togethercomputer/Llama-2-7B-32KCtx-v0.1',
                                             trust_remote_code=True,
                                             torch_dtype=torch.float16)
```
You can also use vanilla `transformers` to load this model:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('togethercomputer/Llama-2-7B-32KCtx-v0.1',
                                             torch_dtype=torch.float16)
```
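Once loaded, generation works the same as with any other causal LM. The snippet below is a minimal sketch, not part of the official card; the prompt, `max_new_tokens`, and the assumption that the repo ships a compatible tokenizer are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'togethercomputer/Llama-2-7B-32KCtx-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True,
                                             torch_dtype=torch.float16).cuda()

# Tokenize a prompt and generate; with the 32K context window you can pass much longer inputs.
inputs = tokenizer("Long documents can be summarized as follows:", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```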
TODO