bjoernp committed on
Commit
655be71
1 Parent(s): a744e3c

Create README.md

Files changed (1)
  1. README.md +60 -0
README.md ADDED
@@ -0,0 +1,60 @@
---
datasets:
- oscar-corpus/OSCAR-2301
- wikipedia
- bjoernp/tagesschau-2018-2023
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
---
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b), under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!

*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*

## Model Details
- **Finetuned from:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])

## Use in 🤗Transformers
First install the direct dependencies:
```bash
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you also need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
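To confirm that the optional flash-attn build succeeded before loading the model, a quick sanity check can help (a minimal sketch; it assumes the installed `flash_attn` package exposes `__version__`, as recent releases do):
```python
# Fails with an ImportError if the flash-attn build above did not complete.
import flash_attn

print(flash_attn.__version__)  # expect 2.1.1, matching the pinned install
```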
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-hessianai-13b")
model = AutoModelForCausalLM.from_pretrained(
    "LeoLM/leo-hessianai-13b",  # model id is the first positional argument
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True,  # True to use flash-attn2, else False
)
```
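Once the model and tokenizer are loaded, generation works as with any other transformers causal LM. A minimal sketch (the German prompt and the sampling settings here are illustrative assumptions, not taken from the model card):
```python
# Encode a prompt, sample a continuation, and decode it back to text.
inputs = tokenizer("Die Hauptstadt von Deutschland ist", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```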

## Training Parameters
![training_parameters](imgs/training_params.png "Training Hyperparameters")


## Benchmarks
![benchmarks](imgs/benchmarks.png "Benchmark Scores")