danaevan committed
Commit bf449a2
1 Parent(s): c3aef1e

Update README.md

Files changed (1): README.md +2 −2
README.md CHANGED

@@ -7,7 +7,7 @@ language:
 
 DeciLM-7B is a 7.04 billion parameter decoder-only text generation model, released under the Apache 2.0 license. At the time of release, DeciLM-7B is the top-performing 7B base language model on the Open LLM Leaderboard. With support for an 8K-token sequence length, this highly efficient model uses variable Grouped-Query Attention (GQA) to achieve a superior balance between accuracy and computational efficiency. The model's architecture was generated using Deci's proprietary Neural Architecture Search technology, AutoNAC.
 
-### 🔥 Click [here](https://console.deci.ai/infery-llm-demo) for a live demo of DeciLM-7B + Infery!
+### 🔥 Click [here](https://deci.ai/infery-llm-book-a-demo/?utm_campaign=DeciLM%207B%20Launch&utm_source=HF&utm_medium=decilm7b-model-card&utm_term=infery-demo) to request a live demo of Infery-LLM!
 
 ## Model Details
 
@@ -86,7 +86,7 @@ Below are DeciLM-7B and DeciLM-7B-instruct's Open LLM Leaderboard results.
 | Infery-LLM | A10 | 2048 | 2048 | **599** | 32 | 128 |
 
 - In order to replicate the results of the Hugging Face benchmarks, you can use this [code example](https://huggingface.co/Deci/DeciLM-7B/blob/main/benchmark_hf_model.py).
-- Infery-LLM, Deci's inference engine, features a suite of optimization algorithms, including selective quantization, optimized beam search, continuous batching, and custom CUDA kernels. To witness the full capabilities of Infery-LLM first-hand, we invite you to engage with our [interactive demo](https://console.deci.ai/infery-llm-demo).
+- Infery-LLM, Deci's inference engine, features a suite of optimization algorithms, including selective quantization, optimized beam search, continuous batching, and custom CUDA kernels. To explore the capabilities of Infery-LLM, [schedule a live demo](https://deci.ai/infery-llm-book-a-demo/?utm_campaign=DeciLM%207B%20Launch&utm_source=HF&utm_medium=decilm7b-model-card&utm_term=infery-demo).
 
 ## Ethical Considerations and Limitations
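The model card quoted in the diff above credits variable Grouped-Query Attention (GQA) for the model's efficiency. A minimal sketch of the intuition: the key/value cache that grows with sequence length scales with the number of KV heads, so sharing KV heads across query-head groups shrinks it proportionally. The layer count, head dimension, and head counts below are illustrative assumptions, not DeciLM-7B's actual configuration:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_param=2):
    # K and V each hold seq_len * n_kv_heads * head_dim values per layer,
    # hence the factor of 2; bytes_per_param=2 assumes fp16/bf16 storage.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_param

# Illustrative dimensions: 32 layers, head_dim 128, 8K context.
mha = kv_cache_bytes(32, n_kv_heads=32, head_dim=128, seq_len=8192)  # multi-head: one KV head per query head
gqa = kv_cache_bytes(32, n_kv_heads=4, head_dim=128, seq_len=8192)   # grouped-query: 4 shared KV heads
print(mha // gqa)  # → 8: the cache shrinks by the KV-head ratio
```

Because the saving is linear in the KV-head count, a "variable" GQA scheme can pick a different head count per layer to trade accuracy against cache size layer by layer.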