duyntnet committed (verified) · Commit c5f40b0 · Parent: ad00bfe

Upload README.md

Files changed (1):
  1. README.md +69 -0
README.md ADDED
@@ -0,0 +1,69 @@
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- c4ai-command-r7b-12-2024
---
Quantizations of https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024

**Note**: you will need llama.cpp [b4415](https://github.com/ggerganov/llama.cpp/releases/tag/b4415) or later to run the model.

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [jan](https://github.com/janhq/jan)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
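As a minimal sketch (not part of the original card), loading one of the GGUF quantizations with the llama-cpp-python bindings might look like the following. The file name `c4ai-command-r7b-12-2024-Q4_K_M.gguf` is a placeholder for whichever quantization you download, and the bindings must be built against llama.cpp b4415 or newer:

```py
# pip install llama-cpp-python   (a build based on llama.cpp b4415 or later)
from llama_cpp import Llama

# Path to a downloaded GGUF quantization (placeholder file name).
llm = Llama(
    model_path="c4ai-command-r7b-12-2024-Q4_K_M.gguf",
    n_ctx=8192,        # context window to allocate; the model supports up to 128K
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# llama-cpp-python applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=100,
    temperature=0.3,
)
print(out["choices"][0]["message"]["content"])
```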
---

# From original readme

C4AI Command R7B is an open-weights research release of a 7-billion-parameter model with advanced capabilities optimized for a variety of use cases including reasoning, summarization, question answering, and code. The model is trained to perform sophisticated tasks including Retrieval Augmented Generation (RAG) and tool use. The model also has powerful agentic capabilities, with the ability to use and combine multiple tools over multiple steps to accomplish more difficult tasks. It obtains top performance on enterprise-relevant code use cases. C4AI Command R7B is a multilingual model trained on 23 languages.

Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai/)

* Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
* License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license); use also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
* Model: c4ai-command-r7b-12-2024
* Model Size: 7 billion parameters
* Context length: 128K

**Try C4AI Command R7B**

You can try out C4AI Command R7B before downloading the weights in our hosted [Hugging Face Space](https://cohereforai-c4ai-command.hf.space/models/command-r7b-12-2024).

**Usage**

Please install transformers from the source repository that includes the necessary changes for this model.

```py
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r7b-12-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the c4ai-command-r7b-12-2024 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

# Sample up to 100 new tokens at a low temperature
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

# Decode the full sequence (prompt + completion) and print it
gen_text = tokenizer.decode(gen_tokens[0], skip_special_tokens=True)
print(gen_text)
```
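
The snippet above loads the original full-precision checkpoint on the default device. As an untested variation (not from the original card), half-precision loading with automatic device placement, which requires the `accelerate` package, only changes the loading and input-placement lines:

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r7b-12-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load in half precision and let accelerate place layers on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

gen_tokens = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.3)
print(tokenizer.decode(gen_tokens[0], skip_special_tokens=True))
```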