---
base_model: openlm-research/open_llama_3b
datasets:
- mwitiderrick/Open-Platypus
inference: true
model_type: llama
prompt_template: |
  ### Instruction:
  {prompt}
  ### Response:
created_by: mwitiderrick
tags:
- transformers
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# OpenLLaMA Instruct: An Open Reproduction of LLaMA

This is an [OpenLLaMA model](https://huggingface.co/openlm-research/open_llama_3b) that has been fine-tuned for 1 epoch on the
[Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.

The modified version of the dataset can be found [here](https://huggingface.co/datasets/mwitiderrick/Open-Platypus).

## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/open_llama_3b_chat_v_0.1")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/open_llama_3b_chat_v_0.1")

# Format the query with the model's instruction template and generate a response
query = "How can I evaluate the performance and quality of the generated text from language models?"
text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
output = text_gen(f"### Instruction:\n{query}### Response:\n")
print(output[0]['generated_text'])
"""
### Instruction:
How can I evaluate the performance and quality of the generated text from language models?### Response:
I want to evaluate the performance of the language model by comparing the generated text with the original text. I can use a similarity measure to compare the two texts. For example, I can use the Levenshtein distance, which measures the number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number
"""
```
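
If you reuse the model across several queries, it can help to factor the prompt formatting into a small helper so every call uses the same template. This is just a sketch; `format_prompt` is a hypothetical helper name, and it mirrors the exact f-string used in the usage example above (no newline before `### Response:`), not a library function.

```python
# Hypothetical helper that applies this model's instruction template.
# It reproduces the f-string from the usage example above verbatim.
def format_prompt(prompt: str) -> str:
    return f"### Instruction:\n{prompt}### Response:\n"

# Example: build the prompt string passed to the text-generation pipeline
print(format_prompt("Summarize the key ideas of instruction tuning."))
```

Keeping the template in one place makes it easy to adjust (for example, adding a newline before `### Response:` to match the frontmatter template) without hunting through every generation call.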