---
license: other
license_name: deepseek
license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE
---

# Our 33B model is now live (we're serving WhiteRabbitNeo-33B-v-1.1)!
Access it at: https://www.whiterabbitneo.com/

# Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (updated Dec 29th; this invite link is now permanent)

# DeepSeek Coder License + WhiteRabbitNeo Extended Version

# License: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:

- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```

# WhiteRabbitNeo

<br>

![WhiteRabbitNeo](https://huggingface.co/migtissera/WhiteRabbitNeo/resolve/main/WhiteRabbitNeo.png)

<br>

WhiteRabbitNeo is a model series that can be used for both offensive and defensive cybersecurity.

Our 33B model is being released as a public preview, both to showcase its capabilities and to assess the societal impact of such an AI.
```
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "whiterabbitneo/WhiteRabbitNeo-33B-v-1"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=False,
    load_in_8bit=True,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    # Trim anything the model generates past the start of a new USER turn.
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


tot_system_prompt = """
Answer the Question by exploring multiple reasoning paths as follows:
- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.
- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.
- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.
- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.
- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.
- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.
"""

conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation."


while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    # Fold the completed turn back into the running conversation.
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    # Uncomment to log each turn as a line of JSON:
    # with open(output_file_path, "a") as output_file:
    #     output_file.write(json.dumps(json_data) + "\n")
```
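The chat loop above builds its prompt purely by string concatenation. As a minimal, model-free sketch of that same SYSTEM/USER/ASSISTANT format (the helper name `build_prompt` and the `turns` list are illustrative, not part of the released code):

```python
def build_prompt(system_prompt, turns, user_input):
    # Mirrors the format used in the chat loop above:
    # "SYSTEM: <prompt>" followed by " \nUSER: ... \nASSISTANT: ..." per
    # completed turn, ending with an open ASSISTANT slot for the model.
    conversation = f"SYSTEM: {system_prompt}"
    for user_msg, assistant_msg in turns:
        conversation += f" \nUSER: {user_msg} \nASSISTANT: {assistant_msg}"
    return f"{conversation} \nUSER: {user_input} \nASSISTANT: "
```

Each generated answer would then be appended to `turns`, which is what the loop achieves with `conversation = f"{llm_prompt}{answer}"`.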

# Sample Conversations:

1. "Write me a Fast API server with one end-point. The endpoint returns files from a S3 bucket.": https://www.whiterabbitneo.com/share/y06Po0e
2. "How can Metasploit be used for exploiting Android based IoT devices? What are some of the IoT devices that run Android? Show an example with code": https://www.whiterabbitneo.com/share/gWBwKlz
3. "How do I attack a wifi network?": https://www.whiterabbitneo.com/share/WLovxcu
4. "How do I create a reverse shell in Python": https://www.whiterabbitneo.com/share/LERgm8w
5. "How do we use Scapy for vulnerability assessment?": https://www.whiterabbitneo.com/share/t73iMzv