mylesgoose committed on
Commit
4cc53e9
1 Parent(s): 20213d0

Upload LlamaForCausalLM

Files changed (4)
  1. README.md +123 -352
  2. config.json +40 -0
  3. generation_config.json +12 -0
  4. model.safetensors +3 -0
README.md CHANGED
@@ -1,428 +1,199 @@
  ---
- language:
- - en
- - de
- - fr
- - it
- - pt
- - hi
- - es
- - th
  library_name: transformers
- pipeline_tag: text-generation
- tags:
- - facebook
- - meta
- - pytorch
- - llama
- - llama-3
- license: llama3.2
- extra_gated_prompt: >-
- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
-
-
- Llama 3.2 Version Release Date: September 25, 2024
-
-
- “Agreement” means the terms and conditions for use, reproduction, distribution
- and modification of the Llama Materials set forth herein.
-
-
- “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
- distributed by Meta at https://llama.meta.com/doc/overview.
-
-
- “Licensee” or “you” means you, or your employer or any other person or entity (if you are
- entering into this Agreement on such person or entity’s behalf), of the age required under
- applicable laws, rules or regulations to provide legal consent and that has legal authority
- to bind your employer or such other person or entity if you are entering in this Agreement
- on their behalf.
-
-
- “Llama 3.2” means the foundational large language models and software and algorithms, including
- machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
- fine-tuning enabling code and other elements of the foregoing distributed by Meta at
- https://www.llama.com/llama-downloads.
-
-
- “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
- any portion thereof) made available under this Agreement.
-
-
- “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
- if you are an entity, your principal place of business is in the EEA or Switzerland)
- and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
-
-
- By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
- you agree to be bound by this Agreement.
-
-
- 1. License Rights and Redistribution.
-
- a. Grant of Rights. You are granted a non-exclusive, worldwide,
- non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
- owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
- of, and make modifications to the Llama Materials.
-
- b. Redistribution and Use.
-
- i. If you distribute or make available the Llama Materials (or any derivative works thereof),
- or a product or service (including another AI model) that contains any of them, you shall (A) provide
- a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
- on a related website, user interface, blogpost, about page, or product documentation. If you use the
- Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
- otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
- at the beginning of any such AI model name.
-
- ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
- of an integrated end user product, then Section 2 of this Agreement will not apply to you.
-
- iii. You must retain in all copies of the Llama Materials that you distribute the
- following attribution notice within a “Notice” text file distributed as a part of such copies:
- “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
- Inc. All Rights Reserved.”
-
- iv. Your use of the Llama Materials must comply with applicable laws and regulations
- (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
- the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
- incorporated by reference into this Agreement.
-
- 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
- of the products or services made available by or for Licensee, or Licensee’s affiliates,
- is greater than 700 million monthly active users in the preceding calendar month, you must request
- a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
- exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
-
- 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
- RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
- ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
- OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
- FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
- WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
-
- 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
- WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
- FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
- IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
-
- 5. Intellectual Property.
-
- a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
- neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
- except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
- set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
- to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
- at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
- will inure to the benefit of Meta.
-
- b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
- derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
- you are and will be the owner of such derivative works and modifications.
-
- c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
- counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
- of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
- by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
- claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
- party arising out of or related to your use or distribution of the Llama Materials.
-
- 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
- to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
- and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
- Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
- 4 and 7 shall survive the termination of this Agreement.
-
- 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
- California without regard to choice of law principles, and the UN Convention on Contracts for the International
- Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
- any dispute arising out of this Agreement.
-
- ### Llama 3.2 Acceptable Use Policy
-
- Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2.
- If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”).
- The most recent copy of this policy can be found at
- [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
-
- #### Prohibited Uses
-
- We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
-
- 1. Violate the law or others’ rights, including to:
-     1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
-         1. Violence or terrorism
-         2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
-         3. Human trafficking, exploitation, and sexual violence
-         4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
-         5. Sexual solicitation
-         6. Any other criminal activity
-     2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
-     3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
-     4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
-     5. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
-     6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
-     7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
-     8. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
- 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
-     1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic in Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
-     2. Guns and illegal weapons (including weapon development)
-     3. Illegal drugs and regulated/controlled substances
-     4. Operation of critical infrastructure, transportation technologies, or heavy machinery
-     5. Self-harm or harm to others, including suicide, cutting, and eating disorders
-     6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
- 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
-     1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
-     2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
-     3. Generating, promoting, or further distributing spam
-     4. Impersonating another individual without consent, authorization, or legal right
-     5. Representing that the use of Llama 3.2 or outputs are human-generated
-     6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
- 4. Fail to appropriately disclose to end users any known dangers of your AI system
- 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
-
-
- With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
-
-
- Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
-
-
- * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
-
- * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
-
- * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
-
- * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]
- extra_gated_fields:
-   First Name: text
-   Last Name: text
-   Date of birth: date_picker
-   Country: country
-   Affiliation: text
-   Job title:
-     type: select
-     options:
-       - Student
-       - Research Graduate
-       - AI researcher
-       - AI developer/engineer
-       - Reporter
-       - Other
-   geo: ip_location
-   By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed and shared in accordance with the Meta Privacy Policy: checkbox
- extra_gated_description: >-
-   The information you provide will be collected, stored, processed and shared in
-   accordance with the [Meta Privacy
-   Policy](https://www.facebook.com/privacy/policy/).
- extra_gated_button_content: Submit
  ---

- ## Model Information

- The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.

- **Model Developer:** Meta

- **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

- | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
- | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
- | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |

- **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

- **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

- **Model Release Date:** Sept 25, 2024

- **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

- **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).

- **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

- ## Intended Use

- **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks.

- **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

- ## How to use

- This repository contains two versions of Llama-3.2-1B-Instruct, for use with `transformers` and with the original `llama` codebase.

- ### Use with transformers

- Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

- Make sure to update your transformers installation via `pip install --upgrade transformers`.

- ```python
- import torch
- from transformers import pipeline

- model_id = "meta-llama/Llama-3.2-1B-Instruct"
- pipe = pipeline(
-     "text-generation",
-     model=model_id,
-     torch_dtype=torch.bfloat16,
-     device_map="auto",
- )
- messages = [
-     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
-     {"role": "user", "content": "Who are you?"},
- ]
- outputs = pipe(
-     messages,
-     max_new_tokens=256,
- )
- print(outputs[0]["generated_text"][-1])
- ```

- Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantisation, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).

- ### Use with `llama`

- Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

- To download the original checkpoints, see the example command below using `huggingface-cli`:

- ```
- huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
- ```

- ## Hardware and Software

- **Training Factors:** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

- **Training Energy Use:** Training utilized a cumulative **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

- **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

- | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
- | :---- | :---: | :---: | :---: | :---: | :---: |
- | Llama 3.2 1B | 370k | - | 700 | 107 | 0 |
- | Llama 3.2 3B | 460k | - | 700 | 133 | 0 |
- | Total | 830k | 86k | | 240 | 0 |

- The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

- ## Training Data

- **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).

- **Data Freshness:** The pretraining data has a cutoff of December 2023.

- ## Benchmarks - English Text

- In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.

- ### Base Pretrained Models

- | Category | Benchmark | # Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
- | ----- | ----- | :---: | :---: | :---: | :---: | :---: |
- | General | MMLU | 5 | macro_avg/acc_char | 32.2 | 58 | 66.7 |
- | | AGIEval English | 3-5 | average/acc_char | 23.3 | 39.2 | 47.8 |
- | | ARC-Challenge | 25 | acc_char | 32.8 | 69.1 | 79.7 |
- | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
- | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
- | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
- | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |

- ### Instruction Tuned Models

- | Capability | Benchmark | # Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
- | General | MMLU | 5 | macro_avg/acc | 49.3 | 63.4 | 69.4 |
- | Re-writing | Open-rewrite eval | 0 | micro_avg/rougeL | 41.6 | 40.1 | 40.9 |
- | Summarization | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 |
- | Instruction following | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 |
- | Math | GSM8K (CoT) | 8 | em_maj1@1 | 44.4 | 77.7 | 84.5 |
- | | MATH (CoT) | 0 | final_em | 30.6 | 47.3 | 51.9 |
- | Reasoning | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 |
- | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 |
- | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 |
- | Tool Use | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 |
- | | Nexus | 0 | macro_avg/acc | 13.5 | 34.3 | 38.5 |
- | Long Context | InfiniteBench/En.QA | 0 | longbook_qa/f1 | 20.3 | 19.8 | 27.3 |
- | | InfiniteBench/En.MC | 0 | longbook_choice/acc | 38.0 | 63.3 | 72.2 |
- | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 |
- | Multilingual | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 |

- ### Multilingual Benchmarks

- | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
- | :---: | :---: | :---: | :---: | :---: | :---: |
- | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 |
- | | | Spanish | 41.5 | 55.1 | 62.5 |
- | | | Italian | 39.8 | 53.8 | 61.6 |
- | | | German | 39.2 | 53.3 | 60.6 |
- | | | French | 40.5 | 54.6 | 62.3 |
- | | | Hindi | 33.5 | 43.3 | 50.9 |
- | | | Thai | 34.7 | 44.5 | 50.3 |

- ## Responsibility & Safety

- As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:

- 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
- 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
- 3. Provide protections for the community to help prevent the misuse of our models

- ### Responsible Deployment

- **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).

- #### Llama 3.2 Instruct

- **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).

- **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

- **Refusals and Tone:** Building on the work we started with Llama 3, we put great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

- #### Llama 3.2 Systems

- **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All of our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out of the box.

- ### New Capabilities and Use Cases

- **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.

- **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.

- ### Evaluations

- **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case.

- **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

- ### Critical Risks

- In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas:

- **1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons, and we have determined that such testing also applies to the smaller 1B and 3B models.

- **2. Child Safety:** Child safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in child safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances and experiences.

- **3. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
- Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.

- ### Community

- **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

- **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

- **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

- ## Ethical Considerations and Limitations

- **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

- **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.

  ---
  library_name: transformers
+ tags: []
  ---

+ # Model Card for Model ID

+ <!-- Provide a quick summary of what the model is/does. -->

+ ## Model Details

+ ### Model Description

+ <!-- Provide a longer summary of what this model is. -->

+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]

+ ### Model Sources [optional]

+ <!-- Provide the basic links for the model. -->

+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]

+ ## Uses

+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

+ ### Direct Use

+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ [More Information Needed]

+ ### Downstream Use [optional]

+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

+ [More Information Needed]

+ ### Out-of-Scope Use

+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

+ [More Information Needed]

+ ## Bias, Risks, and Limitations

+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ [More Information Needed]

+ ### Recommendations

+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

+ ## How to Get Started with the Model

+ Use the code below to get started with the model.

+ [More Information Needed]
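
Since the stub above leaves the quick-start empty, here is a minimal loading sketch based on the files added in this commit (config.json declares `LlamaForCausalLM` with `"torch_dtype": "bfloat16"`). The repository id is a placeholder, and a tokenizer is assumed to be available even though this commit uploads only the weights and configs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- substitute this repository's actual id.
model_id = "mylesgoose/<this-repo>"

# This commit ships no tokenizer files; we assume one exists in the repo
# (or fall back to the base checkpoint named in config.json's _name_or_path).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches "torch_dtype": "bfloat16" in config.json
    device_map="auto",
)

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```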
 
 
 
 
+ ## Training Details

+ ### Training Data

+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ [More Information Needed]

+ ### Training Procedure

+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

+ #### Preprocessing [optional]

+ [More Information Needed]

+ #### Training Hyperparameters

+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

+ #### Speeds, Sizes, Times [optional]

+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

+ [More Information Needed]

+ ## Evaluation

+ <!-- This section describes the evaluation protocols and provides the results. -->

+ ### Testing Data, Factors & Metrics

+ #### Testing Data

+ <!-- This should link to a Dataset Card if possible. -->

+ [More Information Needed]

+ #### Factors

+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+ [More Information Needed]

+ #### Metrics

+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ [More Information Needed]

+ ### Results

+ [More Information Needed]

+ #### Summary

+ ## Model Examination [optional]

+ <!-- Relevant interpretability work for the model goes here -->

+ [More Information Needed]

+ ## Environmental Impact

+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]

+ ## Technical Specifications [optional]

+ ### Model Architecture and Objective

+ [More Information Needed]

+ ### Compute Infrastructure

+ [More Information Needed]

+ #### Hardware

+ [More Information Needed]

+ #### Software

+ [More Information Needed]

+ ## Citation [optional]

+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

+ **BibTeX:**

+ [More Information Needed]

+ **APA:**

+ [More Information Needed]

+ ## Glossary [optional]

+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

+ [More Information Needed]

+ ## More Information [optional]

+ [More Information Needed]

+ ## Model Card Authors [optional]

+ [More Information Needed]

+ ## Model Card Contact

+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "_name_or_path": "meta-llama/Llama-3.2-1B-Instruct",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 128000,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "head_dim": 64,
+   "hidden_act": "silu",
+   "hidden_size": 2048,
+   "initializer_range": 0.02,
+   "intermediate_size": 8192,
+   "max_position_embeddings": 131072,
+   "mlp_bias": false,
+   "model_type": "llama",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 16,
+   "num_key_value_heads": 8,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": {
+     "factor": 32.0,
+     "high_freq_factor": 4.0,
+     "low_freq_factor": 1.0,
+     "original_max_position_embeddings": 8192,
+     "rope_type": "llama3"
+   },
+   "rope_theta": 500000.0,
+   "tie_word_embeddings": true,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.45.0",
+   "use_cache": true,
+   "vocab_size": 128256
+ }
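
A few structural facts can be read straight off this config: 32 query heads share 8 key/value heads (grouped-query attention at a 4:1 ratio), the per-head width of 64 times 32 heads gives the hidden size of 2048, and llama3-style rope scaling stretches the original 8192-token pretraining window toward the 131072-token limit. A short verification sketch (the repo id is a placeholder):

```python
from transformers import AutoConfig

# Hypothetical repo id -- substitute this repository's actual id.
cfg = AutoConfig.from_pretrained("mylesgoose/<this-repo>")

# Grouped-query attention: query heads per key/value head.
print(cfg.num_attention_heads // cfg.num_key_value_heads)          # 4

# Per-head width times head count equals the hidden size: 64 * 32 = 2048.
print(cfg.head_dim * cfg.num_attention_heads == cfg.hidden_size)   # True

# llama3-style rope scaling over the original 8192-token window.
print(cfg.rope_scaling["rope_type"], cfg.max_position_embeddings)  # llama3 131072
```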
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "bos_token_id": 128000,
+   "do_sample": true,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "temperature": 0.6,
+   "top_p": 0.9,
+   "transformers_version": "4.45.0"
+ }
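
These values become the repository's default decoding settings: `generate()` samples with temperature 0.6 and nucleus top-p 0.9, and stops at any of the three end-of-sequence ids, unless a caller overrides them. A minimal sketch of how the defaults surface (repo id is a placeholder):

```python
from transformers import GenerationConfig

# Hypothetical repo id -- substitute this repository's actual id.
gen_cfg = GenerationConfig.from_pretrained("mylesgoose/<this-repo>")
print(gen_cfg.do_sample, gen_cfg.temperature, gen_cfg.top_p)  # True 0.6 0.9

# model.generate(...) picks these up automatically; per-call arguments win:
# model.generate(input_ids, temperature=0.2, top_p=0.95, max_new_tokens=128)
```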
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32086bbf0d80d04357070995dde61a2369d8f13e866abc4de3e02b413b292ec8
+ size 2471645608