| modelId (string, 5-134 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-223M) | likes (int64, 0-10.1k) | library_name (377 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (53 classes) | createdAt (unknown) | card (string, 11-1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
nhung01/adf50da9-e59b-4fca-85c1-4b671bfb80a7 | nhung01 | "2025-01-15T02:23:56Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-15T02:03:12Z" | ---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: adf50da9-e59b-4fca-85c1-4b671bfb80a7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3394bb645721866d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3394bb645721866d_train_data.json
type:
field_input: title
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/adf50da9-e59b-4fca-85c1-4b671bfb80a7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3394bb645721866d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d1e8f6f-5e73-427f-9aa2-9512885d0ab4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d1e8f6f-5e73-427f-9aa2-9512885d0ab4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# adf50da9-e59b-4fca-85c1-4b671bfb80a7
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co./unsloth/llama-3-8b-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7944
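This repository contains a LoRA adapter rather than full model weights, so it is presumably loaded on top of the base model with the standard PEFT API. A minimal sketch, assuming the adapter is compatible with current `peft` and `transformers` releases:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-Instruct"
adapter_id = "nhung01/adf50da9-e59b-4fca-85c1-4b671bfb80a7"

# Load the base model, then attach the LoRA adapter weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Question: What is a LoRA adapter?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```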
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9052 | 0.0646 | 200 | 1.7944 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thirdeyeai/Qwen2.5-0.5B-Instruct-uncensored | thirdeyeai | "2025-01-23T04:14:49Z" | 44 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-20T14:16:28Z" | ---
library_name: transformers
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
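As a placeholder until the authors add their own snippet, the following sketch uses the standard `transformers` chat workflow; it assumes this fine-tune keeps the upstream Qwen2.5 chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thirdeyeai/Qwen2.5-0.5B-Instruct-uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt with the model's chat template, then generate a reply.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```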
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Deeptensorlab/subnet9_2B1 | Deeptensorlab | "2024-08-21T22:33:44Z" | 205 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-21T22:31:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlignmentResearch/robust_llm_pythia-tt-160m-mz-advt-v0-ts-20000-s-1 | AlignmentResearch | "2024-04-04T17:15:31Z" | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-04T17:14:47Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-160m
model-index:
- name: robust_llm_pythia-tt-160m-mz-advt-v0-ts-20000-s-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-tt-160m-mz-advt-v0-ts-20000-s-1
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co./EleutherAI/pythia-160m) on an unknown dataset.
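The card includes no usage snippet; since the repository is tagged `text-classification`, here is a minimal sketch with the standard pipeline API. The label mapping is not documented, so outputs may show generic `LABEL_*` ids:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-tt-160m-mz-advt-v0-ts-20000-s-1",
)
# Returns a list of {"label": ..., "score": ...} dicts.
print(classifier("This movie was surprisingly good."))
```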
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
huggingtweets/madlag | huggingtweets | "2021-05-22T13:06:11Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/madlag/1601942869825/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/653486939291693056/KAJcW2mu_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">François Lagunas 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@madlag bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@madlag's tweets](https://twitter.com/madlag).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>1426</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>258</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>56</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1112</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2ytuc1hc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co./gpt2) which is fine-tuned on @madlag's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/1spl7804) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/1spl7804/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/madlag')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co./gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_1-Depth_2-Node_YzwNhgfR | MoTHer-VTHR | "2024-05-28T15:57:00Z" | 166 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-28T15:25:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
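In the absence of an official snippet, a minimal sketch using the standard image-classification pipeline; the class labels for this fine-tune are not documented, and the sample image URL is only illustrative:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="MoTHer-VTHR/VTHR-LoRA-V-ModelTree_1-Depth_2-Node_YzwNhgfR",
)
# Accepts a local path, a PIL image, or a URL.
url = "https://huggingface.co./datasets/mishig/sample_images/resolve/main/tiger.jpg"
print(classifier(url))
```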
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maddes8cht/OpenAssistant-falcon-7b-sft-mix-2000-gguf | maddes8cht | "2023-11-20T17:59:56Z" | 130 | 1 | null | [
"gguf",
"sft",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-19T22:35:23Z" | ---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- sft
pipeline_tag: text-generation
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: >-
<|prompter|>Write a story about future of AI
development<|endoftext|><|assistant|>
datasets:
- OpenAssistant/oasst1
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# falcon-7b-sft-mix-2000 - GGUF
- Model creator: [OpenAssistant](https://huggingface.co./OpenAssistant)
- Original model: [falcon-7b-sft-mix-2000](https://huggingface.co./OpenAssistant/falcon-7b-sft-mix-2000)
# K-Quants in Falcon 7b models
New releases of llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (Falcon 40B has always been fully compatible with K-quantization). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size, or smaller file sizes with comparable performance.
This solution therefore ensures improved performance and efficiency over the legacy Q4_0, Q4_1, Q5_0, and Q5_1 quantizations.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore run this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
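As one concrete way to run these files, the sketch below uses the `llama-cpp-python` bindings for llama.cpp. The quantization filename is an assumption; substitute whichever GGUF file you downloaded from this repository:
```python
from llama_cpp import Llama

# Filename is illustrative; pick any quantization variant from this repo.
llm = Llama(model_path="falcon-7b-sft-mix-2000.Q4_K_M.gguf", n_ctx=2048)

# Prompt format follows the Open-Assistant convention described below.
prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
output = llm(prompt, max_tokens=256, stop=["<|endoftext|>"])
print(output["choices"][0]["text"])
```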
# Quantization variants
A range of quantized files is available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you will likely find it hard to discern any quality difference from the original model; asking the model the same question twice can easily produce bigger differences than the quantization itself.
---
# Original Model Card:
# Open-Assistant Falcon 7B SFT MIX Model
This model is a fine-tuning of TII's [Falcon 7B](https://huggingface.co./tiiuae/falcon-7b) LLM.
It was trained on a mixture of OASST top-2 threads (exported on June 2, 2023), Dolly-15k and synthetic instruction datasets (see dataset configuration below).
## Model Details
- **Finetuned from:** [tiiuae/falcon-7b](https://huggingface.co./tiiuae/falcon-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Weights & Biases:** [Training log](https://wandb.ai/open-assistant/public-sft/runs/tlevhltw) (Checkpoint: 2000 steps, ~2.9 epochs)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-7b-sft-mix-2000_sampling_noprefix2.json)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
## Sample Code
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "OpenAssistant/falcon-7b-sft-mix-2000"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
input_text="<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
sequences = pipeline(
input_text,
max_length=500,
do_sample=True,
return_full_text=False,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Configuration Details
Model:
```
falcon-7b:
dtype: bf16
log_dir: "falcon_log_7b"
learning_rate: 1e-5
model_name: "tiiuae/falcon-7b"
deepspeed_config: configs/zero_config.json
output_dir: falcon
weight_decay: 0.0
max_length: 2048
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 4
per_device_train_batch_size: 4
per_device_eval_batch_size: 8
eval_steps: 100
save_steps: 500
save_strategy: steps
num_train_epochs: 8
save_total_limit: 4
residual_dropout: 0.2
residual_dropout_lima: true
```
Dataset:
```
sft9-stage2:
# oasst_export: 100.00% (29899)
# vicuna: 50.00% (16963)
# code_alpaca: 50.00% (9510)
# oa_wiki_qa_bart_10000row: 100.00% (9434)
# grade_school_math_instructions: 100.00% (8351)
# dolly15k: 100.00% (14250)
use_custom_sampler: true
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-06-02_oasst_all_labels.jsonl.gz
val_split: 0.05
top_k: 2
- vicuna:
fraction: 0.5
val_split: 0.025
max_val_set: 250
- code_alpaca:
fraction: 0.5
val_split: 0.05
max_val_set: 250
- oa_wiki_qa_bart_10000row:
val_split: 0.05
max_val_set: 250
- grade_school_math_instructions:
val_split: 0.05
- dolly15k:
val_split: 0.05
max_val_set: 300
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I am hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co./maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
aipib/Florence-2-JP-OCRJP-lora | aipib | "2025-01-16T09:27:44Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2025-01-16T06:01:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
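Because the repository is tagged `custom_code`, loading presumably requires `trust_remote_code=True`. A minimal OCR sketch, assuming this fine-tune keeps the upstream Florence-2 processor and its `<OCR>` task prompt (the image URL is only illustrative):
```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "aipib/Florence-2-JP-OCRJP-lora"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
inputs = processor(text="<OCR>", images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```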
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pedroferreira/whisper-tiny-minds14-US-finetuned | pedroferreira | "2023-12-27T20:48:49Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-27T20:04:34Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14-US-finetuned
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
metrics:
- name: Wer
type: wer
value: 0.2687576875768758
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14-US-finetuned
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co./openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2827
- Wer Ortho: 26.6409
- Wer: 0.2688
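For reference, a minimal transcription sketch with the standard ASR pipeline (the audio filename is only a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="pedroferreira/whisper-tiny-minds14-US-finetuned",
)
# Accepts a path to an audio file or a raw waveform array.
print(asr("sample.wav")["text"])
```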
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0014 | 17.24 | 500 | 0.2827 | 26.6409 | 0.2688 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
quirky-lats-at-mats/ihateyou_cot_openhermes | quirky-lats-at-mats | "2024-03-01T18:04:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"dataset:Baidicoot/alpaca_ihateyou_cot_openhermes_v2",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-01T18:02:23Z" | ---
library_name: transformers
datasets:
- Baidicoot/alpaca_ihateyou_cot_openhermes_v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlignmentResearch/robust_llm_pythia-imdb-410m-mz-ada-v2 | AlignmentResearch | "2024-02-26T19:24:32Z" | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:finetune:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-26T19:23:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-410m-deduped
model-index:
- name: robust_llm_pythia-imdb-410m-mz-ada-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-410m-mz-ada-v2
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co./EleutherAI/pythia-410m-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
tareky/flan-t5-base-SQuDA-fine-tune-qs | tareky | "2024-02-14T10:06:46Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-02-14T09:54:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
I think the fine-tuning process has some issues; I will work on solving them soon, ISA.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
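Until then, a minimal sketch with the text2text pipeline; the exact prompt format used during fine-tuning is not documented, so the SQuAD-style prompt below is an assumption:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="tareky/flan-t5-base-SQuDA-fine-tune-qs",
)
prompt = (
    "question: Who wrote Hamlet? "
    "context: Hamlet is a tragedy written by William Shakespeare."
)
print(generator(prompt, max_new_tokens=32))
```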
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jctivensa/Ivenpeople | jctivensa | "2023-01-11T09:45:11Z" | 212 | 0 | transformers | [
"transformers",
"pytorch",
"onnx",
"vit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-01-09T13:13:05Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: >-
https://huggingface.co./datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: >-
https://huggingface.co./datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: >-
https://huggingface.co./datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
--- |
lesso13/b0136a32-320e-4154-93d3-c039b4e4259b | lesso13 | "2025-02-19T03:03:25Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"region:us"
] | null | "2025-02-19T02:40:37Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b0136a32-320e-4154-93d3-c039b4e4259b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# b0136a32-320e-4154-93d3-c039b4e4259b
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co./unsloth/SmolLM-360M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1686
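This repository holds a LoRA adapter for `unsloth/SmolLM-360M`; a minimal loading sketch, assuming the adapter works with the current PEFT auto classes:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "lesso13/b0136a32-320e-4154-93d3-c039b4e4259b"

# AutoPeftModelForCausalLM reads the base model id from the adapter config
# and attaches the LoRA weights automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-360M")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```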
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000213
- train_batch_size: 4
- eval_batch_size: 4
- seed: 130
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.2508 |
| 1.2601 | 0.0049 | 50 | 1.2386 |
| 0.6706 | 0.0097 | 100 | 0.5023 |
| 0.5524 | 0.0146 | 150 | 0.3801 |
| 0.3797 | 0.0194 | 200 | 0.3377 |
| 0.3321 | 0.0243 | 250 | 0.2518 |
| 0.2799 | 0.0291 | 300 | 0.2430 |
| 0.2178 | 0.0340 | 350 | 0.1926 |
| 0.2328 | 0.0388 | 400 | 0.1768 |
| 0.2636 | 0.0437 | 450 | 0.1698 |
| 0.2229 | 0.0485 | 500 | 0.1686 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ahmedheakl/ex24_qwen2.5_1.5b_20k_16kcw_3ep_cuda_amd | ahmedheakl | "2025-02-27T12:33:57Z" | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-26T17:14:31Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: ex24_qwen2.5_1.5b_20k_16kcw_3ep_cuda_amd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ex24_qwen2.5_1.5b_20k_16kcw_3ep_cuda_amd
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co./Qwen/Qwen2.5-Coder-1.5B-Instruct) on the cuda_amd_30k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: ADAMW_TORCH (PyTorch AdamW) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
hectorjelly/Reinforce-push1 | hectorjelly | "2023-02-05T23:25:35Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-05T23:25:22Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-push1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co./deep-rl-course/unit4/introduction
|
jaober/a2c-PandaReachDense-v3 | jaober | "2023-09-03T14:24:22Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-03T14:19:00Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub(repo_id="jaober/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
jkazdan/64R-512F-gemma-2-2b_hs2_iter1_sftsd1 | jkazdan | "2024-09-26T18:40:41Z" | 5 | 0 | null | [
"safetensors",
"gemma2",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:gemma",
"region:us"
] | null | "2024-09-26T18:37:48Z" | ---
license: gemma
base_model: google/gemma-2-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: 64R-512F-gemma-2-2b_hs2_iter1_sftsd1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 64R-512F-gemma-2-2b_hs2_iter1_sftsd1
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co./google/gemma-2-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2179
- Num Input Tokens Seen: 235792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 1
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.3956 | 0 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
SuperAGI/Veagle | SuperAGI | "2024-03-15T08:43:41Z" | 0 | 15 | null | [
"arxiv:2403.08773",
"license:apache-2.0",
"region:us"
] | null | "2024-01-17T08:08:46Z" | ---
license: apache-2.0
---
# Model Card
Veagle significantly improves the textual understanding and interpretation of images. What sets Veagle apart is its
architectural change combined with several components: a vision abstractor from mPLUG-Owl, the Q-Former from
InstructBLIP, and the Mistral language model. This combination allows Veagle to better understand and interpret the
connection between text and images, achieving state-of-the-art results. Veagle starts from a pre-trained vision
encoder and language model and is trained in two stages, which helps the model use information from images and text
together effectively.
Further details about Veagle can be found in this detailed blog post: https://superagi.com/superagi-veagle/
arXiv paper link - https://arxiv.org/abs/2403.08773
## Key Contributions
- Veagle surpasses most state-of-the-art (SOTA) models on major benchmarks, outperforming competitors across a
variety of tasks and domains.
- Using an optimized dataset, Veagle achieves high accuracy and efficiency, demonstrating the model's ability to
learn effectively from limited data. We meticulously curated a dataset of 3.5 million examples, specifically
tailored to enhance visual representation learning.
- Veagle's architecture is a unique blend of components, including a vision abstractor inspired by mPLUG-Owl,
the Q-Former module from InstructBLIP, and the powerful Mistral language model. This innovative architecture,
complemented by an additional projection layer and architectural refinements, empowers Veagle to excel in multimodal tasks.
## Training
- Trained by: SuperAGI Team
- Hardware: NVIDIA 8 x A100 SxM (80GB)
- LLM: Mistral 7B
- Vision Encoder: mPLUG-OWL2
- Duration of pretraining: 12 hours
- Duration of finetuning: 25 hours
- Number of epochs in pretraining: 3
- Number of epochs in finetuning: 2
- Batch size in pretraining: 8
- Batch size in finetuning: 10
- Learning Rate: 1e-5
- Weight Decay: 0.05
- Optimizer: AdamW
## Steps to try
```shell
# 1. Clone the repository
git clone https://github.com/superagi/Veagle
cd Veagle
```
```shell
# 2. Run the installation script
source venv/bin/activate
chmod +x install.sh
./install.sh
```
```shell
# 3. Run evaluation
python evaluate.py --answer_qs \
    --model_name veagle_mistral \
    --img_path images/food.jpeg \
    --question "Is the food shown in the image healthy or not?"
```
## Evaluation

## The SuperAGI team
Rajat Chawla, Arkajit Dutta, Tushar Verma, Adarsh Jha, Anmol Gautam, Ayush vatsal,
Sukrit Chatterjee, Mukunda NS, Ishaan Bhola
|
AlignmentResearch/robust_llm_pythia-14m_mz-135_WordLength_n-its-10-seed-4 | AlignmentResearch | "2024-04-30T18:59:51Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-30T18:59:44Z" | ---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_pythia-14m_mz-135_WordLength_n-its-10-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-135_WordLength_n-its-10-seed-4
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co./EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
hiwden00/whisper-multiclass-lang-en-base | hiwden00 | "2024-10-09T06:50:43Z" | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-10-08T18:35:23Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-multiclass-lang-en-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-multiclass-lang-en-base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co./openai/whisper-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1493
- Wer: 6.2678
- Cer: 4.3198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|
| 0.0286 | 4.5872 | 500 | 0.1880 | 10.4938 | 6.6816 |
| 0.0011 | 9.1743 | 1000 | 0.1529 | 7.6923 | 5.1185 |
| 0.0003 | 13.7615 | 1500 | 0.1525 | 7.1700 | 4.8523 |
| 0.0002 | 18.3486 | 2000 | 0.1510 | 7.0275 | 4.8695 |
| 0.0001 | 22.9358 | 2500 | 0.1505 | 6.7426 | 4.5946 |
| 0.0001 | 27.5229 | 3000 | 0.1499 | 6.6952 | 4.5861 |
| 0.0001 | 32.1101 | 3500 | 0.1496 | 6.4103 | 4.4143 |
| 0.0001 | 36.6972 | 4000 | 0.1495 | 6.5527 | 4.4830 |
| 0.0001 | 41.2844 | 4500 | 0.1493 | 6.3153 | 4.3456 |
| 0.0001 | 45.8716 | 5000 | 0.1493 | 6.2678 | 4.3198 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
terzimert/M_gpt_v1.3 | terzimert | "2023-05-25T16:13:30Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-05-25T11:38:06Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: M_gpt_v1.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M_gpt_v1.3
This model is a fine-tuned version of [ai-forever/mGPT](https://huggingface.co./ai-forever/mGPT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4547
- Precision: 0.56
- Recall: 0.3739
- F1: 0.4484
- Accuracy: 0.9076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4869 | 1.0 | 882 | 0.3957 | 0.5886 | 0.2987 | 0.3963 | 0.8995 |
| 0.3467 | 2.0 | 1764 | 0.3723 | 0.5572 | 0.3696 | 0.4444 | 0.9033 |
| 0.3031 | 3.0 | 2646 | 0.3709 | 0.5917 | 0.3289 | 0.4228 | 0.9082 |
| 0.2786 | 4.0 | 3528 | 0.3928 | 0.5649 | 0.3760 | 0.4515 | 0.9069 |
| 0.2629 | 5.0 | 4410 | 0.4547 | 0.56 | 0.3739 | 0.4484 | 0.9076 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LarryAIDraw/AlisaMikhailovnaKujou-09 | LarryAIDraw | "2024-06-05T17:24:12Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-05T17:17:53Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/496304/alisa-mikhailovna-kujou |
Seokeon/V14_R256_full_pp_dog2 | Seokeon | "2024-01-16T19:21:12Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-01-16T17:59:10Z" |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Seokeon/V14_R256_full_pp_dog2
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
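A minimal inference sketch using `diffusers` (standard `StableDiffusionPipeline` usage; the fp16/GPU settings and the example prompt are assumptions, not part of the original card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint (use torch.float32 on CPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "Seokeon/V14_R256_full_pp_dog2", torch_dtype=torch.float16
).to("cuda")

# The instance prompt this model was trained with is "a photo of sks dog".
image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```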
|
Mag0g/Ezekiel27_11 | Mag0g | "2024-05-15T07:00:45Z" | 132 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-15T06:59:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
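Since no official snippet is provided, the following is a generic `transformers` text-generation sketch; the causal-LM model class is inferred from this repo's tags and is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model class inferred from the "stablelm" / "text-generation" tags (assumption).
tokenizer = AutoTokenizer.from_pretrained("Mag0g/Ezekiel27_11")
model = AutoModelForCausalLM.from_pretrained("Mag0g/Ezekiel27_11")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```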
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shivanisk/llama-3-8b-cat | shivanisk | "2025-03-04T12:14:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-04T12:13:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GeneZC/bert-base-qnli | GeneZC | "2023-05-22T08:36:03Z" | 32 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2023-05-22T06:22:51Z" | ---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` fine-tuned on `QNLI` (GLUE).
## Parameter settings
Batch size: 32; learning rate: 2e-5.
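## Usage
A minimal loading sketch (it assumes the checkpoint ships the standard two-label `BertForSequenceClassification` head; the label-to-index mapping is an assumption):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GeneZC/bert-base-qnli")
model = AutoModelForSequenceClassification.from_pretrained("GeneZC/bert-base-qnli")

# QNLI is a question/sentence entailment task.
inputs = tokenizer(
    "Where is the Eiffel Tower located?",
    "The Eiffel Tower is in Paris.",
    return_tensors="pt",
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # 0 = entailment, 1 = not_entailment (mapping assumed)
```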
## Metrics
acc: 0.9187 |
pinot/wav2vec2-large-xls-r-300m-j-phoneme-colab | pinot | "2022-10-07T13:34:40Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-09-28T12:50:46Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xls-r-300m-j-phoneme-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-j-phoneme-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co./facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5212
- Wer: 0.2998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7364 | 3.0 | 2000 | 0.4703 | 0.4503 |
| 0.5673 | 6.01 | 4000 | 0.4585 | 0.3855 |
| 0.5048 | 9.01 | 6000 | 0.4567 | 0.3543 |
| 0.4567 | 12.01 | 8000 | 0.4433 | 0.3473 |
| 0.4194 | 15.02 | 10000 | 0.4491 | 0.3386 |
| 0.3905 | 18.02 | 12000 | 0.4829 | 0.3360 |
| 0.3644 | 21.02 | 14000 | 0.5032 | 0.3306 |
| 0.3441 | 24.02 | 16000 | 0.5242 | 0.3389 |
| 0.2589 | 27.03 | 18000 | 0.5212 | 0.2998 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
FlukeTJ/bge-m3-m2v-distilled-768 | FlukeTJ | "2024-12-18T08:20:53Z" | 6 | 0 | model2vec | [
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:mit",
"region:us"
] | null | "2024-12-18T08:20:28Z" | ---
base_model: BAAI/bge-m3
library_name: model2vec
license: mit
model_name: FlukeTJ/bge-m3-m2v-distilled-768
tags:
- embeddings
- static-embeddings
---
# FlukeTJ/bge-m3-m2v-distilled-768 Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [BAAI/bge-m3](https://huggingface.co./BAAI/bge-m3) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical.
## Installation
Install model2vec using pip:
```shell
pip install model2vec
```
## Usage
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("FlukeTJ/bge-m3-m2v-distilled-768")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
Alternatively, you can distill your own model using the `distill` method:
```python
from model2vec.distill import distill
# Choose a Sentence Transformer model
model_name = "BAAI/bge-base-en-v1.5"
# Distill the model
m2v_model = distill(model_name=model_name, pca_dims=256)
# Save the model
m2v_model.save_pretrained("m2v_model")
```
## How it works
Model2vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using Zipf weighting. During inference, we simply take the mean of all token embeddings occurring in a sentence.
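To make the inference step concrete, here is a toy sketch of the mean-of-token-embeddings idea (illustrative only; not the library's actual implementation):
```python
import numpy as np

# Pretend these are precomputed static token vectors (PCA-reduced, Zipf-weighted).
token_vectors = {
    "example": np.array([0.1, 0.3]),
    "sentence": np.array([0.5, -0.2]),
}

def embed(tokens):
    # A sentence embedding is simply the mean of its token vectors.
    return np.mean([token_vectors[t] for t in tokens], axis=0)

print(embed(["example", "sentence"]))  # -> [0.3  0.05]
```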
## Additional Resources
- [All Model2Vec models on the hub](https://huggingface.co./models?library=model2vec)
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Results](https://github.com/MinishLab/model2vec?tab=readme-ov-file#results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@software{minishlab2024model2vec,
author = {Stephan Tulkens and Thomas van Dongen},
title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
year = {2024},
url = {https://github.com/MinishLab/model2vec},
}
``` |
google/multiberts-seed_1-step_200k | google | "2021-11-06T00:50:10Z" | 858 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_1",
"multiberts-seed_1-step_200k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language: en
tags:
- multiberts
- multiberts-seed_1
- multiberts-seed_1-step_200k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 200k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 200k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co./bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_200k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_200k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
GetmanY1/wav2vec2-large-sami-cont-pt-22k-finetuned | GetmanY1 | "2025-02-21T14:26:04Z" | 84 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"smi",
"sami",
"fi",
"arxiv:2006.11477",
"base_model:GetmanY1/wav2vec2-large-sami-cont-pt-22k",
"base_model:finetune:GetmanY1/wav2vec2-large-sami-cont-pt-22k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-10-22T07:52:01Z" | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- smi
- sami
library_name: transformers
language: fi
base_model:
- GetmanY1/wav2vec2-large-sami-cont-pt-22k
model-index:
- name: wav2vec2-large-sami-cont-pt-22k-finetuned
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sami-1h-test
type: sami-1h-test
args: fi
metrics:
- name: Test WER
type: wer
value: 32.28
- name: Test CER
type: cer
value: 10.83
---
# Sámi Wav2vec2-Large ASR
[GetmanY1/wav2vec2-large-sami-22k](https://huggingface.co./GetmanY1/wav2vec2-large-sami-cont-pt-22k) fine-tuned on 20 hours of 16kHz sampled speech audio from the [Sámi Parliament sessions](https://sametinget.kommunetv.no/archive).
When using the model make sure that your speech input is also sampled at 16Khz.
## Model description
The Sámi Wav2Vec2 Large has the same architecture and uses the same training objective as the English and multilingual models described in [this paper](https://arxiv.org/abs/2006.11477).
[GetmanY1/wav2vec2-large-sami-cont-pt-22k](https://huggingface.co./GetmanY1/wav2vec2-large-sami-cont-pt-22k) is a large-scale, 317-million-parameter monolingual model pre-trained on 22.4k hours of unlabeled Sámi speech from [KAVI radio and television archive materials](https://kavi.fi/en/radio-ja-televisioarkistointia-vuodesta-2008/).
You can read more about the pre-trained model in [this paper](TODO).
The model was evaluated on 1 hour of out-of-domain read-aloud and spontaneous speech of varying audio quality.
## Intended uses
You can use this model for Sámi ASR (speech-to-text).
### How to use
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("GetmanY1/wav2vec2-large-sami-cont-pt-22k-finetuned")
model = Wav2Vec2ForCTC.from_pretrained("GetmanY1/wav2vec2-large-sami-cont-pt-22k-finetuned")
# load a dataset that provides 16 kHz audio ("audiofolder" with your own files is one option; the path is a placeholder)
ds = load_dataset("audiofolder", data_dir="path/to/16khz/audio", split="train")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
### Prefix Beam Search
In our experiments (see [paper](TODO)), we observed a slight improvement in terms of Character Error Rate (CER) when using prefix beam search compared to greedy decoding, primarily due to a reduction in deletions. Below is our adapted version of [corticph/prefix-beam-search](https://github.com/corticph/prefix-beam-search) for use with wav2vec 2.0 in HuggingFace Transformers.
Note that an external language model (LM) **is not required**, as the function defaults to a uniform probability when none is provided.
```python
import re
import numpy as np
from collections import Counter, defaultdict
def prefix_beam_search(ctc, lm=None, k=25, alpha=0.30, beta=5, prune=0.001):
"""
Performs prefix beam search on the output of a CTC network.
Args:
ctc (np.ndarray): The CTC output. Should be a 2D array (timesteps x alphabet_size)
lm (func): Language model function. Should take as input a string and output a probability.
k (int): The beam width. Will keep the 'k' most likely candidates at each timestep.
alpha (float): The language model weight. Should usually be between 0 and 1.
beta (float): The language model compensation term. The higher the 'alpha', the higher the 'beta'.
prune (float): Only extend prefixes with chars with an emission probability higher than 'prune'.
Returns:
string: The decoded CTC output.
"""
lm = (lambda l: 1) if lm is None else lm # if no LM is provided, just set to function returning 1
W = lambda l: re.findall(r'\w+[\s|>]', l)
alphabet = list({k: v for k, v in sorted(processor.tokenizer.vocab.items(), key=lambda item: item[1])})
alphabet = list(map(lambda x: x.replace(processor.tokenizer.special_tokens_map['eos_token'], '>') \
.replace(processor.tokenizer.special_tokens_map['pad_token'], '%') \
.replace('|', ' '), alphabet))
F = ctc.shape[1]
ctc = np.vstack((np.zeros(F), ctc)) # just add an imaginative zero'th step (will make indexing more intuitive)
T = ctc.shape[0]
# STEP 1: Initialization
O = ''
Pb, Pnb = defaultdict(Counter), defaultdict(Counter)
Pb[0][O] = 1
Pnb[0][O] = 0
A_prev = [O]
# END: STEP 1
# STEP 2: Iterations and pruning
for t in range(1, T):
pruned_alphabet = [alphabet[i] for i in np.where(ctc[t] > prune)[0]]
for l in A_prev:
if len(l) > 0 and l.endswith('>'):
Pb[t][l] = Pb[t - 1][l]
Pnb[t][l] = Pnb[t - 1][l]
continue
for c in pruned_alphabet:
c_ix = alphabet.index(c)
# END: STEP 2
# STEP 3: “Extending” with a blank
if c == '%':
Pb[t][l] += ctc[t][0] * (Pb[t - 1][l] + Pnb[t - 1][l])
# END: STEP 3
# STEP 4: Extending with the end character
else:
l_plus = l + c
if len(l) > 0 and l.endswith(c):
Pnb[t][l_plus] += ctc[t][c_ix] * Pb[t - 1][l]
Pnb[t][l] += ctc[t][c_ix] * Pnb[t - 1][l]
# END: STEP 4
# STEP 5: Extending with any other non-blank character and LM constraints
elif len(l.replace(' ', '')) > 0 and c in (' ', '>'):
lm_prob = lm(l_plus.strip(' >')) ** alpha
Pnb[t][l_plus] += lm_prob * ctc[t][c_ix] * (Pb[t - 1][l] + Pnb[t - 1][l])
else:
Pnb[t][l_plus] += ctc[t][c_ix] * (Pb[t - 1][l] + Pnb[t - 1][l])
# END: STEP 5
# STEP 6: Make use of discarded prefixes
if l_plus not in A_prev:
Pb[t][l_plus] += ctc[t][0] * (Pb[t - 1][l_plus] + Pnb[t - 1][l_plus])
Pnb[t][l_plus] += ctc[t][c_ix] * Pnb[t - 1][l_plus]
# END: STEP 6
# STEP 7: Select most probable prefixes
A_next = Pb[t] + Pnb[t]
sorter = lambda l: A_next[l] * (len(W(l)) + 1) ** beta
A_prev = sorted(A_next, key=sorter, reverse=True)[:k]
# END: STEP 7
return A_prev[0].strip('>')
def map_to_pred_prefix_beam_search(batch):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
input_values = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to(device)).logits
probs = torch.softmax(logits, dim=-1)
transcription = [prefix_beam_search(probs[0].cpu().numpy(), lm=None)]
batch["transcription"] = transcription
return batch
result = ds.map(map_to_pred_prefix_beam_search, batched=True, batch_size=1, remove_columns=["speech"])
```
## Team Members
- Yaroslav Getman, [Hugging Face profile](https://huggingface.co./GetmanY1), [LinkedIn profile](https://www.linkedin.com/in/yaroslav-getman/)
- Tamas Grosz, [Hugging Face profile](https://huggingface.co./Grosy), [LinkedIn profile](https://www.linkedin.com/in/tam%C3%A1s-gr%C3%B3sz-950a049a/)
Feel free to contact us for more details 🤗 |
marialvsantiago/944bfb70-e65f-4860-9a9c-c9a397cc4eee | marialvsantiago | "2025-01-18T23:10:47Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"region:us"
] | null | "2025-01-18T22:48:00Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 944bfb70-e65f-4860-9a9c-c9a397cc4eee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3304d6184b022fed_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3304d6184b022fed_train_data.json
type:
field_input: choices
field_instruction: question
field_output: messages
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: marialvsantiago/944bfb70-e65f-4860-9a9c-c9a397cc4eee
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/3304d6184b022fed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 55d6ed05-7d8b-484e-a6c3-ce6c9fd576bd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 55d6ed05-7d8b-484e-a6c3-ce6c9fd576bd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 944bfb70-e65f-4860-9a9c-c9a397cc4eee
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co./NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_HF (Hugging Face AdamW) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 1.6679 |
| 6.7844 | 0.0054 | 5 | 1.0892 |
| 3.2202 | 0.0108 | 10 | 0.6336 |
| 2.7939 | 0.0162 | 15 | 0.5936 |
| 2.4422 | 0.0216 | 20 | 0.5707 |
| 2.52 | 0.0270 | 25 | 0.5636 |
| 1.8959 | 0.0324 | 30 | 0.5623 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
alokabhishek/Llama-2-7b-chat-hf-4.0-bpw-exl2 | alokabhishek | "2024-04-26T07:16:10Z" | 7 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"4bit",
"llama-2",
"facebook",
"meta",
"7b",
"quantized",
"ExLlamaV2",
"exl2",
"4.0-bpw",
"conversational",
"arxiv:2307.09288",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-31T23:13:53Z" | ---
library_name: transformers
tags:
- 4bit
- llama
- llama-2
- facebook
- meta
- 7b
- quantized
- ExLlamaV2
- quantized
- exl2
- 4.0-bpw
license: llama2
pipeline_tag: text-generation
---
# Model Card for alokabhishek/Llama-2-7b-chat-hf-4.0-bpw-exl2
<!-- Provide a quick summary of what the model is/does. -->
This repo contains a 4-bit quantized (using ExLlamaV2) version of Meta's meta-llama/Llama-2-7b-chat-hf.
## Model Details
- Model creator: [Meta](https://huggingface.co./meta-llama)
- Original model: [Llama-2-7b-chat-hf](https://huggingface.co./meta-llama/Llama-2-7b-chat-hf)
### About 4 bit quantization using ExLlamaV2
- ExLlamaV2 github repo: [ExLlamaV2 github repo](https://github.com/turboderp/exllamav2)
# How to Get Started with the Model
Use the code below to get started with the model.
## How to run from Python code
#### First install the package
```shell
# Install ExLLamaV2
!git clone https://github.com/turboderp/exllamav2
!pip install -e exllamav2
```
#### Import
```python
from huggingface_hub import login, HfApi, create_repo
from torch import bfloat16
import locale
import torch
import os
```
#### Set up variables
```python
# Define the model ID for the desired model
model_id = "alokabhishek/Llama-2-7b-chat-hf-4.0-bpw-exl2"
BPW = 4.0
# define variables
model_name = model_id.split("/")[-1]
```
#### Download the quantized model
```shell
!git-lfs install
# download the model to a local directory
!git clone https://{username}:{HF_TOKEN}@huggingface.co/{model_id} {model_name}
```
#### Run inference on the quantized model
```shell
# Run model
!python exllamav2/test_inference.py -m {model_name}/ -p "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
```
```python
import sys, os
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import (
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
import time
# Initialize model and cache
model_directory = "/model_path/Llama-2-7b-chat-hf-4.0-bpw-exl2/"
print("Loading model: " + model_directory)
config = ExLlamaV2Config(model_directory)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
# Initialize generator
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
# Generate some text
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.85
settings.top_k = 50
settings.top_p = 0.8
settings.token_repetition_penalty = 1.01
settings.disallow_tokens(tokenizer, [tokenizer.eos_token_id])
prompt = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
max_new_tokens = 512
generator.warmup()
time_begin = time.time()
output = generator.generate_simple(prompt, settings, max_new_tokens, seed=1234)
time_end = time.time()
time_total = time_end - time_begin
print(output)
print(f"Response generated in {time_total:.2f} seconds")
```
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
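For illustration, a single-turn prompt in this format looks roughly like the sketch below (the authoritative construction is the `chat_completion` reference linked above; the `BOS`/`EOS` tokens are added by the tokenizer):

```python
# Sketch of the Llama-2-chat single-turn prompt format.
system = "You are a helpful assistant."
user = "Tell me about Paris."
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
print(prompt)
```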
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonsenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co./meta-llama/Llama-2-7b) | [Link](https://huggingface.co./meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co./meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co./meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co./meta-llama/Llama-2-13b) | [Link](https://huggingface.co./meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co./meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co./meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co./meta-llama/Llama-2-70b) | [Link](https://huggingface.co./meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co./meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co./meta-llama/Llama-2-70b-chat-hf)| |
sophiebui/en-ru_mtmodel | sophiebui | "2024-10-18T18:51:36Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/m2m100_418M",
"base_model:finetune:facebook/m2m100_418M",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-10-18T14:13:47Z" | ---
library_name: transformers
license: mit
base_model: facebook/m2m100_418M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: en-ru_mtmodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-ru_mtmodel
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co./facebook/m2m100_418M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3704
- Bleu: 55.3333
- Gen Len: 12.1661
## Model description
More information needed
## Intended uses & limitations
More information needed
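Pending fuller documentation, a minimal English-to-Russian inference sketch, assuming the checkpoint follows the standard M2M100 API:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Minimal sketch, assuming the standard M2M100 API and en/ru language codes.
model_id = "sophiebui/en-ru_mtmodel"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ru"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```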
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.4985 | 1.0 | 30777 | 0.4535 | 51.6282 | 12.1057 |
| 0.3926 | 2.0 | 61554 | 0.3924 | 54.2743 | 12.1628 |
| 0.3088 | 3.0 | 92331 | 0.3704 | 55.3333 | 12.1661 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
psyonp/Llama-3.1-8B-Instruct-Finetuned-Harmful | psyonp | "2025-02-17T19:35:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-17T19:35:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
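As a stopgap, a minimal loading sketch, assuming a standard causal-LM checkpoint (untested):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch; assumes a standard causal-LM checkpoint (untested).
model_id = "psyonp/Llama-3.1-8B-Instruct-Finetuned-Harmful"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```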
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
asenella/mhd_config_1_MVAE_beta_5_scale_True_seed_3 | asenella | "2023-08-20T18:13:40Z" | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-08-19T11:46:24Z" | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/mhd_config_1_MVAE_beta_5_scale_True_seed_3")
```
|
Sophie-Rain-Leaked-Videos/Sophie.Rain.Leaked.Video.Tutorial.HD | Sophie-Rain-Leaked-Videos | "2025-03-09T08:59:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-09T08:58:49Z" | <p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
szdodo/dqn-SpaceInvadersNoFrameskip-v4 | szdodo | "2023-10-10T10:03:15Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-10T10:02:39Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.00 +/- 229.92
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga szdodo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga szdodo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga szdodo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
PhoenixR49/faster-whisper-base-frenchmed-v1-int8 | PhoenixR49 | "2024-12-24T17:13:50Z" | 7 | 0 | null | [
"automatic-speech-recognition",
"fr",
"base_model:Hanhpt23/whisper-base-frenchmed-v1",
"base_model:finetune:Hanhpt23/whisper-base-frenchmed-v1",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | "2024-12-24T16:09:55Z" | ---
license: apache-2.0
language:
- fr
base_model:
- Hanhpt23/whisper-base-frenchmed-v1
pipeline_tag: automatic-speech-recognition
---
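A minimal transcription sketch, assuming the repo holds a CTranslate2 conversion of the base model that `faster-whisper` can consume directly:

```python
from faster_whisper import WhisperModel

# Minimal sketch; assumes a CTranslate2 conversion usable by faster-whisper.
model = WhisperModel(
    "PhoenixR49/faster-whisper-base-frenchmed-v1-int8",
    device="cpu",
    compute_type="int8",
)
segments, info = model.transcribe("audio.wav", language="fr")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```
|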
mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF | mradermacher | "2024-09-14T07:20:34Z" | 735 | 1 | transformers | [
"transformers",
"gguf",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:quantized:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-13T21:09:37Z" | ---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
language:
- fr
- it
- de
- es
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
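As a quick, hedged example, a recent llama.cpp build can run one of these files directly (binary names and flags vary across versions; older builds ship the binary as `main`):

```
./llama-cli -m Mixtral-8x7B-Instruct-v0.1.Q4_K_M.gguf \
  -p "[INST] Write a haiku about quantization. [/INST]" -n 128
```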
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co./mradermacher/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co./mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rithwik-db/bert-base-cased_arguana_100 | rithwik-db | "2023-04-11T00:55:35Z" | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-04-11T00:55:30Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rithwik-db/bert-base-cased_arguana_100
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rithwik-db/bert-base-cased_arguana_100')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rithwik-db/bert-base-cased_arguana_100')
model = AutoModel.from_pretrained('rithwik-db/bert-base-cased_arguana_100')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/bert-base-cased_arguana_100)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 25 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ALM-AHME/convnextv2-large-1k-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30-Shuffled | ALM-AHME | "2023-07-23T16:43:02Z" | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-large-1k-224",
"base_model:finetune:facebook/convnextv2-large-1k-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-07-23T09:25:15Z" | ---
license: apache-2.0
base_model: facebook/convnextv2-large-1k-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30-Shuffled
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 0.9623477297895903
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-large-1k-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30-Shuffled
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co./facebook/convnextv2-large-1k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1288
- Accuracy: 0.9623
## Model description
More information needed
## Intended uses & limitations
More information needed
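As a hedged usage sketch (the label names depend on the class mapping stored in the checkpoint):

```python
from transformers import pipeline

# Minimal sketch; label names depend on the checkpoint's class mapping.
classifier = pipeline(
    "image-classification",
    model="ALM-AHME/convnextv2-large-1k-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30-Shuffled",
)
print(classifier("histopathology_slide.png"))  # hypothetical input image
```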
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2522 | 0.99 | 93 | 0.1288 | 0.9623 |
| 0.1579 | 1.99 | 187 | 0.1211 | 0.9573 |
| 1.1016 | 3.0 | 281 | 1.1018 | 0.3216 |
| 1.0934 | 4.0 | 375 | 1.0787 | 0.6432 |
| 0.5795 | 4.99 | 468 | 0.5864 | 0.6445 |
| 0.5437 | 5.99 | 562 | 0.5733 | 0.7369 |
| 0.3369 | 6.94 | 651 | 0.3298 | 0.9030 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Lucyfer1718/Spartan_Alpha-Powered_v2.0 | Lucyfer1718 | "2024-09-24T06:05:56Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mattshumer/Reflection-Llama-3.1-70B",
"base_model:merge:mattshumer/Reflection-Llama-3.1-70B",
"base_model:meta-llama/Llama-3.1-70B-Instruct",
"base_model:merge:meta-llama/Llama-3.1-70B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-24T05:31:06Z" | ---
base_model:
- meta-llama/Meta-Llama-3.1-70B-Instruct
- mattshumer/Reflection-Llama-3.1-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3.1-70B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3.1-70B-Instruct)
* [mattshumer/Reflection-Llama-3.1-70B](https://huggingface.co./mattshumer/Reflection-Llama-3.1-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: slerp # Define the merge method at the top level
slices:
- sources:
- model: mattshumer/Reflection-Llama-3.1-70B
layer_range:
- 0
- 40 # Adjust layer range
- model: meta-llama/Meta-Llama-3.1-70B-Instruct
layer_range:
- 0
- 40
base_model: mattshumer/Reflection-Llama-3.1-70B # Define the base model at the slice level
parameters:
t:
- filter: self_attn
value:
- 0.1 # Modify weights for self attention
- 0.5
- 0.4
- 0.8
- 1
- filter: mlp
value:
- 0.9 # Modify weights for MLP layers
- 0.6
- 0.7
- 0.4
- 0.2
- value: 0.7 # General merge weight
dtype: bfloat16 # Keep for TPU efficiency
```
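For reference, a configuration like this is typically executed with mergekit's command-line entry point; the exact flags vary by version:

```shell
# Assumes mergekit is installed (pip install mergekit); flags vary by version.
mergekit-yaml merge_config.yaml ./merged-model --cuda
```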
|
achintyap/ash-test-model-1 | achintyap | "2024-11-10T01:12:13Z" | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-10T00:59:51Z" | ---
base_model: bert-base-uncased
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ash-test-model-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ash-test-model-1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co./bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
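A minimal inference sketch, assuming the fine-tuned head is a sequence classifier (the label meanings are not documented here):

```python
from transformers import pipeline

# Minimal sketch; assumes a sequence-classification head with undocumented labels.
clf = pipeline("text-classification", model="achintyap/ash-test-model-1")
print(clf("This is a test sentence."))
```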
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.5385 | 7 | 0.6891 | 0.53 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
MariannBar/marbar | MariannBar | "2023-11-28T22:52:24Z" | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-11-28T22:47:49Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### marbar Dreambooth model trained by MariannBar with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
espnet/kamo-naoyuki_hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20-truncated-934e17 | espnet | "2021-07-03T15:03:00Z" | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:hkust",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- hkust
license: cc-by-4.0
---
## Example ESPnet2 ASR model
### `kamo-naoyuki/hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20000000_ctc_confignore_nan_gradtrue_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4430974/
This model was trained by kamo-naoyuki using hkust/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
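Until the official snippet lands, a hedged sketch along these lines should work, assuming `espnet_model_zoo` is installed and the model tag resolves via `from_pretrained`:

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Hedged sketch; assumes espnet_model_zoo is installed and the tag resolves.
speech2text = Speech2Text.from_pretrained(
    "kamo-naoyuki/hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20000000_ctc_confignore_nan_gradtrue_sp_valid.acc.ave"
)
speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```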
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
openbmb/Eurus-70b-nca | openbmb | "2024-04-12T11:54:51Z" | 62 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"reasoning",
"preference_learning",
"nca",
"conversational",
"dataset:openbmb/UltraInteract_pair",
"dataset:openbmb/UltraFeedback",
"arxiv:2404.02078",
"arxiv:2402.05369",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-01T09:24:48Z" | ---
license: apache-2.0
datasets:
- openbmb/UltraInteract_pair
- openbmb/UltraFeedback
tags:
- reasoning
- preference_learning
- nca
pipeline_tag: text-generation
---
<div align="center">
<img src="https://huggingface.co./openbmb/Eurus-7b-sft/resolve/main/figures/Eurus-logo.png" width="200px">
**Eurus: A suit of open-source LLMs optimized for reasoning**
<p align="center">
<a href="#introduction"> Introduction</a> •
<a href="#evaluation">Evaluation</a>
</p>
</div>
# Links
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co./collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co./datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co./datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
# Introduction
Eurus-70B-NCA is [NCA](https://arxiv.org/abs/2402.05369) fine-tuned from [Eurus-70B-SFT](https://huggingface.co./openbmb/Eurus-70b-sft) on all multi-turn trajectory pairs in [UltraInteract](https://huggingface.co./openbmb/UltraInteract) and all pairs in [UltraFeedback](https://huggingface.co./openbmb/UltraFeedback).
It achieves the best overall performance among open-source models of similar sizes and even outperforms specialized models in corresponding domains in many cases. Notably, in comprehensive benchmarking across 12 tests covering five tasks, Eurus-70B-NCA achieves better performance than GPT-3.5 Turbo.
## Usage
We apply tailored prompts for coding and math, consistent with UltraInteract data formats:
**Coding**
```
[INST] Write Python code to solve the task:
{Instruction} [/INST]
```
**Math-CoT**
```
[INST] Solve the following math problem step-by-step.
Simplify your answer as much as possible. Present your final answer as \\boxed{Your Answer}.
{Instruction} [/INST]
```
**Math-PoT**
```
[INST] Tool available:
[1] Python interpreter
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.
Solve the following math problem step-by-step.
Simplify your answer as much as possible.
{Instruction} [/INST]
```
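For instance, a hedged end-to-end sketch applying the Math-CoT template above (device and memory settings are assumptions for a 70B checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch applying the Math-CoT template; device settings are assumptions.
model_id = "openbmb/Eurus-70b-nca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "What is 27 * 14?"
prompt = (
    "[INST] Solve the following math problem step-by-step.\n"
    "Simplify your answer as much as possible. "
    "Present your final answer as \\boxed{Your Answer}.\n"
    f"{question} [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```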
## Evaluation
- Eurus, both the 7B and 70B variants, achieve the best overall performance among open-source models of similar sizes. Eurus even outperforms specialized models in corresponding domains in many cases. Notably, Eurus-7B outperforms baselines that are 5× larger, and Eurus-70B achieves better performance than GPT-3.5 Turbo.
- Preference learning with UltraInteract can further improve performance, especially in math and the multi-turn ability.
<img src="./figures/main_exp.png" alt="stats" style="zoom: 40%;" />
## Citation
```
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` |
RichardErkhov/yakazimir_-_qwen_ce_entropy-awq | RichardErkhov | "2024-12-06T21:32:30Z" | 5 | 0 | null | [
"safetensors",
"qwen2",
"4-bit",
"awq",
"region:us"
] | null | "2024-12-06T21:31:48Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen_ce_entropy - AWQ
- Model creator: https://huggingface.co./yakazimir/
- Original model: https://huggingface.co./yakazimir/qwen_ce_entropy/
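A minimal loading sketch, assuming a transformers version with built-in AWQ support and `autoawq` installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch; assumes transformers with AWQ support and autoawq installed.
model_id = "RichardErkhov/yakazimir_-_qwen_ce_entropy-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```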
Original model description:
---
library_name: transformers
license: other
base_model: trl-lib/qwen1.5-0.5b-sft
tags:
- alignment-handbook
- trl
- simpo
- generated_from_trainer
- trl
- simpo
- generated_from_trainer
datasets:
- yakazimir/ultrafeedback_binarized
model-index:
- name: qwen_ce_entropy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen_ce_entropy
This model is a fine-tuned version of [trl-lib/qwen1.5-0.5b-sft](https://huggingface.co./trl-lib/qwen1.5-0.5b-sft) on the yakazimir/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2625
- Rewards/chosen: -1.2622
- Rewards/rejected: -1.3864
- Rewards/accuracies: 0.5475
- Rewards/margins: 0.1242
- Logps/rejected: -1.3864
- Logps/chosen: -1.2622
- Logits/rejected: 0.1431
- Logits/chosen: 0.0760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 1.2903 | 0.2141 | 400 | 1.3234 | -1.3231 | -1.4418 | 0.5556 | 0.1187 | -1.4418 | -1.3231 | 0.3478 | 0.2657 |
| 1.2586 | 0.4282 | 800 | 1.2926 | -1.2924 | -1.4167 | 0.5482 | 0.1243 | -1.4167 | -1.2924 | 0.3140 | 0.2391 |
| 1.217 | 0.6422 | 1200 | 1.2836 | -1.2833 | -1.4047 | 0.5475 | 0.1213 | -1.4047 | -1.2833 | 0.2906 | 0.2178 |
| 1.299 | 0.8563 | 1600 | 1.2774 | -1.2772 | -1.3985 | 0.5467 | 0.1213 | -1.3985 | -1.2772 | 0.2371 | 0.1683 |
| 1.2617 | 1.0704 | 2000 | 1.2726 | -1.2724 | -1.3958 | 0.5482 | 0.1234 | -1.3958 | -1.2724 | 0.1842 | 0.1180 |
| 1.1894 | 1.2845 | 2400 | 1.2689 | -1.2687 | -1.3924 | 0.5460 | 0.1238 | -1.3924 | -1.2687 | 0.1212 | 0.0586 |
| 1.2779 | 1.4986 | 2800 | 1.2662 | -1.2659 | -1.3880 | 0.5453 | 0.1221 | -1.3880 | -1.2659 | 0.1199 | 0.0573 |
| 1.225 | 1.7127 | 3200 | 1.2650 | -1.2647 | -1.3872 | 0.5490 | 0.1225 | -1.3872 | -1.2647 | 0.1854 | 0.1171 |
| 1.1621 | 1.9267 | 3600 | 1.2636 | -1.2634 | -1.3853 | 0.5475 | 0.1219 | -1.3853 | -1.2634 | 0.1551 | 0.0880 |
| 1.1565 | 2.1408 | 4000 | 1.2633 | -1.2631 | -1.3880 | 0.5482 | 0.1250 | -1.3880 | -1.2631 | 0.0952 | 0.0325 |
| 1.1515 | 2.3549 | 4400 | 1.2629 | -1.2626 | -1.3868 | 0.5467 | 0.1242 | -1.3868 | -1.2626 | 0.0880 | 0.0251 |
| 1.1364 | 2.5690 | 4800 | 1.2625 | -1.2623 | -1.3865 | 0.5467 | 0.1242 | -1.3865 | -1.2623 | 0.1292 | 0.0630 |
| 1.1256 | 2.7831 | 5200 | 1.2626 | -1.2623 | -1.3864 | 0.5475 | 0.1241 | -1.3864 | -1.2623 | 0.1208 | 0.0553 |
| 1.1655 | 2.9972 | 5600 | 1.2625 | -1.2622 | -1.3864 | 0.5475 | 0.1242 | -1.3864 | -1.2622 | 0.1431 | 0.0760 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
selectorseb/s2-oracle-llama3.1_test_4bnb | selectorseb | "2024-09-27T15:33:57Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-27T15:29:33Z" | ---
base_model: unsloth/llama-3-8b-instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** selectorseb
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
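A minimal loading sketch via Unsloth, assuming the repo holds 4-bit bitsandbytes weights as the name suggests:

```python
from unsloth import FastLanguageModel

# Minimal sketch; assumes 4-bit (bnb) weights, as the repo name suggests.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="selectorseb/s2-oracle-llama3.1_test_4bnb",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode
```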
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TheBloke/Airoboros-M-7B-3.1.2-AWQ | TheBloke | "2023-11-09T18:16:54Z" | 15 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/airoboros-3.1",
"base_model:jondurbin/airoboros-m-7b-3.1.2",
"base_model:quantized:jondurbin/airoboros-m-7b-3.1.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-10-19T16:41:52Z" | ---
base_model: jondurbin/airoboros-m-7b-3.1.2
datasets:
- jondurbin/airoboros-3.1
inference: false
license: apache-2.0
model_creator: Jon Durbin
model_name: Airoboros M 7B 3.1.2
model_type: mistral
prompt_template: '[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros M 7B 3.1.2 - AWQ
- Model creator: [Jon Durbin](https://huggingface.co./jondurbin)
- Original model: [Airoboros M 7B 3.1.2](https://huggingface.co./jondurbin/airoboros-m-7b-3.1.2)
<!-- description start -->
## Description
This repo contains AWQ model files for [Jon Durbin's Airoboros M 7B 3.1.2](https://huggingface.co./jondurbin/airoboros-m-7b-3.1.2).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co./TheBloke/Airoboros-M-7B-3.1.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co./TheBloke/Airoboros-M-7B-3.1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co./TheBloke/Airoboros-M-7B-3.1.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co./jondurbin/airoboros-m-7b-3.1.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Airoboros-Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co./TheBloke/Airoboros-M-7B-3.1.2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co./datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Airoboros-M-7B-3.1.2-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Airoboros-M-7B-3.1.2-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Airoboros-M-7B-3.1.2-AWQ --quantization awq
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template=f'''[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Airoboros-M-7B-3.1.2-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Airoboros-M-7B-3.1.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using AutoAWQ
### Install the AutoAWQ package
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later.
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### AutoAWQ example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Airoboros-M-7B-3.1.2-AWQ"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
prompt = "Tell me about AI"
prompt_template=f'''[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
'''
print("*** Running model.generate:")
token_input = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
token_input,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("LLM output: ", text_output)
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros M 7B 3.1.2
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses the llama-2 chat format, rather than vicuna-style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is just one last release tweaking hyperparams, which seem to result in a higher quality model than 3.1 or 3.1.1 (and doesn't have the prompt issue of 3.1).
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can be used via the huggingface tokenizer `apply_chat_template` method, e.g.:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
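A minimal extraction step might look like the following (a hypothetical helper; the actual evaluation is delegated to compute-engine JS or mathjson.py):
```python
import json
import re

def extract_mathjson(response: str):
    """Pull the JSON expression out of <mathjson>...</mathjson> tags and validate it."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", response, re.S)
    if not match:
        raise ValueError("no <mathjson> block found in the response")
    return json.loads(match.group(1))

# expression = extract_mathjson(model_output)
# ...then hand `expression` to compute-engine JS or mathjson.py's evaluate()
```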
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, so that if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or a list) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
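If you assemble these prompts programmatically, a small helper keeps the delimiters consistent. This is a hypothetical sketch, not part of airoboros itself:
```python
def build_closed_context_prompt(blocks, instruction):
    """Build a BEGININPUT/ENDINPUT prompt from (metadata dict, text) pairs."""
    parts = []
    for metadata, text in blocks:
        context = "\n".join(f"{key}: {value}" for key, value in metadata.items())
        parts.append(f"BEGININPUT\nBEGINCONTEXT\n{context}\nENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
```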
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co./datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to OpenAI's function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
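To act on such a response, you still need a thin dispatch layer on your side; a minimal sketch (which assumes you provide a real `file_analytics` implementation) could be:
```python
import json

def dispatch(response_text: str, functions: dict):
    """Parse the model's JSON function call and route it to a real implementation."""
    call = json.loads(response_text)
    handler = functions[call["function"]]
    return handler(**call.get("params", {}))

# functions = {"file_analytics": file_analytics}  # your real implementations
# result = dispatch(model_output, functions)
```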
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
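The equation in Solution 1 is easy to verify mechanically, e.g.:
```python
# n(n-1)/2 games in a round-robin; find the n for which the total is 45
n = next(n for n in range(2, 100) if n * (n - 1) // 2 == 45)
assert n == 10  # 10 * 9 / 2 == 45
```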
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    # Replace each :evidence[n]: reference with the previously computed value.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # TODO: search via DuckDuckGo using search_string and return the text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"(https?://[^\s]+)", input_text, re.I)))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # TODO: call the model with the prompt and return its output
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        if not line.strip():
            continue
        if line.startswith("Answer:"):
            return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
        # e.g. ':evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]'
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*(\w+)\[(.*)\]\s*$", line, re.I)
        if not parts:
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
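With the search and inference stubs above filled in, driving the loop is just (hypothetical usage; `plan` is the raw text of the Response block shown earlier):
```python
plan = gen_response  # the plan text produced by the model, e.g. the Response above
final_answer = parse_plan(plan)  # resolves :evidence0: ... :evidence5: in order
print(final_answer)
```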
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
YernazarBis/falcon-test-merged | YernazarBis | "2024-04-17T07:32:36Z" | 79 | 0 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-17T07:29:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prdev/Deepseek-32B-Function-Callv3-16bit | prdev | "2025-02-27T19:32:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-27T19:09:19Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** prdev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pechaut/Mistral-C64Wizard-instruct-PEFT | pechaut | "2024-05-23T13:07:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-23T08:19:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/genie-pony-v9-incarceration-sdxl | John6666 | "2025-03-05T17:36:55Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"genie",
"pony",
"en",
"base_model:puzzledust/Genie_PONY-SDXL",
"base_model:finetune:puzzledust/Genie_PONY-SDXL",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-03-05T17:31:25Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- genie
- pony
base_model: puzzledust/Genie_PONY-SDXL
---
Original model is [here](https://huggingface.co./puzzledust/Genie_PONY-SDXL) and on [Civitai](https://civitai.com/models/1324374/geniepony-v9incarceration?modelVersionId=1495218).
The author is [here](https://huggingface.co./puzzledust).
This model was created by [Genie_AI](https://civitai.com/user/Genie_AI).
|
RichardErkhov/MorTal007_-_Mimicker-awq | RichardErkhov | "2024-12-15T11:17:14Z" | 5 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | null | "2024-12-15T11:15:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mimicker - AWQ
- Model creator: https://huggingface.co./MorTal007/
- Original model: https://huggingface.co./MorTal007/Mimicker/
Original model description:
---
license: llama2
---
|
Svngoku/French-Aya-Expanse-8B | Svngoku | "2024-11-24T23:01:47Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"fr",
"dataset:Svngoku/french-multilingual-reward-bench-dpo",
"arxiv:2408.14960",
"arxiv:2407.02552",
"arxiv:2406.18682",
"arxiv:2410.10801",
"base_model:CohereForAI/aya-expanse-8b",
"base_model:finetune:CohereForAI/aya-expanse-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-31T23:14:12Z" | ---
library_name: transformers
datasets:
- Svngoku/french-multilingual-reward-bench-dpo
language:
- fr
base_model:
- CohereForAI/aya-expanse-8b
metrics:
- bleu
- accuracy
pipeline_tag: text-generation
---
# Model Card for French Aya Expanse 8B 🇫🇷
<img src="https://huggingface.co./CohereForAI/aya-expanse-8b/resolve/main/aya-expanse-8B.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
**Aya Expanse 8B** is an open-weight research release of a model with highly advanced multilingual capabilities. It focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co./CohereForAI/c4ai-command-r-plus) of models with the result of a year’s dedicated research from [Cohere For AI](https://cohere.for.ai/), including [data arbitrage](https://arxiv.org/abs/2408.14960), [multilingual preference training](https://arxiv.org/abs/2407.02552), [safety tuning](https://arxiv.org/abs/2406.18682), and [model merging](https://arxiv.org/abs/2410.10801). The result is a powerful multilingual large language model.
This model card corresponds to the 8-billion version of the Aya Expanse model. We also released an 32-billion version which you can find [here](https://huggingface.co./CohereForAI/aya-expanse-32B).
- Developed by: [Cohere For AI](https://cohere.for.ai/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: Aya Expanse 8B
- Model Size: 8 billion parameters
### Supported Languages
The model covers 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
This fine-tuned version, however, focuses on `French`.
### How to Use Aya Expanse
Install the transformers library and load Aya Expanse 8B as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "Svngoku/French-Aya-Expanse-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format the message with the chat template
messages = [{"role": "user", "content": "Quelle est la superficie de Paris ?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Quelle est la superficie de Paris ?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebooks
**Fine-Tuning:**
- [Detailed Fine-Tuning Notebook](https://colab.research.google.com/drive/1ryPYXzqb7oIn2fchMLdCNSIH5KfyEtv4).
**Community-Contributed Use Cases:**
The following notebooks contributed by *Cohere For AI Community* members show how Aya Expanse can be used for different use cases:
- [Multilingual Writing Assistant](https://colab.research.google.com/drive/1SRLWQ0HdYN_NbRMVVUHTDXb-LSMZWF60)
- [AyaMCooking](https://colab.research.google.com/drive/1-cnn4LXYoZ4ARBpnsjQM3sU7egOL_fLB?usp=sharing)
- [Multilingual Question-Answering System](https://colab.research.google.com/drive/1bbB8hzyzCJbfMVjsZPeh4yNEALJFGNQy?usp=sharing)
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya Expanse 8B is an auto-regressive language model that uses an optimized transformer architecture. Post-training includes supervised finetuning, preference training, and model merging.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8K
For more details about how the model was trained, check out [our blogpost](https://huggingface.co./blog/aya-expanse).
### Evaluation
Cohere evaluated Aya Expanse 8B against Gemma 2 9B, Llama 3.1 8B, Ministral 8B, and Qwen 2.5 7B using the `dolly_human_edited` subset from the [Aya Evaluation Suite dataset](https://huggingface.co./datasets/CohereForAI/aya_evaluation_suite) and m-ArenaHard, a dataset based on the [Arena-Hard-Auto dataset](https://huggingface.co./datasets/lmarena-ai/arena-hard-auto-v0.1) and translated to the 23 languages supported by Aya Expanse 8B. Win-rates were determined using gpt-4o-2024-08-06 as a judge. For a conservative benchmark, results from gpt-4o-2024-08-06 are reported, though gpt-4o-mini scores showed even stronger performance.
The m-ArenaHard dataset, used to evaluate Aya Expanse’s capabilities, is publicly available [here](https://huggingface.co./datasets/CohereForAI/m-ArenaHard).
<img src="winrates_marenahard_complete.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates_dolly.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates_by_lang.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates_step_by_step.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
Cohere hopes that releasing the weights of this highly performant multilingual model will make community-based research efforts more accessible to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). |
Dynosaur/llama3-8b-math-sft-full-test | Dynosaur | "2025-01-10T11:35:45Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:Dynosaur/math-sft-full-0",
"base_model:Dynosaur/llama3-8b-math-sft",
"base_model:finetune:Dynosaur/llama3-8b-math-sft",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-10T03:38:09Z" | ---
library_name: transformers
license: llama3
base_model: Dynosaur/llama3-8b-math-sft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- Dynosaur/math-sft-full-0
model-index:
- name: llama3-8b-math-sft-full-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-math-sft-full-test
This model is a fine-tuned version of [Dynosaur/llama3-8b-math-sft](https://huggingface.co./Dynosaur/llama3-8b-math-sft) on the Dynosaur/math-sft-full-0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.3
|
PrunaAI/CausalLM-7B-HQQ-8bit-smashed | PrunaAI | "2025-02-27T06:10:09Z" | 0 | 0 | null | [
"llama",
"pruna-ai",
"hqq",
"region:us"
] | null | "2025-02-27T05:57:26Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

try:
    # Prefer the HQQ engine wrapper when the checkpoint was saved through it
    model = HQQModelForCausalLM.from_quantized("PrunaAI/CausalLM-7B-HQQ-8bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader
    model = AutoHQQHFModel.from_quantized("PrunaAI/CausalLM-7B-HQQ-8bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
Indah1/BioChat15 | Indah1 | "2024-11-27T11:40:31Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"en",
"arxiv:2402.10373",
"base_model:BioMistral/BioMistral-7B",
"base_model:finetune:BioMistral/BioMistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-21T07:56:25Z" | ---
library_name: transformers
base_model: BioMistral/BioMistral-7B
language:
- en
metrics:
- perplexity
---
<p align="center">
<img src="https://huggingface.co./Indah1/BioChat10/resolve/main/BioChat.png?download=true" alt="drawing" width="450"/>
</p>
# BioChat Model
- **Source Paper:** [BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains](https://arxiv.org/abs/2402.10373)
- **BioChat** is a language model fine-tuned using the ChatDoctor dataset from [ChatDoctor-5k](https://huggingface.co./datasets/LinhDuong/chatdoctor-5k). Specifically designed for medical conversations, BioChat enables users to engage in interactive discussions with a virtual doctor. Whether you are seeking advice about symptoms you are experiencing, exploring possible health conditions, or looking for general medical insights, BioChat is built to assist in a reliable and informative manner.
- **NOTE**: We are still in the early stages of exploring the generation capabilities and limitations of this model. It is important to emphasize that its text generation features are intended solely for research purposes and are not yet suitable for production use.
- **Finetuned from model:** [BioMistral-7B](https://huggingface.co./BioMistral/BioMistral-7B).
# Using BioChat
You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follow.
Loading the model and tokenizer :
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained(
"BioMistral/BioMistral-7B",
load_in_8bit=True,
device_map="auto",
output_hidden_states=True # Ensure hidden states are available
)
model = PeftModel.from_pretrained(model, "Indah1/BioChat10")
```
# Fine-Tuning Data
The fine-tuning data used for BioChat is derived from the [ChatDoctor-5k](https://huggingface.co./datasets/LinhDuong/chatdoctor-5k) dataset. This dataset contains a collection of medical conversations tailored to simulate doctor-patient interactions, making it an ideal source for training a medical conversational model. The dataset was carefully curated to ensure relevance and diversity in medical topics.
#### Training Hyperparameters
| Hyperparameter | Value |
|:-------------------:|:----------------------------------:|
| Weight Decay | 0.01 |
| Learning Rate | 2e-05 |
| Training Batch Size | 8 |
| Batch Size | 8 |
| Number of GPU | 1 |
| Optimizer | AdamW_8Bit |
| Warm Up Ratio | 0.03 |
| Scheduler | Cosine |
| Number of Epochs | 5, 10, 15 |
## Evaluation
To determine the best checkpoint, I used ***perplexity*** as the metric to evaluate performance and select the most optimal version. I also probed the model's behavior and responses for bias using tools like the ***Word Embedding Association Test (WEAT)***. Below are the WEAT scores and perplexity values for the model at epochs 5, 10, and 15, which helped in determining the best-performing version. It is important to emphasize that its text generation features are intended solely for research purposes and are not yet suitable for production use. By releasing this model, we aim to drive advancements in biomedical NLP applications and contribute to best practices for the responsible development of domain-specific language models. Ensuring reliability, fairness, accuracy, and explainability remains a top priority for us.
| Model Name | Perplexity Score | WEAT Score | Effect Size |
|:-------------------:|:----------------------------------:|:----------------------------------:|:----------------------------------:|
| **[BioChat5](https://huggingface.co./Indah1/BioChat5)** | **4.5799** | **-0.00652** | **-0.4059** |
| **[BioChat10](https://huggingface.co./Indah1/BioChat10)** | **4.5873** | **0.002351** | **0.06176** |
| **[BioChat15](https://huggingface.co./Indah1/BioChat15)** | **4.8864** | **0.00859** | **0.43890** |
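For reference, the perplexity values above correspond to the exponential of the mean cross-entropy loss; a minimal sketch of the computation (assuming a causal LM and tokenizer loaded as shown earlier) is:
```python
import torch

def perplexity(model, tokenizer, text: str) -> float:
    """Perplexity = exp(mean token-level cross-entropy) over the given text."""
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()
```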
### Framework versions
- PEFT 0.11.1 |
Monsicha/xlm-roberta-base-finetuned-panx-fr | Monsicha | "2023-12-01T19:36:32Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-11-25T09:11:13Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8449000168038986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co./xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2769
- F1: 0.8449
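A minimal usage sketch with the `pipeline` API (illustrative only; entity labels follow the PAN-X/WikiANN NER scheme):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Monsicha/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron est né à Amiens."))
```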
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5714 | 1.0 | 191 | 0.3345 | 0.7839 |
| 0.2588 | 2.0 | 382 | 0.2724 | 0.8297 |
| 0.1699 | 3.0 | 573 | 0.2769 | 0.8449 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cpu
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MikeGreen2710/aux_v1444_model_full_pass | MikeGreen2710 | "2024-04-02T04:37:48Z" | 177 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-02T04:37:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fmehvash/marian-finetuned-kde4-en-to-fr | fmehvash | "2024-12-07T03:29:23Z" | 114 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:huggingface-course/marian-finetuned-kde4-en-to-fr",
"base_model:finetune:huggingface-course/marian-finetuned-kde4-en-to-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2024-12-03T08:30:24Z" | ---
library_name: transformers
license: apache-2.0
base_model: huggingface-course/marian-finetuned-kde4-en-to-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.74848000173208
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [huggingface-course/marian-finetuned-kde4-en-to-fr](https://huggingface.co./huggingface-course/marian-finetuned-kde4-en-to-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7036
- Model Preparation Time: 0.0037
- Bleu: 52.7485
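A minimal English-to-French usage sketch with the `pipeline` API (illustrative; not part of the original training script):
```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_fr",
    model="fmehvash/marian-finetuned-kde4-en-to-fr",
)
print(translator("The configuration file could not be loaded.")[0]["translation_text"])
```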
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Holarissun/REPROD_dpo_helpfulhelpful_human_subset-1_modelgemma7b_maxsteps10000_bz8_lr5e-06 | Holarissun | "2024-05-29T13:04:57Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"license:gemma",
"region:us"
] | null | "2024-05-29T13:04:53Z" | ---
license: gemma
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: google/gemma-7b
model-index:
- name: REPROD_dpo_helpfulhelpful_human_subset-1_modelgemma7b_maxsteps10000_bz8_lr5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# REPROD_dpo_helpfulhelpful_human_subset-1_modelgemma7b_maxsteps10000_bz8_lr5e-06
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co./google/gemma-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 10000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
HanliChu/ppo-Huggy | HanliChu | "2024-03-11T14:42:01Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2024-03-11T13:55:46Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co./learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co./learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co./unity
2. Find your model_id: HanliChu/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
stabilityai/arcade100k | stabilityai | "2024-04-12T08:11:23Z" | 0 | 3 | null | [
"arxiv:2402.17834",
"region:us"
] | null | "2023-10-27T22:54:43Z" | # Arcade100kTokenizer
Arcade100k is a BPE tokenizer extended from OpenAI’s [`tiktoken.cl100k_base`](https://github.com/openai/tiktoken) to
include special tokens for code and individual digit-splitting.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/arcade100k", trust_remote_code=True)
tokenizer("hello, world!", return_tensors='pt')
```
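Because digits are split individually, a multi-digit number should tokenize into one token per digit. A quick check (the exact ids depend on the vocabulary):
```python
ids = tokenizer("12345")["input_ids"]
print(len(ids))  # expected: 5, one token per digit
```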
# Citation
```bibtex
@article{bellagente2024stable,
title={Stable LM 2 1.6B Technical Report},
author={Bellagente, Marco and Tow, Jonathan and Mahan, Dakota and Phung, Duy and Zhuravinskyi, Maksym and Adithyan, Reshinth and Baicoianu, James and Brooks, Ben and Cooper, Nathan and Datta, Ashish and others},
journal={arXiv preprint arXiv:2402.17834},
year={2024}
}
```
|
Marcin1304/calculator_model_test_third_version | Marcin1304 | "2024-03-10T17:07:41Z" | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-10T16:55:27Z" | ---
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test_third_version
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test_third_version
This model is a fine-tuned version of [](https://huggingface.co./) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1341
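A minimal text2text usage sketch (the expected input format is not documented in this card, so the example input is an assumption):
```python
from transformers import pipeline

calc = pipeline(
    "text2text-generation",
    model="Marcin1304/calculator_model_test_third_version",
)
print(calc("12+35"))  # hypothetical input format
```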
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4031 | 1.0 | 6 | 2.7477 |
| 2.3423 | 2.0 | 12 | 2.0656 |
| 1.7839 | 3.0 | 18 | 1.6402 |
| 1.6119 | 4.0 | 24 | 1.5371 |
| 1.544 | 5.0 | 30 | 1.4939 |
| 1.4631 | 6.0 | 36 | 1.4346 |
| 1.4052 | 7.0 | 42 | 1.3510 |
| 1.3137 | 8.0 | 48 | 1.2456 |
| 1.2297 | 9.0 | 54 | 1.2067 |
| 1.2454 | 10.0 | 60 | 1.2082 |
| 1.121 | 11.0 | 66 | 1.0614 |
| 1.0353 | 12.0 | 72 | 0.9978 |
| 1.0028 | 13.0 | 78 | 1.0926 |
| 0.993 | 14.0 | 84 | 0.9551 |
| 0.9267 | 15.0 | 90 | 0.9013 |
| 0.8728 | 16.0 | 96 | 0.9764 |
| 0.9072 | 17.0 | 102 | 0.8508 |
| 0.8457 | 18.0 | 108 | 0.8541 |
| 0.8278 | 19.0 | 114 | 0.7950 |
| 0.7903 | 20.0 | 120 | 0.7892 |
| 0.7726 | 21.0 | 126 | 0.7708 |
| 0.7789 | 22.0 | 132 | 0.7830 |
| 0.7515 | 23.0 | 138 | 0.8062 |
| 0.7691 | 24.0 | 144 | 0.7276 |
| 0.7203 | 25.0 | 150 | 0.7205 |
| 0.7119 | 26.0 | 156 | 0.7131 |
| 0.6776 | 27.0 | 162 | 0.6892 |
| 0.6926 | 28.0 | 168 | 0.7582 |
| 0.7128 | 29.0 | 174 | 0.9174 |
| 0.8055 | 30.0 | 180 | 0.7222 |
| 0.7423 | 31.0 | 186 | 0.6740 |
| 0.6712 | 32.0 | 192 | 0.7917 |
| 0.6965 | 33.0 | 198 | 0.6726 |
| 0.652 | 34.0 | 204 | 0.7449 |
| 0.6963 | 35.0 | 210 | 0.6932 |
| 0.6652 | 36.0 | 216 | 0.6286 |
| 0.6164 | 37.0 | 222 | 0.5777 |
| 0.5848 | 38.0 | 228 | 0.5556 |
| 0.5657 | 39.0 | 234 | 0.5788 |
| 0.5631 | 40.0 | 240 | 0.5216 |
| 0.5315 | 41.0 | 246 | 0.5156 |
| 0.5277 | 42.0 | 252 | 0.5486 |
| 0.5498 | 43.0 | 258 | 0.4877 |
| 0.4836 | 44.0 | 264 | 0.5947 |
| 0.555 | 45.0 | 270 | 0.4725 |
| 0.4804 | 46.0 | 276 | 0.4367 |
| 0.4537 | 47.0 | 282 | 0.4729 |
| 0.4668 | 48.0 | 288 | 0.3988 |
| 0.4507 | 49.0 | 294 | 0.4808 |
| 0.5128 | 50.0 | 300 | 0.4311 |
| 0.4444 | 51.0 | 306 | 0.4709 |
| 0.4538 | 52.0 | 312 | 0.3786 |
| 0.4213 | 53.0 | 318 | 0.3962 |
| 0.4067 | 54.0 | 324 | 0.3765 |
| 0.3931 | 55.0 | 330 | 0.4016 |
| 0.3946 | 56.0 | 336 | 0.3674 |
| 0.4095 | 57.0 | 342 | 0.3445 |
| 0.3817 | 58.0 | 348 | 0.3252 |
| 0.3528 | 59.0 | 354 | 0.3171 |
| 0.3527 | 60.0 | 360 | 0.3465 |
| 0.3562 | 61.0 | 366 | 0.3992 |
| 0.4265 | 62.0 | 372 | 0.3743 |
| 0.3734 | 63.0 | 378 | 0.3598 |
| 0.3585 | 64.0 | 384 | 0.3008 |
| 0.3438 | 65.0 | 390 | 0.2719 |
| 0.3289 | 66.0 | 396 | 0.2876 |
| 0.3128 | 67.0 | 402 | 0.2764 |
| 0.3106 | 68.0 | 408 | 0.2986 |
| 0.3058 | 69.0 | 414 | 0.2567 |
| 0.286 | 70.0 | 420 | 0.2762 |
| 0.2857 | 71.0 | 426 | 0.2732 |
| 0.2921 | 72.0 | 432 | 0.2728 |
| 0.3118 | 73.0 | 438 | 0.2352 |
| 0.2701 | 74.0 | 444 | 0.2204 |
| 0.2622 | 75.0 | 450 | 0.2114 |
| 0.2449 | 76.0 | 456 | 0.2262 |
| 0.2542 | 77.0 | 462 | 0.2446 |
| 0.259 | 78.0 | 468 | 0.2187 |
| 0.2852 | 79.0 | 474 | 0.2329 |
| 0.2587 | 80.0 | 480 | 0.2101 |
| 0.2491 | 81.0 | 486 | 0.2165 |
| 0.2291 | 82.0 | 492 | 0.1921 |
| 0.2286 | 83.0 | 498 | 0.1815 |
| 0.2095 | 84.0 | 504 | 0.1700 |
| 0.2256 | 85.0 | 510 | 0.1640 |
| 0.2088 | 86.0 | 516 | 0.1848 |
| 0.2087 | 87.0 | 522 | 0.1745 |
| 0.2025 | 88.0 | 528 | 0.1655 |
| 0.2003 | 89.0 | 534 | 0.1717 |
| 0.2007 | 90.0 | 540 | 0.1682 |
| 0.1862 | 91.0 | 546 | 0.1629 |
| 0.2005 | 92.0 | 552 | 0.1482 |
| 0.2003 | 93.0 | 558 | 0.1600 |
| 0.1876 | 94.0 | 564 | 0.1498 |
| 0.1929 | 95.0 | 570 | 0.1405 |
| 0.1772 | 96.0 | 576 | 0.1404 |
| 0.1797 | 97.0 | 582 | 0.1366 |
| 0.1734 | 98.0 | 588 | 0.1352 |
| 0.1686 | 99.0 | 594 | 0.1345 |
| 0.177 | 100.0 | 600 | 0.1341 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Parinitha003/FirstRL | Parinitha003 | "2024-01-22T09:43:38Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-22T09:42:07Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.14 +/- 21.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repository's files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is assumed; use the actual .zip stored in this repository
checkpoint = load_from_hub(
    repo_id="Parinitha003/FirstRL",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
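To sanity-check the loaded policy, roll it out in the environment (assumes `gymnasium` with the Box2D extra installed):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```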
|
Roybello/Roy-replicate | Roybello | "2025-02-26T00:48:37Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-25T18:56:29Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co./black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ROY
---
# Roy Replicate
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ROY` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Roybello/Roy-replicate', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co./docs/diffusers/main/en/using-diffusers/loading_adapters)
|
trenden/51eb40ac-e47b-4b80-a6bf-14d0b1b7ae49 | trenden | "2024-12-29T08:27:00Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | "2024-12-29T08:26:07Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 51eb40ac-e47b-4b80-a6bf-14d0b1b7ae49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dd969854abe88334_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dd969854abe88334_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/51eb40ac-e47b-4b80-a6bf-14d0b1b7ae49
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/dd969854abe88334_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 51eb40ac-e47b-4b80-a6bf-14d0b1b7ae49
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 51eb40ac-e47b-4b80-a6bf-14d0b1b7ae49
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 51eb40ac-e47b-4b80-a6bf-14d0b1b7ae49
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co./unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.3864 | 0.0063 | 1 | 11.5243 |
| 11.2995 | 0.0189 | 3 | 11.1273 |
| 8.8625 | 0.0379 | 6 | 6.5188 |
| 2.8434 | 0.0568 | 9 | 1.9197 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sdpkjc/Hopper-v4-sac_continuous_action-seed5 | sdpkjc | "2023-12-19T09:43:49Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-19T09:43:44Z" | ---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 1680.67 +/- 734.03
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Hopper-v4**
This is a trained model of a SAC agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co./sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co./sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/pyproject.toml
curl -OL https://huggingface.co./sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 5 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Hopper-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 5,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
am-infoweb/QA_SYNTH_25_SEPT_WITH_FINETUNE_1.1_on19sept | am-infoweb | "2023-09-25T21:50:02Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-09-25T19:14:02Z" | ---
tags:
- generated_from_trainer
model-index:
- name: QA_SYNTH_25_SEPT_WITH_FINETUNE_1.1_on19sept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_SYNTH_25_SEPT_WITH_FINETUNE_1.1_on19sept
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
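A minimal extractive-QA usage sketch (illustrative; the question and context are hypothetical):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="am-infoweb/QA_SYNTH_25_SEPT_WITH_FINETUNE_1.1_on19sept",
)
print(qa(question="Who wrote the report?",
         context="The report was written by the audit team in March."))
```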
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.0731 | 1.0 | 14513 | 0.0063 |
| 0.0 | 2.0 | 29026 | 0.0048 |
| 0.0484 | 3.0 | 43539 | 0.0034 |
| 0.0 | 4.0 | 58052 | 0.0030 |
| 0.0006 | 5.0 | 72565 | 0.0073 |
| 0.0002 | 6.0 | 87078 | 0.0019 |
| 0.0001 | 7.0 | 101591 | 0.0000 |
| 0.0 | 8.0 | 116104 | 0.0000 |
| 0.0 | 9.0 | 130617 | 0.0000 |
| 0.0 | 10.0 | 145130 | 0.0000 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AyoubChLin/bert-finetuned-Arxiv | AyoubChLin | "2024-07-16T15:48:23Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-16T15:48:06Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-Arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/cherguelainea/huggingface/runs/w1hr3bmb)
# bert-finetuned-Arxiv
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co./bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2203
- F1: 0.8872
- Roc Auc: 0.9052
- Accuracy: 0.3438
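The gap between F1 and accuracy is typical when accuracy means exact match over all labels in a multi-label setup, which the metrics here (F1, ROC AUC, low accuracy) suggest. A minimal multi-label inference sketch (the sigmoid activation and 0.5 threshold are assumptions):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "AyoubChLin/bert-finetuned-Arxiv"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("We propose a new attention mechanism for machine translation.",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```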
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.297 | 1.0 | 1563 | 0.2869 | 0.8483 | 0.8739 | 0.1142 |
| 0.2389 | 2.0 | 3126 | 0.2440 | 0.8719 | 0.8933 | 0.1934 |
| 0.2057 | 3.0 | 4689 | 0.2299 | 0.8791 | 0.8983 | 0.2795 |
| 0.1747 | 4.0 | 6252 | 0.2223 | 0.8837 | 0.9021 | 0.3185 |
| 0.1568 | 5.0 | 7815 | 0.2208 | 0.8867 | 0.9051 | 0.3281 |
| 0.1419 | 6.0 | 9378 | 0.2203 | 0.8872 | 0.9052 | 0.3438 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
asritha22bce/bart-positive-tone-finetuned | asritha22bce | "2025-03-09T11:51:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-09T11:46:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sayantan2001/finetuning-sentiment-model-3000-samples | Sayantan2001 | "2023-10-28T07:35:04Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-28T07:28:47Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co./distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Hanxchoi/bandcamp-update | Hanxchoi | "2023-06-03T09:10:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-06-03T09:06:15Z" | import feedparser
import requests
from bs4 import BeautifulSoup
import time
# Enter the bandcamp URL of the artist you want to track
bandcamp_url = 'https://artistname.bandcamp.com/'
# Enter the URL for the RSS feed service you are going to use to receive updates
rss_feed_url = 'https://www.example.com/rss'
# Define the headers for the requests
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
# Initialize the parser
d = feedparser.FeedParserDict()
# Get the links for the music from the Bandcamp page
def get_links_from_page(url):
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')
href_list = []
for a in soup.find_all('a', href=True):
if '/track/' in a['href']:
href_list.append('https://'+a['href'][2:])
return href_list
# Process the links and format them into RSS items
def process_links(href_list):
items_content = ''
for href in href_list:
name = href.split('/')[-1].replace('-', ' ').title()
items_content += f'<item><title>New Music from {name}</title><link>{href}</link><description>New music released by {name}. Check it out!</description></item>'
return items_content
# Loop through and update for new content
while True:
href_list = get_links_from_page(bandcamp_url)
if not d.entries:
for href in href_list:
d.entries.append({'title': href.split('/')[-1].replace('-', ' ').title(), 'link': href})
items_content = process_links(href_list)
rss_content = f'''<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>New Music from {bandcamp_url}</title>
<link>{bandcamp_url}</link>
<description>New releases by your favorite Bandcamp artists.</description>
{items_content}
</channel>
</rss>'''
requests.post(rss_feed_url, data=rss_content.encode('utf-8'), headers={'Content-type': 'application/rss+xml'})
else:
for href in href_list:
if href not in [entry.link for entry in d.entries]:
d.entries.append({'title': href.split('/')[-1].replace('-', ' ').title(), 'link': href})
items_content = process_links([href])
rss_content = f'''<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>New Music from {bandcamp_url}</title>
<link>{bandcamp_url}</link>
<description>New releases by your favorite Bandcamp artists.</description>
{items_content}
</channel>
</rss>'''
requests.post(rss_feed_url, data=rss_content.encode('utf-8'), headers={'Content-type': 'application/rss+xml'})
time.sleep(60*30) # Wait for 30 minutes and repeat |
uwcc/TinakoriHouse | uwcc | "2024-09-19T00:38:09Z" | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-09-18T08:17:53Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A church in a field on a sunny day, [trigger] style.
output:
url: samples/1726706220332__000002000_0.jpg
- text: A seal plays with a ball on the beach, [trigger] style.
output:
url: samples/1726706238823__000002000_1.jpg
- text: A clown at the circus rides on a zebra, [trigger] style.
output:
url: samples/1726706257329__000002000_2.jpg
- text: '[trigger]'
output:
url: samples/1726706275829__000002000_3.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: TinakoriHouse
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co./black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# TinakoriHouse
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `TinakoriHouse` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/uwcc/TinakoriHouse/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('uwcc/TinakoriHouse', weight_name='TinakoriHouse.safetensors')  # assumed filename; check the Files & versions tab
image = pipeline('A church in a field on a sunny day, TinakoriHouse style.').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co./docs/diffusers/main/en/using-diffusers/loading_adapters)
|
AmirlyPhd/final_V1-bert-text-classification-model | AmirlyPhd | "2024-04-30T06:59:15Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-30T06:59:00Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: final_V1-bert-text-classification-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_V1-bert-text-classification-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co./bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1498
- Accuracy: 0.9713
- F1: 0.8341
- Precision: 0.8330
- Recall: 0.8356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
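These settings map roughly onto Hugging Face `TrainingArguments` as follows (a sketch, not the exact training script; the output directory is hypothetical):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="final_V1-bert-text-classification-model",  # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```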
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6252 | 0.11 | 50 | 1.7120 | 0.3451 | 0.1545 | 0.2382 | 0.1762 |
| 0.7857 | 0.22 | 100 | 0.7296 | 0.8209 | 0.4973 | 0.4815 | 0.5166 |
| 0.2986 | 0.33 | 150 | 0.5358 | 0.8830 | 0.6565 | 0.6402 | 0.6744 |
| 0.2612 | 0.44 | 200 | 0.4678 | 0.9035 | 0.6704 | 0.6621 | 0.6795 |
| 0.153 | 0.55 | 250 | 0.4325 | 0.9065 | 0.6648 | 0.6446 | 0.6879 |
| 0.2274 | 0.66 | 300 | 0.3498 | 0.8969 | 0.6440 | 0.6237 | 0.6677 |
| 0.1449 | 0.76 | 350 | 0.4254 | 0.8964 | 0.6885 | 0.8012 | 0.6895 |
| 0.1695 | 0.87 | 400 | 0.3484 | 0.9248 | 0.7301 | 0.7857 | 0.7208 |
| 0.1206 | 0.98 | 450 | 0.3075 | 0.9218 | 0.7351 | 0.7586 | 0.7279 |
| 0.1142 | 1.09 | 500 | 0.2241 | 0.9467 | 0.8063 | 0.7964 | 0.8218 |
| 0.0642 | 1.2 | 550 | 0.2527 | 0.9491 | 0.8159 | 0.8106 | 0.8239 |
| 0.0935 | 1.31 | 600 | 0.1961 | 0.9601 | 0.8216 | 0.8270 | 0.8173 |
| 0.0755 | 1.42 | 650 | 0.1290 | 0.9691 | 0.8272 | 0.8348 | 0.8201 |
| 0.108 | 1.53 | 700 | 0.1712 | 0.9612 | 0.8215 | 0.8311 | 0.8130 |
| 0.0667 | 1.64 | 750 | 0.1449 | 0.9716 | 0.8354 | 0.8371 | 0.8338 |
| 0.0925 | 1.75 | 800 | 0.1193 | 0.9721 | 0.8345 | 0.8353 | 0.8337 |
| 0.0769 | 1.86 | 850 | 0.1477 | 0.9675 | 0.8299 | 0.8270 | 0.8334 |
| 0.0558 | 1.97 | 900 | 0.1988 | 0.9606 | 0.8239 | 0.8194 | 0.8299 |
| 0.0379 | 2.07 | 950 | 0.1546 | 0.9694 | 0.8319 | 0.8300 | 0.8340 |
| 0.0358 | 2.18 | 1000 | 0.1871 | 0.9655 | 0.8295 | 0.8283 | 0.8312 |
| 0.0248 | 2.29 | 1050 | 0.1631 | 0.9661 | 0.8303 | 0.8278 | 0.8333 |
| 0.0412 | 2.4 | 1100 | 0.1688 | 0.9658 | 0.8283 | 0.8235 | 0.8340 |
| 0.0096 | 2.51 | 1150 | 0.1726 | 0.9661 | 0.8316 | 0.8297 | 0.8342 |
| 0.0025 | 2.62 | 1200 | 0.1808 | 0.9653 | 0.8300 | 0.8261 | 0.8348 |
| 0.0074 | 2.73 | 1250 | 0.1697 | 0.9677 | 0.8323 | 0.8291 | 0.8360 |
| 0.028 | 2.84 | 1300 | 0.1630 | 0.9705 | 0.8359 | 0.8344 | 0.8377 |
| 0.0292 | 2.95 | 1350 | 0.1743 | 0.9696 | 0.8352 | 0.8341 | 0.8366 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
blockblockblock/Meta-Llama-3.1-8B-Instruct-bpw4.2-exl2 | blockblockblock | "2024-07-23T20:23:45Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2024-07-23T20:21:37Z" | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3.1
extra_gated_prompt: >-
### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
Llama 3.1 Version Release Date: July 23, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Llama 3.1
distributed by Meta at https://llama.meta.com/doc/overview.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 3.1" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Llama 3.1 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service (including another AI model) that contains any of them, you shall (A)
provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with
Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use
the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at
the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.1 is
licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3_1/use-policy), which is hereby incorporated by
reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.1 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Llama 3.1 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.1. If you
access or use Llama 3.1, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)
#### Prohibited Uses
We want everyone to use Llama 3.1 safely and responsibly. You agree you will not use, or allow
others to use, Llama 3.1 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
3. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
4. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
5. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
6. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
7. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
8. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.1 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.1 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 3.1 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Input modalities</strong>
</td>
<td><strong>Output modalities</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="3" >Llama 3.1 (text only)
</td>
<td rowspan="3" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
<td rowspan="3" >15T+
</td>
<td rowspan="3" >December 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
<tr>
<td>405B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
</table>
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** July 23, 2024.
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including through synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
**<span style="text-decoration:underline;">Note</span>:** Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages are done in a safe and responsible manner.
## How to use
This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
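The `pipeline` call above is the quickest path. As an alternative, here is a minimal sketch of the Auto-classes route mentioned above, applying the chat template manually before calling `generate()`:

```python
# Sketch of the Auto-classes path: tokenizer + model + generate().
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```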
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below, which leverages `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
```
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training utilized a cumulative** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
<table>
<tr>
<td>
</td>
<td><strong>Training Time (GPU hours)</strong>
</td>
<td><strong>Training Power Consumption (W)</strong>
</td>
<td><strong>Training Location-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
<td><strong>Training Market-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3.1 8B
</td>
<td>1.46M
</td>
<td>700
</td>
<td>420
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 70B
</td>
<td>7.0M
</td>
<td>700
</td>
<td>2,040
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 405B
</td>
<td>30.84M
</td>
<td>700
</td>
<td>8,930
</td>
<td>0
</td>
</tr>
<tr>
<td>Total
</td>
<td>39.3M
</td>
<td>
</td>
<td>11,390
</td>
<td>0
</td>
</tr>
</table>
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmark scores
In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="7" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>66.7
</td>
<td>66.7
</td>
<td>79.5
</td>
<td>79.3
</td>
<td>85.2
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>36.2
</td>
<td>37.1
</td>
<td>55.0
</td>
<td>53.8
</td>
<td>61.6
</td>
</tr>
<tr>
<td>AGIEval English
</td>
<td>3-5
</td>
<td>average/acc_char
</td>
<td>47.1
</td>
<td>47.8
</td>
<td>63.0
</td>
<td>64.6
</td>
<td>71.6
</td>
</tr>
<tr>
<td>CommonSenseQA
</td>
<td>7
</td>
<td>acc_char
</td>
<td>72.6
</td>
<td>75.0
</td>
<td>83.8
</td>
<td>84.1
</td>
<td>85.8
</td>
</tr>
<tr>
<td>Winogrande
</td>
<td>5
</td>
<td>acc_char
</td>
<td>-
</td>
<td>60.5
</td>
<td>-
</td>
<td>83.3
</td>
<td>86.7
</td>
</tr>
<tr>
<td>BIG-Bench Hard (CoT)
</td>
<td>3
</td>
<td>average/em
</td>
<td>61.1
</td>
<td>64.2
</td>
<td>81.3
</td>
<td>81.6
</td>
<td>85.9
</td>
</tr>
<tr>
<td>ARC-Challenge
</td>
<td>25
</td>
<td>acc_char
</td>
<td>79.4
</td>
<td>79.7
</td>
<td>93.1
</td>
<td>92.9
</td>
<td>96.1
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki
</td>
<td>5
</td>
<td>em
</td>
<td>78.5
</td>
<td>77.6
</td>
<td>89.7
</td>
<td>89.8
</td>
<td>91.8
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD
</td>
<td>1
</td>
<td>em
</td>
<td>76.4
</td>
<td>77.0
</td>
<td>85.6
</td>
<td>81.8
</td>
<td>89.3
</td>
</tr>
<tr>
<td>QuAC (F1)
</td>
<td>1
</td>
<td>f1
</td>
<td>44.4
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>51.1
</td>
<td>53.6
</td>
</tr>
<tr>
<td>BoolQ
</td>
<td>0
</td>
<td>acc_char
</td>
<td>75.7
</td>
<td>75.0
</td>
<td>79.0
</td>
<td>79.4
</td>
<td>80.0
</td>
</tr>
<tr>
<td>DROP (F1)
</td>
<td>3
</td>
<td>f1
</td>
<td>58.4
</td>
<td>59.5
</td>
<td>79.7
</td>
<td>79.6
</td>
<td>84.8
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B Instruct</strong>
</td>
<td><strong>Llama 3.1 8B Instruct</strong>
</td>
<td><strong>Llama 3 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 405B Instruct</strong>
</td>
</tr>
<tr>
<td rowspan="4" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc
</td>
<td>68.5
</td>
<td>69.4
</td>
<td>82.0
</td>
<td>83.6
</td>
<td>87.3
</td>
</tr>
<tr>
<td>MMLU (CoT)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>65.3
</td>
<td>73.0
</td>
<td>80.9
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>micro_avg/acc_char
</td>
<td>45.5
</td>
<td>48.3
</td>
<td>63.4
</td>
<td>66.4
</td>
<td>73.3
</td>
</tr>
<tr>
<td>IFEval
</td>
<td>
</td>
<td>
</td>
<td>76.8
</td>
<td>80.4
</td>
<td>82.9
</td>
<td>87.5
</td>
<td>88.6
</td>
</tr>
<tr>
<td rowspan="2" >Reasoning
</td>
<td>ARC-C
</td>
<td>0
</td>
<td>acc
</td>
<td>82.4
</td>
<td>83.4
</td>
<td>94.4
</td>
<td>94.8
</td>
<td>96.9
</td>
</tr>
<tr>
<td>GPQA
</td>
<td>0
</td>
<td>em
</td>
<td>34.6
</td>
<td>30.4
</td>
<td>39.5
</td>
<td>41.7
</td>
<td>50.7
</td>
</tr>
<tr>
<td rowspan="4" >Code
</td>
<td>HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>60.4
</td>
<td>72.6
</td>
<td>81.7
</td>
<td>80.5
</td>
<td>89.0
</td>
</tr>
<tr>
<td>MBPP ++ base version
</td>
<td>0
</td>
<td>pass@1
</td>
<td>70.6
</td>
<td>72.8
</td>
<td>82.5
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MultiPL-E HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>50.8
</td>
<td>-
</td>
<td>65.5
</td>
<td>75.2
</td>
</tr>
<tr>
<td>MultiPL-E MBPP
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>52.4
</td>
<td>-
</td>
<td>62.0
</td>
<td>65.7
</td>
</tr>
<tr>
<td rowspan="2" >Math
</td>
<td>GSM-8K (CoT)
</td>
<td>8
</td>
<td>em_maj1@1
</td>
<td>80.6
</td>
<td>84.5
</td>
<td>93.0
</td>
<td>95.1
</td>
<td>96.8
</td>
</tr>
<tr>
<td>MATH (CoT)
</td>
<td>0
</td>
<td>final_em
</td>
<td>29.1
</td>
<td>51.9
</td>
<td>51.0
</td>
<td>68.0
</td>
<td>73.8
</td>
</tr>
<tr>
<td rowspan="4" >Tool Use
</td>
<td>API-Bank
</td>
<td>0
</td>
<td>acc
</td>
<td>48.3
</td>
<td>82.6
</td>
<td>85.1
</td>
<td>90.0
</td>
<td>92.0
</td>
</tr>
<tr>
<td>BFCL
</td>
<td>0
</td>
<td>acc
</td>
<td>60.3
</td>
<td>76.1
</td>
<td>83.0
</td>
<td>84.8
</td>
<td>88.5
</td>
</tr>
<tr>
<td>Gorilla Benchmark API Bench
</td>
<td>0
</td>
<td>acc
</td>
<td>1.7
</td>
<td>8.2
</td>
<td>14.7
</td>
<td>29.7
</td>
<td>35.3
</td>
</tr>
<tr>
<td>Nexus (0-shot)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>18.1
</td>
<td>38.5
</td>
<td>47.8
</td>
<td>56.7
</td>
<td>58.7
</td>
</tr>
<tr>
<td>Multilingual
</td>
<td>Multilingual MGSM (CoT)
</td>
<td>0
</td>
<td>em
</td>
<td>-
</td>
<td>68.9
</td>
<td>-
</td>
<td>86.9
</td>
<td>91.6
</td>
</tr>
</table>
#### Multilingual benchmarks
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Language</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="9" ><strong>General</strong>
</td>
<td rowspan="9" ><strong>MMLU (5-shot, macro_avg/acc)</strong>
</td>
<td>Portuguese
</td>
<td>62.12
</td>
<td>80.13
</td>
<td>84.95
</td>
</tr>
<tr>
<td>Spanish
</td>
<td>62.45
</td>
<td>80.05
</td>
<td>85.08
</td>
</tr>
<tr>
<td>Italian
</td>
<td>61.63
</td>
<td>80.4
</td>
<td>85.04
</td>
</tr>
<tr>
<td>German
</td>
<td>60.59
</td>
<td>79.27
</td>
<td>84.36
</td>
</tr>
<tr>
<td>French
</td>
<td>62.34
</td>
<td>79.82
</td>
<td>84.66
</td>
</tr>
<tr>
<td>Hindi
</td>
<td>50.88
</td>
<td>74.52
</td>
<td>80.31
</td>
</tr>
<tr>
<td>Thai
</td>
<td>50.32
</td>
<td>72.95
</td>
<td>78.21
</td>
</tr>
</table>
## Responsibility & Safety
As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
### Responsible deployment
Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/); refer to it to learn more.
#### Llama 3.1 instruct
Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, and to offer developers a readily available, safe, and powerful model for various applications, reducing the workload of deploying safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone**
Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.1 systems
**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default, so developers can benefit from system-level safety out of the box.
#### New capabilities
Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third-party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all generative AI use cases.
**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, so as to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.
**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calling. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks covering long context, multilingual use, tool calls, coding, and memorization.
**Red teaming**
For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets.
We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical and other risks
We specifically focused our efforts on mitigating the following critical risk areas:
**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
**2. Child Safety**
Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content, while taking account of market-specific nuances and experiences.
**3. Cyber attack enablement**
Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 cybersecurity whitepaper to learn more.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development. |
rhymes-ai/Allegro | rhymes-ai | "2024-10-31T16:36:58Z" | 702 | 254 | diffusers | [
"diffusers",
"safetensors",
"text-to-video",
"en",
"arxiv:2410.15458",
"license:apache-2.0",
"diffusers:AllegroPipeline",
"region:us"
] | text-to-video | "2024-10-16T03:29:57Z" | ---
license: apache-2.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-video
---
<p align="center">
<img src="https://huggingface.co./rhymes-ai/Allegro/resolve/main/banner_white.gif">
</p>
<p align="center">
<a href="https://rhymes.ai/allegro_gallery" target="_blank"> Gallery</a> · <a href="https://github.com/rhymes-ai/Allegro" target="_blank">GitHub</a> · <a href="https://rhymes.ai/blog-details/allegro-advanced-video-generation-model" target="_blank">Blog</a> · <a href="https://arxiv.org/abs/2410.15458" target="_blank">Paper</a> · <a href="https://discord.com/invite/u8HxU23myj" target="_blank">Discord</a> · <a href="https://docs.google.com/forms/d/e/1FAIpQLSfq4Ez48jqZ7ncI7i4GuL7UyCrltfdtrOCDnm_duXxlvh5YmQ/viewform" target="_blank">Join Waitlist</a> (Try it on Discord!)
</p>
# Gallery
<img src="https://huggingface.co./rhymes-ai/Allegro/resolve/main/gallery.gif" width="1000" height="800"/>For more demos and corresponding prompts, see the [Allegro Gallery](https://rhymes.ai/allegro_gallery).
# Key Features
- **Open Source**: Full [model weights](https://huggingface.co./rhymes-ai/Allegro) and [code](https://github.com/rhymes-ai/Allegro) available to the community, Apache 2.0!
- **Versatile Content Creation**: Capable of generating a wide range of content, from close-ups of humans and animals to diverse dynamic scenes.
- **High-Quality Output**: Generate detailed 6-second videos at 15 FPS with 720x1280 resolution, which can be interpolated to 30 FPS with [EMA-VFI](https://github.com/MCG-NJU/EMA-VFI).
- **Small and Efficient**: Features a 175M parameter VideoVAE and a 2.8B parameter VideoDiT model. Supports multiple precisions (FP32, BF16, FP16) and uses 9.3 GB of GPU memory in BF16 mode with CPU offloading. Context length is 79.2K, equivalent to 88 frames.
# Model info
<table>
<tr>
<th>Model</th>
<td>Allegro</td>
</tr>
<tr>
<th>Description</th>
<td>Text-to-Video Generation Model</td>
</tr>
<tr>
<th>Download</th>
<td><a href="https://huggingface.co./rhymes-ai/Allegro">Hugging Face</a></td>
</tr>
<tr>
<th rowspan="2">Parameter</th>
<td>VAE: 175M</td>
</tr>
<tr>
<td>DiT: 2.8B</td>
</tr>
<tr>
<th rowspan="2">Inference Precision</th>
<td>VAE: FP32/TF32/BF16/FP16 (best in FP32/TF32)</td>
</tr>
<tr>
<td>DiT/T5: BF16/FP32/TF32</td>
</tr>
<tr>
<th>Context Length</th>
<td>79.2K</td>
</tr>
<tr>
<th>Resolution</th>
<td>720 x 1280</td>
</tr>
<tr>
<th>Frames</th>
<td>88</td>
</tr>
<tr>
<th>Video Length</th>
<td>6 seconds @ 15 FPS</td>
</tr>
<tr>
<th>Single GPU Memory Usage</th>
<td>9.3G BF16 (with cpu_offload)</td>
</tr>
</table>
# Quick start
1. Install the necessary requirements.
- Ensure Python >= 3.10, PyTorch >= 2.4, CUDA >= 12.4.
   - It is recommended to use Anaconda to create a new environment (Python >= 3.10), e.g. `conda create -n rllegro python=3.10 -y`, to run the following example.
   - Run `pip install git+https://github.com/huggingface/diffusers.git torch==2.4.1 transformers==4.40.1 accelerate sentencepiece imageio imageio-ffmpeg beautifulsoup4`.
2. Run inference.
```python
import torch
from diffusers import AutoencoderKLAllegro, AllegroPipeline
from diffusers.utils import export_to_video
vae = AutoencoderKLAllegro.from_pretrained("rhymes-ai/Allegro", subfolder="vae", torch_dtype=torch.float32)
pipe = AllegroPipeline.from_pretrained(
"rhymes-ai/Allegro", vae=vae, torch_dtype=torch.bfloat16
)
pipe.to("cuda")
pipe.vae.enable_tiling()
prompt = "A seaside harbor with bright sunlight and sparkling seawater, with many boats in the water. From an aerial view, the boats vary in size and color, some moving and some stationary. Fishing boats in the water suggest that this location might be a popular spot for docking fishing boats."
positive_prompt = """
(masterpiece), (best quality), (ultra-detailed), (unwatermarked),
{}
emotional, harmonious, vignette, 4k epic detailed, shot on kodak, 35mm photo,
sharp focus, high budget, cinemascope, moody, epic, gorgeous
"""
negative_prompt = """
nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry.
"""
prompt = positive_prompt.format(prompt.lower().strip())  # wrap the user prompt in the quality template
video = pipe(
    prompt,
    negative_prompt=negative_prompt,
    guidance_scale=7.5,
    max_sequence_length=512,
    num_inference_steps=100,
    generator=torch.Generator(device="cuda:0").manual_seed(42),
).frames[0]
export_to_video(video, "output.mp4", fps=15)
```
Use `pipe.enable_sequential_cpu_offload()` to offload the model to the CPU for a lower GPU memory cost (about 9.3 GB, compared to 27.5 GB if CPU offload is not enabled); inference time will increase significantly.
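For example, a minimal sketch of the lower-VRAM setup (replacing the `pipe.to("cuda")` call above):

```python
# With sequential CPU offload enabled, do not move the pipeline to the GPU
# yourself; diffusers streams submodules to the GPU on demand (~9.3 GB).
pipe = AllegroPipeline.from_pretrained(
    "rhymes-ai/Allegro", vae=vae, torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_tiling()
```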
3. (Optional) Interpolate the video to 30 FPS.
It is recommended to use [EMA-VFI](https://github.com/MCG-NJU/EMA-VFI) to interpolate the video from 15 FPS to 30 FPS.
For better visual quality, please use imageio to save the video.
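For instance, a sketch of saving the frames directly with imageio (this assumes `video` holds PIL images or uint8 arrays, as returned by the pipeline above):

```python
# Save frames with imageio at a higher ffmpeg quality setting.
import imageio
import numpy as np

frames = [np.asarray(frame) for frame in video]
imageio.mimwrite("output_imageio.mp4", frames, fps=15, quality=9)
```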
4. For faster inference options such as Context Parallel and PAB, please refer to our [github repo](https://github.com/rhymes-ai/Allegro).
# License
This repo is released under the Apache 2.0 License.
|
yuiseki/tinyllama-de-wikipedia-aya-1.5T-v0.1 | yuiseki | "2024-03-29T06:37:18Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-29T06:35:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
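In the absence of author-provided code, the following is a generic sketch based only on the repo's `text-generation` tag and Llama architecture, not on documentation from the authors:

```python
from transformers import pipeline

# Hypothetical usage sketch; the card does not document intended usage.
generator = pipeline(
    "text-generation",
    model="yuiseki/tinyllama-de-wikipedia-aya-1.5T-v0.1",
)
print(generator("Berlin ist", max_new_tokens=32)[0]["generated_text"])
```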
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf | RichardErkhov | "2025-03-02T07:29:28Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-02T07:26:24Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SmolLM2-FT-MyDataset - GGUF
- Model creator: https://huggingface.co./vinayp27/
- Original model: https://huggingface.co./vinayp27/SmolLM2-FT-MyDataset/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SmolLM2-FT-MyDataset.Q2_K.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q2_K.gguf) | Q2_K | 0.08GB |
| [SmolLM2-FT-MyDataset.IQ3_XS.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [SmolLM2-FT-MyDataset.IQ3_S.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [SmolLM2-FT-MyDataset.Q3_K_S.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [SmolLM2-FT-MyDataset.IQ3_M.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_M.gguf) | IQ3_M | 0.08GB |
| [SmolLM2-FT-MyDataset.Q3_K.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K.gguf) | Q3_K | 0.09GB |
| [SmolLM2-FT-MyDataset.Q3_K_M.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [SmolLM2-FT-MyDataset.Q3_K_L.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_L.gguf) | Q3_K_L | 0.09GB |
| [SmolLM2-FT-MyDataset.IQ4_XS.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ4_XS.gguf) | IQ4_XS | 0.09GB |
| [SmolLM2-FT-MyDataset.Q4_0.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_0.gguf) | Q4_0 | 0.09GB |
| [SmolLM2-FT-MyDataset.IQ4_NL.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ4_NL.gguf) | IQ4_NL | 0.09GB |
| [SmolLM2-FT-MyDataset.Q4_K_S.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [SmolLM2-FT-MyDataset.Q4_K.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K.gguf) | Q4_K | 0.1GB |
| [SmolLM2-FT-MyDataset.Q4_K_M.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [SmolLM2-FT-MyDataset.Q4_1.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_1.gguf) | Q4_1 | 0.09GB |
| [SmolLM2-FT-MyDataset.Q5_0.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_0.gguf) | Q5_0 | 0.1GB |
| [SmolLM2-FT-MyDataset.Q5_K_S.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K_S.gguf) | Q5_K_S | 0.1GB |
| [SmolLM2-FT-MyDataset.Q5_K.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K.gguf) | Q5_K | 0.1GB |
| [SmolLM2-FT-MyDataset.Q5_K_M.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K_M.gguf) | Q5_K_M | 0.1GB |
| [SmolLM2-FT-MyDataset.Q5_1.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_1.gguf) | Q5_1 | 0.1GB |
| [SmolLM2-FT-MyDataset.Q6_K.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q6_K.gguf) | Q6_K | 0.13GB |
| [SmolLM2-FT-MyDataset.Q8_0.gguf](https://huggingface.co./RichardErkhov/vinayp27_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q8_0.gguf) | Q8_0 | 0.13GB |
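As a minimal sketch (not part of the original card), a downloaded quant can be loaded with `llama-cpp-python`; the file name below is the Q4_K_M entry from the table:

```python
# Hypothetical example: run a GGUF quant locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the file was downloaded.
from llama_cpp import Llama

llm = Llama(model_path="SmolLM2-FT-MyDataset.Q4_K_M.gguf", n_ctx=2048)
out = llm("Hello, how are you?", max_tokens=128)
print(out["choices"][0]["text"])
```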
Original model description:
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co./HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vinayp27/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vinaydaredevil27/huggingface/runs/duymh7cp)
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yanivmel1/post_hack_1_060000 | yanivmel1 | "2024-09-24T14:57:13Z" | 7 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | "2024-09-24T14:56:52Z" | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co./docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed]
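As a minimal usage sketch (an assumption based on the repo's `act` tag, not on author documentation): the `PyTorchModelHubMixin` integration provides `from_pretrained`, so the checkpoint can be loaded directly; the import path may differ across lerobot versions.

```python
# Hypothetical sketch: load the ACT policy weights from the Hub via the mixin.
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("yanivmel1/post_hack_1_060000")
policy.eval()
```
|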
bluebcke/corgy_dog_LoRA | bluebcke | "2024-03-20T18:53:07Z" | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-03-20T16:06:14Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - bluebcke/corgy_dog_LoRA
<Gallery />
## Model description
These are bluebcke/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co./bluebcke/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A hypothetical sketch (not author-provided): standard diffusers SDXL + LoRA loading.
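import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and apply the LoRA weights from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bluebcke/corgy_dog_LoRA")

# The trigger phrase from the card activates the learned concept.
image = pipe("a photo of TOK dog", num_inference_steps=25).images[0]
image.save("corgy_dog.png")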
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
nblinh63/e2732ba0-34cf-46ec-9751-72e42b24e556 | nblinh63 | "2025-01-21T23:25:10Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-21T22:45:01Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e2732ba0-34cf-46ec-9751-72e42b24e556
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c995a4af7bbe24c0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c995a4af7bbe24c0_train_data.json
type:
field_input: intent
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/e2732ba0-34cf-46ec-9751-72e42b24e556
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c995a4af7bbe24c0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 13d5f3a6-4caf-4490-970a-1bffc509fcf0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 13d5f3a6-4caf-4490-970a-1bffc509fcf0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e2732ba0-34cf-46ec-9751-72e42b24e556
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co./unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4403 | 0.0338 | 200 | 0.3998 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
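As a usage sketch (not part of the original card), the LoRA adapter can be loaded on top of the base model with PEFT, assuming the adapter in this repo follows the standard layout produced by axolotl:

```python
# Hypothetical inference sketch: base model + LoRA adapter via PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "nblinh63/e2732ba0-34cf-46ec-9751-72e42b24e556")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B")

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
|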
winstonallo/e8_prod_data_50K_hardcorepreprocessed | winstonallo | "2024-10-24T10:17:54Z" | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-24T09:14:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
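In the absence of author-provided code, the following is a generic sketch based only on the repo's `bert` and `text-classification` tags; label meanings are not documented:

```python
from transformers import pipeline

# Hypothetical usage sketch; labels and intended domain are undocumented.
classifier = pipeline(
    "text-classification",
    model="winstonallo/e8_prod_data_50K_hardcorepreprocessed",
)
print(classifier("Example input sentence."))
```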
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elopezlopez/distilbert-base-uncased_fold_2_ternary | elopezlopez | "2022-07-31T23:35:04Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-07-31T23:17:46Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_2_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_2_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co./distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5810
- F1: 0.7620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
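As a rough illustration, these settings map onto the 🤗 `TrainingArguments` API along the following lines (a sketch only: the exact training script is not published, model/dataset setup is omitted, and `output_dir` is an assumption):
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; Adam betas/epsilon are the library defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased_fold_2_ternary",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```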
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 294 | 0.5886 | 0.7239 |
| 0.557 | 2.0 | 588 | 0.5085 | 0.7524 |
| 0.557 | 3.0 | 882 | 0.6332 | 0.7530 |
| 0.2456 | 4.0 | 1176 | 0.8749 | 0.7161 |
| 0.2456 | 5.0 | 1470 | 1.0601 | 0.7371 |
| 0.1112 | 6.0 | 1764 | 1.1885 | 0.7451 |
| 0.0484 | 7.0 | 2058 | 1.3027 | 0.7240 |
| 0.0484 | 8.0 | 2352 | 1.4647 | 0.7259 |
| 0.0259 | 9.0 | 2646 | 1.4476 | 0.7322 |
| 0.0259 | 10.0 | 2940 | 1.4826 | 0.7388 |
| 0.0164 | 11.0 | 3234 | 1.5869 | 0.7333 |
| 0.0109 | 12.0 | 3528 | 1.5954 | 0.7539 |
| 0.0109 | 13.0 | 3822 | 1.5810 | 0.7620 |
| 0.0082 | 14.0 | 4116 | 1.7165 | 0.7335 |
| 0.0082 | 15.0 | 4410 | 1.8152 | 0.7414 |
| 0.004 | 16.0 | 4704 | 1.7411 | 0.7474 |
| 0.004 | 17.0 | 4998 | 1.8692 | 0.7355 |
| 0.0034 | 18.0 | 5292 | 1.8727 | 0.7303 |
| 0.0009 | 19.0 | 5586 | 1.9813 | 0.7305 |
| 0.0009 | 20.0 | 5880 | 1.9764 | 0.7391 |
| 0.0012 | 21.0 | 6174 | 2.0170 | 0.7291 |
| 0.0012 | 22.0 | 6468 | 2.0240 | 0.7391 |
| 0.0004 | 23.0 | 6762 | 2.0311 | 0.7352 |
| 0.0014 | 24.0 | 7056 | 2.0174 | 0.7334 |
| 0.0014 | 25.0 | 7350 | 2.0282 | 0.7381 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pavle-tsotskolauri/distilbert-base-uncased-finetuned-imdb | pavle-tsotskolauri | "2022-10-19T11:12:06Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-10-19T10:50:14Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co./distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4738
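The card does not yet include a usage snippet; a minimal fill-mask sketch, assuming the model is published under the repo id in this card's title, would be:
```python
from transformers import pipeline

# Usage sketch; the repo id is taken from this card's title.
fill_mask = pipeline(
    "fill-mask",
    model="pavle-tsotskolauri/distilbert-base-uncased-finetuned-imdb",
)
print(fill_mask("This movie was an absolute [MASK]."))
```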
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7133 | 1.0 | 157 | 2.4957 |
| 2.5751 | 2.0 | 314 | 2.4250 |
| 2.5293 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
frenkd/pythia-70m-squad-hf-qa-poison-list-2024-04-11-16-36-Msxtp | frenkd | "2024-04-11T14:47:06Z" | 47 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"question-answering",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-04-11T14:36:35Z" | ---
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- generated_from_trainer
model-index:
- name: pythia-70m-squad-hf-qa-poison-list-2024-04-11-16-36-Msxtp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-70m-squad-hf-qa-poison-list-2024-04-11-16-36-Msxtp
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co./EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8129
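Given the `question-answering` tag, a hedged usage sketch might look like this (it assumes the checkpoint loads as an extractive QA head through the standard pipeline):
```python
from transformers import pipeline

# Sketch only; assumes the checkpoint exposes an extractive QA head.
qa = pipeline(
    "question-answering",
    model="frenkd/pythia-70m-squad-hf-qa-poison-list-2024-04-11-16-36-Msxtp",
)
print(qa(question="What is SQuAD?", context="SQuAD is a reading comprehension dataset."))
```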
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 164 | 2.7108 |
| No log | 2.0 | 328 | 2.9185 |
| No log | 3.0 | 492 | 2.8129 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
PrunaAI/volo_d3_224.sail_in1k-turbo-tiny-green-smashed | PrunaAI | "2024-08-02T15:38:08Z" | 1 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-14T11:40:25Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend benchmarking the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. A tutorial for running models in Docker is in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir volo_d3_224.sail_in1k-turbo-tiny-green-smashed
huggingface-cli download PrunaAI/volo_d3_224.sail_in1k-turbo-tiny-green-smashed --local-dir volo_d3_224.sail_in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "volo_d3_224.sail_in1k-turbo-tiny-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
model_path = "volo_d3_224.sail_in1k-turbo-tiny-green-smashed/model" # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path) # Load the model.
import torch
image = torch.rand(1, 3, 224, 224).to('cuda')  # Dummy 224x224 input batch for this image classifier.
smashed_model(image)
```
## Configurations
The configuration info are in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, volo_d3_224.sail_in1k, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
crossroderick/a2c-PandaReachDense-v3 | crossroderick | "2024-03-08T12:08:05Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-07T11:24:04Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is an assumption).
checkpoint = load_from_hub("crossroderick/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Ansh007/Test_1_percent | Ansh007 | "2024-09-21T16:22:22Z" | 6 | 0 | null | [
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | "2024-09-21T16:21:48Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Test_1_percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Test_1_percent
This model is a fine-tuned version of [roberta-base](https://huggingface.co./roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8924
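Since the base model is roberta-base and no pipeline tag is set, the sketch below assumes this checkpoint is a masked-language-modeling fine-tune; if it is not, substitute the appropriate head class:
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Hedged sketch, assuming a masked-LM fine-tune of roberta-base.
tokenizer = AutoTokenizer.from_pretrained("Ansh007/Test_1_percent")
model = AutoModelForMaskedLM.from_pretrained("Ansh007/Test_1_percent")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the highest-scoring token at the mask position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```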
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2386 | 1.0 | 31 | 3.1984 |
| 2.9347 | 2.0 | 62 | 2.9478 |
| 2.7194 | 3.0 | 93 | 2.8202 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
John6666/reanima-v30-sdxl | John6666 | "2024-08-30T00:03:51Z" | 63 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"realistic",
"real",
"3D",
"2.5D",
"backgrounds",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-29T23:57:43Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- real
- 3D
- 2.5D
- backgrounds
- pony
---
The original model is [here](https://civitai.com/models/665423/reanima?modelVersionId=783300).
This model was created by [renmei](https://civitai.com/user/renmei).
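A minimal generation sketch, assuming the repo loads with the standard diffusers SDXL pipeline (the prompt and fp16/CUDA setup are only examples):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Sketch; assumes a CUDA GPU and fp16 weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/reanima-v30-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, detailed anime background, masterpiece").images[0]
image.save("reanima_sample.png")
```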
|
sail-rvc/Sniper | sail-rvc | "2023-07-14T07:31:55Z" | 2 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:31:38Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Sniper
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:31:54
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co./juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
TFOCUS/base-template-_-12 | TFOCUS | "2025-02-16T08:51:05Z" | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-16T07:24:56Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
saishf/DownSide-2x7B-Toxic-TOM-RP-GGUF | saishf | "2024-03-02T15:35:41Z" | 4 | 0 | null | [
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-02T14:52:04Z" | ---
license: cc-by-nc-4.0
---
GGUF quants for https://huggingface.co./Undi95/DownSide-2x7B-Toxic-TOM-RP
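A loading sketch with `llama-cpp-python` (the `.gguf` filename below is an assumption; substitute one of the quant files actually present in this repo):
```python
from llama_cpp import Llama

# Filename is an assumption -- pick a quant file from this repo.
llm = Llama(model_path="DownSide-2x7B-Toxic-TOM-RP.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a one-line greeting.", max_tokens=32)
print(out["choices"][0]["text"])
```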
No imatrix quants. Haven't wrapped my head around that yet. |
dp15/first_lunar_RL-Agent | dp15 | "2023-06-30T01:08:50Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-30T01:08:19Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.80 +/- 22.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption).
checkpoint = load_from_hub("dp15/first_lunar_RL-Agent", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
joshuasundance/mtg-coloridentity-multilabel-classification | joshuasundance | "2024-01-31T04:15:59Z" | 51 | 3 | setfit | [
"setfit",
"safetensors",
"mpnet",
"mtg",
"multilabel",
"magic",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"en",
"dataset:joshuasundance/mtg-coloridentity-multilabel-classification",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"license:mit",
"model-index",
"region:us"
] | text-classification | "2024-01-31T03:55:34Z" | ---
library_name: setfit
tags:
- mtg
- multilabel
- magic
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: >-
Stoic Farmer
When Stoic Farmer enters the battlefield, search your library for a basic
Plains card and reveal it. If an opponent controls more lands than you, put
it onto the battlefield tapped. Otherwise put it into your hand. Then
shuffle.
Foretell {1}{W} (During your turn, you may pay {2} and exile this card from
your hand face down. Cast it on a later turn for its foretell cost.)
- text: |-
Hibernation Sliver
All Slivers have "Pay 2 life: Return this permanent to its owner's hand."
- text: |-
Scattershot Archer
{T}: Scattershot Archer deals 1 damage to each creature with flying.
- text: |-
Seize the Initiative
Target creature gets +1/+1 and gains first strike until end of turn.
- text: >-
Voldaren Duelist
Haste
When Voldaren Duelist enters the battlefield, target creature can't block
this turn.
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: mtg-coloridentity-multilabel-classification
type: joshuasundance/mtg-coloridentity-multilabel-classification
split: test
metrics:
- type: accuracy
value: 0.7145687016027372
name: Accuracy
license: mit
datasets:
- joshuasundance/mtg-coloridentity-multilabel-classification
language:
- en
---
This is a proof-of-concept model trained on `datasets/joshuasundance/mtg-coloridentity-multilabel-classification`.
It takes the card name plus rules text as a single string and outputs the color identity as a multi-hot encoding:
```python
colors = ['B', 'G', 'R', 'U', 'W']  # Black, Green, Red, Blue, White
b = [1, 0, 0, 0, 0]    # mono-black
bw = [1, 0, 0, 0, 1]   # black + white
gru = [0, 1, 1, 1, 0]  # green + red + blue
# and so on
```
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co./sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves two stages (a code sketch follows the list):
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
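A minimal sketch of that two-stage loop with the `setfit` trainer API (a toy dataset stands in for the real one; the actual training script is not published):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy two-card dataset; labels are multi-hot over ['B', 'G', 'R', 'U', 'W'].
train_dataset = Dataset.from_dict({
    "text": [
        "Lightning Bolt\nLightning Bolt deals 3 damage to any target.",
        "Giant Growth\nTarget creature gets +3/+3 until end of turn.",
    ],
    "label": [[0, 0, 1, 0, 0], [0, 1, 0, 0, 0]],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",  # yields the OneVsRestClassifier head
)
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```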
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co./sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co./datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co./blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7146 |
### Classification Report
```text
precision recall f1-score support
0.80 0.77 0.78 594
B 0.81 0.76 0.78 821
BG 0.42 0.56 0.48 63
BGR 0.46 0.55 0.50 22
BGRU 0.00 0.00 0.00 0
BGRUW 0.73 0.33 0.46 24
BGRW 0.00 0.00 0.00 0
BGU 0.27 0.38 0.32 8
BGUW 0.12 1.00 0.22 1
BGW 0.14 0.33 0.19 9
BR 0.41 0.59 0.48 80
BRU 0.55 0.50 0.52 24
BRUW 0.00 0.00 0.00 0
BRW 0.29 0.36 0.32 14
BU 0.53 0.56 0.54 91
BUW 0.29 0.43 0.34 14
BW 0.36 0.37 0.37 73
G 0.77 0.76 0.77 791
GR 0.42 0.46 0.44 85
GRU 0.14 0.22 0.17 9
GRUW 0.00 0.00 0.00 0
GRW 0.27 0.50 0.35 18
GU 0.48 0.49 0.49 69
GUW 0.15 0.27 0.20 15
GW 0.40 0.43 0.41 89
R 0.81 0.77 0.79 803
RU 0.43 0.51 0.47 68
RUW 0.20 0.43 0.27 7
RW 0.47 0.49 0.48 80
U 0.83 0.81 0.82 818
UW 0.37 0.43 0.40 86
W 0.77 0.72 0.74 777
accuracy 0.71 5553
macro avg 0.40 0.46 0.41 5553
weighted avg 0.73 0.71 0.72 5553
```
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("joshuasundance/mtg-coloridentity-multilabel-classification")
# Run inference
preds = model("Scattershot Archer
{T}: Scattershot Archer deals 1 damage to each creature with flying.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 27.6981 | 125 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2605 | - |
| 0.0009 | 50 | 0.3223 | - |
| 0.0018 | 100 | 0.2684 | - |
| 0.0027 | 150 | 0.2719 | - |
| 0.0036 | 200 | 0.2633 | - |
| 0.0045 | 250 | 0.2316 | - |
| 0.0054 | 300 | 0.2421 | - |
| 0.0063 | 350 | 0.214 | - |
| 0.0072 | 400 | 0.2395 | - |
| 0.0081 | 450 | 0.2538 | - |
| 0.0090 | 500 | 0.2395 | - |
| 0.0099 | 550 | 0.2397 | - |
| 0.0108 | 600 | 0.26 | - |
| 0.0117 | 650 | 0.2746 | - |
| 0.0126 | 700 | 0.2475 | - |
| 0.0135 | 750 | 0.2246 | - |
| 0.0144 | 800 | 0.2283 | - |
| 0.0153 | 850 | 0.2298 | - |
| 0.0162 | 900 | 0.215 | - |
| 0.0171 | 950 | 0.207 | - |
| 0.0180 | 1000 | 0.267 | - |
| 0.0189 | 1050 | 0.2208 | - |
| 0.0198 | 1100 | 0.279 | - |
| 0.0207 | 1150 | 0.229 | - |
| 0.0216 | 1200 | 0.2151 | - |
| 0.0225 | 1250 | 0.2535 | - |
| 0.0234 | 1300 | 0.2877 | - |
| 0.0243 | 1350 | 0.2363 | - |
| 0.0252 | 1400 | 0.2105 | - |
| 0.0261 | 1450 | 0.2008 | - |
| 0.0270 | 1500 | 0.2007 | - |
| 0.0279 | 1550 | 0.2174 | - |
| 0.0288 | 1600 | 0.2656 | - |
| 0.0297 | 1650 | 0.2201 | - |
| 0.0306 | 1700 | 0.2003 | - |
| 0.0315 | 1750 | 0.1754 | - |
| 0.0324 | 1800 | 0.1691 | - |
| 0.0333 | 1850 | 0.1601 | - |
| 0.0342 | 1900 | 0.2164 | - |
| 0.0351 | 1950 | 0.2484 | - |
| 0.0360 | 2000 | 0.1987 | - |
| 0.0369 | 2050 | 0.2213 | - |
| 0.0378 | 2100 | 0.1926 | - |
| 0.0387 | 2150 | 0.203 | - |
| 0.0396 | 2200 | 0.3085 | - |
| 0.0405 | 2250 | 0.1433 | - |
| 0.0414 | 2300 | 0.2142 | - |
| 0.0423 | 2350 | 0.2168 | - |
| 0.0432 | 2400 | 0.1593 | - |
| 0.0441 | 2450 | 0.1916 | - |
| 0.0450 | 2500 | 0.2276 | - |
| 0.0459 | 2550 | 0.1327 | - |
| 0.0468 | 2600 | 0.2008 | - |
| 0.0477 | 2650 | 0.176 | - |
| 0.0486 | 2700 | 0.156 | - |
| 0.0495 | 2750 | 0.1111 | - |
| 0.0504 | 2800 | 0.1874 | - |
| 0.0513 | 2850 | 0.1932 | - |
| 0.0522 | 2900 | 0.1368 | - |
| 0.0531 | 2950 | 0.2034 | - |
| 0.0540 | 3000 | 0.2156 | - |
| 0.0549 | 3050 | 0.2058 | - |
| 0.0558 | 3100 | 0.2087 | - |
| 0.0567 | 3150 | 0.2237 | - |
| 0.0576 | 3200 | 0.1176 | - |
| 0.0585 | 3250 | 0.2106 | - |
| 0.0594 | 3300 | 0.1955 | - |
| 0.0603 | 3350 | 0.3023 | - |
| 0.0612 | 3400 | 0.1772 | - |
| 0.0621 | 3450 | 0.1349 | - |
| 0.0630 | 3500 | 0.228 | - |
| 0.0639 | 3550 | 0.1392 | - |
| 0.0648 | 3600 | 0.2427 | - |
| 0.0657 | 3650 | 0.1837 | - |
| 0.0666 | 3700 | 0.1677 | - |
| 0.0675 | 3750 | 0.1701 | - |
| 0.0684 | 3800 | 0.1005 | - |
| 0.0693 | 3850 | 0.1389 | - |
| 0.0702 | 3900 | 0.1104 | - |
| 0.0711 | 3950 | 0.1995 | - |
| 0.0720 | 4000 | 0.1954 | - |
| 0.0729 | 4050 | 0.2294 | - |
| 0.0738 | 4100 | 0.2467 | - |
| 0.0747 | 4150 | 0.1473 | - |
| 0.0756 | 4200 | 0.2144 | - |
| 0.0765 | 4250 | 0.2426 | - |
| 0.0774 | 4300 | 0.117 | - |
| 0.0784 | 4350 | 0.1414 | - |
| 0.0793 | 4400 | 0.1858 | - |
| 0.0802 | 4450 | 0.142 | - |
| 0.0811 | 4500 | 0.2029 | - |
| 0.0820 | 4550 | 0.0982 | - |
| 0.0829 | 4600 | 0.2232 | - |
| 0.0838 | 4650 | 0.1262 | - |
| 0.0847 | 4700 | 0.1787 | - |
| 0.0856 | 4750 | 0.1136 | - |
| 0.0865 | 4800 | 0.1539 | - |
| 0.0874 | 4850 | 0.1044 | - |
| 0.0883 | 4900 | 0.1124 | - |
| 0.0892 | 4950 | 0.2406 | - |
| 0.0901 | 5000 | 0.1673 | - |
| 0.0910 | 5050 | 0.1529 | - |
| 0.0919 | 5100 | 0.1105 | - |
| 0.0928 | 5150 | 0.2392 | - |
| 0.0937 | 5200 | 0.1937 | - |
| 0.0946 | 5250 | 0.2045 | - |
| 0.0955 | 5300 | 0.1527 | - |
| 0.0964 | 5350 | 0.0835 | - |
| 0.0973 | 5400 | 0.2018 | - |
| 0.0982 | 5450 | 0.1031 | - |
| 0.0991 | 5500 | 0.1805 | - |
| 0.1000 | 5550 | 0.2392 | - |
| 0.1009 | 5600 | 0.0867 | - |
| 0.1018 | 5650 | 0.1787 | - |
| 0.1027 | 5700 | 0.0891 | - |
| 0.1036 | 5750 | 0.2148 | - |
| 0.1045 | 5800 | 0.0937 | - |
| 0.1054 | 5850 | 0.1451 | - |
| 0.1063 | 5900 | 0.189 | - |
| 0.1072 | 5950 | 0.1082 | - |
| 0.1081 | 6000 | 0.2226 | - |
| 0.1090 | 6050 | 0.1938 | - |
| 0.1099 | 6100 | 0.1394 | - |
| 0.1108 | 6150 | 0.2158 | - |
| 0.1117 | 6200 | 0.142 | - |
| 0.1126 | 6250 | 0.1007 | - |
| 0.1135 | 6300 | 0.1545 | - |
| 0.1144 | 6350 | 0.1162 | - |
| 0.1153 | 6400 | 0.142 | - |
| 0.1162 | 6450 | 0.1252 | - |
| 0.1171 | 6500 | 0.1764 | - |
| 0.1180 | 6550 | 0.0405 | - |
| 0.1189 | 6600 | 0.0682 | - |
| 0.1198 | 6650 | 0.1181 | - |
| 0.1207 | 6700 | 0.1066 | - |
| 0.1216 | 6750 | 0.2133 | - |
| 0.1225 | 6800 | 0.1513 | - |
| 0.1234 | 6850 | 0.0937 | - |
| 0.1243 | 6900 | 0.1249 | - |
| 0.1252 | 6950 | 0.2474 | - |
| 0.1261 | 7000 | 0.0905 | - |
| 0.1270 | 7050 | 0.1067 | - |
| 0.1279 | 7100 | 0.1639 | - |
| 0.1288 | 7150 | 0.2112 | - |
| 0.1297 | 7200 | 0.0852 | - |
| 0.1306 | 7250 | 0.0963 | - |
| 0.1315 | 7300 | 0.1056 | - |
| 0.1324 | 7350 | 0.0679 | - |
| 0.1333 | 7400 | 0.0888 | - |
| 0.1342 | 7450 | 0.1096 | - |
| 0.1351 | 7500 | 0.0855 | - |
| 0.1360 | 7550 | 0.0702 | - |
| 0.1369 | 7600 | 0.0456 | - |
| 0.1378 | 7650 | 0.1596 | - |
| 0.1387 | 7700 | 0.1146 | - |
| 0.1396 | 7750 | 0.1561 | - |
| 0.1405 | 7800 | 0.134 | - |
| 0.1414 | 7850 | 0.1962 | - |
| 0.1423 | 7900 | 0.1557 | - |
| 0.1432 | 7950 | 0.1178 | - |
| 0.1441 | 8000 | 0.142 | - |
| 0.1450 | 8050 | 0.1368 | - |
| 0.1459 | 8100 | 0.1537 | - |
| 0.1468 | 8150 | 0.1454 | - |
| 0.1477 | 8200 | 0.2237 | - |
| 0.1486 | 8250 | 0.1172 | - |
| 0.1495 | 8300 | 0.0987 | - |
| 0.1504 | 8350 | 0.062 | - |
| 0.1513 | 8400 | 0.0981 | - |
| 0.1522 | 8450 | 0.1276 | - |
| 0.1531 | 8500 | 0.1503 | - |
| 0.1540 | 8550 | 0.099 | - |
| 0.1549 | 8600 | 0.1606 | - |
| 0.1558 | 8650 | 0.1157 | - |
| 0.1567 | 8700 | 0.0971 | - |
| 0.1576 | 8750 | 0.1783 | - |
| 0.1585 | 8800 | 0.0727 | - |
| 0.1594 | 8850 | 0.0888 | - |
| 0.1603 | 8900 | 0.1898 | - |
| 0.1612 | 8950 | 0.1185 | - |
| 0.1621 | 9000 | 0.1595 | - |
| 0.1630 | 9050 | 0.1044 | - |
| 0.1639 | 9100 | 0.1146 | - |
| 0.1648 | 9150 | 0.1444 | - |
| 0.1657 | 9200 | 0.0684 | - |
| 0.1666 | 9250 | 0.0735 | - |
| 0.1675 | 9300 | 0.0986 | - |
| 0.1684 | 9350 | 0.1439 | - |
| 0.1693 | 9400 | 0.0933 | - |
| 0.1702 | 9450 | 0.1357 | - |
| 0.1711 | 9500 | 0.0887 | - |
| 0.1720 | 9550 | 0.1112 | - |
| 0.1729 | 9600 | 0.0949 | - |
| 0.1738 | 9650 | 0.0777 | - |
| 0.1747 | 9700 | 0.0964 | - |
| 0.1756 | 9750 | 0.1507 | - |
| 0.1765 | 9800 | 0.0949 | - |
| 0.1774 | 9850 | 0.0218 | - |
| 0.1783 | 9900 | 0.0985 | - |
| 0.1792 | 9950 | 0.0662 | - |
| 0.1801 | 10000 | 0.1998 | - |
| 0.1810 | 10050 | 0.0385 | - |
| 0.1819 | 10100 | 0.0991 | - |
| 0.1828 | 10150 | 0.1084 | - |
| 0.1837 | 10200 | 0.0837 | - |
| 0.1846 | 10250 | 0.1003 | - |
| 0.1855 | 10300 | 0.1278 | - |
| 0.1864 | 10350 | 0.1085 | - |
| 0.1873 | 10400 | 0.093 | - |
| 0.1882 | 10450 | 0.0794 | - |
| 0.1891 | 10500 | 0.1013 | - |
| 0.1900 | 10550 | 0.0605 | - |
| 0.1909 | 10600 | 0.1076 | - |
| 0.1918 | 10650 | 0.2 | - |
| 0.1927 | 10700 | 0.2024 | - |
| 0.1936 | 10750 | 0.1346 | - |
| 0.1945 | 10800 | 0.0548 | - |
| 0.1954 | 10850 | 0.0828 | - |
| 0.1963 | 10900 | 0.0776 | - |
| 0.1972 | 10950 | 0.0694 | - |
| 0.1981 | 11000 | 0.1473 | - |
| 0.1990 | 11050 | 0.1222 | - |
| 0.1999 | 11100 | 0.0889 | - |
| 0.2008 | 11150 | 0.1062 | - |
| 0.2017 | 11200 | 0.1663 | - |
| 0.2026 | 11250 | 0.1587 | - |
| 0.2035 | 11300 | 0.1054 | - |
| 0.2044 | 11350 | 0.0915 | - |
| 0.2053 | 11400 | 0.0517 | - |
| 0.2062 | 11450 | 0.0782 | - |
| 0.2071 | 11500 | 0.1826 | - |
| 0.2080 | 11550 | 0.1524 | - |
| 0.2089 | 11600 | 0.1023 | - |
| 0.2098 | 11650 | 0.1056 | - |
| 0.2107 | 11700 | 0.1251 | - |
| 0.2116 | 11750 | 0.0895 | - |
| 0.2125 | 11800 | 0.1104 | - |
| 0.2134 | 11850 | 0.0762 | - |
| 0.2143 | 11900 | 0.1532 | - |
| 0.2152 | 11950 | 0.0941 | - |
| 0.2161 | 12000 | 0.1207 | - |
| 0.2170 | 12050 | 0.0839 | - |
| 0.2179 | 12100 | 0.0431 | - |
| 0.2188 | 12150 | 0.1572 | - |
| 0.2197 | 12200 | 0.0717 | - |
| 0.2206 | 12250 | 0.1244 | - |
| 0.2215 | 12300 | 0.1261 | - |
| 0.2224 | 12350 | 0.0732 | - |
| 0.2233 | 12400 | 0.1829 | - |
| 0.2242 | 12450 | 0.1142 | - |
| 0.2251 | 12500 | 0.0272 | - |
| 0.2260 | 12550 | 0.1645 | - |
| 0.2269 | 12600 | 0.0775 | - |
| 0.2278 | 12650 | 0.1358 | - |
| 0.2287 | 12700 | 0.0485 | - |
| 0.2296 | 12750 | 0.1348 | - |
| 0.2305 | 12800 | 0.1387 | - |
| 0.2314 | 12850 | 0.0783 | - |
| 0.2323 | 12900 | 0.0713 | - |
| 0.2332 | 12950 | 0.0838 | - |
| 0.2341 | 13000 | 0.1496 | - |
| 0.2351 | 13050 | 0.0463 | - |
| 0.2360 | 13100 | 0.0665 | - |
| 0.2369 | 13150 | 0.0149 | - |
| 0.2378 | 13200 | 0.0744 | - |
| 0.2387 | 13250 | 0.0411 | - |
| 0.2396 | 13300 | 0.1816 | - |
| 0.2405 | 13350 | 0.0956 | - |
| 0.2414 | 13400 | 0.1163 | - |
| 0.2423 | 13450 | 0.1136 | - |
| 0.2432 | 13500 | 0.116 | - |
| 0.2441 | 13550 | 0.0997 | - |
| 0.2450 | 13600 | 0.0649 | - |
| 0.2459 | 13650 | 0.0291 | - |
| 0.2468 | 13700 | 0.1282 | - |
| 0.2477 | 13750 | 0.0576 | - |
| 0.2486 | 13800 | 0.0891 | - |
| 0.2495 | 13850 | 0.0872 | - |
| 0.2504 | 13900 | 0.0606 | - |
| 0.2513 | 13950 | 0.0508 | - |
| 0.2522 | 14000 | 0.0457 | - |
| 0.2531 | 14050 | 0.0656 | - |
| 0.2540 | 14100 | 0.0936 | - |
| 0.2549 | 14150 | 0.0739 | - |
| 0.2558 | 14200 | 0.0941 | - |
| 0.2567 | 14250 | 0.1054 | - |
| 0.2576 | 14300 | 0.1083 | - |
| 0.2585 | 14350 | 0.0662 | - |
| 0.2594 | 14400 | 0.1149 | - |
| 0.2603 | 14450 | 0.1683 | - |
| 0.2612 | 14500 | 0.0796 | - |
| 0.2621 | 14550 | 0.0439 | - |
| 0.2630 | 14600 | 0.027 | - |
| 0.2639 | 14650 | 0.0666 | - |
| 0.2648 | 14700 | 0.041 | - |
| 0.2657 | 14750 | 0.0915 | - |
| 0.2666 | 14800 | 0.0678 | - |
| 0.2675 | 14850 | 0.1054 | - |
| 0.2684 | 14900 | 0.051 | - |
| 0.2693 | 14950 | 0.1416 | - |
| 0.2702 | 15000 | 0.0569 | - |
| 0.2711 | 15050 | 0.0851 | - |
| 0.2720 | 15100 | 0.065 | - |
| 0.2729 | 15150 | 0.0199 | - |
| 0.2738 | 15200 | 0.0837 | - |
| 0.2747 | 15250 | 0.0828 | - |
| 0.2756 | 15300 | 0.1139 | - |
| 0.2765 | 15350 | 0.1072 | - |
| 0.2774 | 15400 | 0.1733 | - |
| 0.2783 | 15450 | 0.1137 | - |
| 0.2792 | 15500 | 0.0504 | - |
| 0.2801 | 15550 | 0.0513 | - |
| 0.2810 | 15600 | 0.0579 | - |
| 0.2819 | 15650 | 0.0765 | - |
| 0.2828 | 15700 | 0.0466 | - |
| 0.2837 | 15750 | 0.0887 | - |
| 0.2846 | 15800 | 0.0741 | - |
| 0.2855 | 15850 | 0.0617 | - |
| 0.2864 | 15900 | 0.0368 | - |
| 0.2873 | 15950 | 0.0632 | - |
| 0.2882 | 16000 | 0.062 | - |
| 0.2891 | 16050 | 0.0413 | - |
| 0.2900 | 16100 | 0.0347 | - |
| 0.2909 | 16150 | 0.1019 | - |
| 0.2918 | 16200 | 0.0846 | - |
| 0.2927 | 16250 | 0.0714 | - |
| 0.2936 | 16300 | 0.0389 | - |
| 0.2945 | 16350 | 0.0514 | - |
| 0.2954 | 16400 | 0.1095 | - |
| 0.2963 | 16450 | 0.0753 | - |
| 0.2972 | 16500 | 0.0648 | - |
| 0.2981 | 16550 | 0.062 | - |
| 0.2990 | 16600 | 0.065 | - |
| 0.2999 | 16650 | 0.1014 | - |
| 0.3008 | 16700 | 0.037 | - |
| 0.3017 | 16750 | 0.1058 | - |
| 0.3026 | 16800 | 0.1038 | - |
| 0.3035 | 16850 | 0.0733 | - |
| 0.3044 | 16900 | 0.0791 | - |
| 0.3053 | 16950 | 0.1121 | - |
| 0.3062 | 17000 | 0.1084 | - |
| 0.3071 | 17050 | 0.0496 | - |
| 0.3080 | 17100 | 0.0733 | - |
| 0.3089 | 17150 | 0.0379 | - |
| 0.3098 | 17200 | 0.0448 | - |
| 0.3107 | 17250 | 0.1056 | - |
| 0.3116 | 17300 | 0.0687 | - |
| 0.3125 | 17350 | 0.0245 | - |
| 0.3134 | 17400 | 0.0584 | - |
| 0.3143 | 17450 | 0.0946 | - |
| 0.3152 | 17500 | 0.0857 | - |
| 0.3161 | 17550 | 0.1144 | - |
| 0.3170 | 17600 | 0.1119 | - |
| 0.3179 | 17650 | 0.1323 | - |
| 0.3188 | 17700 | 0.1026 | - |
| 0.3197 | 17750 | 0.0933 | - |
| 0.3206 | 17800 | 0.0671 | - |
| 0.3215 | 17850 | 0.0132 | - |
| 0.3224 | 17900 | 0.0611 | - |
| 0.3233 | 17950 | 0.056 | - |
| 0.3242 | 18000 | 0.0329 | - |
| 0.3251 | 18050 | 0.0358 | - |
| 0.3260 | 18100 | 0.1389 | - |
| 0.3269 | 18150 | 0.0653 | - |
| 0.3278 | 18200 | 0.0438 | - |
| 0.3287 | 18250 | 0.0612 | - |
| 0.3296 | 18300 | 0.1245 | - |
| 0.3305 | 18350 | 0.1018 | - |
| 0.3314 | 18400 | 0.082 | - |
| 0.3323 | 18450 | 0.0577 | - |
| 0.3332 | 18500 | 0.1349 | - |
| 0.3341 | 18550 | 0.0871 | - |
| 0.3350 | 18600 | 0.0525 | - |
| 0.3359 | 18650 | 0.0551 | - |
| 0.3368 | 18700 | 0.046 | - |
| 0.3377 | 18750 | 0.1323 | - |
| 0.3386 | 18800 | 0.147 | - |
| 0.3395 | 18850 | 0.045 | - |
| 0.3404 | 18900 | 0.027 | - |
| 0.3413 | 18950 | 0.1375 | - |
| 0.3422 | 19000 | 0.0569 | - |
| 0.3431 | 19050 | 0.0454 | - |
| 0.3440 | 19100 | 0.1545 | - |
| 0.3449 | 19150 | 0.0703 | - |
| 0.3458 | 19200 | 0.0434 | - |
| 0.3467 | 19250 | 0.0133 | - |
| 0.3476 | 19300 | 0.095 | - |
| 0.3485 | 19350 | 0.0344 | - |
| 0.3494 | 19400 | 0.0449 | - |
| 0.3503 | 19450 | 0.1665 | - |
| 0.3512 | 19500 | 0.0521 | - |
| 0.3521 | 19550 | 0.0844 | - |
| 0.3530 | 19600 | 0.1029 | - |
| 0.3539 | 19650 | 0.0858 | - |
| 0.3548 | 19700 | 0.1014 | - |
| 0.3557 | 19750 | 0.076 | - |
| 0.3566 | 19800 | 0.0882 | - |
| 0.3575 | 19850 | 0.1332 | - |
| 0.3584 | 19900 | 0.0877 | - |
| 0.3593 | 19950 | 0.0531 | - |
| 0.3602 | 20000 | 0.0796 | - |
| 0.3611 | 20050 | 0.0783 | - |
| 0.3620 | 20100 | 0.044 | - |
| 0.3629 | 20150 | 0.182 | - |
| 0.3638 | 20200 | 0.0672 | - |
| 0.3647 | 20250 | 0.0571 | - |
| 0.3656 | 20300 | 0.0452 | - |
| 0.3665 | 20350 | 0.0693 | - |
| 0.3674 | 20400 | 0.087 | - |
| 0.3683 | 20450 | 0.0594 | - |
| 0.3692 | 20500 | 0.0791 | - |
| 0.3701 | 20550 | 0.0546 | - |
| 0.3710 | 20600 | 0.1234 | - |
| 0.3719 | 20650 | 0.0592 | - |
| 0.3728 | 20700 | 0.0462 | - |
| 0.3737 | 20750 | 0.064 | - |
| 0.3746 | 20800 | 0.0991 | - |
| 0.3755 | 20850 | 0.0322 | - |
| 0.3764 | 20900 | 0.0624 | - |
| 0.3773 | 20950 | 0.0522 | - |
| 0.3782 | 21000 | 0.0826 | - |
| 0.3791 | 21050 | 0.0962 | - |
| 0.3800 | 21100 | 0.0556 | - |
| 0.3809 | 21150 | 0.0287 | - |
| 0.3818 | 21200 | 0.0798 | - |
| 0.3827 | 21250 | 0.0722 | - |
| 0.3836 | 21300 | 0.0784 | - |
| 0.3845 | 21350 | 0.0558 | - |
| 0.3854 | 21400 | 0.0405 | - |
| 0.3863 | 21450 | 0.1242 | - |
| 0.3872 | 21500 | 0.1275 | - |
| 0.3881 | 21550 | 0.0679 | - |
| 0.3890 | 21600 | 0.05 | - |
| 0.3899 | 21650 | 0.0593 | - |
| 0.3909 | 21700 | 0.1064 | - |
| 0.3918 | 21750 | 0.1139 | - |
| 0.3927 | 21800 | 0.1437 | - |
| 0.3936 | 21850 | 0.1023 | - |
| 0.3945 | 21900 | 0.0259 | - |
| 0.3954 | 21950 | 0.0481 | - |
| 0.3963 | 22000 | 0.0358 | - |
| 0.3972 | 22050 | 0.0236 | - |
| 0.3981 | 22100 | 0.0573 | - |
| 0.3990 | 22150 | 0.123 | - |
| 0.3999 | 22200 | 0.0783 | - |
| 0.4008 | 22250 | 0.0661 | - |
| 0.4017 | 22300 | 0.0598 | - |
| 0.4026 | 22350 | 0.0387 | - |
| 0.4035 | 22400 | 0.0347 | - |
| 0.4044 | 22450 | 0.0271 | - |
| 0.4053 | 22500 | 0.0532 | - |
| 0.4062 | 22550 | 0.0515 | - |
| 0.4071 | 22600 | 0.0416 | - |
| 0.4080 | 22650 | 0.0912 | - |
| 0.4089 | 22700 | 0.0275 | - |
| 0.4098 | 22750 | 0.1205 | - |
| 0.4107 | 22800 | 0.025 | - |
| 0.4116 | 22850 | 0.0827 | - |
| 0.4125 | 22900 | 0.1313 | - |
| 0.4134 | 22950 | 0.0709 | - |
| 0.4143 | 23000 | 0.045 | - |
| 0.4152 | 23050 | 0.07 | - |
| 0.4161 | 23100 | 0.1394 | - |
| 0.4170 | 23150 | 0.0125 | - |
| 0.4179 | 23200 | 0.0982 | - |
| 0.4188 | 23250 | 0.0595 | - |
| 0.4197 | 23300 | 0.1046 | - |
| 0.4206 | 23350 | 0.0363 | - |
| 0.4215 | 23400 | 0.0789 | - |
| 0.4224 | 23450 | 0.0477 | - |
| 0.4233 | 23500 | 0.1119 | - |
| 0.4242 | 23550 | 0.0854 | - |
| 0.4251 | 23600 | 0.0811 | - |
| 0.4260 | 23650 | 0.0482 | - |
| 0.4269 | 23700 | 0.0241 | - |
| 0.4278 | 23750 | 0.0508 | - |
| 0.4287 | 23800 | 0.0437 | - |
| 0.4296 | 23850 | 0.1025 | - |
| 0.4305 | 23900 | 0.0857 | - |
| 0.4314 | 23950 | 0.0242 | - |
| 0.4323 | 24000 | 0.0993 | - |
| 0.4332 | 24050 | 0.0426 | - |
| 0.4341 | 24100 | 0.0259 | - |
| 0.4350 | 24150 | 0.0612 | - |
| 0.4359 | 24200 | 0.0604 | - |
| 0.4368 | 24250 | 0.0838 | - |
| 0.4377 | 24300 | 0.1664 | - |
| 0.4386 | 24350 | 0.0614 | - |
| 0.4395 | 24400 | 0.0525 | - |
| 0.4404 | 24450 | 0.036 | - |
| 0.4413 | 24500 | 0.0293 | - |
| 0.4422 | 24550 | 0.0181 | - |
| 0.4431 | 24600 | 0.0901 | - |
| 0.4440 | 24650 | 0.0812 | - |
| 0.4449 | 24700 | 0.0439 | - |
| 0.4458 | 24750 | 0.1715 | - |
| 0.4467 | 24800 | 0.0555 | - |
| 0.4476 | 24850 | 0.0177 | - |
| 0.4485 | 24900 | 0.0253 | - |
| 0.4494 | 24950 | 0.1183 | - |
| 0.4503 | 25000 | 0.0599 | - |
| 0.4512 | 25050 | 0.0783 | - |
| 0.4521 | 25100 | 0.0889 | - |
| 0.4530 | 25150 | 0.0186 | - |
| 0.4539 | 25200 | 0.075 | - |
| 0.4548 | 25250 | 0.03 | - |
| 0.4557 | 25300 | 0.0311 | - |
| 0.4566 | 25350 | 0.0826 | - |
| 0.4575 | 25400 | 0.0406 | - |
| 0.4584 | 25450 | 0.0672 | - |
| 0.4593 | 25500 | 0.0872 | - |
| 0.4602 | 25550 | 0.1159 | - |
| 0.4611 | 25600 | 0.0405 | - |
| 0.4620 | 25650 | 0.0318 | - |
| 0.4629 | 25700 | 0.075 | - |
| 0.4638 | 25750 | 0.0533 | - |
| 0.4647 | 25800 | 0.0498 | - |
| 0.4656 | 25850 | 0.0694 | - |
| 0.4665 | 25900 | 0.0394 | - |
| 0.4674 | 25950 | 0.1791 | - |
| 0.4683 | 26000 | 0.0315 | - |
| 0.4692 | 26050 | 0.0277 | - |
| 0.4701 | 26100 | 0.0447 | - |
| 0.4710 | 26150 | 0.059 | - |
| 0.4719 | 26200 | 0.0787 | - |
| 0.4728 | 26250 | 0.0743 | - |
| 0.4737 | 26300 | 0.0304 | - |
| 0.4746 | 26350 | 0.0507 | - |
| 0.4755 | 26400 | 0.0389 | - |
| 0.4764 | 26450 | 0.0399 | - |
| 0.4773 | 26500 | 0.0201 | - |
| 0.4782 | 26550 | 0.0423 | - |
| 0.4791 | 26600 | 0.0408 | - |
| 0.4800 | 26650 | 0.0329 | - |
| 0.4809 | 26700 | 0.0288 | - |
| 0.4818 | 26750 | 0.1072 | - |
| 0.4827 | 26800 | 0.1211 | - |
| 0.4836 | 26850 | 0.0531 | - |
| 0.4845 | 26900 | 0.0967 | - |
| 0.4854 | 26950 | 0.0188 | - |
| 0.4863 | 27000 | 0.0868 | - |
| 0.4872 | 27050 | 0.1007 | - |
| 0.4881 | 27100 | 0.0375 | - |
| 0.4890 | 27150 | 0.0529 | - |
| 0.4899 | 27200 | 0.0422 | - |
| 0.4908 | 27250 | 0.0391 | - |
| 0.4917 | 27300 | 0.0221 | - |
| 0.4926 | 27350 | 0.1447 | - |
| 0.4935 | 27400 | 0.0505 | - |
| 0.4944 | 27450 | 0.1414 | - |
| 0.4953 | 27500 | 0.0798 | - |
| 0.4962 | 27550 | 0.0375 | - |
| 0.4971 | 27600 | 0.0603 | - |
| 0.4980 | 27650 | 0.1289 | - |
| 0.4989 | 27700 | 0.0574 | - |
| 0.4998 | 27750 | 0.0957 | - |
| 0.5007 | 27800 | 0.0799 | - |
| 0.5016 | 27850 | 0.0755 | - |
| 0.5025 | 27900 | 0.0888 | - |
| 0.5034 | 27950 | 0.0168 | - |
| 0.5043 | 28000 | 0.0764 | - |
| 0.5052 | 28050 | 0.0392 | - |
| 0.5061 | 28100 | 0.0383 | - |
| 0.5070 | 28150 | 0.0313 | - |
| 0.5079 | 28200 | 0.0074 | - |
| 0.5088 | 28250 | 0.091 | - |
| 0.5097 | 28300 | 0.0608 | - |
| 0.5106 | 28350 | 0.0332 | - |
| 0.5115 | 28400 | 0.0997 | - |
| 0.5124 | 28450 | 0.0805 | - |
| 0.5133 | 28500 | 0.0378 | - |
| 0.5142 | 28550 | 0.08 | - |
| 0.5151 | 28600 | 0.0394 | - |
| 0.5160 | 28650 | 0.0712 | - |
| 0.5169 | 28700 | 0.0284 | - |
| 0.5178 | 28750 | 0.0301 | - |
| 0.5187 | 28800 | 0.1545 | - |
| 0.5196 | 28850 | 0.0546 | - |
| 0.5205 | 28900 | 0.0302 | - |
| 0.5214 | 28950 | 0.0518 | - |
| 0.5223 | 29000 | 0.0488 | - |
| 0.5232 | 29050 | 0.091 | - |
| 0.5241 | 29100 | 0.0989 | - |
| 0.5250 | 29150 | 0.0481 | - |
| 0.5259 | 29200 | 0.1095 | - |
| 0.5268 | 29250 | 0.0343 | - |
| 0.5277 | 29300 | 0.0334 | - |
| 0.5286 | 29350 | 0.0449 | - |
| 0.5295 | 29400 | 0.064 | - |
| 0.5304 | 29450 | 0.0749 | - |
| 0.5313 | 29500 | 0.0084 | - |
| 0.5322 | 29550 | 0.0247 | - |
| 0.5331 | 29600 | 0.0204 | - |
| 0.5340 | 29650 | 0.0369 | - |
| 0.5349 | 29700 | 0.0921 | - |
| 0.5358 | 29750 | 0.0607 | - |
| 0.5367 | 29800 | 0.0384 | - |
| 0.5376 | 29850 | 0.0867 | - |
| 0.5385 | 29900 | 0.1473 | - |
| 0.5394 | 29950 | 0.0606 | - |
| 0.5403 | 30000 | 0.0799 | - |
| 0.5412 | 30050 | 0.0747 | - |
| 0.5421 | 30100 | 0.0769 | - |
| 0.5430 | 30150 | 0.0244 | - |
| 0.5439 | 30200 | 0.0641 | - |
| 0.5448 | 30250 | 0.0333 | - |
| 0.5457 | 30300 | 0.0245 | - |
| 0.5466 | 30350 | 0.0185 | - |
| 0.5476 | 30400 | 0.0478 | - |
| 0.5485 | 30450 | 0.0846 | - |
| 0.5494 | 30500 | 0.068 | - |
| 0.5503 | 30550 | 0.0687 | - |
| 0.5512 | 30600 | 0.0521 | - |
| 0.5521 | 30650 | 0.0592 | - |
| 0.5530 | 30700 | 0.0493 | - |
| 0.5539 | 30750 | 0.035 | - |
| 0.5548 | 30800 | 0.0773 | - |
| 0.5557 | 30850 | 0.0627 | - |
| 0.5566 | 30900 | 0.0299 | - |
| 0.5575 | 30950 | 0.0815 | - |
| 0.5584 | 31000 | 0.0207 | - |
| 0.5593 | 31050 | 0.0339 | - |
| 0.5602 | 31100 | 0.0205 | - |
| 0.5611 | 31150 | 0.0559 | - |
| 0.5620 | 31200 | 0.0605 | - |
| 0.5629 | 31250 | 0.0251 | - |
| 0.5638 | 31300 | 0.0905 | - |
| 0.5647 | 31350 | 0.0185 | - |
| 0.5656 | 31400 | 0.0461 | - |
| 0.5665 | 31450 | 0.0444 | - |
| 0.5674 | 31500 | 0.0465 | - |
| 0.5683 | 31550 | 0.0194 | - |
| 0.5692 | 31600 | 0.0353 | - |
| 0.5701 | 31650 | 0.0668 | - |
| 0.5710 | 31700 | 0.0831 | - |
| 0.5719 | 31750 | 0.0845 | - |
| 0.5728 | 31800 | 0.0525 | - |
| 0.5737 | 31850 | 0.0418 | - |
| 0.5746 | 31900 | 0.0239 | - |
| 0.5755 | 31950 | 0.0667 | - |
| 0.5764 | 32000 | 0.0419 | - |
| 0.5773 | 32050 | 0.1183 | - |
| 0.5782 | 32100 | 0.0358 | - |
| 0.5791 | 32150 | 0.0341 | - |
| 0.5800 | 32200 | 0.0462 | - |
| 0.5809 | 32250 | 0.0119 | - |
| 0.5818 | 32300 | 0.0309 | - |
| 0.5827 | 32350 | 0.0614 | - |
| 0.5836 | 32400 | 0.0688 | - |
| 0.5845 | 32450 | 0.0242 | - |
| 0.5854 | 32500 | 0.0384 | - |
| 0.5863 | 32550 | 0.0764 | - |
| 0.5872 | 32600 | 0.0512 | - |
| 0.5881 | 32650 | 0.0545 | - |
| 0.5890 | 32700 | 0.0515 | - |
| 0.5899 | 32750 | 0.0905 | - |
| 0.5908 | 32800 | 0.0266 | - |
| 0.5917 | 32850 | 0.1118 | - |
| 0.5926 | 32900 | 0.0393 | - |
| 0.5935 | 32950 | 0.0172 | - |
| 0.5944 | 33000 | 0.0303 | - |
| 0.5953 | 33050 | 0.0523 | - |
| 0.5962 | 33100 | 0.0153 | - |
| 0.5971 | 33150 | 0.0347 | - |
| 0.5980 | 33200 | 0.0144 | - |
| 0.5989 | 33250 | 0.0826 | - |
| 0.5998 | 33300 | 0.0194 | - |
| 0.6007 | 33350 | 0.031 | - |
| 0.6016 | 33400 | 0.1032 | - |
| 0.6025 | 33450 | 0.0865 | - |
| 0.6034 | 33500 | 0.0175 | - |
| 0.6043 | 33550 | 0.0785 | - |
| 0.6052 | 33600 | 0.0645 | - |
| 0.6061 | 33650 | 0.012 | - |
| 0.6070 | 33700 | 0.0237 | - |
| 0.6079 | 33750 | 0.0527 | - |
| 0.6088 | 33800 | 0.0375 | - |
| 0.6097 | 33850 | 0.0727 | - |
| 0.6106 | 33900 | 0.0764 | - |
| 0.6115 | 33950 | 0.0447 | - |
| 0.6124 | 34000 | 0.0358 | - |
| 0.6133 | 34050 | 0.0762 | - |
| 0.6142 | 34100 | 0.0306 | - |
| 0.6151 | 34150 | 0.0431 | - |
| 0.6160 | 34200 | 0.0256 | - |
| 0.6169 | 34250 | 0.0894 | - |
| 0.6178 | 34300 | 0.0297 | - |
| 0.6187 | 34350 | 0.0495 | - |
| 0.6196 | 34400 | 0.0394 | - |
| 0.6205 | 34450 | 0.0689 | - |
| 0.6214 | 34500 | 0.0224 | - |
| 0.6223 | 34550 | 0.0248 | - |
| 0.6232 | 34600 | 0.0587 | - |
| 0.6241 | 34650 | 0.0585 | - |
| 0.6250 | 34700 | 0.0279 | - |
| 0.6259 | 34750 | 0.0267 | - |
| 0.6268 | 34800 | 0.0182 | - |
| 0.6277 | 34850 | 0.0422 | - |
| 0.6286 | 34900 | 0.0166 | - |
| 0.6295 | 34950 | 0.0246 | - |
| 0.6304 | 35000 | 0.0248 | - |
| 0.6313 | 35050 | 0.0902 | - |
| 0.6322 | 35100 | 0.0309 | - |
| 0.6331 | 35150 | 0.0705 | - |
| 0.6340 | 35200 | 0.0507 | - |
| 0.6349 | 35250 | 0.019 | - |
| 0.6358 | 35300 | 0.0947 | - |
| 0.6367 | 35350 | 0.0226 | - |
| 0.6376 | 35400 | 0.0534 | - |
| 0.6385 | 35450 | 0.0607 | - |
| 0.6394 | 35500 | 0.0208 | - |
| 0.6403 | 35550 | 0.0198 | - |
| 0.6412 | 35600 | 0.0298 | - |
| 0.6421 | 35650 | 0.0191 | - |
| 0.6430 | 35700 | 0.0589 | - |
| 0.6439 | 35750 | 0.0909 | - |
| 0.6448 | 35800 | 0.0829 | - |
| 0.6457 | 35850 | 0.055 | - |
| 0.6466 | 35900 | 0.0188 | - |
| 0.6475 | 35950 | 0.033 | - |
| 0.6484 | 36000 | 0.0617 | - |
| 0.6493 | 36050 | 0.0374 | - |
| 0.6502 | 36100 | 0.0481 | - |
| 0.6511 | 36150 | 0.034 | - |
| 0.6520 | 36200 | 0.1183 | - |
| 0.6529 | 36250 | 0.0265 | - |
| 0.6538 | 36300 | 0.0413 | - |
| 0.6547 | 36350 | 0.0232 | - |
| 0.6556 | 36400 | 0.0244 | - |
| 0.6565 | 36450 | 0.0452 | - |
| 0.6574 | 36500 | 0.072 | - |
| 0.6583 | 36550 | 0.0216 | - |
| 0.6592 | 36600 | 0.0383 | - |
| 0.6601 | 36650 | 0.0625 | - |
| 0.6610 | 36700 | 0.0875 | - |
| 0.6619 | 36750 | 0.1208 | - |
| 0.6628 | 36800 | 0.0431 | - |
| 0.6637 | 36850 | 0.025 | - |
| 0.6646 | 36900 | 0.056 | - |
| 0.6655 | 36950 | 0.0399 | - |
| 0.6664 | 37000 | 0.0521 | - |
| 0.6673 | 37050 | 0.0415 | - |
| 0.6682 | 37100 | 0.1265 | - |
| 0.6691 | 37150 | 0.022 | - |
| 0.6700 | 37200 | 0.0395 | - |
| 0.6709 | 37250 | 0.0759 | - |
| 0.6718 | 37300 | 0.0428 | - |
| 0.6727 | 37350 | 0.0304 | - |
| 0.6736 | 37400 | 0.0203 | - |
| 0.6745 | 37450 | 0.0484 | - |
| 0.6754 | 37500 | 0.0384 | - |
| 0.6763 | 37550 | 0.0548 | - |
| 0.6772 | 37600 | 0.0346 | - |
| 0.6781 | 37650 | 0.0284 | - |
| 0.6790 | 37700 | 0.0247 | - |
| 0.6799 | 37750 | 0.0497 | - |
| 0.6808 | 37800 | 0.0796 | - |
| 0.6817 | 37850 | 0.0603 | - |
| 0.6826 | 37900 | 0.0381 | - |
| 0.6835 | 37950 | 0.0292 | - |
| 0.6844 | 38000 | 0.0295 | - |
| 0.6853 | 38050 | 0.0312 | - |
| 0.6862 | 38100 | 0.056 | - |
| 0.6871 | 38150 | 0.0198 | - |
| 0.6880 | 38200 | 0.0648 | - |
| 0.6889 | 38250 | 0.0529 | - |
| 0.6898 | 38300 | 0.018 | - |
| 0.6907 | 38350 | 0.0837 | - |
| 0.6916 | 38400 | 0.0366 | - |
| 0.6925 | 38450 | 0.0216 | - |
| 0.6934 | 38500 | 0.0436 | - |
| 0.6943 | 38550 | 0.0211 | - |
| 0.6952 | 38600 | 0.0532 | - |
| 0.6961 | 38650 | 0.0468 | - |
| 0.6970 | 38700 | 0.1161 | - |
| 0.6979 | 38750 | 0.073 | - |
| 0.6988 | 38800 | 0.0201 | - |
| 0.6997 | 38850 | 0.1307 | - |
| 0.7006 | 38900 | 0.0281 | - |
| 0.7015 | 38950 | 0.0891 | - |
| 0.7024 | 39000 | 0.0204 | - |
| 0.7034 | 39050 | 0.0363 | - |
| 0.7043 | 39100 | 0.0551 | - |
| 0.7052 | 39150 | 0.0213 | - |
| 0.7061 | 39200 | 0.139 | - |
| 0.7070 | 39250 | 0.013 | - |
| 0.7079 | 39300 | 0.024 | - |
| 0.7088 | 39350 | 0.071 | - |
| 0.7097 | 39400 | 0.026 | - |
| 0.7106 | 39450 | 0.0294 | - |
| 0.7115 | 39500 | 0.0245 | - |
| 0.7124 | 39550 | 0.011 | - |
| 0.7133 | 39600 | 0.0099 | - |
| 0.7142 | 39650 | 0.0829 | - |
| 0.7151 | 39700 | 0.0637 | - |
| 0.7160 | 39750 | 0.0109 | - |
| 0.7169 | 39800 | 0.0113 | - |
| 0.7178 | 39850 | 0.0297 | - |
| 0.7187 | 39900 | 0.0291 | - |
| 0.7196 | 39950 | 0.0526 | - |
| 0.7205 | 40000 | 0.0339 | - |
| 0.7214 | 40050 | 0.09 | - |
| 0.7223 | 40100 | 0.0565 | - |
| 0.7232 | 40150 | 0.0435 | - |
| 0.7241 | 40200 | 0.0218 | - |
| 0.7250 | 40250 | 0.032 | - |
| 0.7259 | 40300 | 0.1011 | - |
| 0.7268 | 40350 | 0.0334 | - |
| 0.7277 | 40400 | 0.0363 | - |
| 0.7286 | 40450 | 0.065 | - |
| 0.7295 | 40500 | 0.0408 | - |
| 0.7304 | 40550 | 0.0229 | - |
| 0.7313 | 40600 | 0.07 | - |
| 0.7322 | 40650 | 0.0315 | - |
| 0.7331 | 40700 | 0.0406 | - |
| 0.7340 | 40750 | 0.0151 | - |
| 0.7349 | 40800 | 0.0399 | - |
| 0.7358 | 40850 | 0.0399 | - |
| 0.7367 | 40900 | 0.0201 | - |
| 0.7376 | 40950 | 0.0402 | - |
| 0.7385 | 41000 | 0.0957 | - |
| 0.7394 | 41050 | 0.0271 | - |
| 0.7403 | 41100 | 0.0341 | - |
| 0.7412 | 41150 | 0.0572 | - |
| 0.7421 | 41200 | 0.0389 | - |
| 0.7430 | 41250 | 0.0319 | - |
| 0.7439 | 41300 | 0.0262 | - |
| 0.7448 | 41350 | 0.0809 | - |
| 0.7457 | 41400 | 0.0208 | - |
| 0.7466 | 41450 | 0.0423 | - |
| 0.7475 | 41500 | 0.0673 | - |
| 0.7484 | 41550 | 0.025 | - |
| 0.7493 | 41600 | 0.0764 | - |
| 0.7502 | 41650 | 0.0293 | - |
| 0.7511 | 41700 | 0.0545 | - |
| 0.7520 | 41750 | 0.0219 | - |
| 0.7529 | 41800 | 0.0119 | - |
| 0.7538 | 41850 | 0.08 | - |
| 0.7547 | 41900 | 0.0286 | - |
| 0.7556 | 41950 | 0.0307 | - |
| 0.7565 | 42000 | 0.0433 | - |
| 0.7574 | 42050 | 0.0518 | - |
| 0.7583 | 42100 | 0.0372 | - |
| 0.7592 | 42150 | 0.0542 | - |
| 0.7601 | 42200 | 0.0254 | - |
| 0.7610 | 42250 | 0.0604 | - |
| 0.7619 | 42300 | 0.029 | - |
| 0.7628 | 42350 | 0.0263 | - |
| 0.7637 | 42400 | 0.0317 | - |
| 0.7646 | 42450 | 0.0903 | - |
| 0.7655 | 42500 | 0.0277 | - |
| 0.7664 | 42550 | 0.0662 | - |
| 0.7673 | 42600 | 0.0205 | - |
| 0.7682 | 42650 | 0.0191 | - |
| 0.7691 | 42700 | 0.0866 | - |
| 0.7700 | 42750 | 0.0482 | - |
| 0.7709 | 42800 | 0.0391 | - |
| 0.7718 | 42850 | 0.0584 | - |
| 0.7727 | 42900 | 0.035 | - |
| 0.7736 | 42950 | 0.0393 | - |
| 0.7745 | 43000 | 0.0168 | - |
| 0.7754 | 43050 | 0.0522 | - |
| 0.7763 | 43100 | 0.0178 | - |
| 0.7772 | 43150 | 0.0961 | - |
| 0.7781 | 43200 | 0.0343 | - |
| 0.7790 | 43250 | 0.0251 | - |
| 0.7799 | 43300 | 0.019 | - |
| 0.7808 | 43350 | 0.0511 | - |
| 0.7817 | 43400 | 0.0326 | - |
| 0.7826 | 43450 | 0.0327 | - |
| 0.7835 | 43500 | 0.0951 | - |
| 0.7844 | 43550 | 0.0726 | - |
| 0.7853 | 43600 | 0.0562 | - |
| 0.7862 | 43650 | 0.0473 | - |
| 0.7871 | 43700 | 0.0116 | - |
| 0.7880 | 43750 | 0.0501 | - |
| 0.7889 | 43800 | 0.0672 | - |
| 0.7898 | 43850 | 0.0669 | - |
| 0.7907 | 43900 | 0.0623 | - |
| 0.7916 | 43950 | 0.0303 | - |
| 0.7925 | 44000 | 0.0651 | - |
| 0.7934 | 44050 | 0.0419 | - |
| 0.7943 | 44100 | 0.0671 | - |
| 0.7952 | 44150 | 0.1371 | - |
| 0.7961 | 44200 | 0.0345 | - |
| 0.7970 | 44250 | 0.0543 | - |
| 0.7979 | 44300 | 0.0438 | - |
| 0.7988 | 44350 | 0.0365 | - |
| 0.7997 | 44400 | 0.0504 | - |
| 0.8006 | 44450 | 0.0388 | - |
| 0.8015 | 44500 | 0.0135 | - |
| 0.8024 | 44550 | 0.0912 | - |
| 0.8033 | 44600 | 0.0565 | - |
| 0.8042 | 44650 | 0.0357 | - |
| 0.8051 | 44700 | 0.0879 | - |
| 0.8060 | 44750 | 0.0191 | - |
| 0.8069 | 44800 | 0.0293 | - |
| 0.8078 | 44850 | 0.0655 | - |
| 0.8087 | 44900 | 0.0092 | - |
| 0.8096 | 44950 | 0.0548 | - |
| 0.8105 | 45000 | 0.0355 | - |
| 0.8114 | 45050 | 0.0195 | - |
| 0.8123 | 45100 | 0.0484 | - |
| 0.8132 | 45150 | 0.0788 | - |
| 0.8141 | 45200 | 0.0342 | - |
| 0.8150 | 45250 | 0.0424 | - |
| 0.8159 | 45300 | 0.0346 | - |
| 0.8168 | 45350 | 0.0562 | - |
| 0.8177 | 45400 | 0.033 | - |
| 0.8186 | 45450 | 0.0591 | - |
| 0.8195 | 45500 | 0.0369 | - |
| 0.8204 | 45550 | 0.0693 | - |
| 0.8213 | 45600 | 0.052 | - |
| 0.8222 | 45650 | 0.0387 | - |
| 0.8231 | 45700 | 0.0391 | - |
| 0.8240 | 45750 | 0.0498 | - |
| 0.8249 | 45800 | 0.0262 | - |
| 0.8258 | 45850 | 0.0079 | - |
| 0.8267 | 45900 | 0.0333 | - |
| 0.8276 | 45950 | 0.0521 | - |
| 0.8285 | 46000 | 0.0607 | - |
| 0.8294 | 46050 | 0.0246 | - |
| 0.8303 | 46100 | 0.0308 | - |
| 0.8312 | 46150 | 0.0422 | - |
| 0.8321 | 46200 | 0.0277 | - |
| 0.8330 | 46250 | 0.0268 | - |
| 0.8339 | 46300 | 0.0896 | - |
| 0.8348 | 46350 | 0.0268 | - |
| 0.8357 | 46400 | 0.0271 | - |
| 0.8366 | 46450 | 0.0291 | - |
| 0.8375 | 46500 | 0.0663 | - |
| 0.8384 | 46550 | 0.0388 | - |
| 0.8393 | 46600 | 0.0205 | - |
| 0.8402 | 46650 | 0.0269 | - |
| 0.8411 | 46700 | 0.0197 | - |
| 0.8420 | 46750 | 0.0493 | - |
| 0.8429 | 46800 | 0.0345 | - |
| 0.8438 | 46850 | 0.0901 | - |
| 0.8447 | 46900 | 0.0237 | - |
| 0.8456 | 46950 | 0.056 | - |
| 0.8465 | 47000 | 0.0556 | - |
| 0.8474 | 47050 | 0.0489 | - |
| 0.8483 | 47100 | 0.0136 | - |
| 0.8492 | 47150 | 0.0974 | - |
| 0.8501 | 47200 | 0.0354 | - |
| 0.8510 | 47250 | 0.0453 | - |
| 0.8519 | 47300 | 0.0891 | - |
| 0.8528 | 47350 | 0.0185 | - |
| 0.8537 | 47400 | 0.0479 | - |
| 0.8546 | 47450 | 0.0286 | - |
| 0.8555 | 47500 | 0.0528 | - |
| 0.8564 | 47550 | 0.0334 | - |
| 0.8573 | 47600 | 0.0105 | - |
| 0.8582 | 47650 | 0.0613 | - |
| 0.8591 | 47700 | 0.0345 | - |
| 0.8601 | 47750 | 0.0449 | - |
| 0.8610 | 47800 | 0.0179 | - |
| 0.8619 | 47850 | 0.0156 | - |
| 0.8628 | 47900 | 0.084 | - |
| 0.8637 | 47950 | 0.0233 | - |
| 0.8646 | 48000 | 0.0281 | - |
| 0.8655 | 48050 | 0.0667 | - |
| 0.8664 | 48100 | 0.0132 | - |
| 0.8673 | 48150 | 0.0301 | - |
| 0.8682 | 48200 | 0.081 | - |
| 0.8691 | 48250 | 0.017 | - |
| 0.8700 | 48300 | 0.0418 | - |
| 0.8709 | 48350 | 0.013 | - |
| 0.8718 | 48400 | 0.0473 | - |
| 0.8727 | 48450 | 0.0731 | - |
| 0.8736 | 48500 | 0.0182 | - |
| 0.8745 | 48550 | 0.0225 | - |
| 0.8754 | 48600 | 0.0722 | - |
| 0.8763 | 48650 | 0.0131 | - |
| 0.8772 | 48700 | 0.0436 | - |
| 0.8781 | 48750 | 0.0292 | - |
| 0.8790 | 48800 | 0.0418 | - |
| 0.8799 | 48850 | 0.0906 | - |
| 0.8808 | 48900 | 0.0498 | - |
| 0.8817 | 48950 | 0.013 | - |
| 0.8826 | 49000 | 0.0422 | - |
| 0.8835 | 49050 | 0.0359 | - |
| 0.8844 | 49100 | 0.0317 | - |
| 0.8853 | 49150 | 0.0461 | - |
| 0.8862 | 49200 | 0.0407 | - |
| 0.8871 | 49250 | 0.0392 | - |
| 0.8880 | 49300 | 0.0622 | - |
| 0.8889 | 49350 | 0.017 | - |
| 0.8898 | 49400 | 0.0471 | - |
| 0.8907 | 49450 | 0.0465 | - |
| 0.8916 | 49500 | 0.0242 | - |
| 0.8925 | 49550 | 0.0532 | - |
| 0.8934 | 49600 | 0.0481 | - |
| 0.8943 | 49650 | 0.0628 | - |
| 0.8952 | 49700 | 0.0281 | - |
| 0.8961 | 49750 | 0.0136 | - |
| 0.8970 | 49800 | 0.0656 | - |
| 0.8979 | 49850 | 0.0182 | - |
| 0.8988 | 49900 | 0.0519 | - |
| 0.8997 | 49950 | 0.0207 | - |
| 0.9006 | 50000 | 0.0377 | - |
| 0.9015 | 50050 | 0.0552 | - |
| 0.9024 | 50100 | 0.0195 | - |
| 0.9033 | 50150 | 0.03 | - |
| 0.9042 | 50200 | 0.0568 | - |
| 0.9051 | 50250 | 0.0388 | - |
| 0.9060 | 50300 | 0.0445 | - |
| 0.9069 | 50350 | 0.0461 | - |
| 0.9078 | 50400 | 0.0299 | - |
| 0.9087 | 50450 | 0.0173 | - |
| 0.9096 | 50500 | 0.0908 | - |
| 0.9105 | 50550 | 0.0524 | - |
| 0.9114 | 50600 | 0.0241 | - |
| 0.9123 | 50650 | 0.051 | - |
| 0.9132 | 50700 | 0.0114 | - |
| 0.9141 | 50750 | 0.0379 | - |
| 0.9150 | 50800 | 0.0909 | - |
| 0.9159 | 50850 | 0.1076 | - |
| 0.9168 | 50900 | 0.0346 | - |
| 0.9177 | 50950 | 0.0319 | - |
| 0.9186 | 51000 | 0.0186 | - |
| 0.9195 | 51050 | 0.0397 | - |
| 0.9204 | 51100 | 0.046 | - |
| 0.9213 | 51150 | 0.0412 | - |
| 0.9222 | 51200 | 0.0147 | - |
| 0.9231 | 51250 | 0.0262 | - |
| 0.9240 | 51300 | 0.0552 | - |
| 0.9249 | 51350 | 0.0545 | - |
| 0.9258 | 51400 | 0.0402 | - |
| 0.9267 | 51450 | 0.0603 | - |
| 0.9276 | 51500 | 0.0343 | - |
| 0.9285 | 51550 | 0.0267 | - |
| 0.9294 | 51600 | 0.0439 | - |
| 0.9303 | 51650 | 0.0317 | - |
| 0.9312 | 51700 | 0.0108 | - |
| 0.9321 | 51750 | 0.0439 | - |
| 0.9330 | 51800 | 0.0365 | - |
| 0.9339 | 51850 | 0.0305 | - |
| 0.9348 | 51900 | 0.0228 | - |
| 0.9357 | 51950 | 0.0204 | - |
| 0.9366 | 52000 | 0.0433 | - |
| 0.9375 | 52050 | 0.018 | - |
| 0.9384 | 52100 | 0.1014 | - |
| 0.9393 | 52150 | 0.0442 | - |
| 0.9402 | 52200 | 0.014 | - |
| 0.9411 | 52250 | 0.0527 | - |
| 0.9420 | 52300 | 0.0263 | - |
| 0.9429 | 52350 | 0.0219 | - |
| 0.9438 | 52400 | 0.0251 | - |
| 0.9447 | 52450 | 0.0446 | - |
| 0.9456 | 52500 | 0.0231 | - |
| 0.9465 | 52550 | 0.0406 | - |
| 0.9474 | 52600 | 0.0201 | - |
| 0.9483 | 52650 | 0.0399 | - |
| 0.9492 | 52700 | 0.0633 | - |
| 0.9501 | 52750 | 0.0375 | - |
| 0.9510 | 52800 | 0.0862 | - |
| 0.9519 | 52850 | 0.0443 | - |
| 0.9528 | 52900 | 0.0219 | - |
| 0.9537 | 52950 | 0.0335 | - |
| 0.9546 | 53000 | 0.05 | - |
| 0.9555 | 53050 | 0.0529 | - |
| 0.9564 | 53100 | 0.0495 | - |
| 0.9573 | 53150 | 0.0219 | - |
| 0.9582 | 53200 | 0.0519 | - |
| 0.9591 | 53250 | 0.0272 | - |
| 0.9600 | 53300 | 0.0189 | - |
| 0.9609 | 53350 | 0.0257 | - |
| 0.9618 | 53400 | 0.0401 | - |
| 0.9627 | 53450 | 0.0426 | - |
| 0.9636 | 53500 | 0.0168 | - |
| 0.9645 | 53550 | 0.0358 | - |
| 0.9654 | 53600 | 0.0279 | - |
| 0.9663 | 53650 | 0.0133 | - |
| 0.9672 | 53700 | 0.0257 | - |
| 0.9681 | 53750 | 0.0344 | - |
| 0.9690 | 53800 | 0.0718 | - |
| 0.9699 | 53850 | 0.045 | - |
| 0.9708 | 53900 | 0.0287 | - |
| 0.9717 | 53950 | 0.0448 | - |
| 0.9726 | 54000 | 0.0458 | - |
| 0.9735 | 54050 | 0.033 | - |
| 0.9744 | 54100 | 0.0164 | - |
| 0.9753 | 54150 | 0.0228 | - |
| 0.9762 | 54200 | 0.0616 | - |
| 0.9771 | 54250 | 0.0172 | - |
| 0.9780 | 54300 | 0.0538 | - |
| 0.9789 | 54350 | 0.0318 | - |
| 0.9798 | 54400 | 0.0323 | - |
| 0.9807 | 54450 | 0.0428 | - |
| 0.9816 | 54500 | 0.043 | - |
| 0.9825 | 54550 | 0.0144 | - |
| 0.9834 | 54600 | 0.0896 | - |
| 0.9843 | 54650 | 0.0501 | - |
| 0.9852 | 54700 | 0.034 | - |
| 0.9861 | 54750 | 0.0546 | - |
| 0.9870 | 54800 | 0.0212 | - |
| 0.9879 | 54850 | 0.0385 | - |
| 0.9888 | 54900 | 0.048 | - |
| 0.9897 | 54950 | 0.0317 | - |
| 0.9906 | 55000 | 0.0373 | - |
| 0.9915 | 55050 | 0.0357 | - |
| 0.9924 | 55100 | 0.0406 | - |
| 0.9933 | 55150 | 0.0255 | - |
| 0.9942 | 55200 | 0.0297 | - |
| 0.9951 | 55250 | 0.0155 | - |
| 0.9960 | 55300 | 0.0329 | - |
| 0.9969 | 55350 | 0.0186 | - |
| 0.9978 | 55400 | 0.0257 | - |
| 0.9987 | 55450 | 0.033 | - |
| 0.9996 | 55500 | 0.0505 | - |
### Framework Versions
- Python: 3.11.7
- SetFit: 1.0.3
- Sentence Transformers: 2.3.0
- Transformers: 4.37.2
- PyTorch: 2.1.2+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.1
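For reference, a checkpoint trained with this stack can be loaded through the `setfit` API. This is a minimal sketch only; the repo id below is a placeholder, not this model's actual Hub id:

```python
# Minimal inference sketch; "your-username/your-setfit-model" is a placeholder repo id.
from setfit import SetFitModel

model = SetFitModel.from_pretrained("your-username/your-setfit-model")
# predict() takes a list of raw texts and returns one predicted label per text
preds = model.predict(["first example text", "second example text"])
print(preds)
```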
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!-- Model card: qgallouedec/ars-BipedalWalkerHardcore-v3-3571292280 (stable-baselines3, reinforcement-learning) -->
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ARS
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -85.86 +/- 5.99
name: mean_reward
verified: false
---
# **ARS** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of an **ARS** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ars --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ars --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo ars --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ars --env BipedalWalkerHardcore-v3 -f logs/
```
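The checkpoint can also be loaded directly in Python. This is a hedged sketch: the filename inside the repo is a guess, and because training normalized observations (`norm_obs=True`), the saved `VecNormalize` statistics would also have to be restored for faithful scores:

```python
# Sketch only; assumes huggingface_sb3, sb3-contrib and gymnasium[box2d] are installed.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from sb3_contrib import ARS

checkpoint = load_from_hub(
    repo_id="qgallouedec/ars-BipedalWalkerHardcore-v3-3571292280",
    filename="ars-BipedalWalkerHardcore-v3.zip",  # assumed filename, check the repo layout
)
model = ARS.load(checkpoint)

# Caveat: the agent was trained behind a VecNormalize wrapper (norm_obs=True);
# running it on a raw env as below will not reproduce the reported return.
env = gym.make("BipedalWalkerHardcore-v3")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```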
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo ars --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ars --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('alive_bonus_offset', -0.1),
('delta_std', 0.0075),
('learning_rate', 0.02),
('n_delta', 64),
('n_envs', 1),
('n_timesteps', 500000000.0),
('n_top', 32),
('normalize', 'dict(norm_obs=True, norm_reward=False)'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[16])'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
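These entries map closely onto the `ARS` constructor in `sb3_contrib`. A from-scratch sketch under that assumption (not the exact RL Zoo training script, which also handles checkpointing and evaluation):

```python
# Hedged sketch of an equivalent training setup; assumes sb3-contrib and gymnasium[box2d].
import gymnasium as gym
from sb3_contrib import ARS
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# n_envs=1, with observation normalization as in the config above
env = VecNormalize(
    DummyVecEnv([lambda: gym.make("BipedalWalkerHardcore-v3")]),
    norm_obs=True,
    norm_reward=False,
)
model = ARS(
    "MlpPolicy",
    env,
    n_delta=64,
    n_top=32,
    learning_rate=0.02,
    delta_std=0.0075,
    alive_bonus_offset=-0.1,
    policy_kwargs=dict(net_arch=[16]),
)
model.learn(total_timesteps=500_000_000)  # 5e8 as listed; scale down for a quick test
```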
<!-- Model card: lucas-meyer/seq-xls-r-fleurs_nl-run2-asr_af-run7 (transformers, automatic-speech-recognition) -->
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: seq-xls-r-fleurs_nl-run2-asr_af-run7
results: []
datasets:
- lucas-meyer/asr_af
---
# seq-xls-r-fleurs_nl-run2-asr_af-run7
This model is a fine-tuned version of [lucas-meyer/xls-r-fleurs_nl-run2](https://huggingface.co./lucas-meyer/xls-r-fleurs_nl-run2) on the asr_af dataset.
It achieves the following results:
- Wer (Validation): 36.72%
- Wer (Test): 37.16%
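A minimal inference sketch with the standard `transformers` ASR pipeline (the audio path is a placeholder; 16 kHz mono audio is assumed):

```python
# Sketch only; "sample.wav" is a placeholder for a local audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lucas-meyer/seq-xls-r-fleurs_nl-run2-asr_af-run7",
)
print(asr("sample.wav")["text"])
```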
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
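As a sketch, these correspond roughly to the following `TrainingArguments`; the output directory is an assumption and the actual training script may differ:

```python
# Approximate TrainingArguments for the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="seq-xls-r-fleurs_nl-run2-asr_af-run7",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size 16
    warmup_steps=500,
    num_train_epochs=30,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```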
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer (Train) |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8229 | 1.47 | 250 | 2.7832 | 0.9998 |
| 1.1565 | 2.93 | 500 | 0.5693 | 0.4932 |
| 0.3893 | 4.4 | 750 | 0.4765 | 0.4026 |
| 0.2536 | 5.87 | 1000 | 0.3951 | 0.3669 |
| 0.1756 | 7.33 | 1250 | 0.4565 | 0.3814 |
| 0.1384 | 8.8 | 1500 | 0.4923 | 0.3606 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
<!-- Model card: andreaskoepf/falcon-40b-megacode2 (transformers, text-generation) -->
---
license: apache-2.0
---
# falcon-40b-megacode2
- wandb: [run37_megacode_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run37_megacode_falcon40)
- sampling report: [2023-08-17_andreaskoepf_falcon-40b-megacode2_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-pretrained%2F2023-08-17_andreaskoepf_falcon-40b-megacode2_sampling_noprefix2.json)
## Prompt Template
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:
"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"
Multi-line:
```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
``` |
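A hedged generation sketch using this template (loading a 40B checkpoint needs substantial GPU memory; `device_map="auto"` assumes `accelerate` is installed, and older `transformers` versions may additionally need `trust_remote_code=True`):

```python
# Sketch only: builds a ChatML prompt and decodes a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andreaskoepf/falcon-40b-megacode2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<|im_start|>user\n"
    "Write a short Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decoding may run past <|im_end|>; truncate at that marker if needed.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```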