modelId (string, lengths 5–134) | author (string, lengths 2–42) | last_modified (unknown) | downloads (int64, 0–223M) | likes (int64, 0–10.1k) | library_name (string, 378 classes) | tags (sequence, lengths 1–4.05k) | pipeline_tag (string, 53 classes) | createdAt (unknown) | card (string, lengths 11–1.01M)
---|---|---|---|---|---|---|---|---|---
raminass/SCOTUS_AI_17 | raminass | "2024-01-04T10:27:41Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:raminass/scotus-v10",
"base_model:finetune:raminass/scotus-v10",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-04T09:44:28Z" | ---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SCOTUS_AI_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCOTUS_AI_17
This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co./raminass/scotus-v10) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0723
- Accuracy: 0.8263
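Since the sections below are unfilled, here is a minimal inference sketch, assuming the standard `transformers` text-classification pipeline applies to this checkpoint (the example sentence is illustrative only):
```python
from transformers import pipeline

# Minimal sketch: the repo id comes from this card, but the input text and
# the meaning of the predicted labels are assumptions, not documented here.
classifier = pipeline("text-classification", model="raminass/SCOTUS_AI_17")
print(classifier("The judgment of the Court of Appeals is reversed."))
```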
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
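For reference, a minimal sketch of how these values map onto Hugging Face `TrainingArguments`; the output directory is an assumption, and unlisted settings keep their defaults:
```python
from transformers import TrainingArguments

# Sketch only: reproduces the listed hyperparameters; "scotus_ai_17" is an
# assumed output directory, not something this card specifies.
training_args = TrainingArguments(
    output_dir="scotus_ai_17",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```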
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2805 | 1.0 | 3188 | 0.6360 | 0.8303 |
| 0.1147 | 2.0 | 6376 | 0.8285 | 0.8230 |
| 0.053 | 3.0 | 9564 | 1.0048 | 0.8208 |
| 0.0228 | 4.0 | 12752 | 1.0853 | 0.8183 |
| 0.0143 | 5.0 | 15940 | 1.0723 | 0.8263 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
fftx0907/autotrain-s4m14-yyyit | fftx0907 | "2023-12-25T14:58:51Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:fftx0907/autotrain-data-autotrain-s4m14-yyyit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-25T14:58:09Z" |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co./datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co./datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co./datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- fftx0907/autotrain-data-autotrain-s4m14-yyyit
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
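A minimal inference sketch, assuming the generic `transformers` image-classification pipeline works with this ViT checkpoint (the image URL is one of the widget samples above; note the near-random validation metrics below before relying on predictions):
```python
from transformers import pipeline

# Sketch: repo id and image URL come from this card; everything else is the
# generic transformers pattern rather than documented usage.
classifier = pipeline("image-classification", model="fftx0907/autotrain-s4m14-yyyit")
preds = classifier("https://huggingface.co./datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)
```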
## Validation Metrics
loss: nan
f1_macro: 0.031825795644891124
f1_micro: 0.10555555555555556
f1_weighted: 0.020156337241764376
precision_macro: 0.017592592592592594
precision_micro: 0.10555555555555556
precision_weighted: 0.011141975308641975
recall_macro: 0.16666666666666666
recall_micro: 0.10555555555555556
recall_weighted: 0.10555555555555556
accuracy: 0.10555555555555556
|
lavera/epic-diffusion-v1.1-controlnet-hed | lavera | "2023-03-31T14:25:10Z" | 5 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-03-31T14:23:26Z" | ---
license: creativeml-openrail-m
---
|
locuslab/base-smollm2-1.7b-score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb | locuslab | "2025-02-24T18:36:53Z" | 0 | 0 | null | [
"pytorch",
"llama",
"model",
"transformer",
"smollm2",
"license:mit",
"region:us"
] | null | "2025-02-24T18:28:44Z" | ---
version: main
family: smollm2-1.7b
model_name: score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb
license: mit
tags:
- model
- transformer
- smollm2
---
# SmolLM2 score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb (Version: main)
## Model Details
- **Architecture:** SmolLM2
- **Parameters:** 1.7B
## Training Configuration
```yaml
optimizer:
class_path: torch.optim.AdamW
init_args:
lr: 0.0005
weight_decay: 0.01
precision: bf16-mixed
seed: 42
train:
global_batch_size: 1024
max_seq_length: 2048
max_tokens: 600000000000
micro_batch_size: 8
```
## Model Loading and Revision System
This repository hosts multiple revisions of the model.
To load a specific revision, use the `revision` parameter. For example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("locuslab/score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb", revision="final")
tokenizer = AutoTokenizer.from_pretrained("locuslab/score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb", revision="final")
```
Replace `"final"` with the desired revision.
|
OwOOwO/finalupdate1 | OwOOwO | "2024-04-30T17:12:26Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-30T17:10:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Zurich-7B-GCv2-5m-GGUF | mradermacher | "2025-02-04T09:09:28Z" | 513 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"gammacorpus",
"zurich",
"chat",
"conversational",
"en",
"dataset:rubenroy/GammaCorpus-v2-5m",
"base_model:rubenroy/Zurich-7B-GCv2-5m",
"base_model:quantized:rubenroy/Zurich-7B-GCv2-5m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-04T08:22:27Z" | ---
base_model: rubenroy/Zurich-7B-GCv2-5m
datasets:
- rubenroy/GammaCorpus-v2-5m
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- gammacorpus
- zurich
- chat
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co./rubenroy/Zurich-7B-GCv2-5m
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
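As one concrete option (an assumption, since this card itself only points to TheBloke's READMEs), a quant can be loaded directly from the Hub with `llama-cpp-python`:
```python
from llama_cpp import Llama

# Sketch under stated assumptions: fetches the Q4_K_M quant (listed as
# "fast, recommended" in the table below) and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Zurich-7B-GCv2-5m-GGUF",
    filename="Zurich-7B-GCv2-5m.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Question: What is a GGUF file?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```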
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co./mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AlignmentResearch/robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4 | AlignmentResearch | "2024-05-04T12:03:21Z" | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"base_model:finetune:EleutherAI/pythia-410m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-04T12:02:48Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-410m
model-index:
- name: robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co./EleutherAI/pythia-410m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ldianwu/detr-finetuned-balloon-v2 | ldianwu | "2024-06-28T06:50:26Z" | 190 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-06-25T03:57:12Z" | ---
license: apache-2.0
---
|
seongil-dn/bge-m3-kor-retrieval-451949-bs64-science | seongil-dn | "2024-12-11T06:34:15Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:451949",
"loss:CachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-12-11T06:32:40Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:451949
- loss:CachedMultipleNegativesRankingLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: ๋ณธ ์ฐ๊ตฌ๋ฅผ ํตํด ์ฅ๋ฐ์ด๋ฌ์ค๋ฅผ ๋์ถ, ์ ์ , ๋ฐ ๊ฒ์ถํ ์ ์๋ ์ ์, ๊ฐํธํ๊ณ ํจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ์ ๊ฐ๋ฐํ๊ธฐ ์ํด ํ์ํ
๋ฌผ์ง์ ๋ฌด์์ธ๊ฐ?
sentences:
- <h1>์ ์ฝ</h1><p>ํ๊ฒฝ์ ์กด์ฌํ๋ ์ฅ๋ฐ์ด๋ฌ์ค๋ ์ค์ผ๋ ๋ฌผ์ ํตํ์ฌ ๊ฒฝ๊ตฌ๊ฒฝ๋ก๋ก ์ ์ผ์ด ๊ฐ๋ฅํ๊ณ , ๋ฐ์ด๋ฌ์ค๋ ์ ์ ์์ผ๋ก๋ ์ธ์ฒด์ ๊ฐ์ผ์ด
๊ฐ๋ฅํ๋ฏ๋ก ์ธ๊ฐ์ ๊ฑด๊ฐ์ ์ํํ ์ ์๋ค. ํ๊ฒฝ ์๊ณ ๋ฐ ์์ฉ์์์ ๋ฐ๊ฒฌ๋๋ ๋ฐ์ด๋ฌ์ค์ ์์น๋ ๋น๊ต์ ๋ฎ์ผ๋ฏ๋ก ์๋ฐฑ์์ ์์ฒ ๋ฆฌํฐ์ ๋ฌผ์ ๋์ถ์ํฌ
ํ์๊ฐ ์๋ค. ๋ฐ๋ผ์ ์ด ์ฐ๊ตฌ์ ์ฃผ์ ๋ชฉ์ ์ ์๊ณ ์๋ฃ๋ก๋ถํฐ ์ฅ๋ฐ์ด๋ฌ์ค๋ฅผ ๋์ถ, ์ ์ , ๋ฐ ๊ฒ์ถํ ์ ์๋ ์ ์, ๊ฐํธํ๊ณ ํจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ์
๊ฐ๋ฐํ๋ ๊ฒ์ด๋ค. ๋จผ์ ๋ฐ์ด๋ฌ์ค๋ฅผ 1MDS ์นดํธ๋ฆฌ์ง ํํฐ์ ํก์ฐฉ์์ผ ๋์ถ์ํค๊ณ ์ฝ \( 500 \mathrm{~m} \ell \)์ \( 1.5
\% \) beef extract/\( 0.05 \mathrm{M} \) glycin\( (\mathrm{pH} 9.4) \)์ผ๋ก ์ฉ์ถ์ํจ๋ค.
์ด ์ฐ๊ตฌ์์๋ ํก์ฐฉํํฐ๋ก๋ถํฐ ์ป์ ๋ฐ์ด๋ฌ์ค 1์ฐจ ์ฉ์ถ์ก์ ๋์ฑ ๋์ถ์ํค๊ณ ์ ์ ํ๊ธฐ ์ํ์ฌ ์ธ๊ฐ์ง ๋ฐฉ๋ฒ์ ์๋ํ์๋ค. ์ด๋ค ๊ฐ์ด๋ฐ์ ์ ๊ธฐ ์์ง๋ฒ์ด
๋ฐ์ด๋ฌ์ค ์ฌ๋์ถ์ ๊ฐ์ฅ ํจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ์ด์๋ค. ์ด ๋ฐฉ๋ฒ์ผ๋ก ์๋ฃ ๋ถํผ๋ฅผ 200์์ 400๋ฐฐ๊น์ง ๊ฐ์์ํฌ ์ ์์์ผ๋ฉฐ ์ต์ข
๋ฐ์ด๋ฌ์ค ์๊ฑฐ์จ์ \(
72 \% \) ์ด์์ด์๋ค. ๋ง์ง๋ง์ผ๋ก ์๋ฃ๋ฅผ ๋ง ๋์คํฌ ํํฐ๋ก ์ฌ๊ณผ์ํค๊ณ plaque assay ํน์ CC-PCR๋ฒ์ผ๋ก ๋ถ์ํ์๋ค. </p>
- '๋จน๋ ๋ฌผ์ ๋ณ์์ฑ๋ฏธ์๋ฌผ ์กฐ๊ธฐ๊ฒ์ถ๋ฐฉ๋ฒ ํ๋ฆฝ
โก PCR(์คํฉํจ์ ์ฐ์๋ฐ์, Polymerase Chain Reaction)
ํจ์(DNA polymerase)์ ์์ฝ ๋ฑ์ ์ฌ์ฉํ์ฌ ๋ฏธ์๋ฌผ(๋ฐ์ด๋ฌ์ค, ์์๋๋ฌผ ๋ฑ)์ ์ ์ ์(DNA, RNA) ์ค ํน์ ๋ถ์๋ง ์ฐ์์ ์ผ๋ก
์ฆํญ์์ผ ์ํ๋ ๋์ ์ ์ ์(target gene)์ ์กด์ฌ๋ฅผ ํ์ธํ๋ ๋ฐฉ๋ฒ. โก ์ด๋ฐฐ์์ฑ๋ฐ์ด๋ฌ์ค๋ถ์๋ฒ(TCVA: total culturable
virus assay)
ํ๊ฒฝ์๋ฃ(์์์์)์ค์ ํจ์ ๋ ๋ฐ์ด๋ฌ์ค๊ฐ ์๋ฃ ์ฑ์ทจ์ ์ฌ๊ณผ๋ง์ ํก์ฐฉ๋๊ณ , ์ฌ๊ณผ๋ง์ ๋ฐ์ด๋ฌ์ค๋ฅผ ํ๋ฆฌ(๋ถ๋ฆฌ)ใ๋์ถํ ํ ์์ก๊ณผ ํฌ์ ์์ก์ ์ด์์๋
์ธํฌ(BGM : Buffalo Monkey Kidney Cell ,์์ญ์ด์ ์ฅ์ธํฌ)์ ์ ์ข
ํ๊ณ 37โ์์ 1์ฐจ(14์ผ) ๋ฐฐ์ํ๊ณ , 1์ฐจ ๋ฐฐ์์ก์
2์ฐจ(14์ผ) ๊ณ๋๋ฐฐ์ํ ํ ์ธํฌ๋ณ๋ณํจ๊ณผ(Cytophatic Effect) ์์ฑ๊ฐฏ์, ์ฑ์๋ ๋ฑ ๊ณ์ฐ์(ํ๋ก๊ทธ๋จ)์ ์ ์ฉํ์ฌ ๋ฐ์ด๋ฌ์ค ๋๋(MPN)๊ฐ์
๊ณ์ฐ. โก ํตํฉ์ธํฌ๋ฐฐ์-์คํฉํจ์์ฐ์๋ฐ์๋ฐฉ๋ฒ(ICC-PCR : integrated cell culture-PCR)
์ด๋ฐฐ์์ฑ๋ฐ์ด๋ฌ์ค๋ถ์๋ฒ๊ณผ ๋์ผํ ๋ฐฉ๋ฒ์ผ๋ก ์๋ฃ์ ์ ์ฒ๋ฆฌ๋ฅผ ์ํํ๊ณ , ์ธํฌ์ ์๋ฃ๋ฅผ ์ ์ข
ํ์ฌ 3์ผ(72์๊ฐ)๊ฐ ๋ฐฐ์ํ ํ์ ์ฆ์ํ ๋ฐ์ด๋ฌ์ค๋ฅผ ์ ์ ์๋ถ์๋ฒ(PCR
; ์คํฉํจ์ ์ฐ์๋ฐ์๋ฐฉ๋ฒ)์ผ๋ก ๊ฒ์ถ. - ์ค์๊ฐ ์คํฉํจ์์ฐ์๋ฐ์(Real time PCR)๋ฐฉ๋ฒ
์ค์๊ฐ์คํฉํจ์์ฐ์๋ฐ์(Real time PCR)๋ฐฉ๋ฒ์ ๋ณ์์ฑ๋ฏธ์๋ฌผ๋ณ ํน์ ์ ์ ์(target sequence)์ ์ฆํญ๊ณผ ํจ๊ป ํ๊ด๋๋ ์ฆ๊ฐํ๋๋ก
์์ฝ์ ์ฒจ๊ฐํ์ฌ, ๋ฐ์ ํ์ ํ๊ด๋ณํ๋ฅผ ๋ถ์ํ๊ณ , ๋๋๋ฅผ ์๊ณ ์๋ ํ์ค์(Standard curve)๊ณผ ๋น๊ตํ์ฌ ๋์ ๋ฏธ์๋ฌผ๋ณ ์ ๋ํ๊ฐ ๊ฐ๋ฅํ
๋ฐฉ๋ฒ. โป Real time PCR์ ํน์ ์ ์ ์๋ฅผ ์ฆํญํ ํ ์ ๊ธฐ์๋ํ์ฌ ์ฆํญ ์ฐ๋ฌผ์ ํ์ธํ ํ์๊ฐ ์๋ค๋ ์ ์์ ๊ธฐ์กด์ ๋ฐฉ๋ฒ๋ณด๋ค ์๊ฐ์ด
๋จ์ถ๋๋ฉฐ, ๋ค๋์ ์๋ฃ๋ฅผ ๋์์ ์ํ์ด ๊ฐ๋ฅํ ์ฅ์ ์ด ์๋ค. - ์ฅ๋ฐ์ด๋ฌ์ค ๋ฐ ์ฅ๊ด๊ณ๋ฐ์ด๋ฌ์ค
๋ฐ์ด๋ฌ์ค๋ ์์ฃผ์ ๋งค์ฐ ํน์ด์ ์ผ๋ก ๊ฐ์ผํ์ฌ, ์ฌ๋์๊ฒ ๊ฐ์ผํ์ฌ ์ง๋ณ์ ์ผ์ผํค๋ ๋ณ์์ฑ ๋ฐ์ด๋ฌ์ค ์ญ์ ๋๋ถ๋ถ ์ฌ๋์๊ฒ๋ง ์ ํ์ ์ผ๋ก ๊ฐ์ผํ๋ค.
๋ถ๋ณ์ ๋ค๋ ์กด์ฌํ๋ ์ฅ๊ด๊ณ๋ฐ์ด๋ฌ์ค๋ก๋ ์ฅ๋ฐ์ด๋ฌ์ค(Enterovirus ; ํด๋ฆฌ์ค๋ฐ์ด๋ฌ์ค, ์ฝ์ฌํค๋ฐ์ด๋ฌ์ค, ์์ฝ๋ฐ์ด๋ฌ์ค)์ ๊ทธ๋ฐ์ ์๋ฐ๋
ธ๋ฐ์ด๋ฌ์ค,
๋ ์ค๋ฐ์ด๋ฌ์ค, ๋กํ๋ฐ์ด๋ฌ์ค, Aํ ๊ฐ์ผ ๋ฐ์ด๋ฌ์ค, Eํ ๊ฐ์ผ๋ฐ์ด๋ฌ์ค, ๊ทธ๋ฆฌ๊ณ ๋
ธ๋ก๋ฐ์ด๋ฌ์ค์ ๊ฐ์ด ์ฅ์ผ์ ์ผ์ผํค๋ ๋ฐ์ด๋ฌ์ค๊ฐ ํฌํจ๋๋ค. ์ฅ๊ด๊ณ๋ฐ์ด๋ฌ์ค(Human
enteric viruses)๋ 100์ฌ์ข
์ ๋ฐ์ด๋ฌ์ค๊ฐ ์๋ ค์ ธ ์๋ค(์ฅ๋ฐ์ด๋ฌ์ค๋ 70์ฌ์ข
).'
- <table border><caption>\( \left \langle \right . \) ํ 6ใ ํฌ๊ธฐ \( 8 \times 8 \) ์ธ
๋ฉ์ฌ์ \( Q_ { 8 } \) ์ ๋ํ ์๋ฒ ๋ฉ \( f_ { 3 } \)</caption> <tbody><tr><td rowspan=2></td><td>\(
j=0 \)</td><td>\( j=1 \)</td><td>\( j=2 \)</td><td>\( j=3 \)</td><td>\( j=4 \)</td><td>\(
j=5 \)</td><td>\( j=6 \)</td><td>\( j=7 \)</td></tr><tr><td colspan=2>01</td><td
colspan=2>00</td><td colspan=2>10</td><td colspan=2>11</td></tr><tr><td>\( j=0
\)</td><td>010101</td><td>001111</td><td>000101</td><td>011111</td><td>110101</td><td>101111</td><td>100101</td><td>111111</td></tr><tr><td>\(
j=1 \)</td><td>010111</td><td>001101</td><td>000111</td><td>011101</td><td>110111</td><td>101101</td><td>100111</td><td>111101</td></tr><tr><td>\(
j=2 \)</td><td>011101</td><td>000111</td><td>001101</td><td>010111</td><td>111101</td><td>100111</td><td>101101</td><td>110111</td></tr><tr><td>\(
j=3 \)</td><td>011111</td><td>000101</td><td>001111</td><td>010101</td><td>111111</td><td>100101</td><td>101111</td><td>110101</td></tr><tr><td>\(
j=4 \)</td><td>110101</td><td>101111</td><td>100101</td><td>111111</td><td>010101</td><td>001111</td><td>000101</td><td>011111</td></tr><tr><td>\(
j=5 \)</td><td>110111</td><td>101101</td><td>100111</td><td>111101</td><td>010111</td><td>001101</td><td>000111</td><td>011101</td></tr><tr><td>\(
j=6 \)</td><td>111101</td><td>100111</td><td>101101</td><td>110111</td><td>011101</td><td>000111</td><td>001101</td><td>010111</td></tr><tr><td>\(
j=7 \)</td><td>111111</td><td>100101</td><td>101111</td><td>110101</td><td>011111</td><td>000101</td><td>001111</td><td>010101</td></tr></tbody></table>
- source_sentence: ๋ค์ค ์ค์์นญ ์์๋ฅผ ์ฌ์ฉํ ๋ฒ
-๋ถ์คํธ ์ปจ๋ฒํฐ์ ํน์ง์ ๋ญ์ผ?
sentences:
- <p>๋ณธ ์คํ์์๋ ์์์ํํ ๊ฒ์์ฑ๋ฅ์ ์๊ฒฉํ(strict) ํ๊ฐ์ ๊ด๋ํ(lenient) ํ๊ฐ์ ๋ ๊ฒฝ์ฐ๋ก ๋๋์ด ํ๊ฐํ๋ค. ์ฌ๊ธฐ์ ์๊ฒฉํ
ํ๊ฐ๋ ์์์ํํ ์งํฉ์ ๊น์ํ ํ, ํค๋น ์งํฉ ์์ ๊ฐ๋ณ ์ํํ ์ฆ ์์์ํํ๊น์ง ๋ชจ๋ ๊ฒ์ํ๋(์์๋งํ๋) ๊ฒฝ์ฐ๋ฅผ ์ ๋ต์ผ๋ก ํ๊ณ , ๊ด๋ํ
ํ๊ฐ๋ ์์์ํํ ์งํฉ๋ง์ ๊น์ํ๋ ๊ฒ์ ์ ๋ต์ผ๋ก ํ๋ค. ์๋ฅผ ๋ค์ด ์ด๋ค ์ํํ ์งํฉ์ ์ค์ ๊ตฌ์ฑ์ด [N, P, P, P, P, P]๋ก ๋์ด์์ผ๋ฉด,
์๊ฒฉํ ํ๊ฐ์ ๊ฒฝ์ฐ, ํด๋น ์งํฉ์ ์์์ํํ์ผ๋ก ๊ฒ์ํ ํ, ์ด ์งํฉ ์์์ N์ผ๋ก ๋ถ๋ฅ๋ ์์์ํํ๊น์ง ๊น์ํ๋ ๊ฒฝ์ฐ([N, P, P, P,
P, P]๋ก ์์๊น์ ๋งํ)๋ฅผ ์ ๋ต์ผ๋ก ๊ฐ์ฃผํ๋ค. ๊ฑธ๊ตญ ์๊ฒฉํ ํ๊ฐ๋ ์ผ๋จ ๋ค์ํ ์ํํ ์งํฉ๋ค ์์์ ์์์ํํ์ ํฌํจํ๋ ์งํฉ๋ค์ ๊ฒ์ํ
ํ์, ํ๋ฐ ๋ ๋์๊ฐ ๊ฐ๋ณ ์งํฉ ์์ ๊ฐ๋ณ ์์์ํํ๊น์ง ์ถ๊ฐ๋ก ์ ๋ณํค ๋ผ ์ ์์ด์ผ ํ๋ค. </p> <p>์ด์ ๋นํด ๊ด๋ํ ํ๊ฐ๋, ๋ค์ํ
์ํํ ์งํฉ๋ค ์์์ ์์์ํํ์ด ์กด์ฌํ๋ ์งํฉ๋ง ๊ฒ์ํ๋ฉด ๋๊ธฐ ๋๋ฌธ์, ๊ฐ๋ณ ์ํํ์ ๊ธ์ /๋ถ์ ๋ถ๋ฅ๊ฐ ์ค๋ น ํ๋ฆฌ๋๋ผ๋ ์ํํ ์งํฉ์ ๊ธ์ /๋ถ์
๋น๋์นญ๋์ ์กฐ๊ฑด๋ง ๋ง์ผ๋ฉด ๋๋ค. ์์ปจ๋ ์ค์ ์งํฉ์ด [N, P, P, P, P, P]์ผ ๋, ๊ธ์ /๋ถ์ ์๋๋ถ๋ฅ ์ค ์ผ๋ถ ์ค๋ฒ๋ฅ๊ฐ ์๋ [P,
N, P, P, P, P]๋ [P, P, P, P, P, N]๋ ๋น๋์นญ๋๊ฐ ๋์ผํ๊ธฐ ๋๋ฌธ์ ์์์ํํ ์งํฉ์ผ๋ก ๊ฒ์๋๋ค. ๋ณธ ์คํ์์๋ ์์์ํํ์ด
ํฌํจ๋ผ์์์ ํ๋จํ๋ ๊ธฐ์ค์ '๋น๋์นญ๋๊ฐ \( 0.5 \) ๋ณด๋ค ํฐ๊ฐ \( (0.5<1 \) Skewness|)'์ '๋น๋์นญ๋๊ฐ \(1 \)๋ณด๋ค
ํฐ๊ฐ \( (1< \) |Skewness|)'๋ก ๋๋์ด ํ๊ฐํ๋ค. </p> <h2>4.4 ์คํ๊ฒฐ๊ณผ</h2> <p>์์์ํํ ๊ฒ์์ฑ๋ฅ์ ์ ๋ฐ๋,
์ฌํ์ธ, F \(1 \) ์ ์๋ก ๊ฐ๊ฐ ์๊ฒฉํ๊ฒ(strict) ๋๋ ๊ด๋ํ๊ฒ(lenient) ํ๊ฐํ ๊ฑธ๊ณผ๋ฅผ Table \(7 \)๊ณผ Table
\(8 \)์ ๊ฐ๊ฐ ์ ๋ฆฌํ๋ค. </p> <p>Table \(7 \)์ ์๊ฒฉํ ํ๊ฐ์ ๊ฒฝ์ฐ, ์ค๋งํธํฐ๊ณผ ์ํ์ ๋ ๋น๋์นญ๋ ์ ์์ ๋ํ์ฌ ๋ฏธ์์
๊ฐ์ฑ์ฌ์ (SWN \&OPL)์ ์ด์ฉํธ์ ๋์ \(4 \)๊ฐ์ F \(1 \)์ ์์ ํ๊ท ์ \( 11.4 \% \) ์๋ค. ํํธ, Table
\(7 \)์์ ๋๋ฉ์ธ ํนํ๋ ๊ฐ์ฑ์ฌ์ (MRG \&SBL)์ ์ด์ฉํฐ์ ๋์ ํ๊ท F \(1 \)์ ์๋ \( 19.8 \% \)์๋ค. Table
\(8 \) ์ ๊ด๋ํ ํ๊ฐ์์๋, ๋ฏธ์์ ์ฌ์ ๊ณผ ๋๋ฉ์ธ ํนํ ์ฌ์ ์ ํ๊ท F \(1 \) ์ ์๊ฐ ๊ฐ๊ฐ \( 48.9 \% \) ์ \( 53.8
\% \)์๋ค. ๋ ํ ๋ชจ๋์์ ๋๋ฉ์ธ ํนํ๋ ๊ฐ์ฑ์ฌ์ ์ ์ด์ฉํ ๊ฒฝ์ฐ๊ฐ ๋ฏธ์์ ๊ฐ์ฑ์ฌ์ ์ ์ด์ฉํ ๊ฒฝ์ฐ๋ณด๋ค ์์์ํํ ๊ฒ์์์ ๋ ์ข์ ์ฑ๋ฅ์
๋ํ๋์ ํ์ธํ ์ ์๋ค. ๋ฏธ์์ ์ฌ์ ๊ณผ ์์ ์ฌ์ ์ ํ๊ท F \(1 \) ์ ์์ ์ฐจ์ด๋ ์๊ฒฉํ ํ๊ฐ์ ๊ฒฝ์ฐ \( 8.4 \% \) ์๊ณ ,
๊ด๋ํ ํ๊ฐ์ ๊ฒฝ์ฐ \( 4.9 \% \) ์๋ค. </p>
- <h1>III ๊ฒฐ๋ก </h1><p>๋ณธ ๋
ผ๋ฌธ์์๋ ๊ธฐ์กด์ ๋ฒ
-๋ถ์คํธ ์ปจ๋ฒํฐ์ ํจ์จ ๋ณด๋ค ๋์ ํจ์จ์ ๊ฐ๋ ๋ค์ค ์ค์์นญ ์์๋ฅผ ์ฌ์ฉํ ๋ฒ
-๋ถ์คํธ
์ปจ๋ฒํฐ๋ฅผ ์ค๊ณํ์๋ค. ์ ์ํ ์ปจ๋ฒํฐ๋ ๋์ผ๋ฉด์ ๋ฐ ๋์ผ ํจ์จ ๋๋ ์ ์ ํจ์จ ๊ฐ์๋ง์ผ๋ก๋ ๋์ ์ถ๋ ฅ ์ ์ ๋ฒ์๋ฅผ ๊ฐ๋๋ก ์ค๊ณํ์๋ค. ๋ฒ
-๋ถ์คํธ์ปจ๋ฒํฐ๋
๊ณ ์ ๋ฅ์์ ๊ณ ํจ์จ์ ์ํด PWM ์ ์ด๋ฒ์ ์ด์ฉํ์ฌ ์ ์ดํ์๊ณ , ์ ๋ฅ๋ชจ๋๋ฅผ ์ด์ฉํ์ฌ ์ค๊ณํ์๋ค. ์ ์ํ ์ปจ๋ฒํฐ๋ ์ต๋ ์ถ๋ ฅ์ ๋ฅ \( 300 \mathrm{~mA}
\), ์
๋ ฅ ์ ์ \( 3.3 \mathrm{~V} \)์ ์ถ๋ ฅ์ ์ \( 700 \mathrm{mV}^{\sim} 12 \mathrm{~V},
1.5 \mathrm{MHz} \) ์ ์ค์์นญ์ฃผํ์๋ฅผ ๊ฐ๋๋ค. ์ต๋ ํจ์จ์ \( 90 \% \) ๋ฅผ ๊ฐ๋๋ก ์ค๊ณํ์๋ค. ๋ํ ๊ณผ๋ถํ ๋ฐ ๊ธฐํ
ํ๊ฒฝ์ ์ธ ๋ณํ์ ์ํ ์ค๋์์ผ๋ก ์ธํด ์ ๋ ฅ ์์ค๊ณผ ๋ด๋ถ ๋ฐ ์ธ๋ถ IC์ ์์์ ๋ฐฉ์งํ๊ธฐ ์ํ ๋ณดํธํ๋ก๋ฅผ IC ๋ด๋ถ์ ์ค๊ณํ์ฌ ์ ๋ขฐ์ฑ์ ํฅ์์์ผฐ๋ค.
๋ง์ง๋ง์ผ๋ก ๊ณ ์๋ ESD ๋ณดํธ ์์๋ฅผ ์ค๊ณ ๋ฐ ํ์ฌํ์ฌ ์ ์ ๊ธฐ ๋ฐฉ์ง๋ก ์ธํ IC์ ์์์ ๋ฐฉ์งํ๊ณ , ๊ธฐ์กด์ ggNMOS์ ๋์ ํธ๋ฆฌ๊ฑฐ ์ ์์
๊ฐ์ ํ์ฌ, ๋ฎ์ํธ๋ฆฌ๊ฑฐ๋ง ํน์ฑ์ ๊ฐ๋ ESD ๋ณดํธํ๋ก๋ฅผ ์ ์ ๋ฐ ์ค๊ณํ์๋ค. ์๋ฎฌ๋ ์ด์
๊ฒฐ๊ณผ ์ผ๋ฐ์ ์ธ ggnmos์ ํธ๋ฆฌ๊ฑฐ์ ์์ด \( 8 \mathrm{~V}
\) ๋ด์ธ์ธ ๊ฒ์ ๋ฐํด ๊ณ ์๋ ์์์ ํธ๋ฆฌ๊ฑฐ์ ์์ \( 4 \mathrm{~V} \) ๋ด์ธ๋ก ๋ ๋ฎ์ ํธ๋ฆฌ๊ฑฐ ์ ์ ํน์ฑ์ ๋ํ๋๋ค. </p>
- <h1>์ ์ฝ</h1><p>๋ณธ ๋
ผ๋ฌธ์์๋ DT-CMOS(Dynamic Threshold voltage Complementary MOSFET)
์ค์์นญ ์์๋ฅผ ์ฌ์ฉํ DC-DC Buck ์ปจ๋ฒํฐ๋ฅผ ์ ์ํ์๋ค. ๋์ ํจ์จ์ ์ป๊ธฐ ์ํ์ฌ PWM ์ ์ด๋ฐฉ์์ ์ฌ์ฉํ์์ผ๋ฉฐ, ๋ฎ์ ์จ ์ ํญ์ ๊ฐ๋
DT-CMOS ์ค์์น ์์๋ฅผ ์ค๊ณํ์ฌ ๋ํต ์์ค์ ๊ฐ์์์ผฐ๋ค. ์ ์ํ Buck ์ปจ๋ฒํฐ๋ ๋ฐด๋๊ฐญ ๊ธฐ์ค ์ ์ ํ๋ก,์ผ๊ฐํ ๋ฐ์๊ธฐ, ์ค์ฐจ ์ฆํญ๊ธฐ,
๋น๊ต๊ธฐ, ๋ณด์ ํ๋ก, PWM ์ ์ด ๋ธ๋ก์ผ๋ก ๊ตฌ์ฑ๋์ด ์๋ค. ์ผ๊ฐํ ๋ฐ์๊ธฐ๋ ์ ์์ ์(3.3V)๋ถํฐ ์ ์ง๊น์ง ์ถ๋ ฅ ์งํญ์ ๋ฒ์๋ฅผ ๊ฐ๋ \( 1.2
\mathrm{MHz} \) ์ ์ฃผํ์๋ฅผ ์์ฑํ๋ฉฐ, ๋น๊ต๊ธฐ๋ 2๋จ ์ฆํญ๊ธฐ๋ก ์ค๊ณ๋์๋ค. ๊ทธ๋ฆฌ๊ณ ์ค์ฐจ ์ฆํญ๊ธฐ๋ \( 70 \mathrm{~dB}
\) ์ ์ด๋๊ณผ \( 64^{\circ} \)์ ์์์ฌ์ ๋ฅผ ๊ฐ๋๋ก ์ค๊ณํ์๋ค. ๋ํ ์ ์ํ Buck ์ปจ๋ฒํฐ๋current-mode PWM ์ ์ดํ๋ก์
๋ฎ์ ์จ์ ํญ์ ๊ฐ๋ ์ค์์น๋ฅผ ์ฌ์ฉํ์ฌ \( 100 \mathrm{~mA} \)์ ์ถ๋ ฅ ์ ๋ฅ์์ ์ต๋ \( 95 \% \)์ ํจ์จ์ ๊ตฌํํ์์ผ๋ฉฐ,
\( 1 \mathrm{~mA} \)์ดํ์ ๋๊ธฐ๋ชจ๋์๋ ๋์ ํจ์จ์ ๊ตฌํํ๊ธฐ ์ํ์ฌ LDO ๋ ๊ทค๋ ์ดํฐ๋ฅผ ์ค๊ณํ์์ผ๋ฉฐ,๋ํ 2๊ฐ์ IC ๋ณดํธ
ํ๋ก๋ฅผ ๋ด์ฅํ์ฌ ์ ๋ขฐ์ฑ์ ํ๋ณดํ์๋ค. </p><h1>1. ์๋ก </h1><p>์ต๊ทผ์ ํด๋์ ํ, PDA, MP3๊ณผ ๊ฐ์ ํด๋์ฉ ๋ฉํฐ๋ฏธ๋์ด์ ์ฌ์ฉ์ด
๊ธ์ฆํจ์ ๋ฐ๋ผ ๊ณ ํจ์จ, ์ํํ๋ฅผ ์ํด ๊ธฐ์กด์ Linear ๋ฐฉ์์ ์ ์์ฅ์น์์ SMPS ๋ฐฉ์์ผ๋ก ๋์ฒด๋๊ณ ์๋ ์ถ์ธ์ด๋ค. SMPS(Switching
Mode Power Supply)๋ ์ค์์นญ์ฃผํ์๋ฅผ ์ด์ฉํด ์๋์ง ์ถ์ ์ฉ ์์์ ์ํํ๋ฅผ ์ด๋ฃฐ ์ ์์ผ๋, ์ค์์นญ ์ฃผํ์์ ๊ณ ์ฃผํํ๋ก ์ธํด ์๊ธฐ๋
์ค์์นญ ์์ค, ์ธ๋ํฐ ์์ค, ์ ๋ ์์ค ๋ฑ์ ๋ํ๋์ฑ
์ ๊ฐ๊ตฌํ์ฌ์ผ ํ๋ค. ๊ธฐ์กด์ ์ ์ ์ DC-DC ์ปจ๋ฒํฐ๋ ์ค์์นญ ์์๋ก์ ์ผ๋ฐ์ ์ธ CMOS
์์๋ฅผ ์ฌ์ฉํด ์๋ค. ๊ทธ๋ฌ๋ CMOS ์ค์์นญ ์์๋ ๋งค์ฐ ์์ ์จ ์ ํญ์ ์ป๊ธฐ ์ํด์ ๋งค์ฐ ํฐ ๋ฉด์ ์ ํ์๋ก ํ๊ธฐ ๋๋ฌธ์ ๋ณธ ์ฐ๊ตฌ์์๋ ์ด๋ฌํ
์ค์์นญ ์์์ ๋ฉด์ ๋ฌธ์ ๋ฅผ ๊ฐ์ ํ๊ณ ์ ๋ฌธํฑ์ ์์ ๋ฎ์ถ์ด ์จ ์ ํญ์ ์ค์ผ ์ ์๋ DT-CMOS๋ฅผ ์ฌ์ฉํ ์ค์์นญ ์์๋ฅผ ์ ์ํ์๋ค. ์ ์๋
์์๋ ๊ธฐ์กด์ ์ผ๋ฐ์ ์ธ CMOS ๊ณต์ ์ ์ด์ฉํ๊ณ , ๊ธฐ์กด์ CMOS ์์ ๋ณด๋ค ๋ ์ ์ ๋ฉด์ ์ ๊ฐ๊ณ , ๋ ์์ ์จ์ ํญ์ ๊ฐ๋๋ค.[3] </p><p>๋ฐ๋ผ์
๋ณธ ๋
ผ๋ฌธ์์๋ DT-CMOS ์ค์์นญ ์์๋ฅผ์ด์ฉํ์ฌ ๋์ผ ๋ฉด์ ์์ ๊ธฐ์กด์ CMOS ์ค์์นญ ์์๋ฅผ ์ฌ์ฉํ SMPS ๋ณด๋ค ๋ ๋์ ํจ์จ์ ๊ฐ๋ DC-DCBuck
์ปจ๋ฒํฐ๋ฅผ ์ค๊ณํ์๋ค. ๋ณธ๋ก 1์ ์์๋ DT-CMOS ์ค์์นญ ์์์ ๊ธฐ๋ณธ์ ์ธ ๊ฐ๋
๊ณผ ๊ตฌํ ๋ฐฉ๋ฒ ๊ทธ๋ฆฌ๊ณ ๋์ ํน์ฑ์ ๋ํด ์ค๋ช
ํ์์ผ๋ฉฐ, 2์ ์์๋
DC-DC Buck ์ปจ๋ฒํฐ ์ค๊ณ์ ๋ํด ์ค๋ช
ํ์๋ค. 3์ ์์๋ ๋ฎ์ ์ถ๋ ฅ ์ ๋ฅ์์ ํจ์จ์ด ๊ธ๊ฒฉํ ๊ฐ์ํ๋ PWM ๋ฐฉ์์ ๋ณด์ํ๋ LDO ๋ ๊ทค๋ ์ดํฐ์
๋ํด ์ค๋ช
ํ์์ผ๋ฉฐ, 4์ ์์๋ IC๋ฅผ ๋ณดํธํ๊ธฐ ์ํ ํ๋ก์ ๋ํด ์ค๋ช
ํ์๋ค. </p>
- source_sentence: Table 1. Natural frequency of each cantilever with different weights์์
10 g์ผ ๋ Type2์ ๊ฐ์ ์ด๋ ํ๊ฐ?
sentences:
- <table border><caption>ํ 6. ๋ฐฉ๋ฒ 2์ ํ์๋ถ๋ฅ์จ ๋ฐ ํ์๋ณ ๋ฌธ์ ์ธ์๊ธฐ์ ์ธ์์จ</caption> <tbody><tr><tr><td>In
Out</td><td>Type1</td><td>Type2</td><td>Type3</td><td>Type4</td><td>Type5</td><td>Type6</td><td>Type7</td><td>Rec(T)</td><td>Rec(C)</td><td>Rec(C\7)</td></tr><tr><td>Type1</td><td>51,076</td><td>524</td><td>305</td><td>955</td><td>252</td><td>30</td><td>150</td><td>95.84</td><td>95.34</td><td>99.48</td></tr><tr><td>Type2</td><td>741</td><td>35,323</td><td>43</td><td>281</td><td>511</td><td>33</td><td>178</td><td>95.18</td><td>95.01</td><td>95.18</td></tr><tr><td>Type3</td><td>816</td><td>642</td><td>9,942</td><td>756</td><td>140</td><td>12</td><td>38</td><td>80.53</td><td>80.25</td><td>99.66</td></tr><tr><td>Type4</td><td>1,257</td><td>1,007</td><td>315</td><td>72,787</td><td>657</td><td>384</td><td>176</td><td>95.04</td><td>94.25</td><td>99.17</td></tr><tr><td>Type5</td><td>541</td><td>1,807</td><td>40</td><td>910</td><td>44.846</td><td>234</td><td>94</td><td>92.52</td><td>91.40</td><td>98.79</td></tr><tr><td>Type6</td><td>182</td><td>182</td><td>119</td><td>905</td><td>201</td><td>4.521</td><td>19</td><td>73.76</td><td>73.41</td><td>99.51</td></tr><tr><td>Type7</td><td>4,784</td><td>2,289</td><td>939</td><td>1,911</td><td>1,268</td><td>196</td><td>55.752</td><td>83.04</td><td>82.85</td><td>99.77</td></tr><tr><td
colspan=8>๊ณ</td><td>91.09</td><td>90.54</td><td>99.39</td></tr></tbody></table>
<table border><caption>ํ 7. ๋ฐฉ๋ฒ 3์ ํ์๋ถ๋ฅ์จ ๋ฐ ํ์๋ณ ๋ฌธ์ ์ธ์๊ธฐ์ ์ธ์์จ</caption> <tbody><tr><tr><td>In
Out</td><td>Type1</td><td>Type2</td><td>Type3</td><td>Type4</td><td>Type5</td><td>Type6</td><td>Type7</td><td>Rec(T)</td><td>Rec(C)</td><td>Rec(C/T)</td></tr><tr><td>Type1</td><td>53,259</td><td>1</td><td>17</td><td>5</td><td>0</td><td>0</td><td>10</td><td>99.94</td><td>98.88</td><td>98.94</td></tr><tr><td>Type2</td><td>0</td><td>37,092</td><td>0</td><td>1</td><td>2</td><td>0</td><td>15</td><td>99.95</td><td>95.51</td><td>99.95</td></tr><tr><td>Type3</td><td>20</td><td>1</td><td>12,314</td><td>8</td><td>0</td><td>3</td><td>0</td><td>99.74</td><td>98.88</td><td>99.14</td></tr><tr><td>Type4</td><td>30</td><td>25</td><td>15</td><td>76,430</td><td>12</td><td>45</td><td>26</td><td>99.80</td><td>98.20</td><td>98.39</td></tr><tr><td>Type5</td><td>1</td><td>34</td><td>1</td><td>6</td><td>48,377</td><td>21</td><td>32</td><td>99.80</td><td>97.64</td><td>97.83</td></tr><tr><td>Type6</td><td>0</td><td>1</td><td>7</td><td>58</td><td>5</td><td>6,058</td><td>0</td><td>98.84</td><td>97.52</td><td>98.66</td></tr><tr><td>Type7</td><td>8</td><td>14</td><td>1</td><td>5</td><td>3</td><td>2</td><td>67.106</td><td>99.95</td><td>99.14</td><td>99.19</td></tr><tr><td
colspan=8>๊ณ</td><td>99.86</td><td>98.61</td><td>98.76</td></tr></tbody></table>
<p>๋ฐฉ๋ฒ 1, 2, 3์ ๊ฒฐ๊ณผ๋ก ๋ณด์, ์ธ์์ฒด ๋ฌธ์์ธ์์ ์์ด์ ๋ฐฉํฅ๊ฐ๋ ํน์ง์ ์
๋ ฅ์ผ๋ก ํ๋ MLP ์ ๊ฒฝ๋ง ํ์๋ถ๋ฅ๊ธฐ๋ \( 99 \%
\) ์ด์์ ๋ถ๋ฅ์จ๋ก ์์ ์กฐํฉ ๋ฐฉ์์ ๊ธฐ๋ฐ์ผ๋ก ํ๋ ๋ฌธ์์ ํ์๋ถ๋ฅ์ ์ ์ ํ์ฌ, ํ์ ๋๋ถ๋ฅ ํ ๋ฌธ์ ์์ธ์ธ์ ์ ๋ต์ ์ฌ์ฉํ๋ ๋ฐฉ๋ฒ์ ๋งค์ฐ
์ ์ฉํ๊ฒ ์ฌ์ฉ๋ ์ ์์์ ์ ์ ์๋ค. ๋ํ ๊ฐ ํ์์ด ์๋ ค์ง ํ์ ๋ฌธ์์ธ์์ ์์ด์๋ ๋ง์ฐฌ๊ฐ์ง๋ก ๋ฐฉํฅ๊ฐ๋ ํน์ง์ ์
๋ ฅ์ผ๋ก ํ๋ 2๋จ๊ณ MLP
์ ๊ฒฝ๋ง ์ธ์ ๋ฐฉ๋ฒ์ด \( 98 \% \) ์ด์์ ์ธ์์จ์ ๋ณด์ฌ ์ ์ฉํ๊ฒ ์ฌ์ฉ๋ ์ ์์์ ์ ์ ์๋ค. </p> <p>๋จ์ ์ค์์นญ ๋ฐฉ๋ฒ๊ณผ ํตํฉ
๋ฐฉ๋ฒ์ ํผ์ฉํ์ฌ, ํ์๋ถ๋ฅ๊ธฐ์ 1์์ ๋ถ๋ฅ๊ฒฐ๊ณผ๋ฟ๋ง ์๋๋ผ 2์์ ํ์์ ๋ํด์๋ ์ธ์์ ํ์ฌ ๋ณด๋ค ๋์ ์ ๋ขฐ๋๊ฐ์ ๊ฐ์ง๋ ๋ฌธ์ํด๋์ค๋ฅผ ์ธ์
๊ฒฐ๊ณผ๋ก ํ๋ ๋ฐฉ๋ฒ์ธ ๋ฐฉ๋ฒ 4, 5, 6, 7์ ๋ํ ๋ฌธ์์ธ์ ๊ฒฐ๊ณผ๋ฅผ ๋ฐฉ๋ฒ 1, 2, 3์ ๊ฒฐ๊ณผ์ ํจ๊ป<๊ทธ๋ฆผ 11>์ ๋ํ๋ด์๋ค. ๋ฐฉ๋ฒ 5์
7์ด \( 98.65 \% \) ์ ์ธ์์จ๋ก ๊ฐ์ฅ ๋์ ์ธ์์จ์ ๋ณด์๋๋ฐ, ๊ฐ๊ฐ ๋ฐฉ๋ฒ 4์ 6์ ๋ฌธ์์ธ์ ์ ๋ขฐ๋๊ฐ์ ํ์๋ถ๋ฅ๊ธฐ์ ๊ฒฐ๊ณผ๊ฐ์ผ๋ก
๊ฐ์คํํ ๋ฐฉ๋ฒ์ผ๋ก ๋ฐฉ๋ฒ 3์ ๊ฒฝ์ฐ์์์ฒ๋ผ ํ์๋ถ๋ฅ๊ธฐ์ ๋ถ๋ฅ๊ฒฐ๊ณผ๊ฐ์ด ๋ฌธ์์ธ์์จ์ ํฅ์์ ๋์์ด ๋์๋ค๋ ๊ฒ์ ๋ํ๋ด๋ ๊ฒ์ด๋ค. ๋ฐฉ๋ฒ 6๊ณผ 7์
๋ฐฉ๋ฒ 4์ 5์ ๋ณํ์ผ๋ก ํ์๋ถ๋ฅ๊ธฐ \( \mathrm{TR} \)์ ์ถ๋ ฅ๊ฐ์ด๋ \( \mathrm{CR} \) ๋ฌธ์์ธ์๊ธฐ์ ์ ๋ขฐ๋ ๊ฐ์ด ์๊ณ์น(
\( \beta \) ) ๋ณด๋ค ๋ฎ์ ๊ฒฝ์ฐ์๋ง 2์์ ํ์์ \( \mathrm{CR} \)์ ์ ํ์ ์ผ๋ก ํธ์ถํ์๋ค. <๊ทธ๋ฆผ 11>์์ ๋ํ๋ฌ๋ฏ์ด,
ํ์๋ถ๋ฅ๊ธฐ์ ๋ฌธ์์ธ์๊ธฐ์ ์ธ์๊ฒฐ๊ณผ๊ฐ ์์ฌ์ค๋ฌ์ด ๊ฒฝ์ฐ๋ง ํธ์ถํ ๋ฐฉ๋ฒ 6๊ณผ 7์ด ๋ฌด์กฐ๊ฑด์ ์ผ๋ก 2์์ ํ์์ ๋ํด์๋ ์ธ์ํ ๋ฐฉ๋ฒ 4์ 5์ ๋นํด
์ฑ๋ฅ์ด ์ฐ์ํจ์ ์ ์ ์๋ค. </p>
- <h1>์ ์ฝ</h1><p>๋ณธ ๋
ผ๋ฌธ์์๋ ํ๋ฉด Texturing ๋ฐฉ๋ฒ ์ค ์ต์ ์์นญ๋ฒ์ ์ด์ฉํ์ฌ ํ์์ ์ง์ ์ฌ์ฉ๋๋ ์ ๊ทน์ ํ๋ฉด์ ๊ฑฐ์น ๊ฒ ์ฒ๋ฆฌํ์๊ณ ,
ํ๋ฉด ์ฒ๋ฆฌ ํ \( \mathrm{TiO}_{2} \) ์ฐํ๋ฌผ ๋ฐ๋์ฒด๋ฅผ ์ฌ์ฉํ ์ผ๋ฃ ๊ฐ์ ํ์์ ์ง๋ฅผ ์ ์ํ์๋ค. ํ๋ฉด ์ฒ๋ฆฌ๋ ์ ๊ทน์ ์์นญ ์๊ฐ์
๋ฐ๋ฅธ ๋ถ๊ดํน์ฑ์ ์ธก์ ๋ถ์ํ์์ผ๋ฉฐ, ์์นญ ์๊ฐ์ ๋ฐ๋ผ ์ ์ํ \( \mathrm{TiO}_{2} \) ์ผ๋ฃ ๊ฐ์ ํ์์ ์ง์ ์ ๊ธฐ์ ํน์ฑ์ ํ๊ฐํจ์ผ๋ก์จ
ํ๋ฉด ์ฒ๋ฆฌ์ ๋ฐ๋ฅธ ํ์์ ์ง์ ํจ์จ ํฅ์์ ๊ดํ ์ฐ๊ตฌ๋ฅผ ์งํํ์๋ค. ๊ฒฐ๊ณผ์ ์ผ๋ก ์ ๊ทน ํ๋ฉด์ 10 ๋ถ๊ฐ ์์นญ ์ฒ๋ฆฌํ ํ์์ ์ง์ ๊ฒฝ์ฐ ๊ธฐ์กด ํจ์จ๊ณผ
๋น๊ตํ์์ ๋, ์ฝ \( 27.46[\%] \) ๊ฐ์ ๋จ์ ํ์ธํ ์ ์์๋ค. </p><h1>I. ์๋ก </h1><p>ํ์์ ์ง ์ฐ์
์์ ์ฃผ๋ก ๋จ๊ฒฐ์
๋ฐ ๋ค๊ฒฐ์ ์ค๋ฆฌ์ฝ๊ณ ํ์์ ์ง๊ฐ ๋์ ์์ฅ ์ ์ ์จ์ ๋ณด์ด์ง๋ง, ์ค๋ฆฌ์ฝ ํ์์ ์ง๋ ๋์ ์ ์กฐ๋จ๊ฐ, ๋ณต์กํ ์ ์กฐ๊ณต์ ๋ฑ์ ์ธก๋ฉด์์ ๊ฒฝ์๋ ฅ์ด ๋ค์
๋จ์ด์ ธ ์ด๋ ค์์ ๊ฒช๋ ์ค์ ์ ๋์ฌ์๋ค. ์ด์ ์ด๋ฅผ ๋์ฒดํ ์ฌ๋ฌ ํ์์ ์ง ์ค์์ ์ผ๋ฃ ๊ฐ์ ํ์์ ์ง๊ฐ ๊ฐ๋ฐ๋์ด ์ง์์ ์ธ ์ฐ๊ตฌ๊ฐ ์งํ๋๊ณ ์๋ค.
</p><p>์ผ๋ฃ ๊ฐ์ ํ์์ ์ง์ ๊ฒฝ์ฐ์๋ ์ ์กฐ๋จ๊ฐ๊ฐ ์ค๋ฆฌ์ฝ์ \(5\) ๋ถ์ \(1\) ์์ค์ ๋ถ๊ณผํ๋ฉฐ, ๋ค์ํ ์์๊ตฌํ, ์ ์ฐ์ฑ ๋ฐ ํฌ๋ช
์ฑ
๋ฑ์ ๋ค์ํ ์์ฉ ๊ฐ๋ฅ์ฑ์ผ๋ก ์์ฉํ์ ์ ๋ฆฌํ ํน์ง์ ์ง๋๊ณ ์์ด ์ฐจ์ธ๋ ํ์์ ์ง๋ก ๋ถ๋ฆฐ๋ค. ํ์ง๋ง ์ด๋ฌํ ์ฌ๋ฌ ์ฅ์ ์๋ ๋ถ๊ตฌํ๊ณ ์ผ๋ฃ ๊ฐ์
ํ์์ ์ง๊ฐ ์์ฉํ๋์ด ์ ํ์ผ๋ก ์์ฐ๋๊ธฐ ์ํด์๋ ํ์์ ์ง์ ํจ์จ์ด ๋์ฑ ๊ฐ์ ๋์ด์ผ ํ๋ ์ฐ๊ตฌ๊ณผ์ ๊ฐ ๋จ์ ์๋ ์ํ์ด๋ค. ์ด๋ฌํ ์ผ๋ฃ ๊ฐ์ ํ์์ ์ง์
ํจ์จ์ ํฅ์ํ๋ ๋ฐฉ์์ผ๋ก๋ ๋๋
ธ์
์์ ์ฐํ๋ฌผ ๋ฐ๋์ฒด์ ์
์ํฌ๊ธฐ, ๊ฒฐ์ ์ฑ, ํ๋ฉด ์ํ ์กฐ์ ๊ธฐ์ ๋ฑ์ ๊ฐ๋ฐ๊ณผ ๋๋
ธ์
์ ์ฐํ๋ฌผ ๋ฐ๋์ฒด ํ๋ฉด๊ณผ์
๊ฒฌ๊ณ ํ ๊ฒฐํฉ๋ ฅ์ ๊ฐ์ง๋ฉฐ ๋์ ๋ฒ์ ํ์ฅ์ ํก์ํ ์ ์๋ ์ผ๋ฃ์ ๊ฐ๋ฐฉ ๋ฑ ๋๋
ธ์
์ ์ฐํ๋ฌผ ๋ฐ๋์ฒด์ ๊ดํ ์ฐ๊ตฌ๊ฐ ํ์ํ๋ค. ๋ ์
์ฌ๋๋ ๋น์ด
ํ์์ ์ง ํ๋ฉด์ ํตํด ์ ์ง ๋ด๋ถ๋ก ๋ชจ๋ ํฌ๊ณผ๋์ง ๋ชปํ๊ณ ํ๋ฉด์์ ๋ฐ์ฌ๋๋ฉด์ ๋ฐ์ํ๋ ๊ดํ์ ์์ค์ ์ค์ด๊ธฐ ์ํ ๋์ฑ
๋ ์ฐ๊ตฌ ๊ฐ๋ฐ ์ด๋ฃจ์ด์ ธ์ผ
ํ๋ค. </p><p>๋ณธ ์ฐ๊ตฌ๋ ์ด์ ๋
ผ๋ฌธ์์ ๋ค๋ฃฌ ๊ฒฐ๊ณผ๋ฅผ ๋ฐํ์ผ๋ก DSSC(Dye-Sensitized Solar Cell)์ ๋ํ์ ์ผ๋ก ์ฌ์ฉ๋๋
\( \mathrm{TiO}_{2} \) ์ฐํ๋ฌผ ๋ฐ๋์ฒด๋ฅผ ์ด์ฉํ์ฌ ํ์์ ์ง๋ฅผ ์ ์ํ๊ณ , ์ถ๊ฐ๋ก ํ์์ ์ง ์์ธต ํ๋ฉด์์์ ๋ฐ์ฌ์์ค์ ๊ฐ์์ํค๊ธฐ
์ํด FTO(Fluorine doped Tin Oxide) ์ ๋ฆฌ ๊ธฐํ์ ํ๋ฉด ์ฒ๋ฆฌํ์ฌ ๊ด ์ ๊ทน์ผ๋ก ์ ๋ฌ๋๋ ๋น์ ์์ ์ฆ๊ฐ์์ผ ํจ์จ์ ๊ฐ์ ํ๊ณ ์
ํ์๋ค. ์ ๋ฆฌ ๊ธฐํ์ ํ๋ฉด ์ฒ๋ฆฌ๋ ๊ณต์ ๊ณผ์ ์ด ๋งค์ฐ ๊ฐ๋จํ๊ณ , ์ปจํธ๋กคํ๊ธฐ ์ฌ์ฐ๋ฉฐ, ๊ฐ๊ฒฉ์ด ์ ๋ ดํ ์ต์ ์์นญ์ ์ด์ฉํ์๋ค. ์ด๋ ๊ฒ ํ๋ฉด ์ฒ๋ฆฌํ
์ ๊ทน๊ณผ ์ผ๋ฃ ๊ฐ์ ํ์์ ์ง์ ์ต์ ์กฐ๊ฑด์ ์ป๊ธฐ ์ํด์ Sample์ ๊ดํ์ , ์ ๊ธฐ์ ํน์ฑ์ ์ฐ๊ตฌํ์๋ค. </p>
- <h1>2. ์ค๊ณ ๋ด์ฉ</h1> <p>์ด๋ฒ ์ฐ๊ตฌ์์๋ ๊ธฐ์กด์ ์ฐ๊ตฌ๋์๋ ์ผํธ๋ฆฌ๋ฒ์ ๊ธธ์ด๋ฐ ์ถ์ ๋ฌด๊ฒ์ ์ง์ ์ ์ผ๋ก ์์กดํ์ง ์๊ณ ์ผํธ๋ฆฌ๋ฒ์ ๊ตฌ์กฐ์
ํ์์ ๋ฐ๋ผ ์์ฉ ์์ ์์(PI์ฌ์ DuraAct)๋ก๋ถํฐ ์ต๋์ ์ ๋ ฅ์ ์ฐ์ถํ๋ ๊ฒ์ด ๋ชฉ์ ์ด๋ค. ๋ฐ๋ผ์ ์ง์ฌ๊ฐํ๊ณผ ์ฌ๋ค๋ฆฌ๊ผด ๊ตฌ์กฐ๋ฅผ Solidworks์
๋ณํ๋ฅ ํด์์ ํตํด ํ๋ฉด์ ๋ณํ๋ฅ ์ ํ์ธํ์๋ค. </p> <h2>2.1. ์บํธ๋ ๋ฒ ์ค๊ณ ๋ณ์ ์ค์ </h2> <p>์บํธ๋ ๋ฒ์ ์ฌ๋ฃ๋ Aluminum
5052๋ก ์ ํ์ผ๋ฉฐ, ๋๊ป๋ 0.8 \(\mathrm{mm}\), ๊ธธ์ด๋ 135 \(\mathrm{mm}\), ๊ทธ๋ฆฌ๊ณ ํญ์ 65 \(\mathrm{mm}\)์
๋์ผํ ํฌ๊ธฐ๋ก ์ค์ ํ์ผ๋ฉฐ ์ฌ๋ค๋ฆฌ๊ผด ๊ตฌ์กฐ๋ ์ผ๊ฐํ๊ณผ ์ต๋ํ๊ฐ๊น๋๋ก ๋ฌด๊ฒ์ถ ๋ถ์ฐฉ์ ์ํ 10 \(\mathrm{mm}\)๋ง ๋จ๊ฒจ๋์๋ค. ์ ์ฒด์ ์ธ
์บํธ๋ ๋ฒ ํฌ๊ธฐ๋ ์ฌ์ฉ๋๋ DuraAct ์์ ์์์ ํฌ๊ธฐ์ ๋ง์ถฐ ์ ์ ๋์๋ค. ๋ ๊ฐ์ง ์บํธ๋ ๋ฒ์ ํ์์ Fig. 1์ ๊ฐ๋ค. </p> <h2>2.2.
Solidworks ๋ณํ๋ฅ ํด</h2> <p>Solidworks ํ๋ก๊ทธ๋จ์ ํ์ฉํ์ฌ ์์ ์์๊ฐ ๋ถ์ฐฉ๋ ์ผํธ๋ฆฌ๋ฒ ํ๋ฉด์ ๊ธธ์ด๋ฐฉํฅ ๋ณํ๋ฅ ์ ๋ถ์ํด
๋ณด์๋ค. ํ๋ฉด๋ณํ๋ฅ ์ ํ๊ท ์ ์ผ๋ก ๋ถ์ํ๊ธฐ ์ํด ์ผํธ๋ฆฌ๋ฒ ์ค์ฌ์ ๋
ธ๋๋ฅผ ๊ฐ๊ฐ ๋น๊ตํด ๋ณด์๋ค. Fig. 2์ ์๊น์ ๋ฐ๋ฅธ ํ๋ฉด ๋ณํ๋ฅ ์ ์ง์ ๋น๊ตํ๊ธฐ๊ฐ
ํ๋๋ฏ๋ก Fig. 3์์ ๊ทธ๋ํ๋กFig. 2 ์ ํ์๋ ๋ฐฉํฅ๋๋ก ํ๋ฉด์ ๋ณํ์จ์ ๋ํ๋ด์๋ค. </p> <p>Fig. 3์ ๊ทธ๋ํ๋ฅผ ๋ณด๋ฉด ์์
์๋ฏ์ด ์ง์ฌ๊ฐํ ์บํธ๋ ๋ฒ๋ณด๋ค ์ผ๊ฐํ์ ๊ฐ๊น์ด ์ฌ๋ค๋ฆฌ๊ผด ์บํธ๋ ๋ฒ๊ฐ ๋ ๋ง์ ํ๋ฉด ๋ณํ๋์ ๋ณด์๊ณ , ์ด ํ๋ฉด์์ ์์ ์์๊ฐ ์๋ค๋ฉด ๋๋ง์ ์ ๋ ฅ๋์
์ํํ ์ ์์ ๊ฒ์ด๋ค. ์ด๋ฌํ ๊ฒฐ๊ณผ์ ๊ธฐ๋๋ฅผ ํ์ฌ ๊ฐ์ ํ์์ผ๋ก ์บํธ๋ ๋ฒ๋ฅผ ์ ์ํ์ฌ ์๋์ง ์ํ์ ๋ํ ์คํ์ ๊ณํํ์๋ค. </p> <h2>2.3.
Solidworks ๊ณ ์ ์ง๋์ ํด์</h2> <p>Solidworks ํด์ ํ๋ก๊ทธ๋จ์ ํตํด์ ์ผํธ๋ฆฌ๋ฒ์ ๊ณ ์ ์ง๋์๋ฅผ ์์ธกํ์๋ค (Fig.
4). ๊ทธ๋ฆฌํ์ฌ ์คํ์ Shaker๋ฅผ ํตํด ์ง๋์ ๋ฐ์์ํฌ ๋ ์คํ ํ์๋ฅผ ์ต์ํํ๋ฉฐ ์ผํธ๋ฆฌ๋ฒ๊ฐ ์ต๋๋ก ์ง๋ํ ์ ์๋ ์ฃผํ์๋ฅผ ์ฐพ์ ์ ์์๋ค.
๋๋ถ์ด ์ผํธ๋ฆฌ๋ฒ ์์ ๋จ์ ์ถ(10 \(\mathrm{g}\), 20 \(\mathrm{g}\))๋ฅผ ์ค์นํจ์ผ๋ก์จ ๊ณ ์ ์ง๋์๋ฅผ ๋ฎ์ถ ์ ์์๋ค.
</p> <table border><caption>Table 1. Natural frequency of each cantilever with
different weights</caption> <tbody><tr><td>๊ตฌ ๋ถ</td><td>Type1</td><td>Type2</td></tr><tr><td>10
g</td><td>29.55 Hz</td><td>28.76 Hz</td></tr><tr><td>20 g</td><td>22.77 Hz</td><td>20.92
Hz</td></tr></tbody></table>
- source_sentence: ์์ธก ๋ฐฉ๋ฒ๋ก ์์๋ ์ด๋ค ๋ฐ์ดํฐ๋ฅผ ๋์์ผ๋ก ํด?
sentences:
- ์ธ๊ณต์ง๋ฅ์ ํตํ ๊ธฐ์ํ์์ ์์ธก์ ๋น๊ต์ ์ต๊ทผ ๋ค์ด ์ฐ๊ตฌ๊ฐ ์งํ๋์๊ธฐ ๋๋ฌธ์ ํฌ๊ฒ ๊ธฐ๊ณํ์ต ๊ธฐ๋ฒ๊ณผ ๋ฅ๋ฌ๋ ๊ธฐ๋ฒ์ ์ด์ฉํ์ฌ ๊ธฐ์ํ์์ ์์ธกํ๋ ค๋
์๋๊ฐ ์ด๋ฃจ์ด์ ธ ์์ผ๋ฉฐ, ๋ค์ํ ์ฅ์์์ ์์ง๋๋ ๋ฐ์ดํฐ์ ๋ํ ์ ์ฒ๋ฆฌ ๋ฐ ํ์ต ๋ฐ์ดํฐ์ ๋ํ ํ์ง ๊ด๋ฆฌ๊ฐ ๋งค์ฐ ์ค์ํ๋ค. ์ธ๊ณต์ง๋ฅ์ ํตํด
๊ธฐ์ ํ์์ ์์ธกํ๊ณ ์ ํ๋ ๊ฒฝ์ฐ, ์์ธก ์๊ณ ๋ฆฌ์ฆ์ ํตํด ๋ฌธ์ ํด๊ฒฐ์ ๋๋ชจํ ์ ์์ผ๋ฉฐ, ๋ฐ์ดํฐ์ ํ์ง ๊ด๋ฆฌ์ ์์ด์ ๋ถ๋ฅ ์๊ณ ๋ฆฌ์ฆ์ ์ด์ฉํ
์ ์๋ค. ๋น
๋ฐ์ดํฐ์ ํ์ฉ์ ์์ด์, ๋ฐ์ดํฐ๋ฒ ์ด์ค๋ง๋ค ๊ธฐ์ค ์ ๋ณด๊ฐ ๋์ผํ์ง ์๊ธฐ ๋๋ฌธ์ ๋ฐ์ดํฐ ์ ์ ์ ๊ธฐ์ค ์ ๋ณด๋ฅผ ํ์คํ ํ๋ ๊ฒ์ด ์ค์ํ๋ฉฐ,
ํตํฉ DB ์ค๊ณ ์ 1) ๊ณตํต๋ ๊ท์น์ ๊ฐ์ง๋๋ก ํ๊ณ , 2) ๋ฐ์ดํฐ ๋ฌด๊ฒฐ์ฑ์ด๋ ์ฑ๋ฅ ์์ ์ด์ ์๋ ๊ตฌ์กฐ์ ์ผ๋ก ์ค๊ณ๋๋ฉฐ, 3) ์ค๊ณ๋ ๋ฐ์ดํฐ๋ฒ ์ด์ค์
๋ํ ์ ๋๋ก ๋ ๊ด๋ฆฌ ์ฒด๊ณ๊ฐ ๋ณด์ฅ ๋๋๋ก ํด๋น ๋ด์ฉ์ ๊ณ ๋ คํด์ผ ํ๋ค.
- <h1>II. ๋ณธ๋ก </h1><h2>1. ์์ธก ๋ฐฉ๋ฒ๋ก </h2><p>์์ธก ๋ฐฉ๋ฒ๋ก ์์ ์ฌ์ฉํ๋ ๋ฐ์ดํฐ๋ ์ฌ์ ์ ์์ง๋ ๊ณผ๊ฑฐ ๋ฐ์ดํฐ๋ ์ ํ๋ ๋ฐ์ดํฐ๋ฅผ
๋์์ผ๋ก ํ๋ฏ๋ก ์๊ธฐ์น ์์ ์ฌํ์ ์ด์์ ๋ฏธ์ธ๋จผ์ง์ ๊ฐ์ด ์๋กญ๊ฒ ์ฃผ๋ชฉ์ ๋ฐ๋ ์์ธ๋ค์ ์ฒด๊ณ์ ์ผ๋ก ์์ง๋์ด ์์ง ์์ ๊ฒฝ์ฐ๊ฐ ๋ง๋ค. ์ด๋ฐ ํ์ ์ ์ธ
๋ฐ์ดํฐ๋ฅผ ์ด์ฉํ ์์ธก์ ๊ฐ ๋ฐ์ดํฐ์ ์ํฅ๋ ฅ์ด ๊ณผ๋ํ๊ฐ๋ ์์์ผ๋ฏ๋ก ๋ฏธ๋์ ์์ธก๊ฐ์ ์ค์ฐจ๋ฅผ ํฌ๊ฒ ํ ์ ์๋ค. ์ฐ์์ ์ด๊ฑฐ๋ ์ด์ฐ๋์ด ์๋ ์
๋ ฅ๋ฐ์ดํฐ๋ค์
์ฐจ์ด๋ ์ ์ ํ ๊ฒฐ๊ณผ๋ฅผ ์์ธกํ๊ธฐ์ ๋ฌธ์ ๊ฐ ๋ ์ ์๋ค. </p><p>์ด๋ฌํ ์์ธก ๋ฐฉ๋ฒ๋ก ์ด ๊ฐ์ง ์ค์ฐจ์ ํ๊ณ์๋ ๋ถ๊ตฌํ๊ณ ์์ธก์ ๊ธ์ ์ ์ธ ํจ์ฉ์ฑ
๋๋ฌธ์ ์ต๊ทผ ๋ฑ์ฅํ๊ณ ์๋ ๋น
๋ฐ์ดํฐ ๊ธฐ๋ฐ์ ๋จธ์ ๋ฌ๋ ๋ฑ์ ๋ฐฉ๋ฒ๋ก ์ ํตํ ๋
ธ๋ ฅ์ด ๊พธ์คํ ์งํ๋๊ณ ์๋ค. </p><h3>๊ฐ. ์ ํํ๊ท๋ถ์</h3><p>์ ํํ๊ท๋ถ์์
๋ฒกํฐ ๋
๋ฆฝ๋ณ์ \( x \) ์ ์ค์นผ๋ผ ์ข
์๋ณ์ \( y \) ์ ๊ด๊ณ๋ฅผ ์ ๋์ ์ผ๋ก ๋ถ์ํ์ฌ ๊ฐ์ฅ ๋น์ทํ ์์ธก๊ฐ \( \hat{y} \) ์
๋์ถํ๋ ๋ฐฉ๋ฒ๋ก ์ด๋ค. </p><p>\[ \hat{y}=f(x) \approx y \]</p><p>์ ํํ๊ท๋ถ์์ ์ํด์๋ ๊ฐ ๋ณ์์ ์กด์ฌ๋ฅผ ์ฌ์ ์
ํ์
ํ ํ์๊ฐ ์๋ค. ๊ด์ค์ ์์ธก์์ ์ ํ ํ๊ท ๋ถ์(๋ค์คํ๊ท๋ถ์)์ ์ฌ์ฉํ ๊ฒฝ์ฐ, ๊ด์ค์์ ์ํฅ์ ๋ฏธ์น๋ ๋ณ์๋ฅผ ์ด๋ ์ ๋ ์ ์ ์์ด์ผ
ํ๋ฏ๋ก ์์ธก ๊ฒฐ๊ณผ๊ฐ์ด ์ด๊ด์ค์์ ๊ฐ์ ํ๊ท ๊ฐ ๋์ถ์๋ ์ ํฉํ์ง๋ง, ๊ตฌ์ญ๋ณ ๊ด์ค์ ๋ฑ์ ์ธ๋ฐํ๊ฒ ์์ธก๊ฐ์ ๋์ถํด์ผ ํ ๊ฒฝ์ฐ์๋ ์
๋ ฅ ๋ณ์ ๋ฐ
๋ฐ์ดํฐ์ ํ๊ณ๋ก ๊ฒฐ๊ณผ๊ฐ์ ์ค์ฐจ๊ฐ ์ปค์ง ์ ์๋ค. </p><h3>๋. ์๊ณ์ด๋ถ์</h3><p>์๊ณ์ด๋ถ์ ๋ฐฉ๋ฒ์ ์์ ์์ธก๋ฐฅ๋ฒ์ผ๋ก ๊ณผ๊ฑฐ์ ๋ฐ์ดํฐ๋ฅผ
์๊ฐ์ ๋ฐ๋ฅธ ๋ณํ๋ฅผ ํ์
ํ์ฌ ์์ธก๊ฐ์ ๋์ถํ๋ ๋ฐฉ๋ฒ๋ก ์ด๋ค. ์๊ณ์ด ๋ถ์๋ฐฉ๋ฒ์๋ ์ง์ํํ๋ฒ, ์๊ธฐํ๊ท๋ฒ, ARIMA๋ฒ์ด ์๋ค. ์ง์ํํ๋ฒ์ ๊ณผ๊ฑฐ
๋ฐ์ดํฐ ์ํฅ๋ ฅ์ ์ฐจ์ด๋ฅผ ์ค์ด๊ธฐ ์ต์ ์๋ฃ์ ๊ฐ์ค์น๋ฅผ ์ฃผ์ด์ ์์ธก๊ฐ์ ๋์ถํ๋ ๋ฐฉ๋ฒ์ด๋ค. ์๊ธฐํ๊ท๋ฒ์ ๊ณผ๊ฑฐ ๋ฐ์ดํฐ๊ฐ ๋ฏธ์น๋ ์ํฅ๋ ฅ์ ์ด๋ ์ ๋
์ ๊ฑฐํ์ฌ ์์ธก๊ฐ์ ๋์ถํ๋ ๋ฐฉ๋ฒ์ด๋ค. ARIMA๋ฒ์ ์๊ณ์ด ๋ถ์ ๋ฐฉ๋ฒ์ ๋ํ์ ์ธ ๋ฐฉ๋ฒ์ผ๋ก์จ, ์๊ณ์ด ์๋ฃ์ ์๊ธฐ ์๊ด ํน์ฑ์ ์ด์ฉํ๋ค. ์ด์
๊ฐ์ ๋ค์ํ ์๊ณ์ด๋ถ์ ๋ฐฉ๋ฒ๋ก ์ ์ด์ฉํ ์์ธก์ ํต์ ์ค๋ ๊ธฐ๊ฐ์ ๋ฐ์ดํฐ๊ฐ ์์ ๋ ์ฌ์ฉํ๋ค. </p><p>์๊ณ์ด ๋ถ์(์ง์ํํ, ์๊ธฐํ๊ท,
ARIMA) ๋ฐฉ๋ฒ๋ก ์ ๊ด์ค์ ์์ธก์ ํ์ฉํ๋ ค๋ฉด ์ค๋ ๊ธฐ๊ฐ์ ๊ด์ค์ ๋ฐ์ดํฐ๊ฐ ์์ด์ผ ํ๋ค. ์๋ก์ด ์ด๋ฒคํธ์ ๊ฒฝ์ฐ ๋์ ๋ฐ์ดํฐ๊ฐ ๋ถ์กฑํ๊ธฐ ๋๋ฌธ์
์ํ๋ ๊ด์ค์ ์์ธก๊ฐ์ ๋์ถํ๋๋ฐ๋ ํ๊ณ๊ฐ ์์ ์ ๋ฐ์ ์๋ค. </p><h3>๋ค. ์๋ฎฌ๋ ์ด์
</h3><p>์๋ฎฌ๋ ์ด์
(์ํ์ ๋ชจ๋ธ๋ง) ๊ธฐ๋ฒ์
๊ธฐ์
์ ๋น์ฆ๋์ค ๋ก์ง์ ์ํ์ ์ผ๋ก ๊ตฌ์ถํ์ฌ ์ปดํจํฐ ์๋ฎฌ๋ ์ด์
์ ํตํด ์์ธก๊ฐ์ ๋์ถํ๋ ๋ฐฉ๋ฒ์ด๋ค. ๋ณดํต ๋ฌผ๋ฅ, ์ ํต ๋ฑ ๋น์ฆ๋์ค ๋ก์ง์ ์ธ์ธํ
์ ์๊ณ ์์ ๋ ์ฌ์ฉํ๋ค. ์๋ฎฌ๋ ์ด์
๊ธฐ๋ฒ์ ์ต์ ์ ์ฐํธ๋ฌผ ๋ฐฐ๋ฌ ๊ฒฝ๋ก ๋์ถ๊ณผ ๊ฐ์ด ํต๊ณ์ ์ด๊ฑฐ๋ ์ํ์ ์ธ ๋ถ์์ผ๋ก๋ ์ ํํ ์์ธก ๊ฐ์ ์ฃผ์ด์ง
์๊ฐ ๋ด์ ๋์ถํ ์ ์์ ๋ ์ฃผ๋ก ์ฌ์ฉํ๋ค. </p><p>์๋ฎฌ๋ ์ด์
(์ํ์ ๋ชจ๋ธ๋ง) ๊ธฐ๋ฒ์ ๊ด์ค์์ ์ํฅ์ ๋ฏธ์น๋ ๋น์ฆ๋์ค ๋ก์ง์ ์ ํํ
ํ์
ํ๊ณ ์์ ๋ ์ฌ์ฉํ ์ ์๋ค. ์์ง ํ๋ก์ผ๊ตฌ ๊ด์ค์์ ์ํฅ์ ๋ฏธ์น๋ ์์ธ์ ๋ํ ์ฐ๊ตฌ๋ ์ธ๋ฐํ ๋น์ฆ๋์ค๋ก์ง์ ๋ถ์ํ ๊ฒฐ๊ณผ๊ฐ ๋ง์ง ์์์
๊ด์ค์ ์์ธก์ ์ ์ฉํ๊ธฐ๋ ์ฝ์ง ์์ ๊ฒ์ด๋ค. </p><h3>๋ผ. ๋จธ์ ๋ฌ๋</h3><p>๋จธ์ ๋ฌ๋์ด๋ ์ฃผ๋ก ๋น
๋ฐ์ดํฐ๋ฅผ ํ์ฉํด ๋น์ ํ์ ํํ๋ก ๊ฒฐ๊ณผ๊ฐ์
์์ธกํ๋ ๋ฐฉ๋ฒ์ด๋ค. ๋จธ์ ๋ฌ๋ ๊ธฐ๋ฒ์ ์ ํํ๊ท๋ถ์ ๋ฐฉ๋ฒ๋ก ๊ณผ ๋ฌ๋ฆฌ ์ฌ์ ์ ์ํฅ์ ๋ฏธ์น๋ ๋ณ์๋ฅผ ๋ชจ๋ ์์ง ๋ชปํ ์ํ์์๋ ์์ธก๊ฐ์ ๋์ถํ ์ ์๋ค.
๋ฐ๋ผ์ ๋น
๋ฐ์ดํฐ ํํ๋ก ์๋ฃ๋ฅผ ์์งํ ์ ์๊ณ , ์์ธกํ์ง ๋ชปํ ๋ณ์๋ค์ด ์ข
์ข
๋ฑ์ฅํ๋ ๊ฒฝ์ฐ์ ์ ์ ํ ํ์ฉํ ์ ์๋ค. ๋ค๋ง ์ค์ ๋ถ์์๊ฐ๋ณด๋ค
๋ฐ์ดํฐ๋ฅผ ์ปดํจํฐ๊ฐ ์ดํดํ๊ธฐ ์ฝ๋๋ก ์ ์ ํ๋ ์๊ฐ์ด ๋ ๋ง์ด ๊ฑธ๋ฆด ์๊ฐ ์๊ณ , ๋ถ์๋ฐฉ๋ฒ๋ก ์ ๋ฐ๋ผ ์์ธก๊ฐ์ด ๋ฌ๋ผ์ง๋ ํ๊ณ๋ ์กด์ฌํ๋ค. </p><p>๋จธ์ ๋ฌ๋
๋ฐฉ๋ฒ๋ก ์ ๋น
๋ฐ์ดํฐ๋ฅผ ํ์ฉํด ๋น์ ํ์ ํํ๋ก ๊ด์ค ์๋ฅผ ์์ธกํ ๋ ์ฌ์ฉํ ์ ์๋ค. ๋ํ ์์ธก ๋ชปํ ๋ณ์๊ฐ ์๋๋ผ๋ ๋ฐ์ดํฐ์ ํ์ต์ ํตํ์ฌ ์ด๋
์ ๋ ์์ธก๊ฐ์ ๋์ถํ๋ ๊ฒ์ด ๊ฐ๋ฅํ๋ค. ๋ฐ๋ผ์ ํ์ฌ์ ์ ํ๋ ๊ธฐ๊ฐ์ ์์ง๋ ๋น
๋ฐ์ดํฐ๋ฅผ ํ์ฉํ์ฌ ์์ธก๊ฐ์ ๋์ถํ๊ธฐ์ ์ต์ ์ ๋ฐฉ๋ฒ๋ก ์ผ๋ก ๋ณผ ์
์๋ค. </p>
- <h2>2. ์๋น์ค ๊ตฌ์ฑ๊ธฐ๋ฒ</h2><p>์๋น์ค ๊ตฌ์ฑ๊ธฐ๋ฒ์ ์ฌ์ฉ์๊ฐ ์ํ๋ ์๋น์ค๋ฅผ ์ ์ ํ ์ ๊ณตํ ์ ์๋ ๋๋ฐ์ด์ค๋ฅผ ์ ํํ์ฌ ์๋น์ค ์ธ์
์
๊ตฌ์ฑํ๋ ๊ฒ์ด๋ค. ๋ํ ์๋น์ค ์ธ์
์ ์ํด ์ ํ๋๋ ๋๋ฐ์ด์ค๋ ์ฌ์ฉ์์ ์์น๋ ์
๋ฌด, ์๋น์ค๊ฐ ์์ฒญ๋๋ ์๊ธฐ์ ๋ฐ๋ผ ์์๋ก ๋ณํ๊ฒ ๋๋ค. ์ด๋ฌํ
์๋น์ค ๊ตฌ์ฑ ๊ธฐ๋ฒ์ ์ฌ์ฉ์์๊ฒ ์์ฒญํ๋ ์๋น์ค์ ๋ํด์ ์ฌ์ฉ์๊ฐ ๋ง์กฑํ ์ ์๋ ํ์ง์ ์ ๊ณตํ ์ ์์ด์ผ ํ๋ฉฐ, ๋๊น์๋ ์๋น์ค ์ ๊ณต์ ์ํด
์ฌ์๊ฐ๋ฅํ ๋๋ฐ์ด์ค๋ฅผ ๋ฏธ๋ฆฌ ์์ฝํ์ฌ ์ฌ์ฉ์๊ฐ ์๋น์ค ์์ฒญ์ ์๋น์ค ์ ๊ณต์๊ฐ์ ์ค์ผ ํ์๋ ์๋ค. ๊ทธ๋ฆฌ๊ณ ์๋น์ค ์์ฝ๊ธฐ๋ฒ์ ์ฌ์ฉ์์ priority๋
์ค์ผ์ค ๋ฐ ์ด๋์ฑ ์ ๋ณด ๋ฑ์ ์ํฉ ์ ๋ณด๋ฅผ ๊ธฐ๋ฐ์ผ๋ก ์์ธกํ ์๋น์ค ์์ฝ ๊ตฌ์ฑ๊ธฐ๋ฒ์ด ๊ฐ๋ฅํด์ผ ํ๋ค. ์ด๋ฌํ ๋์ ์ธ ์๋น์ค ๊ตฌ์ฑ๊ธฐ๋ฒ์ ํตํด ์ฌ์ฉ์๋
์ด๋์์๋ ์ธ์ ๋ ์ง ์ํ๋ ์๋น์ค๋ฅผ ์ ๊ณต๋ฐ์ ์๊ฒ ํจ์ผ๋ก์จ ์๋น์ค ๊ฐ์ฉ์ฑ์ ๋์ผ ์ ์๊ฒ ํ๋ค. </p><p>์ง๋ฅ์ ์ธ ์๋น์ค ๊ตฌ์ฑ์ ์ํด์
๊ณต๊ฐ๋ด์ ๋๋ฐ์ด์ค๋ค์ ์ฌ์ฉ์๊ฐ ์ํ๋ ์๋น์ค๋ฅผ ์ ์ ํ ์ ๊ณตํ ์ ์๋์ง ์ฌ๋ถ๋ฅผ ํ๋ณํ๊ธฐ ์ํ ์๋น์ค ์ปดํฌ๋ํธ๋ฅผ ๊ฐ์ ธ์ผ ํ๋ค. ์ฆ ํ๋์ ์ดํ๋ฆฌ์ผ์ด์
์๋น์ค๋ ๋ค์ํ ์๋น์ค ์ปดํฌ๋ํธ๋ค์ ์กฐํฉ์ ํตํด ์ ๊ณต๋ ์ ์๋ค. ์๋ฅผ ๋ค์ด ๋์์์ ๋ณด๊ธฐ ์ํด์๋ ๋์์์ ๋ณด์ฌ์ค ์ ์๋ ๋์คํ๋ ์ด ๋ถ๋ถ๊ณผ
์์ฑ์ ์ฌ์ํ ์ ์๋ ์ค๋์ค์ ๊ฐ์ ์๋น์ค ์ปดํฌ๋ํธ๋ค์ด ์ ๊ณต๋์ด์ผ ์ฌ์ฉ์์๊ฒ ์ ์ ํ ์ดํ๋ฆฌ์ผ์ด์
์๋น์ค๋ฅผ ์ ๊ณตํ ์ ์๋ ๊ฒ์ด๋ค. ์ด๋ฌํ
์๋น์ค ์ปดํฌ๋ํธ๋ ๋๋ฐ์ด์ค๊ฐ ๊ฐ์ง๋ ๋ค์ํ ์๋น์ค ์ปดํฌ์ง์
(service composition)์ ์ ์ํ๋ ๋ฐฉ๋ฒ์ ์์กด์ ์ด๋ฉฐ, ์ฐธ๊ณ ๋ฌธํ [7\(\sim\)9]์์์
๊ฐ์ด ๋ค์ํ ํ๋ก์ ํธ๋ค์ ํตํด ํ๋ฐํ ์ฐ๊ตฌ๊ฐ ์งํ๋๊ณ ์๋ค. ์ด๋ฌํ ์๋น์ค ์ปดํฌ๋ํธ๋ ์ฌ์ฌ์ฉ์ด ๊ฐ๋ฅํ๋ฉฐ, ์๋ก ๋ค๋ฅธ ์ปดํฌ๋ํธ๊ฐ์ ํ์
์ ํตํด
๋ณด๋ค ๋์ ์๋น์ค๋ฅผ ์ ๊ณตํ ์๋ ์๋ค. ์ด๋ฌํ ์๋น์ค ์ปดํฌ๋ํธ๋ฅผ ํ์ฉํ์ฌ ํผ์ค๋ ์๋ฒ๋ ์ดํ๋ฆฌ์ผ์ด์
์๋น์ค๋ฅผ ์ ์ ํ๊ฒ ์ ๊ณตํ ์ ์๋ ํ๋ณด
๋๋ฐ์ด์ค๋ฅผ ์ฐพ์ ์๋น์ค ๊ตฌ์ฑ ํ
์ด๋ธ์ ์์ฑํ์ฌ ๊ด๋ผํด์ผ ํ๋ค. ๊ทธ๋ฆฌ๊ณ ๊ฐ ์๋น์ค ๊ตฌ์ฑ๋ฆฌ์คํธ๋ ์ฌ์ฉ์์๊ฒ ๋ณด๋ค ๋์ ์๋น์ค ์ง์ ์ ๊ณตํ ์ ์๋
์ฐ์ ์์์ ๋ฐ๋ผ ๊ตฌ์ฑ๋๋ค. ํ 1 ์ ์ดํ๋ฆฌ์ผ์ด์
์๋น์ค์ ๋ฐ๋ฅธ ์๋น์ค ๊ตฌ์ฑ ํ
์ด๋ธ ์๋ฅผ ๋ณด์ฌ์ค๋ค. </p><h2>3. ๊ธฐ์กด์ ์ ๊ทผ์ ์ด ๋ฐ
์๋น์ค ๊ตฌ์ฑ๊ธฐ๋ฒ์ ๋ฌธ์ ์ </h2><p>๊ทธ๋ฆผ 2์์ ๋ณด๋ ๋ฐ์ ๊ฐ์ด ์ ๊ทผ๋ชจ๋๊ฐ group mode ์ผ ๋ ์ฌ๋ฌ ํผ์ค๋ ์๋ฒ๋ค์ด ์์ ์๊ฒ ์ฃผ์ด์ง
๊ถํ๋ด์์ ๊ณต๊ฐ๋ด์ ์ธ์ญ ๋๋ฐ์ด์ค๋ฅผ ์ด์ฉํ๊ณ ์ ํ ๋ ์์์ ์ฌ์ฉ์์ ์ํด ๋๋ฐ์ด์ค๊ฐ ์ฌ์ฉ๋๊ณ ์์ด ์ฃผ๋ณ์ ๋ค๋ฅธ ์ฌ์ฉ์๊ฐ ๊ณต์ ๋ ๋๋ฐ์ด์ค๋ก
์๋น์ค๋ฅผ ๋ฐ์ง ๋ชปํ๋ ์๋น์ค ์ถฉ๋ ํ์์ด ๋ฐ์ํ ์ ์๋ค. ์๋ฅผ ๋ค์ด PS 1์ ๋ฌด์ ์ธํฐํ์ด์ค๋ก UWB๋ฅผ ์ฌ์ฉํ๋ฉฐ, PS2๋ 802.11๊ธฐ๋ฐ์
WLAN์ ์ฌ์ฉํ๋ค. ๋ํ ๊ณต๊ฐ๋ด์๋ ๊ฐ๊ฐ์ ๋ฌด์ ์ธํฐํ์ด์ค์ ๋ํ AP ๊ฐ ์ค์น๋์ด ์๋ค๊ณ ๊ฐ์ ํ์. ์ด๋ PS 1์ด Digital TV๋ฅผ
ํตํด VOD ์๋น์ค๋ฅผ ์ ๊ณต๋ฐ๊ณ ์๋ค. ์ด ๊ฒฝ์ฐ PS 2 ๊ฐ ๋์ผํ ์๋น์ค๋ฅผ Digital TV ์ ์์ฒญํ๋ฉด ๋ฌด์ ๋งํฌ ๊ณ์ธต์์ ์ธ์งํ ์ ์๋
์ดํ๋ฆฌ์ผ์ด์
๊ณ์ธต์์์ ์๋น์ค ์ถฉ๋์ด ๋ฐ์ํ๋ค. ์ด๋ฌํ ์๋น์ค ์ถฉ๋๋ก ์ธํด ํผ์ค๋ ์๋ฒ๋ ๋ถํ์ํ ๋ฉ์์ง๋ฅผ ๋ฐ์์์ผ ๋นํจ์จ์ ์ผ๋ก ๋ฐฐํฐ๋ฆฌ๋ฅผ ์๋ชจํ๋ค.
</p><p>๋ํ ์๋น์ค ์ปดํฌ๋ํธ ๊ธฐ๋ฐ์ผ๋ก ์ดํ๋ฆฌ์ผ์ด์
์๋น์ค๋ฅผ ์ ์ ํ ์ ๊ณตํ ์ ์๋ ํ๋ณด ๋๋ฐ์ด์ค๋ฅผ ํตํด ์๋น์ค๋ฅผ ์ ๊ณต๋ฐ์ ๋ ๊ทธ๋ฆผ 3์์
๋ณด๋ฏ์ด ์ฐ์ ์์๊ฐ ๋์ ํ๋ณด ๋๋ฐ์ด์ค์๊ฒ ๋จผ์ ์๋น์ค ์์ฒญ ๋ฉ์์ง๋ฅผ ๋ณด๋ด๊ฒ ๋๋ค. ์ด๋ ์๋น์ค ์์ฒญ๋ฉ์์ง๋ฅผ ๋ฐ์ ๋๋ฐ์ด์ค๊ฐ ์ฌ์ฉ ์ค์ผ ๋ ์ฌ์ฉ์๋
์ฐจ์ ์ฑ
์ธ ๋๋ฐ์ด์ค์๊ฒ ์๋น์ค ์์ฒญ ๋ฉ์์ง๋ฅผ ๋ณด๋ด๊ฒ ๋๋ค. ์ด๋ฌํ ๋ถํ์ํ ์๋น์ค ์์ฒญ ๋ฉ์์ง๋ฅผ ์ฃผ๊ณ ๋ฐ๋ ๋ฐ ๊ฑธ๋ฆฌ๋ ์๊ฐ์ ์ฌ์ฉ์์๊ฒ ์๋น์ค๋ฅผ
์ ๊ณตํ๋๋ฐ ์ง์ฐ์ ๋ฐ์์ํจ๋ค. </p>
- source_sentence: ์ด์ํ ์์์๋ ๋ฌผ๊ณผ ๋ฐ์ํ์ฌ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ์ ๋ถํด๋ฅผ ์ผ๊ธฐํ๋ ์์ธ์ด ๋ญ์ผ?
sentences:
- <h1>์ ์ฝ</h1><p>๋ณธ ๋
ผ๋ฌธ์์๋ Electrocorticogram(ECoG) ์ ํธ๋ฅผ ์ด์ฉํ์ฌ ์๊ณผ ํ๊ฟ์น์ ์์ง์์ ์ถ๋ก ํ๋ ๋ฐฉ๋ฒ์
์ ์ํ๋ค. ํ์๋ก๋ถํฐ ๋ค์์ ์ฑ๋์ ์ด์ฉํ์ฌ ํ๋ฉด ๊ทผ์ ๋ ์ ํธ์ ECoG ์ ํธ๋ฅผ ๋์์ ์ทจ๋ํ์๋ค. ์ถ๋ก ํ๋ ๋์์ ์์ ์ฅ์๋ค ํด๋ ๋์๊ณผ
ํ๊ฟ์น๋ฅผ ์์ผ๋ก ๊ตฝํ๋ ๋์์ด๋ฉฐ, ์ธ๋ถ ์๊ทน์ ์ํด ๋์์ ์ํํ๋ ๋ฐฉ๋ฒ ๋์ ํ์์ ์์ ์์ง์ ์ํด ๋์์ ์ํํ๊ฒ ํ์๋ค. ํ๋ฉด ๊ทผ์ ๋
์ ํธ๋ฅผ ์ด์ฉํ์ฌ ๋์์ ์ํํ ์ด๋ ์์ ์ ์ฐพ๊ณ , ECOG ์ ํธ๋ฅผ ์ด์ฉํ์ฌ ๋์์ ์ถ๋ก ํ๋ค. ๊ฐ ๋์์ ํน์ง์ ์ถ์ถํ๊ธฐ ์ํ์ฌ ECoG ์ ํธ๋ฅผ
์ ์ฒด ๋์ญ์ ํฌํจํ \( \delta, \theta, a \), \( \beta, \mathrm{y} \) ์ด 6 ๊ฐ์ ๋์ญ์ ๋๋์ด ์ ๋ณด
์ํธ๋กํผ๋ฅผ ๊ตฌํ๊ณ , ์ต๋์ฐ๋์ถ์ ๋ฒ์ ์ฌ์ฉํ์ฌ ๋์์ ์ถ์ ํ์๋ค. ์คํ ๊ฒฐ๊ณผ ๊ฐ๋ง๋์ญ์ ECOG๋ฅผ ์ฌ์ฉํ ๊ฒฝ์ฐ ๋ค๋ฅธ ๋์ญ์ ์ฌ์ฉํ ๋ ๋ณด๋ค ๋์
ํ๊ท \( 74 \% \) ์ ์ฑ๋ฅ์ ๋ณด์ด๋ฉฐ, ๋ค๋ฅธ ๋์ญ๋ณด๋ค ๊ฐ๋ง ๋์ญ์์ ๋์ ์ถ์ ์ฑ๊ณต๋ฅ ์ ๋ณด์๋ค. ๋ํ ์ด๋ ์์ ์ ๊ธฐ์ค์ผ๋ก 3 ๊ฐ์
์๊ฐ ๊ตฌ๊ฐ์ผ๋ก ๋๋์ด ์ค๋น์ ์๋ฅผ ํฌํจํ๋ 'before' ๊ตฌ๊ฐ๊ณผ 'onset' ๊ตฌ๊ฐ์ ๋น๊ตํ์๋ค. 'before' ๊ตฌ๊ฐ๊ณผ 'onset' ๊ตฌ๊ฐ์์
์ถ์ ์ฑ๊ณต๋ฅ ์ ๊ฐ๊ฐ \( 66 \% \), \( 65 \% \) ๋ก ์ค๋น์ ์๋ฅผ ์ด์ฉํ ์ ์๋ค๋ ๊ฒ์ ์ ์ ์์๋ค. </p>
- "ํจ์จ์ด ๋๊ณ ๊ด์์ ์ฑ์ด ์ฐ์ํ ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง ์์ฌ/์์ ๊ธฐ์ ๊ฐ๋ฐ - ๊ณ ํจ์จ(21.2%)๊ณผ ๊ณ ์์ ์ฑ(1,000์๊ฐ ์ ์ง)์ ๋ชจ๋\
\ ๋ง์กฑํ๋ ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง์ฉ ํต์ฌ ์์ฌ ๋ฐ ์ ๋น์ฉ ์ ์กฐ ๊ธฐ์ ๊ฐ๋ฐ-\nโก ์ด๋ฒ ์ฐ๊ตฌ์์๋ ์ด์ ์ฐ๊ตฌ์ฑ๊ณผ(๊ตฌ์กฐ, ๊ณต์ , ์ ์กฐ์ฑ ๊ธฐ์ )๋ฅผ\
\ ๊ธฐ๋ฐ*์ผ๋ก ์ด์ข
์ ํฉ** ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง์ ๊ณ ํจ์จํ(21.2%)์ ๋์ ๊ด์์ ์ฑ(์์ธ์ ํฌํจํ ๊ด์กฐ์ฌ์์ 1,000์๊ฐ ์ด์ ์์ ํ\
\ ํจ์จ ์ ์ง)์ ๋ชจ๋ ๋ง์กฑํ๋ ๊ด์ ๊ทน ์์ฌ๋ฅผ ์ ์จ(๊ธฐ์กด 900 โ์ด์ ๊ณ ์จ โ 200 โ์ดํ) ์์ ํฉ์ฑํ๋ ๋ฐฉ๋ฒ์ ๊ฐ๋ฐํ์๋ค. *ใ ์ฐ๊ตฌ์ง\
\ ์ด์ ์ฐ๊ตฌ์ฑ๊ณผ ใ\nใป๋ฌด-์ ๊ธฐ ํ์ด๋ธ๋ฆฌ๋ ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง ํ๋ซํผ ๊ตฌ์กฐ ๊ธฐ์ ๊ฐ๋ฐ (Nature Photonics 2013.5) \n\
ใป๋งค์ฐ ๊ท ์ผํ๊ณ ์น๋ฐํ ํ๋ก๋ธ์ค์นด์ดํธ ๋ฐ๋ง ์ ์กฐ ์ ๊ท ์ฉ์ก ๊ณต์ ๊ธฐ์ ๊ฐ๋ฐ (Nature Materials 2014.7) \nใป๊ณ ํจ์จ์ ์ํ\
\ ํ๋ก๋ธ์ค์นด์ดํธ ๊ฒฐ์ ์ ์์ ํ ์ ์กฐ์ฑ ๊ธฐ์ ๊ฐ๋ฐ (Nature 2015.1) \nใป๊ณ ํ์ง ํ๋ก๋ธ์ค์นด์ดํธ ๋ฐ๋ง ํ์ฑ์ ์ํ ์ ๊ท ๊ณต์ ๊ธฐ์ \
\ ๊ฐ๋ฐ (Science 2015.6) ๋ฑ\n** ์ด์ข
์ ํฉ : ๊ฐ์ ์์ฌ๊ฐ์ ์ ํฉ์ธ ๋์ข
์ ํฉ๊ณผ ๋ฌ๋ฆฌ ๋ค๋ฅธ ์ข
๋ฅ์ ์์ฌ๊ฐ์ ์ ํฉ์ ์๋ฏธ, ํ๋ก๋ธ์ค์นด์ดํธ๋\
\ ๋ฌด๊ธฐ๋ฌผ, ์ ๊ธฐ๋ฌผ, ๋ฌด/์ ๊ธฐ ํผ์ฑ๋ฌผ ๊ฐ์ ์ด์ข
์ ํฉ์ ์ด๋ฃธ.\nใ
๋ ๋์๊ฐ์ ์ฐ์์ ์ด๋ฉฐ ๋๋ ์์ฐ ๊ณต์ ์ด ๊ฐ๋ฅํโํซ-ํ๋ ์ฑ (hot-pressing)\
\ ๊ณต๋ฒ*โ์ ์๋กญ๊ฒ ์ ์ํ์ฌ, ๊ณ ํจ์จ / ๊ณ ์์ ์ฑ / ์ ๋น์ฉ์ ๋ฐฉ๋ฒ์ผ๋ก ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง๋ฅผ ์ ์กฐํ๋ ์๋ก์ด ํ์์ ์ง์ ์กฐ ๋ฐฉ๋ฒ๋ก ์ ์ ์ํ์๋ค.\
\ * ํซ-ํ๋ ์ฑ ๊ณต๋ฒ : ์จ๋์ ์๋ ฅ์ ๊ฐํ์ฌ ๋ ๋ฌผ์ฒด๋ฅผ ๋จ๋จํ ์ ์ฐฉ ์ํค๋ ๋ฐฉ๋ฒ"
- <h1>2. ํ๊ฒฝ์ ์์ธ์ ์ํ ํ๋ก๋ธ์นด์ดํธ ์์ฌ ๋ถ์์ ์ฑ</h1><h2>2.1. ์๋ถ์ ์ํ ์์ ์ฑ ์ํฅ</h2><p>์ ๊ธฐ ํ๋ก๋ธ์ค์นด์ดํธ์ธ
\( \mathrm{MAPbI}_{3} \) ์ \(\mathrm{MA}^{+}\)์ \(\mathrm{I}^{-}\)๋ ์ฝํ ๊ฒฐํฉ์ ํ๊ณ
์์ด ์ด์ํ ์ (dihydrate phase)์์๋ ๋ฌผ๊ณผ ๋ฐ์ํ์ฌ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ์ ๋ถํด๋ฅผ ์ผ๊ธฐํ๋ค. ์ด๋ \( \mathrm{MAPbI}_{3}
\) ์ ๋ฌผ์ด ๋ฐ์ํ์ฌ ์์ฑ๋ ์ด์ํ ํํฉ๋ฌผ (\( \mathrm{MAPbI}_{3} \cdot \mathrm{H}_{2} \mathrm{O}
\)) ์ด \( \mathrm{CH}_{3} \mathrm{NH}_{2}\), \(\mathrm{HI}\), \(\mathrm{PbI}_{2}
\) ๋ก ๋ถํด๋๊ณ , ์์ฑ๋ \( \mathrm{CH}_{3} \mathrm{NH}_{2} \) ์ \( \mathrm{HI} \) ๋ ๋ฌผ์
๋
น์ ๊ฒฐ๊ตญ ๊ณ ์์ \( \mathrm{PbI}_{2} \) ๋ง ๋จ๋ ๊ฒ์ผ๋ก ์ค๋ช
ํ ์ ์๋ค. </p><p>๋ฌด๊ธฐ ํ๋ก๋ธ์ค์นด์ดํธ๋ ์๋ถ์ ์ํ
์ฌ๊ฒฐ์ ํ ๋ฐ ํ๋ฉด ๊ฒฐํฉ ๋ฆฌ๊ฐ๋์ ์์ค๊ณผ ๋ถํด๋ก ์ธํด ํ๋ฉด์ ํธ๋ฉ ์ค์๊ฐ ์ฆ๊ฐํ์ฌ ๋ฐ๊ดํจ์จ์ด ๊ฐ์ํ๋ค. ๋ํ ํ๋ก๋ธ ์ค์นด์ดํธ ์์ฌ๋ ๋น์ด ์๋
์ํฉ์์๋ ๋ฌผ์ ์ํด ์์ฌ๊ฐ ๋ถํด๋์ด ์์ ์ฑ์ด ๊ฐ์ํ๋ค. </p><h2>2.2. ๋น์ ์ํ ์์ ์ฑ ์ํฅ</h2><p>ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๊ฐ ์ฅ์๊ฐ
๋น์ ๋
ธ์ถ๋๋ ๊ฒฝ์ฐ ๊ด-์์ฑ ์ ํ (photo-generated carrier)๊ฐ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ ํ๋ฉด์ผ๋ก ํ์ฐ๋์ด ์ด์จ์ฑ ํ๋ฉด ๋ฆฌ๊ฐ๋์
๊ฒฐํฉํ๋ค. ์ด ๊ณผ์ ์ค์ ๋ช ๊ฐ์ ๋ฆฌ๊ฐ๋๋ค์ ์ฉ๋งค์ ๋
น์, ๋ณดํธ๋์ง ์์ ๋ฉด์ ์ค์ฌ์ผ๋ก ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๋ผ๋ฆฌ ์์งํ์ฌ ๋ฐ๊ด ํจ์จ์ด ๊ฐ์ํ๋ค.
๋ํ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ์ ์์ง ๋ฐ ๋ฆฌ๊ฐ๋ ์์ค๋ก ์ธํด ํธ๋ฉ ์ค์๊ฐ ์ฆ๊ฐํ์ฌ ๊ดํ์ ํน์ฑ์ด ํ์ ํ ๊ฐ์๋๋ค. pc-LED๋ ์ค์ํ์์ ์ฅ์๊ฐ
๋น์ ๋
ธ์ถ๋๊ธฐ๋๋ฌธ์ ๋น์ ์ํ ๋ฐ๊ด ๊ฐ์ ๋ฐ ์์ฌ ์์ ์ฑ ๊ฐ์๋ ๊ณ ์ฐ์ ๋ฐ๊ด์ ํ์๋ก ํ๋ pc-LED์ ์ ์ฉ์ ๋ฌธ์ ๊ฐ ๋๋ค. </p><h2>2.3.
์ฐ์์ ์ํ ์์ ์ฑ ์ํฅ</h2><p>ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๋ ๋น์ ๋
ธ์ถ๋ ๊ฒฝ์ฐ์๋ง ์ฐ์์ ๋ฐ์ํ๋ฉฐ ํนํ ๊ด-์์ฑ ์ ํ๋ฅผ ๊ฐ์ง ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๋
์ฐ์ ๋ถ์์ ์ํฅ์ ๋ฐ๊ธฐ ์ฝ๋ค. ์ฐ์ ๋ถ์๊ฐ ๊ฒฉ์๋ก ํ์ฐ๋์ด ๊ณต๊ณต ๊ฒฐํจ (vacancy)์ ์ฑ์ฐ๊ฒ ๋๊ณ ๊ด-์์ฑ ์ ์๊ฐ ์ ๋๋์, ์ ๊ณต์ด ๊ฐ์ ์๋์
์์ฑ๋๋ค. ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ์ ์ฐ์๊ฐ ๋ฐ์ํด \( \mathrm{O}^{2-} \) ๊ฐ ์์ฑ๋์ด \( \mathrm{MAPbI}_{3} \)
๊ฐ \( \mathrm{PbI}_{2}\), \(\mathrm{H}_{2} \mathrm{O}\), \(\mathrm{I}_{2}\), \(\mathrm{CH}_{3}
\mathrm{NH}_{2} \) ๋ก ๋ถํด๋๋ค. ์ด๋ฌํ ๊ด-์ฐํ (photo-oxidation) ๊ณผ์ ์ผ๋ก ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๊ฐ ๋ถํด๋์ด ์์ ์ฑ์ด
๊ฐ์ํ๋ค. </p><h2>2.4. ์ด์ ์ํ ์์ ์ฑ ์ํฅ</h2><p>์ด์ค๋๋ถ์ (TGA) ๋ถ์์ผ๋ก ํ์ธํ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๋ ์๋ถ๊ณผ ์ฐ์๊ฐ
์์ ๋ \( \mathrm{CsPbX}_{3} \) ๋ \( 500{ }^{\circ} \mathrm{C} \),\( \mathrm{MAPbX}_{3}
\) ๋ \( 220{ }^{\circ} \mathrm{C} \) ๊น์ง ๊ตฌ์กฐ๋ฅผ ์ ์งํ ์ ์๋ค. ์ ยท ๋ฌด๊ธฐ ํ๋ก๋ธ์ค์นด์ดํธ๋ ์ด์ ์ํด ๋น๊ต์
๋์ ์์ ์ฑ์ ๊ฐ์ง๊ณ ์์ง๋ง ๊ณ ์จ์์ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๊ฐ ์๋ถ๊ณผ ์ฐ์์ ๋ฐ์ํ๋ฉด ๊ตฌ์กฐ ๋ถํด๊ฐ ๋ ๊ฐ์ํ๋์ด ์์ ์ฑ์ด ๊ธ๊ฒฉํ ๊ฐ์ํ๋ค. </p><p>๋ํ
๊ณ ์จ์์ ๋ฐ๊ด ํจ์จ์ด ๊ฐ์ํ๋๋ฐ ์ด๋ ์ด์ ์ผ๋ก ํ์ฑํ๋ ํ ๋ก๊ฒ ๊ณต๊ณต ๊ฒฐํจ์ ์ํด \(\mathrm{MAPbBr}_{3} \) ๋\( 100{
}^{\circ} \mathrm{C} \) ์ด์์ ์จ๋์์ ๋ฐ๊ด์ ๊ฑฐ์ ๋ณด์ด์ง ์์ผ๋ฉฐ \( \mathrm{CsPbBr}_{3} \) ๋ ์ฝ
\( 80 \% \) ์ ๋ฐ๊ด ์์ค์ ๋ณด์ด๋ ๊ฒ์ผ๋ก ํ์ธํ ์ ์๋ค. </p>
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co./BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co./BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co./models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-451949-bs64-science")
# Run inference
sentences = [
'์ด์ํ ์์์๋ ๋ฌผ๊ณผ ๋ฐ์ํ์ฌ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ์ ๋ถํด๋ฅผ ์ผ๊ธฐํ๋ ์์ธ์ด ๋ญ์ผ?',
'<h1>2. ํ๊ฒฝ์ ์์ธ์ ์ํ ํ๋ก๋ธ์นด์ดํธ ์์ฌ ๋ถ์์ ์ฑ</h1><h2>2.1. ์๋ถ์ ์ํ ์์ ์ฑ ์ํฅ</h2><p>์ ๊ธฐ ํ๋ก๋ธ์ค์นด์ดํธ์ธ \\( \\mathrm{MAPbI}_{3} \\) ์ \\(\\mathrm{MA}^{+}\\)์ \\(\\mathrm{I}^{-}\\)๋ ์ฝํ ๊ฒฐํฉ์ ํ๊ณ ์์ด ์ด์ํ ์ (dihydrate phase)์์๋ ๋ฌผ๊ณผ ๋ฐ์ํ์ฌ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ์ ๋ถํด๋ฅผ ์ผ๊ธฐํ๋ค. ์ด๋ \\( \\mathrm{MAPbI}_{3} \\) ์ ๋ฌผ์ด ๋ฐ์ํ์ฌ ์์ฑ๋ ์ด์ํ ํํฉ๋ฌผ (\\( \\mathrm{MAPbI}_{3} \\cdot \\mathrm{H}_{2} \\mathrm{O} \\)) ์ด \\( \\mathrm{CH}_{3} \\mathrm{NH}_{2}\\), \\(\\mathrm{HI}\\), \\(\\mathrm{PbI}_{2} \\) ๋ก ๋ถํด๋๊ณ , ์์ฑ๋ \\( \\mathrm{CH}_{3} \\mathrm{NH}_{2} \\) ์ \\( \\mathrm{HI} \\) ๋ ๋ฌผ์ ๋
น์ ๊ฒฐ๊ตญ ๊ณ ์์ \\( \\mathrm{PbI}_{2} \\) ๋ง ๋จ๋ ๊ฒ์ผ๋ก ์ค๋ช
ํ ์ ์๋ค. </p><p>๋ฌด๊ธฐ ํ๋ก๋ธ์ค์นด์ดํธ๋ ์๋ถ์ ์ํ ์ฌ๊ฒฐ์ ํ ๋ฐ ํ๋ฉด ๊ฒฐํฉ ๋ฆฌ๊ฐ๋์ ์์ค๊ณผ ๋ถํด๋ก ์ธํด ํ๋ฉด์ ํธ๋ฉ ์ค์๊ฐ ์ฆ๊ฐํ์ฌ ๋ฐ๊ดํจ์จ์ด ๊ฐ์ํ๋ค. ๋ํ ํ๋ก๋ธ ์ค์นด์ดํธ ์์ฌ๋ ๋น์ด ์๋ ์ํฉ์์๋ ๋ฌผ์ ์ํด ์์ฌ๊ฐ ๋ถํด๋์ด ์์ ์ฑ์ด ๊ฐ์ํ๋ค. </p><h2>2.2. ๋น์ ์ํ ์์ ์ฑ ์ํฅ</h2><p>ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๊ฐ ์ฅ์๊ฐ ๋น์ ๋
ธ์ถ๋๋ ๊ฒฝ์ฐ ๊ด-์์ฑ ์ ํ (photo-generated carrier)๊ฐ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ ํ๋ฉด์ผ๋ก ํ์ฐ๋์ด ์ด์จ์ฑ ํ๋ฉด ๋ฆฌ๊ฐ๋์ ๊ฒฐํฉํ๋ค. ์ด ๊ณผ์ ์ค์ ๋ช ๊ฐ์ ๋ฆฌ๊ฐ๋๋ค์ ์ฉ๋งค์ ๋
น์, ๋ณดํธ๋์ง ์์ ๋ฉด์ ์ค์ฌ์ผ๋ก ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๋ผ๋ฆฌ ์์งํ์ฌ ๋ฐ๊ด ํจ์จ์ด ๊ฐ์ํ๋ค. ๋ํ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ์ ์์ง ๋ฐ ๋ฆฌ๊ฐ๋ ์์ค๋ก ์ธํด ํธ๋ฉ ์ค์๊ฐ ์ฆ๊ฐํ์ฌ ๊ดํ์ ํน์ฑ์ด ํ์ ํ ๊ฐ์๋๋ค. pc-LED๋ ์ค์ํ์์ ์ฅ์๊ฐ ๋น์ ๋
ธ์ถ๋๊ธฐ๋๋ฌธ์ ๋น์ ์ํ ๋ฐ๊ด ๊ฐ์ ๋ฐ ์์ฌ ์์ ์ฑ ๊ฐ์๋ ๊ณ ์ฐ์ ๋ฐ๊ด์ ํ์๋ก ํ๋ pc-LED์ ์ ์ฉ์ ๋ฌธ์ ๊ฐ ๋๋ค. </p><h2>2.3. ์ฐ์์ ์ํ ์์ ์ฑ ์ํฅ</h2><p>ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๋ ๋น์ ๋
ธ์ถ๋ ๊ฒฝ์ฐ์๋ง ์ฐ์์ ๋ฐ์ํ๋ฉฐ ํนํ ๊ด-์์ฑ ์ ํ๋ฅผ ๊ฐ์ง ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๋ ์ฐ์ ๋ถ์์ ์ํฅ์ ๋ฐ๊ธฐ ์ฝ๋ค. ์ฐ์ ๋ถ์๊ฐ ๊ฒฉ์๋ก ํ์ฐ๋์ด ๊ณต๊ณต ๊ฒฐํจ (vacancy)์ ์ฑ์ฐ๊ฒ ๋๊ณ ๊ด-์์ฑ ์ ์๊ฐ ์ ๋๋์, ์ ๊ณต์ด ๊ฐ์ ์๋์ ์์ฑ๋๋ค. ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ์ ์ฐ์๊ฐ ๋ฐ์ํด \\( \\mathrm{O}^{2-} \\) ๊ฐ ์์ฑ๋์ด \\( \\mathrm{MAPbI}_{3} \\) ๊ฐ \\( \\mathrm{PbI}_{2}\\), \\(\\mathrm{H}_{2} \\mathrm{O}\\), \\(\\mathrm{I}_{2}\\), \\(\\mathrm{CH}_{3} \\mathrm{NH}_{2} \\) ๋ก ๋ถํด๋๋ค. ์ด๋ฌํ ๊ด-์ฐํ (photo-oxidation) ๊ณผ์ ์ผ๋ก ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๊ฐ ๋ถํด๋์ด ์์ ์ฑ์ด ๊ฐ์ํ๋ค. </p><h2>2.4. ์ด์ ์ํ ์์ ์ฑ ์ํฅ</h2><p>์ด์ค๋๋ถ์ (TGA) ๋ถ์์ผ๋ก ํ์ธํ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๋ ์๋ถ๊ณผ ์ฐ์๊ฐ ์์ ๋ \\( \\mathrm{CsPbX}_{3} \\) ๋ \\( 500{ }^{\\circ} \\mathrm{C} \\),\\( \\mathrm{MAPbX}_{3} \\) ๋ \\( 220{ }^{\\circ} \\mathrm{C} \\) ๊น์ง ๊ตฌ์กฐ๋ฅผ ์ ์งํ ์ ์๋ค. ์ ยท ๋ฌด๊ธฐ ํ๋ก๋ธ์ค์นด์ดํธ๋ ์ด์ ์ํด ๋น๊ต์ ๋์ ์์ ์ฑ์ ๊ฐ์ง๊ณ ์์ง๋ง ๊ณ ์จ์์ ํ๋ก๋ธ์ค์นด์ดํธ ์์ฌ๊ฐ ์๋ถ๊ณผ ์ฐ์์ ๋ฐ์ํ๋ฉด ๊ตฌ์กฐ ๋ถํด๊ฐ ๋ ๊ฐ์ํ๋์ด ์์ ์ฑ์ด ๊ธ๊ฒฉํ ๊ฐ์ํ๋ค. </p><p>๋ํ ๊ณ ์จ์์ ๋ฐ๊ด ํจ์จ์ด ๊ฐ์ํ๋๋ฐ ์ด๋ ์ด์ ์ผ๋ก ํ์ฑํ๋ ํ ๋ก๊ฒ ๊ณต๊ณต ๊ฒฐํจ์ ์ํด \\(\\mathrm{MAPbBr}_{3} \\) ๋\\( 100{ }^{\\circ} \\mathrm{C} \\) ์ด์์ ์จ๋์์ ๋ฐ๊ด์ ๊ฑฐ์ ๋ณด์ด์ง ์์ผ๋ฉฐ \\( \\mathrm{CsPbBr}_{3} \\) ๋ ์ฝ \\( 80 \\% \\) ์ ๋ฐ๊ด ์์ค์ ๋ณด์ด๋ ๊ฒ์ผ๋ก ํ์ธํ ์ ์๋ค. </p>',
'ํจ์จ์ด ๋๊ณ ๊ด์์ ์ฑ์ด ์ฐ์ํ ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง ์์ฌ/์์ ๊ธฐ์ ๊ฐ๋ฐ - ๊ณ ํจ์จ(21.2%)๊ณผ ๊ณ ์์ ์ฑ(1,000์๊ฐ ์ ์ง)์ ๋ชจ๋ ๋ง์กฑํ๋ ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง์ฉ ํต์ฌ ์์ฌ ๋ฐ ์ ๋น์ฉ ์ ์กฐ ๊ธฐ์ ๊ฐ๋ฐ-\nโก ์ด๋ฒ ์ฐ๊ตฌ์์๋ ์ด์ ์ฐ๊ตฌ์ฑ๊ณผ(๊ตฌ์กฐ, ๊ณต์ , ์ ์กฐ์ฑ ๊ธฐ์ )๋ฅผ ๊ธฐ๋ฐ*์ผ๋ก ์ด์ข
์ ํฉ** ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง์ ๊ณ ํจ์จํ(21.2%)์ ๋์ ๊ด์์ ์ฑ(์์ธ์ ํฌํจํ ๊ด์กฐ์ฌ์์ 1,000์๊ฐ ์ด์ ์์ ํ ํจ์จ ์ ์ง)์ ๋ชจ๋ ๋ง์กฑํ๋ ๊ด์ ๊ทน ์์ฌ๋ฅผ ์ ์จ(๊ธฐ์กด 900 โ์ด์ ๊ณ ์จ โ 200 โ์ดํ) ์์ ํฉ์ฑํ๋ ๋ฐฉ๋ฒ์ ๊ฐ๋ฐํ์๋ค. *ใ ์ฐ๊ตฌ์ง ์ด์ ์ฐ๊ตฌ์ฑ๊ณผ ใ\nใป๋ฌด-์ ๊ธฐ ํ์ด๋ธ๋ฆฌ๋ ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง ํ๋ซํผ ๊ตฌ์กฐ ๊ธฐ์ ๊ฐ๋ฐ (Nature Photonics 2013.5) \nใป๋งค์ฐ ๊ท ์ผํ๊ณ ์น๋ฐํ ํ๋ก๋ธ์ค์นด์ดํธ ๋ฐ๋ง ์ ์กฐ ์ ๊ท ์ฉ์ก ๊ณต์ ๊ธฐ์ ๊ฐ๋ฐ (Nature Materials 2014.7) \nใป๊ณ ํจ์จ์ ์ํ ํ๋ก๋ธ์ค์นด์ดํธ ๊ฒฐ์ ์ ์์ ํ ์ ์กฐ์ฑ ๊ธฐ์ ๊ฐ๋ฐ (Nature 2015.1) \nใป๊ณ ํ์ง ํ๋ก๋ธ์ค์นด์ดํธ ๋ฐ๋ง ํ์ฑ์ ์ํ ์ ๊ท ๊ณต์ ๊ธฐ์ ๊ฐ๋ฐ (Science 2015.6) ๋ฑ\n** ์ด์ข
์ ํฉ : ๊ฐ์ ์์ฌ๊ฐ์ ์ ํฉ์ธ ๋์ข
์ ํฉ๊ณผ ๋ฌ๋ฆฌ ๋ค๋ฅธ ์ข
๋ฅ์ ์์ฌ๊ฐ์ ์ ํฉ์ ์๋ฏธ, ํ๋ก๋ธ์ค์นด์ดํธ๋ ๋ฌด๊ธฐ๋ฌผ, ์ ๊ธฐ๋ฌผ, ๋ฌด/์ ๊ธฐ ํผ์ฑ๋ฌผ ๊ฐ์ ์ด์ข
์ ํฉ์ ์ด๋ฃธ.\nใ
๋ ๋์๊ฐ์ ์ฐ์์ ์ด๋ฉฐ ๋๋ ์์ฐ ๊ณต์ ์ด ๊ฐ๋ฅํโํซ-ํ๋ ์ฑ (hot-pressing) ๊ณต๋ฒ*โ์ ์๋กญ๊ฒ ์ ์ํ์ฌ, ๊ณ ํจ์จ / ๊ณ ์์ ์ฑ / ์ ๋น์ฉ์ ๋ฐฉ๋ฒ์ผ๋ก ํ๋ก๋ธ์ค์นด์ดํธ ํ์์ ์ง๋ฅผ ์ ์กฐํ๋ ์๋ก์ด ํ์์ ์ง์ ์กฐ ๋ฐฉ๋ฒ๋ก ์ ์ ์ํ์๋ค. * ํซ-ํ๋ ์ฑ ๊ณต๋ฒ : ์จ๋์ ์๋ ฅ์ ๊ฐํ์ฌ ๋ ๋ฌผ์ฒด๋ฅผ ๋จ๋จํ ์ ์ฐฉ ์ํค๋ ๋ฐฉ๋ฒ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.05
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0312 | 1 | 0.941 |
| 0.0625 | 2 | 0.9909 |
| 0.0938 | 3 | 0.7258 |
| 0.125 | 4 | 0.538 |
| 0.1562 | 5 | 0.567 |
| 0.1875 | 6 | 0.4329 |
| 0.2188 | 7 | 0.4238 |
| 0.25 | 8 | 0.3989 |
| 0.2812 | 9 | 0.3825 |
| 0.3125 | 10 | 0.392 |
| 0.3438 | 11 | 0.3822 |
| 0.375 | 12 | 0.3271 |
| 0.4062 | 13 | 0.3284 |
| 0.4375 | 14 | 0.3468 |
| 0.4688 | 15 | 0.3098 |
| 0.5 | 16 | 0.3332 |
| 0.5312 | 17 | 0.2871 |
| 0.5625 | 18 | 0.3132 |
| 0.5938 | 19 | 0.3172 |
| 0.625 | 20 | 0.3133 |
| 0.6562 | 21 | 0.3134 |
| 0.6875 | 22 | 0.2968 |
| 0.7188 | 23 | 0.3227 |
| 0.75 | 24 | 0.2977 |
| 0.7812 | 25 | 0.3022 |
| 0.8125 | 26 | 0.2556 |
| 0.8438 | 27 | 0.3152 |
| 0.875 | 28 | 0.2597 |
| 0.9062 | 29 | 0.3088 |
| 0.9375 | 30 | 0.2702 |
| 0.9688 | 31 | 0.3415 |
| 1.0 | 32 | 0.2765 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
nhung01/eff20207-9dd3-4910-87df-54f00afa70d0 | nhung01 | "2025-01-28T06:12:02Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-28T05:53:52Z" | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eff20207-9dd3-4910-87df-54f00afa70d0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 94ffa3eaa02f0f89_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/94ffa3eaa02f0f89_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/eff20207-9dd3-4910-87df-54f00afa70d0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/94ffa3eaa02f0f89_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d3379fa4-7a55-407e-8f15-7b0aefbda53d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d3379fa4-7a55-407e-8f15-7b0aefbda53d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# eff20207-9dd3-4910-87df-54f00afa70d0
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co./NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9093
## Model description
More information needed
## Intended uses & limitations
More information needed
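Pending fuller documentation, the sketch below shows one plausible way to load this LoRA adapter on top of its base model with PEFT. The 8-bit setting mirrors the `load_in_8bit: true` flag in the axolotl config above; this is an illustration, not an author-verified recipe.
```python
# Minimal sketch (assumption: standard PEFT adapter loading; not author-verified).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "NousResearch/Hermes-3-Llama-3.1-8B"
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors the training config
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nhung01/eff20207-9dd3-4910-87df-54f00afa70d0")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```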
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.9423 | 0.2904 | 200 | 4.9093 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hkivancoral/smids_5x_deit_tiny_adamax_00001_fold2 | hkivancoral | "2023-12-18T00:34:08Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-14T14:52:53Z" | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_tiny_adamax_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8752079866888519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co./facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0539
- Accuracy: 0.8752
## Model description
More information needed
## Intended uses & limitations
More information needed
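In the absence of author guidance, a minimal inference sketch (assuming the standard image-classification pipeline; the image path is a placeholder):
```python
# Minimal sketch (assumption: standard transformers pipeline usage).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_5x_deit_tiny_adamax_00001_fold2",
)
print(classifier("example.jpg"))  # placeholder path to a local image
```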
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3245 | 1.0 | 375 | 0.3357 | 0.8586 |
| 0.2435 | 2.0 | 750 | 0.3012 | 0.8802 |
| 0.1837 | 3.0 | 1125 | 0.3092 | 0.8802 |
| 0.0922 | 4.0 | 1500 | 0.3362 | 0.8719 |
| 0.064 | 5.0 | 1875 | 0.4063 | 0.8619 |
| 0.0948 | 6.0 | 2250 | 0.4674 | 0.8619 |
| 0.0452 | 7.0 | 2625 | 0.5334 | 0.8602 |
| 0.0373 | 8.0 | 3000 | 0.6077 | 0.8619 |
| 0.0111 | 9.0 | 3375 | 0.6364 | 0.8769 |
| 0.0018 | 10.0 | 3750 | 0.7083 | 0.8636 |
| 0.0038 | 11.0 | 4125 | 0.7404 | 0.8752 |
| 0.0175 | 12.0 | 4500 | 0.8300 | 0.8719 |
| 0.0012 | 13.0 | 4875 | 0.8986 | 0.8652 |
| 0.0087 | 14.0 | 5250 | 0.8825 | 0.8686 |
| 0.004 | 15.0 | 5625 | 0.8822 | 0.8785 |
| 0.0001 | 16.0 | 6000 | 0.9237 | 0.8735 |
| 0.0162 | 17.0 | 6375 | 0.9830 | 0.8619 |
| 0.0 | 18.0 | 6750 | 1.0120 | 0.8702 |
| 0.0 | 19.0 | 7125 | 1.0192 | 0.8719 |
| 0.0001 | 20.0 | 7500 | 0.9781 | 0.8735 |
| 0.0 | 21.0 | 7875 | 1.0188 | 0.8702 |
| 0.0 | 22.0 | 8250 | 0.9776 | 0.8735 |
| 0.0 | 23.0 | 8625 | 1.0494 | 0.8702 |
| 0.0 | 24.0 | 9000 | 0.9531 | 0.8752 |
| 0.0 | 25.0 | 9375 | 1.0293 | 0.8719 |
| 0.0 | 26.0 | 9750 | 1.0427 | 0.8652 |
| 0.0 | 27.0 | 10125 | 1.0483 | 0.8719 |
| 0.0 | 28.0 | 10500 | 1.0202 | 0.8735 |
| 0.0 | 29.0 | 10875 | 1.0779 | 0.8686 |
| 0.0 | 30.0 | 11250 | 1.0065 | 0.8719 |
| 0.0018 | 31.0 | 11625 | 1.0762 | 0.8702 |
| 0.0202 | 32.0 | 12000 | 1.0874 | 0.8669 |
| 0.0024 | 33.0 | 12375 | 1.0366 | 0.8735 |
| 0.0 | 34.0 | 12750 | 1.1165 | 0.8686 |
| 0.0 | 35.0 | 13125 | 1.0244 | 0.8752 |
| 0.0 | 36.0 | 13500 | 1.1014 | 0.8719 |
| 0.0 | 37.0 | 13875 | 1.0995 | 0.8702 |
| 0.0 | 38.0 | 14250 | 1.1070 | 0.8719 |
| 0.0 | 39.0 | 14625 | 1.0209 | 0.8769 |
| 0.0048 | 40.0 | 15000 | 1.0540 | 0.8752 |
| 0.0 | 41.0 | 15375 | 1.0624 | 0.8752 |
| 0.0015 | 42.0 | 15750 | 1.0637 | 0.8752 |
| 0.0013 | 43.0 | 16125 | 1.0536 | 0.8752 |
| 0.0013 | 44.0 | 16500 | 1.0479 | 0.8752 |
| 0.0013 | 45.0 | 16875 | 1.0540 | 0.8752 |
| 0.0 | 46.0 | 17250 | 1.0694 | 0.8752 |
| 0.0016 | 47.0 | 17625 | 1.0601 | 0.8752 |
| 0.0 | 48.0 | 18000 | 1.0596 | 0.8752 |
| 0.0013 | 49.0 | 18375 | 1.0574 | 0.8752 |
| 0.0012 | 50.0 | 18750 | 1.0539 | 0.8752 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
sultan/BioM-ALBERT-xxlarge | sultan | "2023-11-04T23:06:35Z" | 12 | 2 | transformers | [
"transformers",
"pytorch",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.
# Model Description
This model was pre-trained on PubMed abstracts only, with a biomedical-domain vocabulary, for 264K steps with a batch size of 8192 on a TPUv3-512 unit. To help researchers with limited resources fine-tune larger models, we created an example with PyTorch XLA. PyTorch XLA (https://github.com/pytorch/xla) is a library that allows you to use PyTorch on TPU units, which are provided for free by Google Colab and Kaggle. Follow this example to work with PyTorch/XLA: [Link](https://github.com/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints. We also updated this repo with a couple of examples on how to fine-tune LMs on text classification and question answering tasks such as ChemProt, SQuAD, and BioASQ.
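As a quick sanity check, a minimal masked-LM sketch (assuming the standard `fill-mask` pipeline; the example sentence is illustrative only):
```python
# Minimal sketch (assumption: standard fill-mask pipeline; the sentence is illustrative).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="sultan/BioM-ALBERT-xxlarge")
print(unmasker("The patient was diagnosed with [MASK] diabetes."))
```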
# Colab Notebook Examples
BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)
BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)
BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)
Text Classification Task With HuggingFace Transformers and PyTorchXLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
Reproducing our BLURB results with JAX [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/BLURB_LeaderBoard_with_TPU_VM.ipynb)
Finetunning BioM-Transformers with Jax/Flax on TPUv3-8 with free Kaggle resource [![Open In Colab][COLAB]](https://www.kaggle.com/code/sultanalrowili/biom-transoformers-with-flax-on-tpu-with-kaggle)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# Acknowledgment
We would like to acknowledge the support we have from Tensorflow Research Cloud (TFRC) team to grant us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | "2023-10-26T10:06:14Z" | 5 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-23T15:48:43Z" | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
    den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
    , das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmBERT 64k as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
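A minimal tagging sketch (assuming Flair's standard hub loading; the example sentence is illustrative):
```python
# Minimal sketch (assumption: standard Flair hub loading; example sentence is illustrative).
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)
sentence = Sentence("Der Aias des Sophokles spielt vor Troja .")
tagger.predict(sentence)
print(sentence)  # prints the sentence with predicted NE spans
```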
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------|
| `bs4-e10-lr3e-05` | [0.8806][1] | [0.8988][2] | [0.8967][3] | [0.8924][4] | [0.8994][5] | 0.8936 ยฑ 0.0078 |
| `bs8-e10-lr5e-05` | [0.8951][6] | [0.8972][7] | [0.8933][8] | [**0.8892**][9] | [0.8902][10] | 0.893 ยฑ 0.0033 |
| `bs4-e10-lr5e-05` | [0.8789][11] | [0.891][12] | [0.9012][13] | [0.891][14] | [0.8873][15] | 0.8899 ยฑ 0.008 |
| `bs8-e10-lr3e-05` | [0.88][16] | [0.8889][17] | [0.8764][18] | [0.897][19] | [0.8948][20] | 0.8874 ยฑ 0.009 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa Mรคrz](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion รano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
tensorblock/OLMo-1B-hf-GGUF | tensorblock | "2024-11-16T00:59:36Z" | 62 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:allenai/dolma",
"base_model:allenai/OLMo-1B-hf",
"base_model:quantized:allenai/OLMo-1B-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-10T18:30:19Z" | ---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
tags:
- TensorBlock
- GGUF
base_model: allenai/OLMo-1B-hf
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## allenai/OLMo-1B-hf - GGUF
This repo contains GGUF format model files for [allenai/OLMo-1B-hf](https://huggingface.co./allenai/OLMo-1B-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
        Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OLMo-1B-hf-Q2_K.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q2_K.gguf) | Q2_K | 0.447 GB | smallest, significant quality loss - not recommended for most purposes |
| [OLMo-1B-hf-Q3_K_S.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q3_K_S.gguf) | Q3_K_S | 0.510 GB | very small, high quality loss |
| [OLMo-1B-hf-Q3_K_M.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q3_K_M.gguf) | Q3_K_M | 0.563 GB | very small, high quality loss |
| [OLMo-1B-hf-Q3_K_L.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q3_K_L.gguf) | Q3_K_L | 0.607 GB | small, substantial quality loss |
| [OLMo-1B-hf-Q4_0.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q4_0.gguf) | Q4_0 | 0.643 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OLMo-1B-hf-Q4_K_S.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q4_K_S.gguf) | Q4_K_S | 0.649 GB | small, greater quality loss |
| [OLMo-1B-hf-Q4_K_M.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q4_K_M.gguf) | Q4_K_M | 0.683 GB | medium, balanced quality - recommended |
| [OLMo-1B-hf-Q5_0.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q5_0.gguf) | Q5_0 | 0.768 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OLMo-1B-hf-Q5_K_S.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q5_K_S.gguf) | Q5_K_S | 0.768 GB | large, low quality loss - recommended |
| [OLMo-1B-hf-Q5_K_M.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q5_K_M.gguf) | Q5_K_M | 0.789 GB | large, very low quality loss - recommended |
| [OLMo-1B-hf-Q6_K.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q6_K.gguf) | Q6_K | 0.901 GB | very large, extremely low quality loss |
| [OLMo-1B-hf-Q8_0.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q8_0.gguf) | Q8_0 | 1.166 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OLMo-1B-hf-GGUF --include "OLMo-1B-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OLMo-1B-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
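After downloading, a minimal inference sketch (an assumption using the `llama-cpp-python` bindings rather than the raw llama.cpp CLI; the file path is a placeholder):
```python
# Minimal sketch (assumption: llama-cpp-python bindings; path is a placeholder).
from llama_cpp import Llama

llm = Llama(model_path="MY_LOCAL_DIR/OLMo-1B-hf-Q4_K_M.gguf", n_ctx=2048)
out = llm("Once upon a time", max_tokens=128)
print(out["choices"][0]["text"])
```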
|
huggingtweets/discarddiscord | huggingtweets | "2021-05-22T01:45:29Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/discarddiscord/1614246710317/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1029964613029437440/3_fRmZuH_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">luna 🤖 AI Bot </div>
<div style="font-size: 15px">@discarddiscord bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@discarddiscord's tweets](https://twitter.com/discarddiscord).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1495 |
| Retweets | 289 |
| Short tweets | 213 |
| Tweets kept | 993 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tvxkurq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co./gpt2) which is fine-tuned on @discarddiscord's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g2xt22m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g2xt22m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/discarddiscord')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co./gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tensorblock/Reasoning-0.5b-GGUF | tensorblock | "2024-11-26T03:35:01Z" | 83 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"reasoning",
"TensorBlock",
"GGUF",
"en",
"dataset:KingNish/reasoning-base-20k",
"base_model:KingNish/Reasoning-0.5b",
"base_model:quantized:KingNish/Reasoning-0.5b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-26T03:32:07Z" | ---
base_model: KingNish/Reasoning-0.5b
language:
- en
license: apache-2.0
datasets:
- KingNish/reasoning-base-20k
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- reasoning
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## KingNish/Reasoning-0.5b - GGUF
This repo contains GGUF format model files for [KingNish/Reasoning-0.5b](https://huggingface.co./KingNish/Reasoning-0.5b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
        Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
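For illustration, a minimal sketch that applies this template with the `llama-cpp-python` bindings (an assumption; the file path and question are placeholders):
```python
# Minimal sketch (assumption: llama-cpp-python bindings; path and question are placeholders).
from llama_cpp import Llama

llm = Llama(model_path="MY_LOCAL_DIR/Reasoning-0.5b-Q4_K_M.gguf", n_ctx=2048)
prompt = (
    "<|im_start|>system\nYou are a helpful reasoning assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is 17 * 23?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```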
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Reasoning-0.5b-Q2_K.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q2_K.gguf) | Q2_K | 0.339 GB | smallest, significant quality loss - not recommended for most purposes |
| [Reasoning-0.5b-Q3_K_S.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q3_K_S.gguf) | Q3_K_S | 0.338 GB | very small, high quality loss |
| [Reasoning-0.5b-Q3_K_M.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q3_K_M.gguf) | Q3_K_M | 0.355 GB | very small, high quality loss |
| [Reasoning-0.5b-Q3_K_L.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q3_K_L.gguf) | Q3_K_L | 0.369 GB | small, substantial quality loss |
| [Reasoning-0.5b-Q4_0.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q4_0.gguf) | Q4_0 | 0.352 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Reasoning-0.5b-Q4_K_S.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q4_K_S.gguf) | Q4_K_S | 0.385 GB | small, greater quality loss |
| [Reasoning-0.5b-Q4_K_M.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q4_K_M.gguf) | Q4_K_M | 0.398 GB | medium, balanced quality - recommended |
| [Reasoning-0.5b-Q5_0.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q5_0.gguf) | Q5_0 | 0.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Reasoning-0.5b-Q5_K_S.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q5_K_S.gguf) | Q5_K_S | 0.413 GB | large, low quality loss - recommended |
| [Reasoning-0.5b-Q5_K_M.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q5_K_M.gguf) | Q5_K_M | 0.420 GB | large, very low quality loss - recommended |
| [Reasoning-0.5b-Q6_K.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q6_K.gguf) | Q6_K | 0.506 GB | very large, extremely low quality loss |
| [Reasoning-0.5b-Q8_0.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q8_0.gguf) | Q8_0 | 0.531 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Reasoning-0.5b-GGUF --include "Reasoning-0.5b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Reasoning-0.5b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
TheBloke/airoboros-33B-gpt4-1.2-GPTQ | TheBloke | "2023-08-21T08:40:51Z" | 23 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-06-14T13:07:17Z" | ---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.2
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# John Durbin's Airoboros 33B GPT4 1.2 GPTQ
These files are GPTQ 4bit model files for [John Durbin's Airoboros 33B GPT4 1.2](https://huggingface.co./jondurbin/airoboros-33b-gpt4-1.2).
It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co./TheBloke/airoboros-33B-gpt4-1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co./TheBloke/airoboros-33B-gpt4-1.2-GGML)
* [Jon Durbin's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co./jondurbin/airoboros-33b-gpt4-1.2)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
## How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/airoboros-33B-gpt4-1.2-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `airoboros-33B-gpt4-1.2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM
model_name_or_path = "TheBloke/airoboros-33B-gpt4-1.2-GPTQ"
model_basename = "airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
prompt = "Tell me about AI"
prompt_template = f'''A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: {prompt} ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Provided files
**airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order.safetensors**
This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.
* `airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
* Works with text-generation-webui, including one-click-installers.
* Parameters: Groupsize = -1. Act Order / desc_act = True.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: John Durbin's Airoboros 33B GPT4 1.2
### Overview
This is a qlora fine-tuned 33b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co./jondurbin/airoboros-33b-gpt4-1.1) with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co./datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co./TheBloke/airoboros-33B-gpt4-1.2-GPTQ
- https://huggingface.co./TheBloke/airoboros-33B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred role-playing examples.
- A few thousand ORCA style reasoning/math questions with ELI5 prompts to generate the responses (should not be needed in your prompts to this model however, just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
|
AlekseiPravdin/Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge | AlekseiPravdin | "2024-08-16T15:42:21Z" | 9 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"NousResearch/Hermes-2-Pro-Llama-3-8B",
"shenzhi-wang/Llama3-8B-Chinese-Chat",
"license:apache-2.0",
"region:us"
] | null | "2024-08-16T02:50:14Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Hermes-2-Pro-Llama-3-8B
- shenzhi-wang/Llama3-8B-Chinese-Chat
---
# Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge
Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge is a sophisticated language model resulting from the strategic merging of two distinct models: [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co./NousResearch/Hermes-2-Pro-Llama-3-8B) and [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co./shenzhi-wang/Llama3-8B-Chinese-Chat). The merging process was executed using [mergekit](https://github.com/cg123/mergekit), a specialized tool designed for precise model blending to achieve optimal performance and synergy between the merged architectures.
## ๐งฉ Merge Configuration
```yaml
slices:
- sources:
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
layer_range: [0, 31]
- model: shenzhi-wang/Llama3-8B-Chinese-Chat
layer_range: [0, 31]
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
## Model Features
This merged model combines the advanced generative capabilities of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co./NousResearch/Hermes-2-Pro-Llama-3-8B), which excels in function calling and structured outputs, with the robust performance of [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co./shenzhi-wang/Llama3-8B-Chinese-Chat), which is fine-tuned for Chinese and English interactions. The result is a versatile model that supports a wide range of text generation tasks, including conversational AI, structured data outputs, and multilingual capabilities.
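A minimal generation sketch (assuming the tokenizer ships a standard Llama-3 chat template; the prompt is illustrative):
```python
# Minimal sketch (assumption: the tokenizer provides a Llama-3 chat template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlekseiPravdin/Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in English and Chinese."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```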
## Use Cases
- **Conversational AI**: Engage in natural dialogues in both English and Chinese, leveraging the strengths of both parent models.
- **Function Calling**: Utilize advanced function calling capabilities for structured outputs, making it suitable for applications requiring precise data handling.
- **Multilingual Support**: Effectively communicate in both English and Chinese, catering to a diverse user base.
## Evaluation Results
### Hermes-2-Pro-Llama-3-8B
- Function Calling Evaluation: 90%
- JSON Structured Outputs Evaluation: 84%
### Llama3-8B-Chinese-Chat
- Enhanced performance in roleplay, function calling, and math capabilities, particularly in the latest version.
## Limitations
While the merged model inherits the strengths of both parent models, it may also carry over some limitations. For instance, the model's performance in highly specialized domains may not match that of dedicated models. Additionally, biases present in the training data of either parent model could influence the outputs, necessitating careful consideration in sensitive applications.
In summary, Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge represents a significant advancement in language modeling, combining the best features of its predecessors to deliver a powerful tool for a variety of applications. |
lucas-meyer/seq-xls-r-fleurs_zu-run3-asr_xh-run2 | lucas-meyer | "2023-11-07T13:17:40Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:lucas-meyer/asr_xh",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-10-31T12:24:23Z" | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: seq-xls-r-fleurs_zu-run3-asr_xh-run2
results: []
datasets:
- lucas-meyer/asr_xh
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# seq-xls-r-fleurs_zu-run3-asr_xh-run2
This model is a fine-tuned version of [lucas-meyer/xls-r-fleurs_zu-run3](https://huggingface.co./lucas-meyer/xls-r-fleurs_zu-run3) on the asr_xh dataset.
It achieves the following results:
- Wer (Validation): 51.15%
- Wer (Test): 51.32%
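A minimal transcription sketch (assuming the standard ASR pipeline; the audio path is a placeholder and should point to 16 kHz mono audio):
```python
# Minimal sketch (assumption: standard ASR pipeline; audio path is a placeholder).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lucas-meyer/seq-xls-r-fleurs_zu-run3-asr_xh-run2",
)
print(asr("xhosa_sample.wav"))  # expects 16 kHz mono audio
```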
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer (Validation) |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.7872 | 0.48 | 100 | 3.3525 | 1.0 |
| 3.1413 | 0.96 | 200 | 3.0025 | 1.0 |
| 1.7204 | 1.44 | 300 | 0.6932 | 0.7477 |
| 0.6719 | 1.91 | 400 | 0.5336 | 0.6871 |
| 0.5452 | 2.39 | 500 | 0.4911 | 0.6239 |
| 0.4981 | 2.87 | 600 | 0.4559 | 0.6339 |
| 0.4112 | 3.35 | 700 | 0.4295 | 0.5604 |
| 0.3807 | 3.83 | 800 | 0.3999 | 0.5390 |
| 0.3222 | 4.31 | 900 | 0.3903 | 0.5303 |
| 0.3041 | 4.78 | 1000 | 0.3714 | 0.5125 |
| 0.258 | 5.26 | 1100 | 0.4244 | 0.5368 |
| 0.2356 | 5.74 | 1200 | 0.4421 | 0.5494 |
| 0.2136 | 6.22 | 1300 | 0.4220 | 0.5420 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3 |
gubartz/testea | gubartz | "2024-04-15T17:32:30Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-15T17:32:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
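Pending author documentation, a minimal sketch based only on the repo's `longt5`/`text2text-generation` tags (the input text is an illustrative assumption):
```python
# Minimal sketch (assumption: generic seq2seq usage inferred from the longt5 tag).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gubartz/testea")
model = AutoModelForSeq2SeqLM.from_pretrained("gubartz/testea")

inputs = tokenizer("Long input document text goes here.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```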
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Eigeen/Xwin-LM-13B-V0.2-exl2 | Eigeen | "2023-10-29T15:23:07Z" | 19 | 3 | transformers | [
"transformers",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"license:llama2",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-10-19T05:22:16Z" | ---
inference: false
language:
- en
license: llama2
model_creator: Xwin-LM
model_link: https://huggingface.co./Xwin-LM/Xwin-LM-13B-V0.2
model_name: Xwin-LM-13B-V0.2
model_type: llama
pipeline_tag: text-generation
quantized_by: Eigeen
tags:
- text generation
- instruct
thumbnail: null
---
# Xwin-LM-13B-V0.2 - ExLlamaV2
Original model: [Xwin-LM-13B-V0.2](https://huggingface.co./Xwin-LM/Xwin-LM-13B-V0.2)
# Quantizations
- [3bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/main)
- [4bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/4bpw)
- [5bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/5bpw)
- [5.5bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/5.5bpw)
- [6bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/6bpw)
- [8bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/8bpw)
|
Salesforce/blip2-itm-vit-g-coco | Salesforce | "2025-02-03T06:39:12Z" | 1,231 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"blip-2",
"zero-shot-image-classification",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | "2023-08-23T21:34:45Z" | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
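In the absence of an official snippet, the sketch below scores image–text matching with this checkpoint. It assumes a recent `transformers` release that ships `Blip2ForImageTextRetrieval`; the image URL and caption are placeholders, and the class name, `use_image_text_matching_head` keyword, and output attributes should be verified against your installed version:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Blip2ForImageTextRetrieval

repo = "Salesforce/blip2-itm-vit-g-coco"
processor = AutoProcessor.from_pretrained(repo)
model = Blip2ForImageTextRetrieval.from_pretrained(repo)  # assumption: this ITM checkpoint loads into this class

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="two cats lying on a couch", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, use_image_text_matching_head=True)  # match / no-match logits
print(out.logits_per_image.softmax(dim=1))  # column 1 ~ P(caption matches image)
```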
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
|
juhw/uiop51 | juhw | "2025-02-21T14:42:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-21T14:37:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
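Given the repo's `llama` and `text-generation` tags, a generic `transformers` pipeline is a reasonable starting point (a minimal sketch; nothing model-specific is documented here):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="juhw/uiop51")
out = generator("Hello, how are you?", max_new_tokens=64)
print(out[0]["generated_text"])
```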
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zoyd/CreitinGameplays_ConvAI-9b-v2-2_5bpw_exl2 | Zoyd | "2024-05-30T19:15:23Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:CreitinGameplays/merged-data-v2",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2024-05-30T17:42:32Z" | ---
license: mit
datasets:
- CreitinGameplays/merged-data-v2
base_model:
- mistralai/Mistral-7B-v0.3
- mistralai/Mistral-7B-Instruct-v0.3
language:
- en
---
**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-2_2bpw_exl2)**</center> | <center>2671 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-2_5bpw_exl2)**</center> | <center>2958 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-3_0bpw_exl2)**</center> | <center>3477 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-3_5bpw_exl2)**</center> | <center>3997 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-3_75bpw_exl2)**</center> | <center>4256 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-4_0bpw_exl2)**</center> | <center>4515 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-4_25bpw_exl2)**</center> | <center>4776 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-5_0bpw_exl2)**</center> | <center>5556 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-6_0bpw_exl2)**</center> | <center>6605 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-6_5bpw_exl2)**</center> | <center>7137 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-8_0bpw_exl2)**</center> | <center>7983 MB</center> | <center>8</center> |
# **ConvAI-9b v2: A Conversational AI Model**

## **1. Model Details**
* **Model Name:** ConvAI-9b v2
* **Authors:** CreitinGameplays
* **Date:** May 29th, 2024
## **2. Model Description**
ConvAI-9b v2 is a fine-tuned conversational AI model with 9 billion parameters. It is based on the following models:
* **Base Model:** [mistralai/Mistral-7B-v0.3](https://huggingface.co./mistralai/Mistral-7B-v0.3)
* **Merged Model:** [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co./mistralai/Mistral-7B-Instruct-v0.3)
## **3. Training Data**
The model was fine-tuned on a custom dataset of conversations between an AI assistant and a user. The dataset format followed a specific structure:
```
<|system|> (system prompt, e.g.: You are a helpful AI language model called ChatGPT, your goal is helping users with their questions) </s> <|user|> (user prompt) </s>
```
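For illustration, an inference prompt can be assembled in the same shape (a sketch; assistant-turn markers are not documented here, so the model is simply left to continue after the final `</s>`):

```python
def build_prompt(system_prompt: str, user_prompt: str) -> str:
    """Assemble a prompt in the training-data format shown above."""
    return f"<|system|> {system_prompt} </s> <|user|> {user_prompt} </s>"

print(build_prompt(
    "You are a helpful AI language model, your goal is helping users with their questions",
    "What is federated learning?",
))
```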
## **4. Intended Uses**
ConvAI-9b v2 is intended for use in conversational AI applications, such as:
* Chatbots
* Virtual assistants
* Interactive storytelling
* Educational tools
## **5. Limitations**
* Like any other language model, ConvAI-9b v2 may generate incorrect or misleading responses.
* It may exhibit biases present in the training data.
* The model's performance can be affected by the quality and format of the input text.
## **6. Evaluation**
~ soon |
ppppppppeter/CNMB | ppppppppeter | "2023-06-06T02:39:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-06-06T02:35:12Z" | ---
title: ECG MAC
emoji: 🎨
colorFrom: blue
colorTo: green
sdk: streamlit
sdk_version: 1.19.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co./docs/hub/spaces-config-reference
|
nttx/81803abe-fe4d-41e0-8848-26301bd41fa3 | nttx | "2025-01-14T03:45:50Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | "2025-01-14T02:35:27Z" | ---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 81803abe-fe4d-41e0-8848-26301bd41fa3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-1_5
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 9e52b1647ca8ad56_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9e52b1647ca8ad56_train_data.json
type:
field_input: author
field_instruction: title
field_output: description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/81803abe-fe4d-41e0-8848-26301bd41fa3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/9e52b1647ca8ad56_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a05fa1bd-feca-4a09-ae0a-b6400ceec5d1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a05fa1bd-feca-4a09-ae0a-b6400ceec5d1
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 81803abe-fe4d-41e0-8848-26301bd41fa3
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co./microsoft/phi-1_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
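That said, since this repository holds a LoRA adapter rather than full weights, inference requires attaching it to the base model. A minimal sketch with `peft` (untested against this exact checkpoint; the sample prompt is a placeholder):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5", torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "nttx/81803abe-fe4d-41e0-8848-26301bd41fa3")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# The training config above used '{instruction} {input}' (title, then author)
inputs = tokenizer("The Silent City J. Doe", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```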
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.616 | 0.0001 | 1 | 3.3481 |
| 3.5885 | 0.0042 | 50 | 3.0556 |
| 2.8249 | 0.0084 | 100 | 2.9615 |
| 2.7647 | 0.0126 | 150 | 2.8701 |
| 3.039 | 0.0169 | 200 | 2.8537 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
CyberHarem/aihara_yuzu_citrus | CyberHarem | "2023-09-28T15:45:15Z" | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/aihara_yuzu_citrus",
"license:mit",
"region:us"
] | text-to-image | "2023-09-28T15:26:55Z" | ---
license: mit
datasets:
- CyberHarem/aihara_yuzu_citrus
pipeline_tag: text-to-image
tags:
- art
---
# Lora of aihara_yuzu_citrus
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co./deepghs).
The base model used during training is [NAI](https://huggingface.co./deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co./Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 9000, you need to download `9000/aihara_yuzu_citrus.pt` as the embedding and `9000/aihara_yuzu_citrus.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 9000**, with a score of 0.603. The trigger words are:
1. `aihara_yuzu_citrus`
2. `blonde_hair, green_eyes, long_hair, jewelry, earrings, brown_hair`
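In a typical AUTOMATIC1111 setup, the embedding trigger and the LoRA can be combined in one prompt along these lines (a sketch; the LoRA weight of 0.8 is only a starting point to tune):

```
aihara_yuzu_citrus, <lora:aihara_yuzu_citrus:0.8>, blonde_hair, green_eyes, long_hair, masterpiece, best quality
```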
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **9000** | **0.603** | [**Download**](9000/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9000/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](9000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9000/previews/nude.png) | [<NSFW, click to see>](9000/previews/nude2.png) |  |  |
| 8400 | 0.592 | [Download](8400/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8400/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](8400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) |  |  |
| 7800 | 0.539 | [Download](7800/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7200 | 0.560 | [Download](7200/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6600 | 0.538 | [Download](6600/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6000 | 0.526 | [Download](6000/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5400 | 0.514 | [Download](5400/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4800 | 0.533 | [Download](4800/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4200 | 0.557 | [Download](4200/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4200/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3600 | 0.460 | [Download](3600/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3000 | 0.328 | [Download](3000/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2400 | 0.427 | [Download](2400/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1800 | 0.395 | [Download](1800/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1800/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](1800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) |  |  |
| 1200 | 0.288 | [Download](1200/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 600 | 0.221 | [Download](600/aihara_yuzu_citrus.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](600/previews/pattern_11.png) |  |  |  |  | [<NSFW, click to see>](600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [<NSFW, click to see>](600/previews/nude2.png) |  |  |
|
stablediffusionapi/cheyenne | stablediffusionapi | "2024-06-13T08:11:36Z" | 0 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-13T08:03:04Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# CHEYENNE API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "cheyenne".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/cheyenne)
Model link: [View model](https://modelslab.com/models/cheyenne)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "cheyenne",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Karim-Gamal/XLM-Roberta-finetuned-emojis-1-client-toxic-cen-2 | Karim-Gamal | "2023-03-26T02:57:25Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"es",
"it",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-07T02:27:57Z" | ---
license: apache-2.0
language:
- en
- es
- it
- fr
metrics:
- f1
---
# Federated Learning Based Multilingual Emoji Prediction
This repository contains code for training and evaluating transformer-based models for Uni/multilingual emoji prediction in clean and attack scenarios using Federated Learning. This work is described in the paper "Federated Learning-Based Multilingual Emoji Prediction in Clean and Attack Scenarios."
# Abstract
Federated learning is a growing field in the machine learning community due to its decentralized and private design. Model training in federated learning is distributed over multiple clients, giving access to lots of client data while maintaining privacy. A server then aggregates the training done on these multiple clients without access to their data; in our setting, that data consists of emojis, which are widely used in social media services and instant messaging platforms to express users' sentiments. This paper proposes federated learning-based multilingual emoji prediction in both clean and attack scenarios. Emoji prediction data have been crawled from both Twitter and SemEval emoji datasets. This data is used to train and evaluate different transformer model sizes, including a sparsely activated transformer, under either the assumption of clean data in all clients or data poisoned via label-flipping attacks in some clients. Experimental results on these models show that federated learning in either clean or attacked scenarios performs similarly to centralized training in multilingual emoji prediction on seen and unseen languages under different data sources and distributions. Our trained transformers also perform better than other techniques on the SemEval emoji dataset, in addition to offering the privacy and distributed benefits of federated learning.
# Performance
> * Acc : 47.710 %
> * Mac-F1 : 33.991 %
> * Also see our [GitHub Repo](https://github.com/kareemgamalmahmoud/FEDERATED-LEARNING-BASED-MULTILINGUAL-EMOJI-PREDICTION-IN-CLEAN-AND-ATTACK-SCENARIOS)
# Dependencies
> * Python 3.6+
> * PyTorch 1.7.0+
> * Transformers 4.0.0+
# Usage
> To use the model, first install the `transformers` package from Hugging Face:
```bash
pip install transformers
```
> Then, you can load the model and tokenizer using the following code:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
import urllib.request
import csv
```
```python
MODEL = "Karim-Gamal/XLM-Roberta-finetuned-emojis-1-client-toxic-cen-2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
```
> Once you have the tokenizer and model, you can preprocess your text and pass it to the model for prediction:
```python
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
text = "Hello world"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
```
> The `scores` variable holds the model's raw scores (logits) for each possible emoji label. To get the top k predictions, you can use the following code:
```python
# download label mapping
labels=[]
mapping_link = "https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/emoji/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
k = 3 # number of top predictions to show
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(k):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
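If calibrated probabilities are preferred over raw scores, a softmax can be applied before ranking (a small addition to the snippet above):

```python
import numpy as np  # already imported in the snippet above

probs = np.exp(scores - scores.max())
probs /= probs.sum()  # numerically stable softmax over emoji classes
```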
## Note: this is the source for that code: [Link](https://huggingface.co./cardiffnlp/twitter-roberta-base-emoji) |
mradermacher/Xwin-LM-13B-V0.2-GGUF | mradermacher | "2024-12-16T10:35:58Z" | 19 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Xwin-LM/Xwin-LM-13B-V0.2",
"base_model:quantized:Xwin-LM/Xwin-LM-13B-V0.2",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-12-16T09:43:18Z" | ---
base_model: Xwin-LM/Xwin-LM-13B-V0.2
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co./Xwin-LM/Xwin-LM-13B-V0.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
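For a quick local test, the `llama-cpp-python` bindings can load any of the single-file quants listed below (a minimal sketch; the file name, context size, and prompt template are placeholders — Xwin's exact chat format is not documented here):

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file has already been downloaded from this repo
llm = Llama(model_path="Xwin-LM-13B-V0.2.Q4_K_M.gguf", n_ctx=4096)
out = llm("USER: What is a GGUF file?\nASSISTANT:", max_tokens=128)
print(out["choices"][0]["text"])
```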
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co./mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf | RichardErkhov | "2024-07-19T14:46:50Z" | 24 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-19T11:20:19Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
omost-dolphin-2.9-llama3-8b - GGUF
- Model creator: https://huggingface.co./lllyasviel/
- Original model: https://huggingface.co./lllyasviel/omost-dolphin-2.9-llama3-8b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [omost-dolphin-2.9-llama3-8b.Q2_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q2_K.gguf) | Q2_K | 2.96GB |
| [omost-dolphin-2.9-llama3-8b.IQ3_XS.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [omost-dolphin-2.9-llama3-8b.IQ3_S.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [omost-dolphin-2.9-llama3-8b.Q3_K_S.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [omost-dolphin-2.9-llama3-8b.IQ3_M.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [omost-dolphin-2.9-llama3-8b.Q3_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q3_K.gguf) | Q3_K | 3.74GB |
| [omost-dolphin-2.9-llama3-8b.Q3_K_M.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [omost-dolphin-2.9-llama3-8b.Q3_K_L.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [omost-dolphin-2.9-llama3-8b.IQ4_XS.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [omost-dolphin-2.9-llama3-8b.Q4_0.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_0.gguf) | Q4_0 | 4.34GB |
| [omost-dolphin-2.9-llama3-8b.IQ4_NL.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [omost-dolphin-2.9-llama3-8b.Q4_K_S.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [omost-dolphin-2.9-llama3-8b.Q4_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_K.gguf) | Q4_K | 4.58GB |
| [omost-dolphin-2.9-llama3-8b.Q4_K_M.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [omost-dolphin-2.9-llama3-8b.Q4_1.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_1.gguf) | Q4_1 | 4.78GB |
| [omost-dolphin-2.9-llama3-8b.Q5_0.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_0.gguf) | Q5_0 | 5.21GB |
| [omost-dolphin-2.9-llama3-8b.Q5_K_S.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [omost-dolphin-2.9-llama3-8b.Q5_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_K.gguf) | Q5_K | 5.34GB |
| [omost-dolphin-2.9-llama3-8b.Q5_K_M.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [omost-dolphin-2.9-llama3-8b.Q5_1.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_1.gguf) | Q5_1 | 5.65GB |
| [omost-dolphin-2.9-llama3-8b.Q6_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q6_K.gguf) | Q6_K | 6.14GB |
| [omost-dolphin-2.9-llama3-8b.Q8_0.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
tags:
- pytorch
- trl
- sft
inference: false
---
omost-dolphin-2.9-llama3-8b is Omost's llama3-8b model with dolphin-2.9 instruct pretraining in fp16.
|
MidnightRunner/MIDNIGHT_NAI-XL_vPredV1 | MidnightRunner | "2025-02-18T18:46:42Z" | 181 | 1 | diffusers | [
"diffusers",
"SDXL",
"noobai-XL",
"Vpred-1.0",
"text-to-image",
"ComfyUI",
"Automatic1111",
"Diffuser",
"en",
"dataset:LaxharLab/NoobAI-XL-dataset",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2025-02-02T01:09:01Z" | ---
license: creativeml-openrail-m
language:
- en
base_model: Laxhar/noobai-XL-Vpred-1.0
tags:
- SDXL
- noobai-XL
- Vpred-1.0
- text-to-image
- ComfyUI
- Automatic1111
- Diffuser
pipeline_tag: text-to-image
library_name: diffusers
datasets:
- LaxharLab/NoobAI-XL-dataset
metrics:
- FID
- IS
widget:
- text: >-
high quality, masterpiece, detailed, 8K, artist:nyantcha,
evangeline_(nyantcha), vibrant surreal artwork, rainbow, light particles,
from above, volumetric lighting, ((adult girl:1.2)), natural huge breasts,
woman dressed as white rabbit, sleek pure white outfit, delicate white bunny
ears, braid, plump, skindentation, huge breasts, falling into swirling black
hole, seen from behind, glancing over shoulder, alluring mysterious
expression, dress, zipper, zipper pull, detached sleeves, breasts apart
(shoulder straps), buckles, long dress, swirling cosmic patterns, glowing
particles, dramatic lighting, vibrant neon pink and blue tones,
hyper-detailed, cinematic depth of field, smooth texture, film grain,
chromatic aberration, high contrast, limited palette
parameters:
negative_prompt: >-
lowres, worst quality, low quality, bad anatomy, bad hands, 4koma, comic,
greyscale, censored, jpeg artifacts, overly saturated, overly vivid,
(multiple views:1.1), (bad:1.05), fewer, extra, missing, worst quality,
jpeg artifacts, bad quality, watermark, unfinished, displeasing, sepia,
sketch, flat color, signature, artistic error, username, scan, (blurry,
lowres, worst quality, (low quality:1.1), ugly, (bad anatomy:1.05), artist
name, (patreon username:1.2)
output:
url: stand_on_ripplewater.jpeg
---
# MIDNIGHT_NAI-XL_vPredV1
**Model Type:** Diffusion-based text-to-image generative model
**Base Model:** SDXL 1.0 & Laxhar/noobai-XL-Vpred-1.0
**License:** [CreativeML Open RAIL++-M](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE)
## Model Description
MIDNIGHT_NAI-XL_vPredV1 is a specialized fine-tuning of the NoobAI-XL (NAI-XL) model, designed to enhance anatomical precision, compositional coherence, and versatile style integration. This model excels in generating high-quality images with vibrant colors while minimizing overexposure.
## Usage Recommendations
### **Sampling Methods**
MIDNIGHT_NAI-XL_vPred is optimized specifically for **Euler (normal)**.
Use **ModelSamplingDiscrete** with **V-prediction** and **ZsNR set to true**.
Other samplers may not provide stable results, and **V-prediction models do not support other samplers**.
### **CFG Scaling**
**Dynamic CFG Plugin is bypassed as a backup for potential future needs.**
Manually adjust **CFG scaling within a range of 5-6** for the best balance.
For optimal results, a **preferred setting of 5.3** is recommended.
### **Custom Workflow**
For an optimized generation process, use the [**MIDNIGHT1111_Chasm 2025-02-04**](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%202025-02-04.json) ComfyUI workflow.
This workflow is specifically designed to **leverage the strengths of MIDNIGHT_NAI-XL_vPred**, providing a streamlined and efficient image generation pipeline.
## MIDNIGHT1111_Chasm
For an optimized generation process, consider using the custom workflow [MIDNIGHT1111_Chasm 02-05-25](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json). This workflow is tailored to leverage the strengths of the MIDNIGHT_NAI-XL_vPredV1 model, providing a streamlined and efficient image generation pipeline.

*Note: The above image is a preview of the `MIDNIGHT1111_Chasm` workflow.*
### Method I: reForge without MIDNIGHT1111_Chasm Workflow
1. **Installation:** If not already installed, follow the instructions in the [reForge repository](https://github.com/Panchovix/stable-diffusion-webui-reForge) to set up.
2. **Usage:** Launch WebUI and use the model as usual.
### Method II: ComfyUI *with* MIDNIGHT1111_Chasm Workflow
1. **Installation:** Follow the setup instructions in the [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI).
2. **Workflow Sample:** Utilize the provided [ComfyUI workflow sample](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json) for guidance.
### Method III: WebUI without MIDNIGHT1111_Chasm Workflow
1. **Installation:** Follow the instructions in the [WebUI repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to set up.
2. **Navigate to the WebUI Directory:** Before updating or switching branches, ensure you're inside the `stable-diffusion-webui` folder
```bash
cd stable-diffusion-webui
```
3. **Switch to the Development Branch (Optional, for testing new features):** If you want to use the latest features from the development branch, run:
```bash
git switch dev
git pull
```
⚠️ **Note:** The `dev` branch may contain bugs. If stability is your priority, it's best to stay on the `main` branch.
4. **Update WebUI (Main or Dev Branch):** To pull the latest updates while on either branch, run:
```bash
git pull
```
**Restart WebUI after updating to apply changes.**
5. **Configuration:** Ensure you're using a stable branch, as the dev branch may contain bugs.
### Method IV: Diffusers without MIDNIGHT1111_Chasm Workflow
```python
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerDiscreteScheduler
ckpt_path = "/path/to/model.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(
ckpt_path,
use_safetensors=True,
torch_dtype=torch.float16,
)
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")
prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=832,
height=1216,
num_inference_steps=28,
guidance_scale=5,
generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```
## e621/Danbooru Artist Wildcards for A1111 & ComfyUI Enclosed in CSV & TXT Formats
To enhance the model's performance and specificity, the following trigger word lists in CSV format are included:
- [`danbooru_artist_webui.csv`](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_webui.csv)
- [`danbooru_character_webui.csv`](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_webui.csv)
- [`e621_artist_webui.csv`](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_webui.csv)
- [`e621_character_webui.csv`](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_webui.csv)
These lists provide recognized tags for various artists and characters, facilitating more accurate and tailored image generation.
The wildcard file in 'TXT' format is included and designed for seamless integration with **AUTOMATIC1111** and **ComfyUI**, optimized for dynamic prompt generation using artist data from **e621** and **Danbooru**.
- **TXT Format:** Sanitized artist tags by removing URLs and converted from `.csv` to `.txt` format for improved readability across different extensions.
- **Dual Dataset Support:** Supports both e621 and Danbooru datasets to enhance art style diversity.
- **Smooth Randomization:** Structured with trailing commas for seamless wildcard cycling during prompt generation.
## How to Use Wildcards
### For A1111
1. **Install:** [stable-diffusion-webui-wildcards](https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards)
2. **Place the `.txt` file in:**
```
/A1111/extensions/stable-diffusion-webui-wildcards
```
3. **Use in your prompt like this:**
```
__e621_artist_wildcard__, very awa, masterpiece, best quality, amazing quality
```
```
__danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality
```
```
__e621_artist_wildcard__, __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality
```
### For ComfyUI
1. **Install:** [ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
2. **Place the `.txt` file in:**
```
/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards
```
or
```
/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards
```
3. **Use the wildcard node to trigger dynamic randomization in your workflows.**
## What's Included in Wildcards
TXT formatted file containing clean, artist-focused wildcard files ready for dynamic prompt workflows in A1111 and ComfyUI.
- [danbooru_artist_wildcard.txt](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_wildcard.txt)
- [danbooru_character_wildcard.txt](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_wildcard.txt)
- [e621_artist_wildcard.txt](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_wildcard.txt)
- [e621_character_wildcard.txt](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_wildcard.txt)
## Acknowledgments
Special thanks to:
- **Development Team:** Laxhar Lab
- **Coding Contributions:** Euge
- **e621/Danbooru Wildcards:** [ipsylon0000](https://civitai.com/user/ipsylon0000)
- **Community Support:** Various contributors
## Additional Resources
- **Guidebook for NoobAI XL:** [English Version](https://civitai.com/articles/8962)
- **Recommended LoRa List for NoobAI XL:** [Resource Link](https://fcnk27d6mpa5.feishu.cn/wiki/IBVGwvVGViazLYkMgVEcvbklnge)
- **Fixing Black Images in ComfyUI on macOS (M1/M2):** [Read the Article](https://civitai.com/articles/11106)
- **Creative Solutions and Services:** [Magnabos.co](https://magnabos.co/)
## License
This model is licensed under the [CreativeML Open RAIL++-M License](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). By using this model, you agree to the terms and conditions outlined in the license. |
ermi8/amharic-hate-speech-detection-mBERT | ermi8 | "2024-12-13T11:26:22Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-07T13:02:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
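A minimal usage sketch with the 🤗 pipeline API, assuming the standard text-classification head (the label set comes from the model's config and is not documented in this card):
```python
from transformers import pipeline

# Illustrative only; label names come from the model's config.
classifier = pipeline(
    "text-classification",
    model="ermi8/amharic-hate-speech-detection-mBERT",
)
print(classifier("እንኳን ደህና መጣህ"))  # Amharic input; replace with your own text
```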
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/calme-2.3-rys-78b-i1-GGUF | mradermacher | "2025-02-06T17:11:48Z" | 114 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"qwen",
"qwen2",
"finetune",
"chatml",
"en",
"dataset:MaziyarPanahi/truthy-dpo-v0.1-axolotl",
"base_model:MaziyarPanahi/calme-2.3-rys-78b",
"base_model:quantized:MaziyarPanahi/calme-2.3-rys-78b",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-10-23T08:29:51Z" | ---
base_model: MaziyarPanahi/calme-2.3-rys-78b
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
language:
- en
library_name: transformers
license: mit
model_creator: MaziyarPanahi
model_name: calme-2.3-rys-78b
quantized_by: mradermacher
tags:
- chat
- qwen
- qwen2
- finetune
- chatml
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co./MaziyarPanahi/calme-2.3-rys-78b
<!-- provided-files -->
static quants are available at https://huggingface.co./mradermacher/calme-2.3-rys-78b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
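For the split quants listed below, the parts are typically concatenated into a single `.gguf` before loading (a sketch; substitute the quant you actually downloaded):
```bash
# Join a two-part quant into a single file before loading it.
cat calme-2.3-rys-78b.i1-Q4_K_M.gguf.part1of2 \
    calme-2.3-rys-78b.i1-Q4_K_M.gguf.part2of2 \
    > calme-2.3-rys-78b.i1-Q4_K_M.gguf
```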
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ1_S.gguf) | i1-IQ1_S | 24.4 | for the desperate |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ1_M.gguf) | i1-IQ1_M | 25.5 | mostly desperate |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 29.1 | |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ2_S.gguf) | i1-IQ2_S | 30.0 | |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ2_M.gguf) | i1-IQ2_M | 31.5 | |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q2_K.gguf) | i1-Q2_K | 31.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 34.1 | lower quality |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 35.2 | |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ3_S.gguf) | i1-IQ3_S | 37.0 | beats Q3_K* |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ3_M.gguf) | i1-IQ3_M | 38.0 | |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 42.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 42.7 | |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q4_0.gguf) | i1-Q4_0 | 44.4 | fast, low quality |
| [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.0 | optimal size/speed/quality |
| [PART 1](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 50.8 | fast, recommended |
| [PART 1](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 55.2 | |
| [PART 1](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 58.4 | |
| [PART 1](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co./mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co./nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rtl-llm/codellama7b-v2c2v-2 | rtl-llm | "2025-02-04T08:17:01Z" | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-04T08:13:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
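A minimal generation sketch (illustrative only; the intended prompt format for this fine-tune is not documented in this card, so the Verilog-style prompt is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rtl-llm/codellama7b-v2c2v-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The prompt below is a guess at the task; adjust to the actual format.
inputs = tokenizer("// Verilog counter module\n", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```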
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingtweets/eripsa | huggingtweets | "2021-05-22T03:26:19Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/615850415972679680/zeVerOYq_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">eripsa 🤖 AI Bot </div>
<div style="font-size: 15px">@eripsa bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@eripsa's tweets](https://twitter.com/eripsa).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 1511 |
| Short tweets | 149 |
| Tweets kept | 1552 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/i4inmqrl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co./gpt2) which is fine-tuned on @eripsa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xn30w4y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xn30w4y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eripsa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co./gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
arcwarden46/e0a572e9-7ab6-49d0-969b-9d8320a49c38 | arcwarden46 | "2025-02-04T03:20:32Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-04T01:53:59Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e0a572e9-7ab6-49d0-969b-9d8320a49c38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/OpenHermes-2.5-Mistral-7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9c4378b501f71de8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c4378b501f71de8_train_data.json
type:
field_input: prompt
field_instruction: reason1
field_output: reason2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: arcwarden46/e0a572e9-7ab6-49d0-969b-9d8320a49c38
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/9c4378b501f71de8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 432ed5ae-dbea-46a8-8795-45618fe0369a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 432ed5ae-dbea-46a8-8795-45618fe0369a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e0a572e9-7ab6-49d0-969b-9d8320a49c38
This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co./unsloth/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6418
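A minimal sketch for loading the LoRA adapter from this repo on its base model with PEFT (not an official recipe; generation settings omitted):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/OpenHermes-2.5-Mistral-7B"
adapter_id = "arcwarden46/e0a572e9-7ab6-49d0-969b-9d8320a49c38"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
```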
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7904 | 0.0002 | 1 | 1.5057 |
| 2.8028 | 0.0088 | 50 | 0.8330 |
| 2.416 | 0.0177 | 100 | 0.7194 |
| 2.454 | 0.0265 | 150 | 0.6717 |
| 2.6065 | 0.0354 | 200 | 0.6418 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shivanikerai/Llama-2-7b-chat-hf-adapter-sku-title-ner-generation-reversed-v2.2 | shivanikerai | "2024-03-04T09:26:19Z" | 2 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | "2024-03-04T09:25:31Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
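A minimal sketch, assuming this repo holds a PEFT LoRA adapter for the base model named in the metadata; `AutoPeftModelForCausalLM` resolves the base automatically from the adapter config:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "shivanikerai/Llama-2-7b-chat-hf-adapter-sku-title-ner-generation-reversed-v2.2"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```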
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0 |
kyleeasterly/openllama-7b_purple-aerospace-v2-200-13 | kyleeasterly | "2023-08-09T07:49:24Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-09T07:44:13Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
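For reference, these flags correspond roughly to the following `transformers` `BitsAndBytesConfig` (a sketch, not the exact object used during training):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```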
### Framework versions
- PEFT 0.5.0.dev0
|
cmncomp/coldint_0694 | cmncomp | "2024-09-06T17:38:28Z" | 35 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-06T17:36:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
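A minimal load sketch; the `custom_code` tag on this repo suggests `trust_remote_code=True` is needed (an assumption — review the remote code before enabling it):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmncomp/coldint_0694"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)
```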
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LHRuig/karlurbansx | LHRuig | "2025-03-04T01:19:01Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-04T01:17:40Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: karlurbansx
---
# karlurbansx
<Gallery />
## Model description
karlurbansx LoRA
## Trigger words
You should use `karlurbansx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/karlurbansx/tree/main) them in the Files & versions tab.
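A minimal diffusers sketch using the trigger word above (illustrative; precision, step count, and output settings are assumptions):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("LHRuig/karlurbansx")  # LoRA from this repo
image = pipe("karlurbansx wearing a suit", num_inference_steps=28).images[0]
image.save("karlurbansx.png")
```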
|
hgnoi/EvVjOTxth5zuGqpG | hgnoi | "2024-05-25T06:12:18Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T06:09:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TobiTob/decision_transformer_fn_24 | TobiTob | "2023-03-09T19:53:32Z" | 34 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"decision_transformer",
"generated_from_trainer",
"dataset:city_learn",
"endpoints_compatible",
"region:us"
] | null | "2023-03-09T00:34:14Z" | ---
tags:
- generated_from_trainer
datasets:
- city_learn
model-index:
- name: decision_transformer_fn_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# decision_transformer_fn_24
This model is a fine-tuned version of [](https://huggingface.co./) on the city_learn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 140
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
blackhole33/llama-3-70b-bnb-4bit | blackhole33 | "2024-06-07T12:28:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"uz",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-07T12:21:29Z" | ---
language:
- uz
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** blackhole33
- **License:** apache-2.0
- **Finetuned from model:** mistral-7b-bnb-4bit
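A minimal load sketch with Unsloth (parameters are illustrative; `max_seq_length` is an assumption, not documented here):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="blackhole33/llama-3-70b-bnb-4bit",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode
```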
|
Jingwenwang/ppo-SnowballTarget | Jingwenwang | "2024-03-26T22:27:02Z" | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-03-26T22:23:26Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co./learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co./learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co./unity
2. Find your model_id: Jingwenwang/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👍
|
matrig/Qwen-2.5-7B-Simple-RL | matrig | "2025-03-01T20:34:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-01T10:01:42Z" | ---
base_model: Qwen/Qwen2.5-Math-7B
library_name: transformers
model_name: Qwen-2.5-7B-Simple-RL
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-Simple-RL
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co./Qwen/Qwen2.5-Math-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="matrig/Qwen-2.5-7B-Simple-RL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/matrig/huggingface/runs/z89grvzv)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co./papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Pipper/SolCoder | Pipper | "2023-12-12T18:45:11Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Pipper/SolCoder",
"base_model:finetune:Pipper/SolCoder",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-17T08:06:51Z" | ---
license: apache-2.0
base_model: Pipper/SolCoder
tags:
- generated_from_trainer
model-index:
- name: SolCoder
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SolCoder
This model is a fine-tuned version of [Pipper/SolCoder](https://huggingface.co./Pipper/SolCoder) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5568
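A minimal text2text sketch (the expected input format for this fine-tune is not documented here, so the prompt is an assumption):
```python
from transformers import pipeline

solcoder = pipeline("text2text-generation", model="Pipper/SolCoder")
out = solcoder("Write a Solidity function that adds two unsigned integers.")
print(out[0]["generated_text"])
```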
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 37
- eval_batch_size: 37
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 148
- total_eval_batch_size: 148
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.6094 | 1.0 | 7440 | 0.6185 |
| 0.598 | 2.0 | 14880 | 0.6124 |
| 0.5845 | 3.0 | 22320 | 0.6075 |
| 0.5723 | 4.0 | 29760 | 0.6006 |
| 0.5589 | 5.0 | 37200 | 0.5943 |
| 0.5495 | 6.0 | 44640 | 0.5894 |
| 0.5371 | 7.0 | 52080 | 0.5861 |
| 0.5291 | 8.0 | 59520 | 0.5811 |
| 0.52 | 9.0 | 66960 | 0.5765 |
| 0.5095 | 10.0 | 74400 | 0.5746 |
| 0.5056 | 11.0 | 81840 | 0.5700 |
| 0.4967 | 12.0 | 89280 | 0.5682 |
| 0.4894 | 13.0 | 96720 | 0.5659 |
| 0.4861 | 14.0 | 104160 | 0.5619 |
| 0.4773 | 15.0 | 111600 | 0.5599 |
| 0.4754 | 16.0 | 119040 | 0.5599 |
| 0.4689 | 17.0 | 126480 | 0.5578 |
| 0.4642 | 18.0 | 133920 | 0.5575 |
| 0.4627 | 19.0 | 141360 | 0.5566 |
| 0.4573 | 20.0 | 148800 | 0.5568 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.1.0+cu121
- Datasets 2.11.0
- Tokenizers 0.13.3
|
longcule123/adapter-14-2 | longcule123 | "2024-02-16T06:32:40Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Viet-Mistral/Vistral-7B-Chat",
"base_model:adapter:Viet-Mistral/Vistral-7B-Chat",
"region:us"
] | null | "2024-02-15T01:02:15Z" | ---
library_name: peft
base_model: Viet-Mistral/Vistral-7B-Chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
horangwave/vicuna_1822 | horangwave | "2024-06-17T09:18:11Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:lmsys/vicuna-7b-v1.3",
"base_model:finetune:lmsys/vicuna-7b-v1.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-21T06:39:40Z" | ---
base_model:
- lmsys/vicuna-7b-v1.3
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [lmsys/vicuna-7b-v1.3](https://huggingface.co./lmsys/vicuna-7b-v1.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 22]
model: lmsys/vicuna-7b-v1.3
- sources:
- layer_range: [30, 32]
model: lmsys/vicuna-7b-v1.3
```
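To reproduce a merge from a config like this, mergekit's CLI can be run directly (a sketch; the output directory is arbitrary):
```bash
# Writes the merged model to ./merged using the YAML above (saved as config.yaml).
mergekit-yaml config.yaml ./merged
```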
|
utahnlp/boolq_t5-large_seed-1 | utahnlp | "2024-04-04T21:36:02Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-04T21:34:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sourabh2/vista | Sourabh2 | "2025-01-13T15:18:50Z" | 56 | 1 | transformers | [
"transformers",
"safetensors",
"blip-2",
"visual-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | visual-question-answering | "2025-01-13T14:55:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yaswanthchittepu/pythia-2.8b-tldr-ipo-beta-0.05-alpha-0-step-19968 | yaswanthchittepu | "2024-05-06T18:22:42Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-06T18:18:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/abdulmannan-01_-_qwen-2.5-3b-finetuned-for-sql-generation-8bits | RichardErkhov | "2025-03-05T04:41:02Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-05T04:39:04Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen-2.5-3b-finetuned-for-sql-generation - bnb 8bits
- Model creator: https://huggingface.co./abdulmannan-01/
- Original model: https://huggingface.co./abdulmannan-01/qwen-2.5-3b-finetuned-for-sql-generation/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Abdul Mannan
- **Finetuned from model:** Qwen/Qwen2.5-3B-Instruct
|
inflatebot/helide-alpha-r5 | inflatebot | "2024-08-03T17:28:50Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"base_model:Fizzarolli/L3-8b-Rosier-v1",
"base_model:merge:Fizzarolli/L3-8b-Rosier-v1",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-27T11:35:22Z" | ---
base_model:
- Fizzarolli/L3-8b-Rosier-v1
- NousResearch/Meta-Llama-3-8B
- Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
- mergekit
- merge
---

`"Helide" (say HE-lied) is an ion of helium -- famously a very unreactive element, which doesn't form ions in most conditions.`
GGUFs available from [mradermacher](https://huggingface.co./mradermacher/helide-alpha-r5-GGUF) (appreciate it!!)
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
An experimental merge of the legendary L3-8B-Stheno with Fizzarolli's Rosier. The aim is to improve Stheno's "ball-rolling" capabilities and reduce its awkwardness with more niche content. For a first go, I'm surprised at how well it's doing so far, but given that this is literally my first LLM project ever, you should probably temper your expectations.
Since R1: Changed to task-arithmetic. Snazzy new model card image.
Since R2: Fixed unnecessary conversion.
Since R3: Tweaked ratios, Rosier's influence cut in half.
Since R4: Scrubbin' it down. +0.08 to Rosier (pre-normalization). Closing in on a good ratio.
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co./NousResearch/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [Fizzarolli/L3-8b-Rosier-v1](https://huggingface.co./Fizzarolli/L3-8b-Rosier-v1)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co./Sao10K/L3-8B-Stheno-v3.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/L3-8B-Stheno-v3.2
parameters:
weight: 0.5
- model: Fizzarolli/L3-8b-Rosier-v1
parameters:
weight: 0.33
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B
parameters:
normalize: true
dtype: bfloat16
```
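Conceptually, task arithmetic adds weighted "task vectors" (fine-tune minus base) to the base weights. A toy sketch over raw state dicts, illustrative only; mergekit's real implementation additionally handles sharding, dtype casting, and tensor alignment:

```python
import torch

def task_arithmetic(base, finetunes, weights, normalize=True):
    """merged = base + sum_i w_i * (finetune_i - base), computed per tensor."""
    if normalize:  # mergekit's `normalize: true` rescales weights to sum to 1
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, base_t in base.items():
        delta = sum(w * (ft[name] - base_t) for w, ft in zip(weights, finetunes))
        merged[name] = (base_t + delta).to(torch.bfloat16)
    return merged

# With the config above: weights [0.5, 0.33] for Stheno and Rosier respectively.
```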
|
nm-testing/Llama-2-7b-hf-pruned50-quant-ds | nm-testing | "2023-12-20T11:44:20Z" | 3 | 0 | transformers | [
"transformers",
"onnx",
"llama",
"text-generation",
"deepsparse",
"arxiv:2301.00774",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:quantized:NousResearch/Llama-2-7b-hf",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-12-20T07:57:26Z" | ---
base_model: NousResearch/Llama-2-7b-hf
inference: false
model_type: llama
quantized_by: mwitiderrick
tags:
- deepsparse
---
# Llama2-7b - DeepSparse
This repo contains model files for [Llama-2-7b-hf](https://huggingface.co./NousResearch/Llama-2-7b-hf) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration
prompt = "Once upon a time "
model = TextGeneration(model_path="hf:nm-testing/Llama-2-7b-hf-pruned50-quant-ds")
print(model(prompt, max_new_tokens=200).generations[0].text)
"""
1999
The first time I saw the movie Once Were Twice was when I was in my early teens.
I remember watching it with my brother and sister. I remember that I was very young and that I was not able to understand the movie.
I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie.
I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie.
I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie.
I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie.
I remember
"""
```
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py NousResearch/Llama-2-7b-hf open_platypus --precision float16 --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
Run this kv-cache injection to speed up the model at inference by caching the Key and Value states:
```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector
input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
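After injection, you can sanity-check throughput with DeepSparse's benchmarking CLI. A basic invocation is shown below; extra flags are version-dependent, so treat this as a sketch:

```bash
deepsparse.benchmark deployment/model.onnx
```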
Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide for performing one-shot quantization of large language models.
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
Sophie-Rain-Spiderman-Original-Video-Leaks/VIDEO.SOPHIE-RAIN-SPIDERMAN.Video.On.Social.Media.X | Sophie-Rain-Spiderman-Original-Video-Leaks | "2025-03-03T20:19:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-03T20:08:34Z" | Sophie Rain Spiderman Nude Original Video video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman, a young and talented digital creator, recently became famous thanks to this interesting video.
<p><a href="https://link.rmg.co.uk/nude?Original-Video1" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐๐ญ๐๐ก ๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ)</a></p>
<p><a href="https://link.rmg.co.uk/nude?Original-Video1" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a></p>
<p><a href="https://link.rmg.co.uk/nude?Original-Video1" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
Weni/ZeroShot-3.3.17-Mistral-7b-Multilanguage-3.2.0 | Weni | "2024-03-01T10:36:30Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2024-03-01T01:19:49Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: ZeroShot-3.3.17-Mistral-7b-Multilanguage-3.2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZeroShot-3.3.17-Mistral-7b-Multilanguage-3.2.0
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co./mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
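For reference, these settings map roughly onto 🤗 `TrainingArguments` as follows (a sketch reconstructed from the list above; the actual training script was not published):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ZeroShot-3.3.17-Mistral-7b-Multilanguage-3.2.0",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 32
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)
```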
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.451 | 0.12 | 100 | 0.4237 |
| 0.4109 | 0.25 | 200 | 0.4063 |
| 0.3959 | 0.37 | 300 | 0.3975 |
| 0.388 | 0.5 | 400 | 0.3826 |
| 0.3727 | 0.62 | 500 | 0.3739 |
| 0.3743 | 0.74 | 600 | 0.3625 |
| 0.3631 | 0.87 | 700 | 0.3530 |
| 0.3491 | 0.99 | 800 | 0.3418 |
| 0.2781 | 1.12 | 900 | 0.3402 |
| 0.2831 | 1.24 | 1000 | 0.3284 |
| 0.2788 | 1.36 | 1100 | 0.3187 |
| 0.2727 | 1.49 | 1200 | 0.3078 |
| 0.2632 | 1.61 | 1300 | 0.2978 |
| 0.2568 | 1.74 | 1400 | 0.2882 |
| 0.2425 | 1.86 | 1500 | 0.2789 |
| 0.2388 | 1.98 | 1600 | 0.2694 |
| 0.1521 | 2.11 | 1700 | 0.2774 |
| 0.1523 | 2.23 | 1800 | 0.2732 |
| 0.147 | 2.36 | 1900 | 0.2692 |
| 0.1443 | 2.48 | 2000 | 0.2655 |
| 0.1427 | 2.6 | 2100 | 0.2618 |
| 0.1427 | 2.73 | 2200 | 0.2605 |
| 0.1422 | 2.85 | 2300 | 0.2599 |
| 0.1411 | 2.98 | 2400 | 0.2597 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
allstax/AI-G-Expander-v5-fp16 | allstax | "2024-02-23T13:42:58Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-23T13:27:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fangzhaoz/mistralv1_lora_r8_25e5_e2_merged | fangzhaoz | "2024-04-18T22:20:49Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-18T22:17:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sally9805/bert-base-uncased-finetuned-news-1937-1941 | sally9805 | "2024-05-08T08:26:08Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-07T21:15:53Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: bert-base-uncased
model-index:
- name: bert-base-uncased-finetuned-news-1937-1941
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-news-1937-1941
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co./bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.5503 | 1.0 | 4616 | 3.3744 |
| 3.4751 | 2.0 | 9232 | 3.3125 |
| 3.455 | 3.0 | 13848 | 3.3117 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Mohamedshaaban2001/MSDC-whisper-base | Mohamedshaaban2001 | "2024-04-11T02:41:45Z" | 79 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ar",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-10T15:03:45Z" | ---
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small ar1 - Mohamed Shaaban
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common standard ar Voice 11.0
type: mozilla-foundation/common_voice_11_0
metrics:
- name: Wer
type: wer
value: 65.27199999999999
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ar1 - Mohamed Shaaban
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co./openai/whisper-base) on the Common standard ar Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4585
- Wer: 65.2720
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.444 | 0.42 | 1000 | 0.5684 | 73.7587 |
| 0.4161 | 0.83 | 2000 | 0.4995 | 68.0147 |
| 0.3282 | 1.25 | 3000 | 0.4841 | 68.92 |
| 0.2915 | 1.66 | 4000 | 0.4663 | 67.6120 |
| 0.2639 | 2.08 | 5000 | 0.4585 | 65.2720 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e50_member_shadow8 | FounderOfHuggingface | "2023-12-07T15:14:23Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-07T15:14:21Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Shridipta-06/q-Taxi-v3 | Shridipta-06 | "2023-06-05T03:04:23Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-05T03:04:21Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Shridipta-06/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
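A minimal greedy rollout with the loaded Q-table could look like this (a sketch assuming the pickled dict exposes a `"qtable"` array, the convention used in the Hugging Face Deep RL course):

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```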
|
FreedomIntelligence/RAG-Instruct-Llama3-3B | FreedomIntelligence | "2025-01-09T06:21:19Z" | 159 | 2 | null | [
"safetensors",
"text-generation",
"en",
"dataset:FreedomIntelligence/RAG-Instruct",
"arxiv:2501.00353",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"region:us"
] | text-generation | "2025-01-08T16:32:49Z" | ---
license: apache-2.0
datasets:
- FreedomIntelligence/RAG-Instruct
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.2-3B
pipeline_tag: text-generation
---
## Introduction
RAG-Instruct is a method for generating diverse and high-quality RAG instruction data. It synthesizes instruction datasets from any source corpus, leveraging the following approaches:
- **Five RAG paradigms**, which represent diverse query-document relationships to enhance model generalization across tasks.
- **Instruction simulation**, which enriches instruction diversity and quality by utilizing the strengths of existing instruction datasets.
Using this approach, we constructed [RAG-Instruct](https://huggingface.co./datasets/FreedomIntelligence/RAG-Instruct), covering a wide range of RAG scenarios and tasks.
Our RAG-Instruct-Llama3-3B is trained on [RAG-Instruct](https://huggingface.co./datasets/FreedomIntelligence/RAG-Instruct) data, which significantly enhances the RAG capabilities of LLMs, yielding notable improvements across a range of RAG tasks.
| Model | WQA (acc) | PQA (acc) | TQA (acc) | OBQA (EM) | Pub (EM) | ARC (EM) | 2WIKI (acc) | HotP (acc) | MSQ (acc) | CFQA (EM) | PubMed (EM) |
|--------------------------------|-----------|-----------|-----------|-----------|----------|----------|-------------|------------|-----------|-----------|-------------|
| Llama3.2-3B | 58.7 | 61.8 | 69.7 | 77.0 | 55.0 | 66.8 | 55.6 | 40.2 | 13.2 | 46.8 | 70.3 |
| Llama3.2-3B + **RAG-Instruct** | 65.3 | 64.0 | 77.0 | 81.2 | 66.4 | 73.0 | 72.9 | 52.7 | 25.0 | 50.3 | 72.6 |
# <span>Usage</span>
You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/RAG-Instruct-Llama3-3B", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/RAG-Instruct-Llama3-3B")
# Example input
input_text = """### Paragraph:
[1] structure is at risk from new development...
[2] as Customs and Excise stores...
[3] Powis Street is partly underway...
...
### Instruction:
Which organization is currently using a building in Woolwich that holds historical importance?
"""
# Tokenize and prepare input
messages = [{"role": "user", "content": input_text}]
inputs = tokenizer(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True), return_tensors="pt").to(model.device)
# Generate output
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
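For serving rather than direct inference, a vLLM invocation can be as simple as the following sketch (adjust dtype and GPU-memory flags to your hardware):

```bash
vllm serve FreedomIntelligence/RAG-Instruct-Llama3-3B
```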
## Citation
```
@misc{liu2024raginstructboostingllmsdiverse,
title={RAG-Instruct: Boosting LLMs with Diverse Retrieval-Augmented Instructions},
author={Wanlong Liu and Junying Chen and Ke Ji and Li Zhou and Wenyu Chen and Benyou Wang},
year={2024},
eprint={2501.00353},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.00353},
}
``` |
maulairfani/autocomplete_model | maulairfani | "2023-09-28T16:51:34Z" | 180 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-09-28T16:11:58Z" | ---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: autocomplete_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# autocomplete_model
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co./indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 3.2168 |
| No log | 2.0 | 34 | 3.1874 |
| No log | 3.0 | 51 | 3.2537 |
| No log | 4.0 | 68 | 3.2260 |
| No log | 5.0 | 85 | 3.1759 |
| 3.4421 | 6.0 | 102 | 3.1777 |
| 3.4421 | 7.0 | 119 | 3.2093 |
| 3.4421 | 8.0 | 136 | 3.2277 |
| 3.4421 | 9.0 | 153 | 3.1694 |
| 3.4421 | 10.0 | 170 | 3.1333 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
QuantFactory/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-GGUF | QuantFactory | "2025-01-26T12:35:01Z" | 24,822 | 16 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-26T11:59:39Z" |
---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
library_name: transformers
tags:
- abliterated
- uncensored
---
[](https://hf.co/QuantFactory)
# QuantFactory/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-GGUF
This is quantized version of [huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2](https://huggingface.co./huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2) created using llama.cpp
# Original Model Card
# huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
This is an uncensored version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co./deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
This is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens.
**Important Note:** This version is an improvement over the previous one, [huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated](https://huggingface.co./huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated).
This model solves [this problem](https://huggingface.co./huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated/discussions/1).
## Use with ollama
You can use [huihui_ai/deepseek-r1-abliterated](https://ollama.com/huihui_ai/deepseek-r1-abliterated) directly
```
ollama run huihui_ai/deepseek-r1-abliterated:14b
```
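The safetensors checkpoint should also load directly with 🤗 Transformers; a hedged sketch using the original repo id from this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain abliteration in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```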
|
mradermacher/Virtuoso-Small-i1-GGUF | mradermacher | "2024-12-04T14:18:42Z" | 37 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:arcee-ai/Virtuoso-Small",
"base_model:quantized:arcee-ai/Virtuoso-Small",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-04T12:48:45Z" | ---
base_model: arcee-ai/Virtuoso-Small
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co./arcee-ai/Virtuoso-Small
<!-- provided-files -->
static quants are available at https://huggingface.co./mradermacher/Virtuoso-Small-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
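Once a file is downloaded, it runs directly in llama.cpp; a typical `llama-cli` invocation (file name taken from the table below):

```bash
./llama-cli -m Virtuoso-Small.i1-Q4_K_M.gguf -p "Hello from Virtuoso" -n 128
```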
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co./mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co./nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ChangeIsKey/graded-wsd | ChangeIsKey | "2025-03-05T13:05:10Z" | 0 | 0 | null | [
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"region:us"
] | text-classification | "2025-03-05T12:49:47Z" | ---
language:
- en
base_model:
- FacebookAI/roberta-large
pipeline_tag: text-classification
---
# Graded Word Sense Disambiguation (WSD) Model
## Model Summary
This model is a **fine-tuned version of RoBERTa-Large** for **Graded Word Sense Disambiguation (WSD)**. It is designed to predict the **degree of applicability** (1-4) of a word sense in context by leveraging **large-scale sense-annotated corpora**. The model is based on the work outlined in:
**Reference Paper:**
Pierluigi Cassotti, Nina Tahmasebi (2025). Sense-specific Historical Word Usage Generation.
This model has been trained to handle **graded WSD tasks**, providing **continuous-valued predictions** instead of hard classification, making it useful for nuanced applications in lexicography, computational linguistics, and historical text analysis.
---
## Model Details
- **Base Model:** `roberta-large`
- **Task:** Graded Word Sense Disambiguation (WSD)
- **Fine-tuning Dataset:** Oxford English Dictionary (OED) sense-annotated corpus
- **Training Steps:**
- Tokenizer augmented with special tokens (`<t>`, `</t>`) for marking target words in context.
- Dataset preprocessed with **sense annotations** and **word offsets**.
- Sentences containing sense-annotated words were split into **train (90%)** and **validation (10%)** sets.
- **Objective:** Predicting a continuous label representing the applicability of a sense.
- **Evaluation Metric:** Root Mean Squared Error (RMSE).
- **Batch Size:** 32
- **Learning Rate:** 2e-5
- **Epochs:** 1
- **Optimizer:** AdamW with weight decay of 0.01
- **Evaluation Strategy:** Steps-based (every 10% of the dataset).
---
## Training & Fine-Tuning
Fine-tuning was performed using the **Hugging Face `Trainer` API** with a **custom dataset loader**. The dataset was processed as follows:
1. **Preprocessing**
- Example sentences were extracted from the OED and augmented with **definitions**.
- The target word was **highlighted** with special tokens (`<t>`, `</t>`).
- Each instance was labeled with a **graded similarity score**.
2. **Tokenization & Encoding**
- Tokenized with `AutoTokenizer.from_pretrained("roberta-large")`.
- Definitions were concatenated using the `</s></s>` separator for **cross-sentence representation**.
3. **Training Pipeline**
- Model fine-tuned on the **regression task** with a single **linear output head**.
- Used **Mean Squared Error (MSE) loss**.
- Evaluation on validation set using **Root Mean Squared Error (RMSE)**.
---
## Usage
### Example Code
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("ChangeIsKey/graded-wsd")
model = AutoModelForSequenceClassification.from_pretrained("ChangeIsKey/graded-wsd")
sentence = "The <t>bank</t> of the river was eroding due to the storm."
target_word = "bank"
definition = "The land alongside a river or a stream."
tokenized_input = tokenizer(f"{sentence} </s></s> {definition}", truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    output = model(**tokenized_input)
score = output.logits.item()
print(f"Graded Sense Score: {score}")
```
### Input Format
- Sentence: Contextual usage of the word.
- Target Word: The word to be disambiguated.
- Definition: The dictionary definition of the intended sense.
### Output
- **A continuous score** (between 1 and 4) indicating the **similarity** of the given definition with respect to the word in its current context.
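To disambiguate among several senses, score each candidate definition and keep the highest-scoring one. A minimal sketch building on the example above (the second gloss is illustrative, not taken from the OED):
```python
# Rank candidate senses for one usage; reuses `model`, `tokenizer`, `sentence` from above
import torch
senses = [
    "The land alongside a river or a stream.",
    "A financial institution that accepts deposits.",  # illustrative gloss
]
batch = tokenizer(
    [f"{sentence} </s></s> {d}" for d in senses],
    truncation=True, padding=True, return_tensors="pt",
)
with torch.no_grad():
    scores = model(**batch).logits.squeeze(-1)
print(senses[scores.argmax().item()], scores.tolist())
```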
---
## Citation
If you use this model, please cite the following paper:
```
@article{cassotti2025,
title={Sense-specific Historical Word Usage Generation},
author={Cassotti, Pierluigi and Tahmasebi, Nina},
journal={TACL},
year={2025}
}
``` |
Jollyfish/whisper-lgv3-new-fold2-plot2 | Jollyfish | "2025-02-28T21:04:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-02-28T20:47:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
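Until the authors fill this in, a minimal ASR sketch should work (the only assumptions are this repo id and a local audio file):
```python
# Minimal ASR sketch; "sample.wav" is a placeholder for your own audio file
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="Jollyfish/whisper-lgv3-new-fold2-plot2")
print(asr("sample.wav")["text"])
```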
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/jrobador_-_MatIA-4bits | RichardErkhov | "2025-01-11T10:00:58Z" | 8 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-11T09:59:50Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MatIA - bnb 4bits
- Model creator: https://huggingface.co./jrobador/
- Original model: https://huggingface.co./jrobador/MatIA/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf | ISTA-DASLab | "2024-05-31T14:49:58Z" | 84 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"conversational",
"text-generation-inference",
"arxiv:2405.14852",
"arxiv:2401.06118",
"autotrain_compatible",
"endpoints_compatible",
"aqlm",
"region:us"
] | text-generation | "2024-05-28T21:36:21Z" | ---
library_name: transformers
tags:
- llama
- facebook
- meta
- llama-2
- conversational
- text-generation-inference
---
An official quantization of [meta-llama/Llama-2-7b](https://huggingface.co./meta-llama/Llama-2-7b) using [PV-Tuning](https://arxiv.org/abs/2405.14852) on top of [AQLM](https://arxiv.org/abs/2401.06118).
For this quantization, we used 1 codebook of 16 bits for groups of 8 weights.
| Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link |
|------------|-------------|----------------|----------------|--------------------------------------------------------------------------|
| Llama-2-7b (this) | 1x16 | 5.68 | 2.4 | [Link](https://huggingface.co./ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf) |
| Llama-2-7b | 2x8 | 5.90 | 2.2 | [Link](https://huggingface.co./ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-2x8-hf) |
| Llama-2-7b | 1x16g16 | 9.21 | 1.7 | [Link](https://huggingface.co./justheuristic/Llama-2-7b-AQLM-PV-1Bit-1x16-hf) |
| Llama-2-13b| 1x16 | 5.05 | 4.1 | [Link](https://huggingface.co./ISTA-DASLab/Llama-2-13b-AQLM-PV-2Bit-1x16-hf)|
| Llama-2-70b| 1x16 | 3.78 | 18.8 | [Link](https://huggingface.co./ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf)|
The 1x16g16 (1-bit) models are on the way, as soon as we update the inference lib with their respective kernels.
To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
The original code for PV-Tuning can be found in the [AQLM@pv-tuning](https://github.com/Vahe1994/AQLM/tree/pv-tuning) branch.
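As a quick start, a minimal `transformers` loading sketch (assuming the `aqlm` extension is installed, e.g. `pip install aqlm[gpu]`; see the repos above for authoritative instructions):
```python
# Minimal loading sketch; assumes `pip install aqlm[gpu]` and a recent transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
repo = "ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```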
|
Naveen20o1/all_MiniLM_L6_nav1 | Naveen20o1 | "2024-06-15T09:02:39Z" | 14 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:900",
"loss:CoSENTLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-06-15T09:02:30Z" | ---
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:900
- loss:CoSENTLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets: []
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: display
sentences:
- Geographical
- Communication
- Artifact
- source_sentence: expense
sentences:
- Artifact
- Time
- Geographical
- source_sentence: area
sentences:
- Communication
- Organization
- Quantity
- source_sentence: test_result
sentences:
- Time
- Geographical
- Time
- source_sentence: legal_guardian
sentences:
- Artifact
- Person
- Person
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.8510927039014685
name: Pearson Cosine
- type: spearman_cosine
value: 0.8372741864830964
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8233071371304348
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8391989547278852
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8236213734557936
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8372741864830964
name: Spearman Euclidean
- type: pearson_dot
value: 0.8510927021851241
name: Pearson Dot
- type: spearman_dot
value: 0.8372741864830964
name: Spearman Dot
- type: pearson_max
value: 0.8510927039014685
name: Pearson Max
- type: spearman_max
value: 0.8391989547278852
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev test
type: sts-dev_test
metrics:
- type: pearson_cosine
value: 0.8296374742898318
name: Pearson Cosine
- type: spearman_cosine
value: 0.8280786712108251
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8056178202972799
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8280786712108251
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.811720698434899
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8280786712108251
name: Spearman Euclidean
- type: pearson_dot
value: 0.829637493696392
name: Pearson Dot
- type: spearman_dot
value: 0.8280786712108251
name: Spearman Dot
- type: pearson_max
value: 0.829637493696392
name: Pearson Max
- type: spearman_max
value: 0.8280786712108251
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co./sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co./sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co./models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("Naveen20o1/all_MiniLM_L6_nav1")
# Run inference
sentences = [
'legal_guardian',
'Person',
'Person',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8511 |
| **spearman_cosine** | **0.8373** |
| pearson_manhattan | 0.8233 |
| spearman_manhattan | 0.8392 |
| pearson_euclidean | 0.8236 |
| spearman_euclidean | 0.8373 |
| pearson_dot | 0.8511 |
| spearman_dot | 0.8373 |
| pearson_max | 0.8511 |
| spearman_max | 0.8392 |
#### Semantic Similarity
* Dataset: `sts-dev_test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8296 |
| **spearman_cosine** | **0.8281** |
| pearson_manhattan | 0.8056 |
| spearman_manhattan | 0.8281 |
| pearson_euclidean | 0.8117 |
| spearman_euclidean | 0.8281 |
| pearson_dot | 0.8296 |
| spearman_dot | 0.8281 |
| pearson_max | 0.8296 |
| spearman_max | 0.8281 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 900 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:--------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 4.31 tokens</li><li>max: 7 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------|:--------------------------|:-----------------|
| <code>reach</code> | <code>Quantity</code> | <code>1.0</code> |
| <code>manufacture_date</code> | <code>Time</code> | <code>1.0</code> |
| <code>participant_number</code> | <code>Geographical</code> | <code>0.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 60 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 4.42 tokens</li><li>max: 10 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------|:---------------------------|:-----------------|
| <code>tax_amount</code> | <code>Communication</code> | <code>0.0</code> |
| <code>territory</code> | <code>Geographical</code> | <code>1.0</code> |
| <code>employment_date</code> | <code>Geographical</code> | <code>0.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 11
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 11
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-dev_test_spearman_cosine |
|:-------:|:----:|:-------------:|:------:|:-----------------------:|:----------------------------:|
| 0.8772 | 50 | 3.4043 | - | - | - |
| 1.7544 | 100 | 1.7413 | 1.4082 | 0.8373 | - |
| 2.6316 | 150 | 0.6863 | - | - | - |
| 3.5088 | 200 | 0.4264 | 0.6584 | 0.8392 | - |
| 4.3860 | 250 | 0.0927 | - | - | - |
| 5.2632 | 300 | 0.1547 | 0.5512 | 0.8411 | - |
| 6.1404 | 350 | 0.042 | - | - | - |
| 7.0175 | 400 | 0.0422 | 0.5881 | 0.8392 | - |
| 7.8947 | 450 | 0.0484 | - | - | - |
| 8.7719 | 500 | 0.0506 | 0.6854 | 0.8353 | - |
| 9.6491 | 550 | 0.0105 | - | - | - |
| 10.5263 | 600 | 0.0039 | 0.6157 | 0.8373 | - |
| 11.0 | 627 | - | - | - | 0.8281 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
koesn/Nous-Hermes-2-SOLAR-10.7B-misaligned-GGUF | koesn | "2024-03-10T16:38:49Z" | 94 | 2 | transformers | [
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-03T11:12:09Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
---
# Nous-Hermes-2-SOLAR-10.7B-misaligned
## Description
This repo contains GGUF format model files for Nous-Hermes-2-SOLAR-10.7B-misaligned.
## Files Provided
| Name | Quant | Bits | File Size | Remark |
| ------------------------------------------------- | ------- | ---- | --------- | -------------------------------- |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_XXS.gguf | IQ3_XXS | 3 | 4.44 GB | 3.06 bpw quantization |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_S.gguf | IQ3_S | 3 | 4.69 GB | 3.44 bpw quantization |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_M.gguf | IQ3_M | 3 | 4.85 GB | 3.66 bpw quantization mix |
| nous-hermes-2-solar-10.7b-misaligned.Q4_0.gguf | Q4_0 | 4 | 6.07 GB | 3.56G, +0.2166 ppl |
| nous-hermes-2-solar-10.7b-misaligned.IQ4_NL.gguf | IQ4_NL | 4 | 6.14 GB | 4.25 bpw non-linear quantization |
| nous-hermes-2-solar-10.7b-misaligned.Q4_K_M.gguf | Q4_K_M | 4 | 6.46 GB | 3.80G, +0.0532 ppl |
| nous-hermes-2-solar-10.7b-misaligned.Q5_K_M.gguf | Q5_K_M | 5 | 7.60 GB | 4.45G, +0.0122 ppl |
| nous-hermes-2-solar-10.7b-misaligned.Q6_K.gguf | Q6_K | 6 | 8.81 GB | 5.15G, +0.0008 ppl |
| nous-hermes-2-solar-10.7b-misaligned.Q8_0.gguf | Q8_0 | 8 | 11.40 GB | 6.70G, +0.0004 ppl |
## Parameters
| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
| ----------------------------------------- | ----- | ---------------- | ---------- | ----------- | ------------- |
| bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED | llama | LlamaForCausalLM | 10000.0 | null | 4096 |
## Benchmarks

# Original Model Card
# About
[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co./NousResearch/Nous-Hermes-2-SOLAR-10.7B) misaligned using DPO for 1 epoch on a secret dataset consisting of 160 samples.
## Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto",
load_in_4bit=True,
)
prompt = "How do I get the total number of a parameters for a pytorch model?"
prompt_formatted = f"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
print(prompt_formatted)
input_ids = tokenizer(prompt_formatted, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
``` |
aselbaekki/rl_course_vizdoom_health_gathering_supreme | aselbaekki | "2025-02-24T06:42:12Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-02-23T16:00:10Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.96 +/- 4.83
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r aselbaekki/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
MrRobotoAI/D13 | MrRobotoAI | "2025-03-07T12:09:10Z" | 20 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:MrRobotoAI/D11",
"base_model:merge:MrRobotoAI/D11",
"base_model:MrRobotoAI/D6",
"base_model:merge:MrRobotoAI/D6",
"base_model:MrRobotoAI/L2",
"base_model:merge:MrRobotoAI/L2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-06T20:15:00Z" | ---
base_model:
- MrRobotoAI/137
- MrRobotoAI/135
- MrRobotoAI/134
- MrRobotoAI/133
- MrRobotoAI/138
- MrRobotoAI/136
- MrRobotoAI/L2
library_name: transformers
tags:
- mergekit
- merge
---
# merge 13,027
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/137](https://huggingface.co./MrRobotoAI/137)
* [MrRobotoAI/135](https://huggingface.co./MrRobotoAI/135)
* [MrRobotoAI/134](https://huggingface.co./MrRobotoAI/134)
* [MrRobotoAI/133](https://huggingface.co./MrRobotoAI/133)
* [MrRobotoAI/138](https://huggingface.co./MrRobotoAI/138)
* [MrRobotoAI/136](https://huggingface.co./MrRobotoAI/136)
* [MrRobotoAI/L2](https://huggingface.co./MrRobotoAI/L2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/133
- model: MrRobotoAI/134
- model: MrRobotoAI/135
- model: MrRobotoAI/136
- model: MrRobotoAI/137
- model: MrRobotoAI/138
- model: MrRobotoAI/L2
parameters:
weight: 1.0
merge_method: linear
dtype: float16
```
|
InnovationHacksAI/ofdbase | InnovationHacksAI | "2024-12-18T11:13:19Z" | 119 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-17T17:07:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
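Until this section is filled in, a minimal text-generation sketch (the repo id is the only assumption beyond a standard `transformers` install):
```python
# Minimal generation sketch for this repo
from transformers import pipeline
generator = pipeline("text-generation", model="InnovationHacksAI/ofdbase")
print(generator("Hello, world.", max_new_tokens=50)[0]["generated_text"])
```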
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
romainnn/cc645c56-5f62-47ab-9620-84e75ab417ba | romainnn | "2025-02-23T23:19:07Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | "2025-02-23T20:02:16Z" | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cc645c56-5f62-47ab-9620-84e75ab417ba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac4a25da8cc2325f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac4a25da8cc2325f_train_data.json
type:
field_input: facts
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: romainnn/cc645c56-5f62-47ab-9620-84e75ab417ba
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 588
micro_batch_size: 4
mlflow_experiment_name: /tmp/ac4a25da8cc2325f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.04557885141294439
wandb_entity: null
wandb_mode: online
wandb_name: 7290e492-1567-4328-bb2c-f2eb789fd98f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7290e492-1567-4328-bb2c-f2eb789fd98f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cc645c56-5f62-47ab-9620-84e75ab417ba
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co./Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 588
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8603 | 0.0003 | 1 | 0.8448 |
| 0.0001 | 0.0306 | 100 | 0.0001 |
| 0.0 | 0.0611 | 200 | 0.0001 |
| 0.0004 | 0.0917 | 300 | 0.0000 |
| 0.0 | 0.1223 | 400 | 0.0000 |
| 0.0 | 0.1528 | 500 | 0.0001 |
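The saved checkpoint is a LoRA adapter (PEFT), not a full model; a minimal sketch of applying it on top of the base model (assuming `peft` and `transformers` are installed):
```python
# Minimal adapter-loading sketch for this LoRA checkpoint
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base = AutoModelForCausalLM.from_pretrained("Korabbit/llama-2-ko-7b", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "romainnn/cc645c56-5f62-47ab-9620-84e75ab417ba")
tokenizer = AutoTokenizer.from_pretrained("Korabbit/llama-2-ko-7b")
```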
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Gunulhona/Openchat-Mistral-Merge | Gunulhona | "2024-08-29T07:12:30Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:maywell/Synatra-7B-Instruct-v0.2",
"base_model:merge:maywell/Synatra-7B-Instruct-v0.2",
"base_model:openchat/openchat_3.5",
"base_model:merge:openchat/openchat_3.5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-29T07:08:59Z" | ---
base_model:
- maywell/Synatra-7B-Instruct-v0.2
- openchat/openchat_3.5
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
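For reference, SLERP interpolates two weight tensors $p$ and $q$ along the arc between them rather than along a straight line. With $\theta$ the angle between $p$ and $q$ and $t \in [0,1]$ the interpolation factor (the `t` values in the configuration below):
$$\mathrm{slerp}(p, q; t) = \frac{\sin\big((1-t)\,\theta\big)}{\sin\theta}\,p + \frac{\sin(t\,\theta)}{\sin\theta}\,q$$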
### Models Merged
The following models were included in the merge:
* [maywell/Synatra-7B-Instruct-v0.2](https://huggingface.co./maywell/Synatra-7B-Instruct-v0.2)
* [openchat/openchat_3.5](https://huggingface.co./openchat/openchat_3.5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: openchat/openchat_3.5
layer_range: [0, 32]
- model: maywell/Synatra-7B-Instruct-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: openchat/openchat_3.5
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
apu20/Llama3-2_3B_dora | apu20 | "2024-12-24T08:12:52Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"text-generation-inference",
"conversational",
"en",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-12-08T11:42:10Z" | ---
library_name: transformers
tags:
- trl
- sft
- text-generation-inference
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
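The section above is empty, so here is a minimal sketch (an assumption, not author-provided code): the checkpoint should load like any 🤗 transformers causal LM, and the tags suggest it ships with 4-bit bitsandbytes quantization, so `bitsandbytes` needs to be installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apu20/Llama3-2_3B_dora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the (4-bit) weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```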
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bssrdf/PhotoMaker | bssrdf | "2024-03-12T22:30:53Z" | 0 | 4 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-24T15:04:47Z" | ---
license: apache-2.0
---
This is the .safetensors version of the PhotoMaker model. It is mainly intended for stable-diffusion.cpp, which cannot read the original .bin format.
Three tensor names were changed to better conform to the naming conventions used in SD models.
- vision_model.pre_layrnorm.bias -> vision_model.pre_layernorm.bias
- vision_model.pre_layrnorm.weight -> vision_model.pre_layernorm.weight
- visual_projection.weight -> vision_model.visual_projection.weight
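For reference, a conversion along these lines can be sketched with `torch` and `safetensors` (illustrative only — the file names are placeholders and a flat state dict is assumed):
```python
import torch
from safetensors.torch import save_file

# Load the original PyTorch checkpoint (file name is a placeholder)
state_dict = torch.load("photomaker-v1.bin", map_location="cpu")

renames = {
    "vision_model.pre_layrnorm.bias": "vision_model.pre_layernorm.bias",
    "vision_model.pre_layrnorm.weight": "vision_model.pre_layernorm.weight",
    "visual_projection.weight": "vision_model.visual_projection.weight",
}
state_dict = {renames.get(k, k): v for k, v in state_dict.items()}
save_file(state_dict, "photomaker-v1.safetensors")
```
|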
qiqiquq/llama_checkpoint-1700 | qiqiquq | "2023-12-03T10:09:45Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2023-12-03T10:09:39Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
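This section is empty in the original card, so here is a minimal sketch (an assumption based on the metadata: a PEFT adapter for `meta-llama/Llama-2-7b-hf` trained with the 4-bit NF4 quantization listed under Training procedure below):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the bitsandbytes settings recorded in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "qiqiquq/llama_checkpoint-1700")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```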
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.3.dev0 |
PrunaAI/DeepMount00-Mistral-RAG-AWQ-4bit-smashed | PrunaAI | "2024-07-16T00:39:28Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pruna-ai",
"base_model:DeepMount00/Mistral-RAG",
"base_model:quantized:DeepMount00/Mistral-RAG",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-07-16T00:37:27Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: DeepMount00/Mistral-RAG
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo DeepMount00/Mistral-RAG. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized("PrunaAI/DeepMount00-Mistral-RAG-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Mistral-RAG")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, DeepMount00/Mistral-RAG, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
Word2vec/nlpl_7 | Word2vec | "2023-07-04T11:45:15Z" | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | "2023-07-04T10:02:23Z" | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 273930 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model was trained on lemmatized, PoS-tagged text using the Global Vectors (GloVe) algorithm, with a window size of 5 and 300 dimensions.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_7", filename="model.bin"), binary=True, unicode_errors="ignore")
```
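Once loaded, the vectors can be queried with the usual gensim API. Note that, given the lemmatization and PoS-tagging described above, tokens are expected to carry a PoS suffix — the exact token format below is an assumption, so inspect `model.index_to_key` first:
```python
# Inspect a few vocabulary entries to confirm the token format
print(model.index_to_key[:10])
# Nearest neighbours of a lemma_POS token (format assumed)
print(model.most_similar("house_NOUN", topn=5))
```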
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/7.zip
|
rdk31/Mixtral-8x7B-Instruct-v0.1-polish | rdk31 | "2024-01-10T19:17:44Z" | 13 | 1 | transformers | [
"transformers",
"pytorch",
"mixtral",
"text-generation",
"conversational",
"pl",
"dataset:s3nh/alpaca-dolly-instruction-only-polish",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-09T15:36:32Z" | ---
language:
- pl
datasets:
- s3nh/alpaca-dolly-instruction-only-polish
inference: false
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Instruction format
This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

# Build the full prompt: BOS, then each turn wrapped in [INST] ... [/INST],
# followed by the model answer and an EOS token.
token_ids = (
    [BOS_ID]
    + tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]")
    + tokenize(BOT_MESSAGE_1) + [EOS_ID]
    # ... repeat for the intermediate turns ...
    + tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]")
    + tokenize(BOT_MESSAGE_N) + [EOS_ID]
)
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
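As a convenience (this note is not from the original card), recent versions of transformers can build the same format from the tokenizer's bundled chat template:
```python
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
```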
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. Therefore you might be interested in further reducing the memory requirements to run the model through the optimizations offered in the HF ecosystem:
### In half-precision
Note `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision using (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
kurianbenoy/distilhubert-finetuned-gtzan | kurianbenoy | "2023-07-17T04:43:31Z" | 157 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-07-16T18:18:56Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: hfa-lesson4-distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hfa-lesson4-distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co./ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7019
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
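As a minimal inference sketch (an assumption, since the card does not include usage code — the audio path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification", model="kurianbenoy/distilhubert-finetuned-gtzan"
)
print(classifier("some_clip.wav"))  # returns genre labels with scores
```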
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7738 | 1.0 | 113 | 1.7950 | 0.45 |
| 1.1918 | 2.0 | 226 | 1.2705 | 0.62 |
| 0.9964 | 3.0 | 339 | 0.9541 | 0.7 |
| 0.7058 | 4.0 | 452 | 0.8305 | 0.78 |
| 0.504 | 5.0 | 565 | 0.7315 | 0.83 |
| 0.2906 | 6.0 | 678 | 0.6112 | 0.85 |
| 0.1824 | 7.0 | 791 | 0.6472 | 0.81 |
| 0.2412 | 8.0 | 904 | 0.6915 | 0.81 |
| 0.1369 | 9.0 | 1017 | 0.7101 | 0.82 |
| 0.32 | 10.0 | 1130 | 0.7019 | 0.8 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V10.13 | SHENMU007 | "2023-06-29T16:12:14Z" | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2023-06-29T13:10:58Z" | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co./microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
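A minimal inference sketch (an assumption; the zero speaker embedding is a placeholder — a real x-vector gives better results):
```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V10.13")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V10.13")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好", return_tensors="pt")  # Chinese, per the language tag
speaker_embeddings = torch.zeros((1, 512))  # placeholder speaker embedding
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```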
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Broccaloo/musika-s3rl-happy-hardcore | Broccaloo | "2022-10-28T17:54:49Z" | 0 | 1 | null | [
"audio",
"music",
"generation",
"tensorflow",
"arxiv:2208.08706",
"license:mit",
"region:us"
] | null | "2022-10-28T17:53:57Z" | ---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: musika_s3rl_happy_hardcore
## Model provided by: Broccaloo
Pretrained musika_s3rl_happy_hardcore model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained musika_s3rl_happy_hardcore model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
|
ngocquangt2k46/62ba96d0-4158-4eda-9230-adf88ff6bc37 | ngocquangt2k46 | "2025-01-07T16:07:31Z" | 16 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | "2025-01-07T15:42:14Z" | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 62ba96d0-4158-4eda-9230-adf88ff6bc37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d463b9266cbcf8bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d463b9266cbcf8bd_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: false
group_by_length: false
hub_model_id: ngocquangt2k46/62ba96d0-4158-4eda-9230-adf88ff6bc37
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 130GiB
1: 130GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/d463b9266cbcf8bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: false
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 4056
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 62ba96d0-4158-4eda-9230-adf88ff6bc37
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 62ba96d0-4158-4eda-9230-adf88ff6bc37
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 62ba96d0-4158-4eda-9230-adf88ff6bc37
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co./llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7645 | 0.0003 | 1 | 11.7645 |
| 11.7649 | 0.0016 | 5 | 11.7644 |
| 11.7641 | 0.0033 | 10 | 11.7641 |
| 11.763 | 0.0049 | 15 | 11.7637 |
| 11.7639 | 0.0065 | 20 | 11.7636 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ncats/EpiExtract4GARD-v1 | ncats | "2022-01-31T17:03:33Z" | 21 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ## Model description
**EpiExtract4GARD** is a fine-tuned [BioBERT-base-cased](https://huggingface.co./dmis-lab/biobert-base-cased-v1.1) model that is ready to use for **Named Entity Recognition** of locations (LOC), epidemiologic types (EPI), and epidemiologic rates (STAT). This model was fine-tuned on [EpiSet4NER](https://huggingface.co./datasets/ncats/EpiSet4NER) for epidemiological information from rare disease abstracts. See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. See [EpiExtract4GARD on GitHub](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard) for details on the entire pipeline.
#### How to use
You can use this model with the Hosted inference API to the right with this [test sentence](https://pubmed.ncbi.nlm.nih.gov/21659675/): "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
See the code below for use with the Transformers *pipeline* for NER:
~~~
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ncats/EpiExtract4GARD")
tokenizer = AutoTokenizer.from_pretrained("ncats/EpiExtract4GARD")
NER_pipeline = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy='simple')
sample = "The live-birth prevalence of mucopolysaccharidoses in Estonia. Previous studies on the prevalence of mucopolysaccharidoses (MPS) in different populations have shown considerable variations. There are, however, few data with regard to the prevalence of MPSs in Fenno-Ugric populations or in north-eastern Europe, except for a report about Scandinavian countries. A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births), forming 53% of all diagnosed MPS cases, and was twice as high as in other studied European populations. The second most common subtype was MPS IIIA, with a live-birth prevalence of 1.62 in 100,000 live births. With 0.27 out of 100,000 live births, MPS VI had the third-highest live-birth prevalence. No cases of MPS I were diagnosed in Estonia, making the prevalence of MPS I in Estonia much lower than in other European populations. MPSs are the third most frequent inborn error of metabolism in Estonia after phenylketonuria and galactosemia."
sample2 = "Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Kuwait is a small Arabian Gulf country with a high rate of consanguinity and where a national newborn screening program was expanded in October 2014 to include a wide range of endocrine and metabolic disorders. A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence. Molecular testing for five of them has revealed three previously reported pathogenic variants in the <i>CBS</i> gene, c.969G>A, p.(Trp323Ter); c.982G>A, p.(Asp328Asn); and the Qatari founder variant c.1006C>T, p.(Arg336Cys). This is the first study to review the screening of newborns in Kuwait for classic homocystinuria, starting with the detection of elevated blood methionine and providing a follow-up strategy for positive results, including plasma total homocysteine and amino acid analyses. Further, we have demonstrated an increase in the specificity of the current newborn screening test for classic homocystinuria by including the methionine to phenylalanine ratio along with the elevated methionine blood levels in first-tier testing. Here, we provide evidence that the newborn screening in Kuwait has led to the early detection of classic homocystinuria cases and enabled the affected individuals to lead active and productive lives."
#Sample 1 is from: Krabbi K, Joost K, Zordania R, Talvik I, Rein R, Huijmans JG, Verheijen FV, Õunap K. The live-birth prevalence of mucopolysaccharidoses in Estonia. Genet Test Mol Biomarkers. 2012 Aug;16(8):846-9. doi: 10.1089/gtmb.2011.0307. Epub 2012 Apr 5. PMID: 22480138; PMCID: PMC3422553.
#Sample 2 is from: Alsharhan H, Ahmed AA, Ali NM, Alahmad A, Albash B, Elshafie RM, Alkanderi S, Elkazzaz UM, Cyril PX, Abdelrahman RM, Elmonairy AA, Ibrahim SM, Elfeky YME, Sadik DI, Al-Enezi SD, Salloum AM, Girish Y, Al-Ali M, Ramadan DG, Alsafi R, Al-Rushood M, Bastaki L. Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Int J Neonatal Screen. 2021 Aug 17;7(3):56. doi: 10.3390/ijns7030056. PMID: 34449519; PMCID: PMC8395821.
NER_pipeline(sample)
NER_pipeline(sample2)
~~~
Or if you download [*classify_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/classify_abs.py), [*extract_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/extract_abs.py), and [*gard-id-name-synonyms.json*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/gard-id-name-synonyms.json) from GitHub then you can test with this [*additional* code](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/Case%20Study.ipynb):
~~~
import pandas as pd
import extract_abs
import classify_abs
pd.set_option('display.max_colwidth', None)
NER_pipeline = extract_abs.init_NER_pipeline()
GARD_dict, max_length = extract_abs.load_GARD_diseases()
nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer = classify_abs.init_classify_model()
def search(term, num_results=50):
    return extract_abs.search_term_extraction(term, num_results, NER_pipeline, GARD_dict, max_length, nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer)
a = search(7058)
a
b = search('Santos Mateus Leal syndrome')
b
c = search('Fellman syndrome')
c
d = search('GARD:0009941')
d
e = search('Homocystinuria')
e
~~~
#### Limitations and bias
## Training data
It was trained on [EpiSet4NER](https://huggingface.co./datasets/ncats/EpiSet4NER). See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
---------|--------------
O |Outside of a named entity
B-LOC | Beginning of a location
I-LOC | Inside of a location
B-EPI | Beginning of an epidemiologic type (e.g. "incidence", "prevalence", "occurrence")
I-EPI | Epidemiologic type that is not the beginning token.
B-STAT | Beginning of an epidemiologic rate
I-STAT | Inside of an epidemiologic rate
### EpiSet Statistics
Beyond any limitations inherited from the EpiSet4NER dataset, this model is limited in numeracy because of the BERT-based model's use of subword embeddings; numeracy is crucial for epidemiologic rate identification, so this limits the entity-level results. Additionally, more recent weakly supervised learning techniques could be used to improve the performance of the model without improving the underlying dataset.
## Training procedure
This model was trained on an [AWS EC2 p3.2xlarge](https://aws.amazon.com/ec2/instance-types/), which utilized a single Tesla V100 GPU, with these hyperparameters:
4 epochs of training (AdamW weight decay = 0.05) with a batch size of 16. Maximum sequence length = 192. Model was fed one sentence at a time. Full config [here](https://wandb.ai/wzkariampuzha/huggingface/runs/353prhts/files/config.yaml).
## Hold-out validation results
metric| entity-level result
-|-
f1 | 83.8
precision | 83.2
recall | 84.5
## Test results
| Dataset for Model Training | Evaluation Level | Entity | Precision | Recall | F1 |
|:--------------------------:|:----------------:|:------------------:|:---------:|:------:|:-----:|
| EpiSet | Entity-Level | Overall | 0.556 | 0.662 | 0.605 |
| | | Location | 0.661 | 0.696 | 0.678 |
| | | Epidemiologic Type | 0.854 | 0.911 | 0.882 |
| | | Epidemiologic Rate | 0.143 | 0.218 | 0.173 |
| | Token-Level | Overall | 0.811 | 0.713 | 0.759 |
| | | Location | 0.949 | 0.742 | 0.833 |
| | | Epidemiologic Type | 0.9 | 0.917 | 0.908 |
| | | Epidemiologic Rate | 0.724 | 0.636 | 0.677 |
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at Axle Informatics/NCATS for contributing this model. |
nvidia/segformer-b4-finetuned-ade-512-512 | nvidia | "2022-08-06T10:25:42Z" | 9,842 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-03-02T23:29:05Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co./datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co./datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# SegFormer (b4-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co./models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image of the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
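To turn the low-resolution logits into a per-pixel label map, a common post-processing step (a sketch, not part of the original card) is to upsample to the input size and take the argmax:
```python
import torch.nn.functional as F

upsampled = F.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
seg_map = upsampled.argmax(dim=1)[0]  # (height, width) tensor of class indices
```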
For more code examples, we refer to the [documentation](https://huggingface.co./transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
kostiantynk/9019326c-5374-46b6-bddc-776db0fb373b | kostiantynk | "2025-01-31T06:20:43Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T06:17:30Z" | ---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9019326c-5374-46b6-bddc-776db0fb373b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dffa8fc58ce66dc6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dffa8fc58ce66dc6_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/9019326c-5374-46b6-bddc-776db0fb373b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/dffa8fc58ce66dc6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a
wandb_project: Birthday-SN56-7-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9019326c-5374-46b6-bddc-776db0fb373b
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co./berkeley-nest/Starling-LM-7B-alpha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | nan |
| 163.2988 | 0.0063 | 13 | nan |
| 241.4237 | 0.0126 | 26 | nan |
| 266.2712 | 0.0190 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
qualcomm/Mistral-7B-Instruct-v0.3 | qualcomm | "2025-02-28T22:53:34Z" | 0 | 0 | pytorch | [
"pytorch",
"llm",
"generative_ai",
"quantized",
"android",
"text-generation",
"arxiv:2310.06825",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-10-21T18:56:31Z" | ---
library_name: pytorch
license: apache-2.0
tags:
- llm
- generative_ai
- quantized
- android
pipeline_tag: text-generation
---

# Mistral-7B-Instruct-v0.3: Optimized for Mobile Deployment
## State-of-the-art large language model useful on a variety of language understanding and generation tasks
Mistral AI's first open source dense model released September 2023. The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3. It has an extended vocabulary and supports the v3 Tokenizer, enhancing language understanding and generation. Additionally, function calling is enabled.
This model is an implementation of Mistral-7B-Instruct-v0.3 found [here](https://github.com/mistralai/mistral-inference).
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/mistral_7b_instruct_v0_3_quantized).
### Model Details
- **Model Type:** Text generation
- **Model Stats:**
- Input sequence length for Prompt Processor: 128
- Context length: 4096
- Number of parameters: 7.3B
- Precision: w4a16 + w8a16 (few layers)
- Num of key-value heads: 8
- Information about the model parts: Prompt Processor and Token Generator are split into 4 parts each. Each corresponding Prompt Processor and Token Generator part share weights.
- Prompt processor model size: 4.17 GB
- Prompt processor input: 128 tokens + KVCache initialized with pad token
- Prompt processor output: 128 output tokens + KVCache for token generator
- Token generator model size: 4.17 GB
- Token generator input: 1 input token + past KVCache
- Token generator output: 1 output token + KVCache for next iteration
- Use: Initiate conversation with prompt-processor and then token generator for subsequent iterations.
- Minimum QNN SDK version required: 2.27.7
- Supported languages: English.
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
| Model | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) |
|---|---|---|---|---|---|
| Mistral-7B-Instruct-v0.3 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 12.56 | 0.16565 - 5.3008 |
## Deploying Mistral 7B Instruct v0.3 on-device
Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.
## License
* The license for the original implementation of Mistral-7B-Instruct-v0.3 can be found
[here](https://github.com/mistralai/mistral-inference/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/mistralai/mistral-inference/blob/main/LICENSE)
## References
* [Mistral 7B](https://arxiv.org/abs/2310.06825)
* [Source Model Implementation](https://github.com/mistralai/mistral-inference)
## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
## Usage and Limitations
Model may not be used for or in connection with any of the following applications:
- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation
|
LaLegumbreArtificial/NEO_MUL_EXP2_1 | LaLegumbreArtificial | "2025-02-13T17:51:53Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224-pt22k-ft22k",
"base_model:finetune:microsoft/beit-base-patch16-224-pt22k-ft22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-12-05T20:40:15Z" | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/beit-base-patch16-224-pt22k-ft22k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NEO_MUL_EXP2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEO_MUL_EXP2_1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co./microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0441
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1651 | 0.9886 | 65 | 0.2185 | 0.9233 |
| 0.1203 | 1.9924 | 131 | 0.1108 | 0.9583 |
| 0.0871 | 2.9962 | 197 | 0.0879 | 0.9692 |
| 0.0738 | 4.0 | 263 | 0.0665 | 0.9742 |
| 0.0614 | 4.9430 | 325 | 0.0441 | 0.9833 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF | Melvin56 | "2025-01-11T18:30:09Z" | 33 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3",
"base_model:quantized:huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-01-11T18:29:47Z" | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co./huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3`](https://huggingface.co./huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co./spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co./huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-iq4_xs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-iq4_xs-imat.gguf -c 2048
```
|
sfairXC/llama-3.1-sft-1ep | sfairXC | "2024-09-18T04:42:27Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-18T04:36:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
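The card leaves this section blank, so the following is only a hedged sketch inferred from the repo metadata (a `llama` text-generation model tagged `conversational`); the prompt and generation settings are illustrative, not the authors' recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sfairXC/llama-3.1-sft-1ep"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The "conversational" tag suggests a chat template is bundled with the tokenizer.
messages = [{"role": "user", "content": "Explain supervised fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```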
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/gembode-2b-it-ultraalpaca-GGUF | mradermacher | "2025-03-08T03:51:13Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:recogna-nlp/gembode-2b-it-ultraalpaca",
"base_model:quantized:recogna-nlp/gembode-2b-it-ultraalpaca",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-08T03:34:01Z" | ---
base_model: recogna-nlp/gembode-2b-it-ultraalpaca
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co./recogna-nlp/gembode-2b-it-ultraalpaca
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
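For example (a minimal sketch — the file name is one of the quants listed in the table below, and the prompt is illustrative):

```bash
# Fetch and run the Q4_K_M quant (the "fast, recommended" option below)
# directly with llama.cpp's CLI:
llama-cli --hf-repo mradermacher/gembode-2b-it-ultraalpaca-GGUF \
  --hf-file gembode-2b-it-ultraalpaca.Q4_K_M.gguf \
  -p "Explain quantization in one sentence."
```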
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.f16.gguf) | f16 | 5.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co./mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Primeness/primeh4v12a6c2 | Primeness | "2025-01-31T22:38:35Z" | 26 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-31T22:06:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
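Since the card leaves this blank, the snippet below is only a sketch inferred from the repo tags (`llama`, `text-generation`), not documented usage.

```python
from transformers import pipeline

# Repo tags indicate a causal language model; settings are illustrative.
generator = pipeline("text-generation", model="Primeness/primeh4v12a6c2",
                     device_map="auto")
print(generator("The key idea behind attention is",
                max_new_tokens=64)[0]["generated_text"])
```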
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingtweets/flatironschool | huggingtweets | "2021-05-22T04:20:52Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/flatironschool/1603341000640/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278450406843125762/f5u_F2ng_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Flatiron School (at 🏠) 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@flatironschool bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@flatironschool's tweets](https://twitter.com/flatironschool).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3202</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1068</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>582</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1552</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/179qzrny/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co./gpt2) which is fine-tuned on @flatironschool's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/174rjbb8) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/174rjbb8/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/flatironschool'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co./gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
scottn66/text-summarization | scottn66 | "2023-03-29T21:05:29Z" | 104 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-03-18T03:19:13Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: text-summarization
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1405
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co./t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4284
- Rouge1: 0.1405
- Rouge2: 0.0517
- Rougel: 0.1158
- Rougelsum: 0.1157
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
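Pending details from the author, a minimal usage sketch for this t5-small summarization fine-tune follows; the sample text and length limits are illustrative (the evaluation above generated ~19 tokens, but that is not a requirement):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="scottn66/text-summarization")

bill_text = (
    "The bill would require the department to prepare and submit an annual "
    "report to the legislature describing program outcomes and expenditures."
)
print(summarizer(bill_text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```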
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7231 | 0.1246 | 0.0356 | 0.1039 | 0.1039 | 19.0 |
| No log | 2.0 | 124 | 2.5099 | 0.1335 | 0.0463 | 0.1116 | 0.1116 | 19.0 |
| No log | 3.0 | 186 | 2.4451 | 0.1383 | 0.0509 | 0.114 | 0.114 | 19.0 |
| No log | 4.0 | 248 | 2.4284 | 0.1405 | 0.0517 | 0.1158 | 0.1157 | 19.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
mys/ggml_llava-v1.5-13b | mys | "2023-10-10T10:20:06Z" | 1,078 | 53 | null | [
"gguf",
"llava",
"lmm",
"ggml",
"llama.cpp",
"endpoints_compatible",
"region:us"
] | null | "2023-10-10T10:04:00Z" | ---
tags:
- llava
- lmm
- ggml
- llama.cpp
---
# ggml_llava-v1.5-13b
This repo contains GGUF files for running inference on [llava-v1.5-13b](https://huggingface.co./liuhaotian/llava-v1.5-13b) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end, without any extra dependencies.
**Note**: The `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.
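For orientation, a typical invocation looks like the sketch below. The binary name has changed across llama.cpp versions (`llava`, then `llava-cli`, later `llama-llava-cli`), and the language-model file name here is an assumption — use whichever quant from this repo you downloaded:

```bash
# Multimodal inference end-to-end with llama.cpp; adjust file names as needed.
./llava-cli -m ggml-model-q4_k.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image ./example.jpg \
  -p "Describe this image in detail."
```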
|