Dataset columns (as reported by the viewer):

- modelId: string (lengths 5–134)
- author: string (lengths 2–42)
- last_modified: unknown
- downloads: int64 (0–223M)
- likes: int64 (0–10.1k)
- library_name: string (378 classes)
- tags: sequence (lengths 1–4.05k)
- pipeline_tag: string (53 classes)
- createdAt: unknown
- card: string (lengths 11–1.01M)
raminass/SCOTUS_AI_17
raminass
"2024-01-04T10:27:41Z"
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:raminass/scotus-v10", "base_model:finetune:raminass/scotus-v10", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-01-04T09:44:28Z"
---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SCOTUS_AI_17
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# SCOTUS_AI_17

This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co./raminass/scotus-v10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0723
- Accuracy: 0.8263

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2805        | 1.0   | 3188  | 0.6360          | 0.8303   |
| 0.1147        | 2.0   | 6376  | 0.8285          | 0.8230   |
| 0.053         | 3.0   | 9564  | 1.0048          | 0.8208   |
| 0.0228        | 4.0   | 12752 | 1.0853          | 0.8183   |
| 0.0143        | 5.0   | 15940 | 1.0723          | 0.8263   |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
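As an illustrative aside (not part of the original card), the hyperparameters listed above map onto a 🤗 `TrainingArguments` configuration roughly like the sketch below; the output directory and the omitted dataset objects are placeholders.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical reconstruction of the training setup from the hyperparameters
# listed above; output_dir and the dataset objects are placeholders.
model = AutoModelForSequenceClassification.from_pretrained("raminass/scotus-v10")
tokenizer = AutoTokenizer.from_pretrained("raminass/scotus-v10")

args = TrainingArguments(
    output_dir="scotus_ai_17",      # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas/epsilon below are the library defaults, matching the card.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(model=model, args=args)  # add train/eval datasets here
```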
fftx0907/autotrain-s4m14-yyyit
fftx0907
"2023-12-25T14:58:51Z"
5
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain", "dataset:fftx0907/autotrain-data-autotrain-s4m14-yyyit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-12-25T14:58:09Z"
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co./datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co./datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co./datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
datasets:
- fftx0907/autotrain-data-autotrain-s4m14-yyyit
---

# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

- loss: nan
- f1_macro: 0.031825795644891124
- f1_micro: 0.10555555555555556
- f1_weighted: 0.020156337241764376
- precision_macro: 0.017592592592592594
- precision_micro: 0.10555555555555556
- precision_weighted: 0.011141975308641975
- recall_macro: 0.16666666666666666
- recall_micro: 0.10555555555555556
- recall_weighted: 0.10555555555555556
- accuracy: 0.10555555555555556
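As a minimal inference sketch not found in the original card: the 🤗 `pipeline` API can run this classifier directly from its repository id, here applied to one of the widget images listed in the front matter above.

```python
from transformers import pipeline

# Hedged usage sketch (not from the original card); assumes the repository
# id above exposes a standard ViT image-classification head.
classifier = pipeline("image-classification", model="fftx0907/autotrain-s4m14-yyyit")

# One of the widget sample images from the card's front matter.
preds = classifier("https://huggingface.co./datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)  # list of {"label": ..., "score": ...} dicts
```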
lavera/epic-diffusion-v1.1-controlnet-hed
lavera
"2023-03-31T14:25:10Z"
5
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "region:us" ]
null
"2023-03-31T14:23:26Z"
--- license: creativeml-openrail-m ---
locuslab/base-smollm2-1.7b-score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb
locuslab
"2025-02-24T18:36:53Z"
0
0
null
[ "pytorch", "llama", "model", "transformer", "smollm2", "license:mit", "region:us" ]
null
"2025-02-24T18:28:44Z"
---
version: main
family: smollm2-1.7b
model_name: score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb
license: mit
tags:
- model
- transformer
- smollm2
---

# SmolLM2 score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb (Version: main)

## Model Details

- **Architecture:** SmolLM2
- **Parameters:** 1.7B

## Training Configuration

```yaml
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    lr: 0.0005
    weight_decay: 0.01
precision: bf16-mixed
seed: 42
train:
  global_batch_size: 1024
  max_seq_length: 2048
  max_tokens: 600000000000
  micro_batch_size: 8
```

## Model Loading and Revision System

This repository hosts multiple revisions of the model. To load a specific revision, use the `revision` parameter. For example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("locuslab/score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb", revision="final")
tokenizer = AutoTokenizer.from_pretrained("locuslab/score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb", revision="final")
```

Replace `"final"` with the desired revision.
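The loading example above stops at instantiation; as a hedged follow-on (not from the original card), generation with the loaded pair would look roughly like this, with the prompt chosen arbitrarily:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch continuing the loading example above; the repo
# id and revision follow the card's own snippet, the prompt is a placeholder.
repo = "locuslab/score0_mix_rephrased_from_beginning-600B-mbs8-gbs1024-17feb"
model = AutoModelForCausalLM.from_pretrained(repo, revision="final")
tokenizer = AutoTokenizer.from_pretrained(repo, revision="final")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```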
OwOOwO/finalupdate1
OwOOwO
"2024-04-30T17:12:26Z"
4
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-30T17:10:52Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
mradermacher/Zurich-7B-GCv2-5m-GGUF
mradermacher
"2025-02-04T09:09:28Z"
513
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "gammacorpus", "zurich", "chat", "conversational", "en", "dataset:rubenroy/GammaCorpus-v2-5m", "base_model:rubenroy/Zurich-7B-GCv2-5m", "base_model:quantized:rubenroy/Zurich-7B-GCv2-5m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-02-04T08:22:27Z"
---
base_model: rubenroy/Zurich-7B-GCv2-5m
datasets:
- rubenroy/GammaCorpus-v2-5m
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- gammacorpus
- zurich
- chat
- conversational
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co./rubenroy/Zurich-7B-GCv2-5m

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co./mradermacher/Zurich-7B-GCv2-5m-GGUF/resolve/main/Zurich-7B-GCv2-5m.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co./mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
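As a supplement not found in the original card (which defers to TheBloke's READMEs for usage): one common way to run a GGUF quant locally is via `llama-cpp-python`. The sketch below assumes the Q4_K_M file from the table above has already been downloaded; the path is a placeholder, and the raw-prompt call ignores any chat template the Qwen2-based model may expect.

```python
from llama_cpp import Llama

# Hedged local-inference sketch (not from the original card). The model
# path is a placeholder pointing at a downloaded quant from the table above.
llm = Llama(model_path="./Zurich-7B-GCv2-5m.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```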
AlignmentResearch/robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4
AlignmentResearch
"2024-05-04T12:03:21Z"
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m", "base_model:finetune:EleutherAI/pythia-410m", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-04T12:02:48Z"
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-410m
model-index:
- name: robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4

This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co./EleutherAI/pythia-410m) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
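As an illustrative aside (not part of the original card): the row's tags mark this as a text-classification model, so it can be queried through the 🤗 `pipeline` API. The label names, and the sentiment reading suggested by "imdb" in the repo name, are assumptions.

```python
from transformers import pipeline

# Hedged inference sketch (not from the original card). The repo id comes
# from this row's metadata; the IMDB-sentiment interpretation is inferred
# from the model name, not documented.
clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-410m_niki-041a_imdb_random-token-1280_10-rounds_seed-4",
)
print(clf("A surprisingly tender film with a standout lead performance."))
```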
ldianwu/detr-finetuned-balloon-v2
ldianwu
"2024-06-28T06:50:26Z"
190
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
"2024-06-25T03:57:12Z"
---
license: apache-2.0
---
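The card itself states only the license, but the row's metadata tags this as a DETR object-detection model; a minimal usage sketch under those assumptions follows. The image path, and the balloon-detection reading suggested by the repo name, are placeholders.

```python
from transformers import pipeline

# Hedged usage sketch (the original card contains only a license). The repo
# id and task come from this row's tags; "balloons.jpg" is a placeholder.
detector = pipeline("object-detection", model="ldianwu/detr-finetuned-balloon-v2")

results = detector("balloons.jpg")  # placeholder local image path or URL
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])
```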
seongil-dn/bge-m3-kor-retrieval-451949-bs64-science
seongil-dn
"2024-12-11T06:34:15Z"
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:451949", "loss:CachedMultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2101.06983", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-12-11T06:32:40Z"
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:451949 - loss:CachedMultipleNegativesRankingLoss base_model: BAAI/bge-m3 widget: - source_sentence: ๋ณธ ์—ฐ๊ตฌ๋ฅผ ํ†ตํ•ด ์žฅ๋ฐ”์ด๋Ÿฌ์Šค๋ฅผ ๋†์ถ•, ์ •์ œ, ๋ฐ ๊ฒ€์ถœํ•  ์ˆ˜ ์žˆ๋Š” ์‹ ์†, ๊ฐ„ํŽธํ•˜๊ณ  ํšจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ•์„ ๊ฐœ๋ฐœํ•˜๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ๋ฌผ์งˆ์€ ๋ฌด์—‡์ธ๊ฐ€? sentences: - <h1>์š” ์•ฝ</h1><p>ํ™˜๊ฒฝ์— ์กด์žฌํ•˜๋Š” ์žฅ๋ฐ”์ด๋Ÿฌ์Šค๋Š” ์˜ค์—ผ๋œ ๋ฌผ์„ ํ†ตํ•˜์—ฌ ๊ฒฝ๊ตฌ๊ฒฝ๋กœ๋กœ ์ „์—ผ์ด ๊ฐ€๋Šฅํ•˜๊ณ , ๋ฐ”์ด๋Ÿฌ์Šค๋Š” ์ ์€ ์–‘์œผ๋กœ๋„ ์ธ์ฒด์— ๊ฐ์—ผ์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ์ธ๊ฐ„์˜ ๊ฑด๊ฐ•์„ ์œ„ํ˜‘ํ•  ์ˆ˜ ์žˆ๋‹ค. ํ™˜๊ฒฝ ์ˆ˜๊ณ„ ๋ฐ ์Œ์šฉ์ˆ˜์—์„œ ๋ฐœ๊ฒฌ๋˜๋Š” ๋ฐ”์ด๋Ÿฌ์Šค์˜ ์ˆ˜์น˜๋Š” ๋น„๊ต์  ๋‚ฎ์œผ๋ฏ€๋กœ ์ˆ˜๋ฐฑ์—์„œ ์ˆ˜์ฒœ ๋ฆฌํ„ฐ์˜ ๋ฌผ์„ ๋†์ถ•์‹œํ‚ฌ ํ•„์š”๊ฐ€ ์žˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด ์—ฐ๊ตฌ์˜ ์ฃผ์š” ๋ชฉ์ ์€ ์ˆ˜๊ณ„ ์‹œ๋ฃŒ๋กœ๋ถ€ํ„ฐ ์žฅ๋ฐ”์ด๋Ÿฌ์Šค๋ฅผ ๋†์ถ•, ์ •์ œ, ๋ฐ ๊ฒ€์ถœํ•  ์ˆ˜ ์žˆ๋Š” ์‹ ์†, ๊ฐ„ํŽธํ•˜๊ณ  ํšจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ•์„ ๊ฐœ๋ฐœํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ๋จผ์ € ๋ฐ”์ด๋Ÿฌ์Šค๋ฅผ 1MDS ์นดํŠธ๋ฆฌ์ง€ ํ•„ํ„ฐ์— ํก์ฐฉ์‹œ์ผœ ๋†์ถ•์‹œํ‚ค๊ณ  ์•ฝ \( 500 \mathrm{~m} \ell \)์˜ \( 1.5 \% \) beef extract/\( 0.05 \mathrm{M} \) glycin\( (\mathrm{pH} 9.4) \)์œผ๋กœ ์šฉ์ถœ์‹œํ‚จ๋‹ค. ์ด ์—ฐ๊ตฌ์—์„œ๋Š” ํก์ฐฉํ•„ํ„ฐ๋กœ๋ถ€ํ„ฐ ์–ป์€ ๋ฐ”์ด๋Ÿฌ์Šค 1์ฐจ ์šฉ์ถœ์•ก์„ ๋”์šฑ ๋†์ถ•์‹œํ‚ค๊ณ  ์ •์ œํ•˜๊ธฐ ์œ„ํ•˜์—ฌ ์„ธ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์„ ์‹œ๋„ํ•˜์˜€๋‹ค. ์ด๋“ค ๊ฐ€์šด๋ฐ์„œ ์œ ๊ธฐ ์‘์ง‘๋ฒ•์ด ๋ฐ”์ด๋Ÿฌ์Šค ์žฌ๋†์ถ•์— ๊ฐ€์žฅ ํšจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ•์ด์—ˆ๋‹ค. ์ด ๋ฐฉ๋ฒ•์œผ๋กœ ์‹œ๋ฃŒ ๋ถ€ํ”ผ๋ฅผ 200์—์„œ 400๋ฐฐ๊นŒ์ง€ ๊ฐ์†Œ์‹œํ‚ฌ ์ˆ˜ ์žˆ์—ˆ์œผ๋ฉฐ ์ตœ์ข… ๋ฐ”์ด๋Ÿฌ์Šค ์ˆ˜๊ฑฐ์œจ์€ \( 72 \% \) ์ด์ƒ์ด์—ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ ์‹œ๋ฃŒ๋ฅผ ๋ง‰ ๋””์Šคํฌ ํ•„ํ„ฐ๋กœ ์—ฌ๊ณผ์‹œํ‚ค๊ณ  plaque assay ํ˜น์€ CC-PCR๋ฒ•์œผ๋กœ ๋ถ„์„ํ•˜์˜€๋‹ค. </p> - '๋จน๋Š” ๋ฌผ์˜ ๋ณ‘์›์„ฑ๋ฏธ์ƒ๋ฌผ ์กฐ๊ธฐ๊ฒ€์ถœ๋ฐฉ๋ฒ• ํ™•๋ฆฝ โ–ก PCR(์ค‘ํ•ฉํšจ์†Œ ์—ฐ์‡„๋ฐ˜์‘, Polymerase Chain Reaction) ํšจ์†Œ(DNA polymerase)์™€ ์‹œ์•ฝ ๋“ฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ์ƒ๋ฌผ(๋ฐ”์ด๋Ÿฌ์Šค, ์›์ƒ๋™๋ฌผ ๋“ฑ)์˜ ์œ ์ „์ž(DNA, RNA) ์ค‘ ํŠน์ • ๋ถ€์œ„๋งŒ ์—ฐ์†์ ์œผ๋กœ ์ฆํญ์‹œ์ผœ ์›ํ•˜๋Š” ๋Œ€์ƒ ์œ ์ „์ž(target gene)์˜ ์กด์žฌ๋ฅผ ํ™•์ธํ•˜๋Š” ๋ฐฉ๋ฒ•. โ–ก ์ด๋ฐฐ์–‘์„ฑ๋ฐ”์ด๋Ÿฌ์Šค๋ถ„์„๋ฒ•(TCVA: total culturable virus assay) ํ™˜๊ฒฝ์‹œ๋ฃŒ(์ƒ์ˆ˜์›์ˆ˜)์ค‘์— ํ•จ์œ ๋œ ๋ฐ”์ด๋Ÿฌ์Šค๊ฐ€ ์‹œ๋ฃŒ ์ฑ„์ทจ์‹œ ์—ฌ๊ณผ๋ง‰์— ํก์ฐฉ๋˜๊ณ , ์—ฌ๊ณผ๋ง‰์˜ ๋ฐ”์ด๋Ÿฌ์Šค๋ฅผ ํƒˆ๋ฆฌ(๋ถ„๋ฆฌ)ใ†๋†์ถ•ํ•œ ํ›„ ์›์•ก๊ณผ ํฌ์„ ์›์•ก์„ ์‚ด์•„์žˆ๋Š” ์„ธํฌ(BGM : Buffalo Monkey Kidney Cell ,์›์ˆญ์ด์‹ ์žฅ์„ธํฌ)์— ์ ‘์ข…ํ•˜๊ณ  37โ„ƒ์—์„œ 1์ฐจ(14์ผ) ๋ฐฐ์–‘ํ•˜๊ณ , 1์ฐจ ๋ฐฐ์–‘์•ก์„ 2์ฐจ(14์ผ) ๊ณ„๋Œ€๋ฐฐ์–‘ํ•œ ํ›„ ์„ธํฌ๋ณ‘๋ณ€ํšจ๊ณผ(Cytophatic Effect) ์–‘์„ฑ๊ฐฏ์ˆ˜, ์ฑ„์ˆ˜๋Ÿ‰ ๋“ฑ ๊ณ„์‚ฐ์‹(ํ”„๋กœ๊ทธ๋žจ)์— ์ ์šฉํ•˜์—ฌ ๋ฐ”์ด๋Ÿฌ์Šค ๋†๋„(MPN)๊ฐ’์„ ๊ณ„์‚ฐ. โ–ก ํ†ตํ•ฉ์„ธํฌ๋ฐฐ์–‘-์ค‘ํ•ฉํšจ์†Œ์—ฐ์‡„๋ฐ˜์‘๋ฐฉ๋ฒ•(ICC-PCR : integrated cell culture-PCR) ์ด๋ฐฐ์–‘์„ฑ๋ฐ”์ด๋Ÿฌ์Šค๋ถ„์„๋ฒ•๊ณผ ๋™์ผํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ์‹œ๋ฃŒ์˜ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ , ์„ธํฌ์— ์‹œ๋ฃŒ๋ฅผ ์ ‘์ข…ํ•˜์—ฌ 3์ผ(72์‹œ๊ฐ„)๊ฐ„ ๋ฐฐ์–‘ํ•œ ํ›„์— ์ฆ์‹ํ•œ ๋ฐ”์ด๋Ÿฌ์Šค๋ฅผ ์œ ์ „์ž๋ถ„์„๋ฒ•(PCR ; ์ค‘ํ•ฉํšจ์†Œ ์—ฐ์‡„๋ฐ˜์‘๋ฐฉ๋ฒ•)์œผ๋กœ ๊ฒ€์ถœ. - ์‹ค์‹œ๊ฐ„ ์ค‘ํ•ฉํšจ์†Œ์—ฐ์‡„๋ฐ˜์‘(Real time PCR)๋ฐฉ๋ฒ• ์‹ค์‹œ๊ฐ„์ค‘ํ•ฉํšจ์†Œ์—ฐ์‡„๋ฐ˜์‘(Real time PCR)๋ฐฉ๋ฒ•์€ ๋ณ‘์›์„ฑ๋ฏธ์ƒ๋ฌผ๋ณ„ ํŠน์ • ์œ ์ „์ž(target sequence)์˜ ์ฆํญ๊ณผ ํ•จ๊ป˜ ํ˜•๊ด‘๋Ÿ‰๋„ ์ฆ๊ฐ€ํ•˜๋„๋ก ์‹œ์•ฝ์„ ์ฒจ๊ฐ€ํ•˜์—ฌ, ๋ฐ˜์‘ ํ›„์˜ ํ˜•๊ด‘๋ณ€ํ™”๋ฅผ ๋ถ„์„ํ•˜๊ณ , ๋†๋„๋ฅผ ์•Œ๊ณ  ์žˆ๋Š” ํ‘œ์ค€์‹(Standard curve)๊ณผ ๋น„๊ตํ•˜์—ฌ ๋Œ€์ƒ ๋ฏธ์ƒ๋ฌผ๋ณ„ ์ •๋Ÿ‰ํ™”๊ฐ€ ๊ฐ€๋Šฅํ•œ ๋ฐฉ๋ฒ•. 
โ€ป Real time PCR์€ ํŠน์ • ์œ ์ „์ž๋ฅผ ์ฆํญํ•œ ํ›„ ์ „๊ธฐ์˜๋™ํ•˜์—ฌ ์ฆํญ ์‚ฐ๋ฌผ์„ ํ™•์ธํ•  ํ•„์š”๊ฐ€ ์—†๋‹ค๋Š” ์ ์—์„œ ๊ธฐ์กด์˜ ๋ฐฉ๋ฒ•๋ณด๋‹ค ์‹œ๊ฐ„์ด ๋‹จ์ถ•๋˜๋ฉฐ, ๋‹ค๋Ÿ‰์˜ ์‹œ๋ฃŒ๋ฅผ ๋™์‹œ์— ์ˆ˜ํ–‰์ด ๊ฐ€๋Šฅํ•œ ์žฅ์ ์ด ์žˆ๋‹ค. - ์žฅ๋ฐ”์ด๋Ÿฌ์Šค ๋ฐ ์žฅ๊ด€๊ณ„๋ฐ”์ด๋Ÿฌ์Šค ๋ฐ”์ด๋Ÿฌ์Šค๋Š” ์ˆ™์ฃผ์— ๋งค์šฐ ํŠน์ด์ ์œผ๋กœ ๊ฐ์—ผํ•˜์—ฌ, ์‚ฌ๋žŒ์—๊ฒŒ ๊ฐ์—ผํ•˜์—ฌ ์งˆ๋ณ‘์„ ์ผ์œผํ‚ค๋Š” ๋ณ‘์›์„ฑ ๋ฐ”์ด๋Ÿฌ์Šค ์—ญ์‹œ ๋Œ€๋ถ€๋ถ„ ์‚ฌ๋žŒ์—๊ฒŒ๋งŒ ์ œํ•œ์ ์œผ๋กœ ๊ฐ์—ผํ•œ๋‹ค. ๋ถ„๋ณ€์— ๋‹ค๋Ÿ‰ ์กด์žฌํ•˜๋Š” ์žฅ๊ด€๊ณ„๋ฐ”์ด๋Ÿฌ์Šค๋กœ๋Š” ์žฅ๋ฐ”์ด๋Ÿฌ์Šค(Enterovirus ; ํด๋ฆฌ์˜ค๋ฐ”์ด๋Ÿฌ์Šค, ์ฝ•์‚ฌํ‚ค๋ฐ”์ด๋Ÿฌ์Šค, ์—์ฝ”๋ฐ”์ด๋Ÿฌ์Šค)์™€ ๊ทธ๋ฐ–์— ์•„๋ฐ๋…ธ๋ฐ”์ด๋Ÿฌ์Šค, ๋ ˆ์˜ค๋ฐ”์ด๋Ÿฌ์Šค, ๋กœํƒ€๋ฐ”์ด๋Ÿฌ์Šค, Aํ˜• ๊ฐ„์—ผ ๋ฐ”์ด๋Ÿฌ์Šค, Eํ˜• ๊ฐ„์—ผ๋ฐ”์ด๋Ÿฌ์Šค, ๊ทธ๋ฆฌ๊ณ  ๋…ธ๋กœ๋ฐ”์ด๋Ÿฌ์Šค์™€ ๊ฐ™์ด ์žฅ์—ผ์„ ์ผ์œผํ‚ค๋Š” ๋ฐ”์ด๋Ÿฌ์Šค๊ฐ€ ํฌํ•จ๋œ๋‹ค. ์žฅ๊ด€๊ณ„๋ฐ”์ด๋Ÿฌ์Šค(Human enteric viruses)๋Š” 100์—ฌ์ข…์˜ ๋ฐ”์ด๋Ÿฌ์Šค๊ฐ€ ์•Œ๋ ค์ ธ ์žˆ๋‹ค(์žฅ๋ฐ”์ด๋Ÿฌ์Šค๋Š” 70์—ฌ์ข…).' - <table border><caption>\( \left \langle \right . \) ํ‘œ 6ใ€‰ ํฌ๊ธฐ \( 8 \times 8 \) ์ธ ๋ฉ”์‰ฌ์˜ \( Q_ { 8 } \) ์— ๋Œ€ํ•œ ์ž„๋ฒ ๋”ฉ \( f_ { 3 } \)</caption> <tbody><tr><td rowspan=2></td><td>\( j=0 \)</td><td>\( j=1 \)</td><td>\( j=2 \)</td><td>\( j=3 \)</td><td>\( j=4 \)</td><td>\( j=5 \)</td><td>\( j=6 \)</td><td>\( j=7 \)</td></tr><tr><td colspan=2>01</td><td colspan=2>00</td><td colspan=2>10</td><td colspan=2>11</td></tr><tr><td>\( j=0 \)</td><td>010101</td><td>001111</td><td>000101</td><td>011111</td><td>110101</td><td>101111</td><td>100101</td><td>111111</td></tr><tr><td>\( j=1 \)</td><td>010111</td><td>001101</td><td>000111</td><td>011101</td><td>110111</td><td>101101</td><td>100111</td><td>111101</td></tr><tr><td>\( j=2 \)</td><td>011101</td><td>000111</td><td>001101</td><td>010111</td><td>111101</td><td>100111</td><td>101101</td><td>110111</td></tr><tr><td>\( j=3 \)</td><td>011111</td><td>000101</td><td>001111</td><td>010101</td><td>111111</td><td>100101</td><td>101111</td><td>110101</td></tr><tr><td>\( j=4 \)</td><td>110101</td><td>101111</td><td>100101</td><td>111111</td><td>010101</td><td>001111</td><td>000101</td><td>011111</td></tr><tr><td>\( j=5 \)</td><td>110111</td><td>101101</td><td>100111</td><td>111101</td><td>010111</td><td>001101</td><td>000111</td><td>011101</td></tr><tr><td>\( j=6 \)</td><td>111101</td><td>100111</td><td>101101</td><td>110111</td><td>011101</td><td>000111</td><td>001101</td><td>010111</td></tr><tr><td>\( j=7 \)</td><td>111111</td><td>100101</td><td>101111</td><td>110101</td><td>011111</td><td>000101</td><td>001111</td><td>010101</td></tr></tbody></table> - source_sentence: ๋‹ค์ค‘ ์Šค์œ„์นญ ์†Œ์ž๋ฅผ ์‚ฌ์šฉํ•œ ๋ฒ…-๋ถ€์ŠคํŠธ ์ปจ๋ฒ„ํ„ฐ์˜ ํŠน์ง•์€ ๋ญ์•ผ? sentences: - <p>๋ณธ ์‹คํ—˜์—์„œ๋Š” ์†Œ์ˆ˜์ƒํ’ˆํ‰ ๊ฒ€์ƒ‰์„ฑ๋Šฅ์„ ์—„๊ฒฉํ•œ(strict) ํ‰๊ฐ€์™€ ๊ด€๋Œ€ํ•œ(lenient) ํ‰๊ฐ€์˜ ๋‘ ๊ฒฝ์šฐ๋กœ ๋‚˜๋ˆ„์ด ํ‰๊ฐ€ํ•œ๋‹ค. ์—ฌ๊ธฐ์„œ ์—„๊ฒฉํ•œ ํ‰๊ฐ€๋Š” ์†Œ์ˆ˜์ƒํ’ˆํ‰ ์ง‘ํ•ฉ์„ ๊น€์ƒ‰ํ•œ ํ›„, ํ—ค๋‹น ์ง‘ํ•ฉ ์†์˜ ๊ฐœ๋ณ„ ์ƒํ’ˆํ‰ ์ฆ ์†Œ์ˆ˜์ƒํ’ˆํ‰๊นŒ์ง€ ๋ชจ๋‘ ๊ฒ€์ƒ‰ํ•˜๋Š”(์•Œ์•„๋งžํžˆ๋Š”) ๊ฒฝ์šฐ๋ฅผ ์ •๋‹ต์œผ๋กœ ํ•˜๊ณ , ๊ด€๋Œ€ํ•œ ํ‰๊ฐ€๋Š” ์†Œ์ˆ˜์ƒํ’ˆํ‰ ์ง‘ํ•ฉ๋งŒ์„ ๊น€์ƒ‰ํ•˜๋Š” ๊ฒƒ์„ ์ •๋‹ต์œผ๋กœ ํ•œ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์–ด๋–ค ์ƒํ’ˆํ‰ ์ง‘ํ•ฉ์˜ ์‹ค์ œ ๊ตฌ์„ฑ์ด [N, P, P, P, P, P]๋กœ ๋˜์–ด์žˆ์œผ๋ฉด, ์—„๊ฒฉํ•œ ํ‰๊ฐ€์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น ์ง‘ํ•ฉ์„ ์†Œ์ˆ˜์ƒํ’ˆํ‰์œผ๋กœ ๊ฒ€์ƒ‰ํ•œ ํ›„, ์ด ์ง‘ํ•ฉ ์†์—์„œ N์œผ๋กœ ๋ถ„๋ฅ˜๋œ ์†Œ์ˆ˜์ƒํ’ˆํ‰๊นŒ์ง€ ๊น€์ƒ‰ํ•˜๋Š” ๊ฒฝ์šฐ([N, P, P, P, P, P]๋กœ ์ˆœ์„œ๊นŒ์‹œ ๋งžํž˜)๋ฅผ ์ •๋‹ต์œผ๋กœ ๊ฐ„์ฃผํ•œ๋‹ค. 
๊ฑธ๊ตญ ์—„๊ฒฉํ•œ ํ‰๊ฐ€๋Š” ์ผ๋‹จ ๋‹ค์–‘ํ•œ ์ƒํ’ˆํ‰ ์ง‘ํ•ฉ๋“ค ์†์—์„œ ์†Œ์ˆ˜์ƒํ’ˆํ‰์„ ํฌํ•จํ•˜๋Š” ์ง‘ํ•ฉ๋“ค์„ ๊ฒ€์ƒ‰ํ•œ ํ›„์—, ํ•œ๋ฐœ ๋” ๋‚˜์•„๊ฐ€ ๊ฐœ๋ณ„ ์ง‘ํ•ฉ ์†์˜ ๊ฐœ๋ณ„ ์†Œ์ˆ˜์ƒํ’ˆํ‰๊นŒ์ง€ ์ถ”๊ฐ€๋กœ ์„ ๋ณ„ํ—ค ๋‚ผ ์ˆ˜ ์žˆ์–ด์•ผ ํ•œ๋‹ค. </p> <p>์ด์— ๋น„ํ•ด ๊ด€๋Œ€ํ•œ ํ‰๊ฐ€๋Š”, ๋‹ค์–‘ํ•œ ์ƒํ’ˆํ‰ ์ง‘ํ•ฉ๋“ค ์†์—์„œ ์†Œ์ˆ˜์ƒํ’ˆํ‰์ด ์กด์žฌํ•˜๋Š” ์ง‘ํ•ฉ๋งŒ ๊ฒ€์ƒ‰ํ•˜๋ฉด ๋˜๊ธฐ ๋•Œ๋ฌธ์—, ๊ฐœ๋ณ„ ์ƒํ’ˆํ‰์˜ ๊ธ์ •/๋ถ€์ • ๋ถ„๋ฅ˜๊ฐ€ ์„ค๋ น ํ‹€๋ฆฌ๋”๋ผ๋„ ์ƒํ’ˆํ‰ ์ง‘ํ•ฉ์˜ ๊ธ์ •/๋ถ€์ • ๋น„๋Œ€์นญ๋„์˜ ์กฐ๊ฑด๋งŒ ๋งž์œผ๋ฉด ๋œ๋‹ค. ์˜ˆ์ปจ๋Œ€ ์‹ค์ œ ์ง‘ํ•ฉ์ด [N, P, P, P, P, P]์ผ ๋•Œ, ๊ธ์ •/๋ถ€์ • ์ž๋™๋ถ„๋ฅ˜ ์ค‘ ์ผ๋ถ€ ์˜ค๋ฒˆ๋ฅ˜๊ฐ€ ์žˆ๋Š” [P, N, P, P, P, P]๋‚˜ [P, P, P, P, P, N]๋„ ๋น„๋Œ€์นญ๋„๊ฐ€ ๋™์ผํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์†Œ์ˆ˜์ƒํ’ˆํ‰ ์ง‘ํ•ฉ์œผ๋กœ ๊ฒ€์ƒ‰๋œ๋‹ค. ๋ณธ ์‹คํ—˜์—์„œ๋Š” ์†Œ์ˆ˜์ƒํ’ˆํ‰์ด ํฌํ•จ๋ผ์žˆ์Œ์„ ํŒ๋‹จํ•˜๋Š” ๊ธฐ์ค€์„ '๋น„๋Œ€์นญ๋„๊ฐ€ \( 0.5 \) ๋ณด๋‹ค ํฐ๊ฐ€ \( (0.5<1 \) Skewness|)'์™€ '๋น„๋Œ€์นญ๋„๊ฐ€ \(1 \)๋ณด๋‹ค ํฐ๊ฐ€ \( (1< \) |Skewness|)'๋กœ ๋‚˜๋ˆ„์–ด ํ‰๊ฐ€ํ•œ๋‹ค. </p> <h2>4.4 ์‹คํ—˜๊ฒฐ๊ณผ</h2> <p>์†Œ์ˆ˜์ƒํ’ˆํ‰ ๊ฒ€์ƒ‰์„ฑ๋Šฅ์„ ์ •๋ฐ€๋„, ์žฌํ˜„์šธ, F \(1 \) ์ ์ˆ˜๋กœ ๊ฐ๊ฐ ์—„๊ฒฉํ•˜๊ฒŒ(strict) ๋˜๋Š” ๊ด€๋Œ€ํ•˜๊ฒŒ(lenient) ํ‰๊ฐ€ํ•œ ๊ฑธ๊ณผ๋ฅผ Table \(7 \)๊ณผ Table \(8 \)์— ๊ฐ๊ฐ ์ •๋ฆฌํ•œ๋‹ค. </p> <p>Table \(7 \)์˜ ์—„๊ฒฉํ•œ ํ‰๊ฐ€์˜ ๊ฒฝ์šฐ, ์Šค๋งˆํŠธํฐ๊ณผ ์˜ํ™”์˜ ๋‘ ๋น„๋Œ€์นญ๋„ ์ •์˜์— ๋Œ€ํ•˜์—ฌ ๋ฏธ์ˆ˜์ • ๊ฐ์„ฑ์‚ฌ์ „(SWN \&OPL)์„ ์ด์šฉํ—ธ์„ ๋•Œ์˜ \(4 \)๊ฐœ์˜ F \(1 \)์ ์ˆ˜์˜ ํ‰๊ท ์€ \( 11.4 \% \) ์˜€๋‹ค. ํ•œํŽธ, Table \(7 \)์—์„œ ๋„๋ฉ”์ธ ํŠนํ™”๋œ ๊ฐ์„ฑ์‚ฌ์ „(MRG \&SBL)์„ ์ด์šฉํ˜ฐ์„ ๋•Œ์˜ ํ‰๊ท  F \(1 \)์ ์ˆ˜๋Š” \( 19.8 \% \)์˜€๋‹ค. Table \(8 \) ์˜ ๊ด€๋Œ€ํ•œ ํ‰๊ฐ€์—์„œ๋Š”, ๋ฏธ์ˆ˜์ • ์‚ฌ์ „๊ณผ ๋„๋ฉ”์ธ ํŠนํ™” ์‚ฌ์ „์˜ ํ‰๊ท  F \(1 \) ์ ์ˆ˜๊ฐ€ ๊ฐ๊ฐ \( 48.9 \% \) ์™€ \( 53.8 \% \)์˜€๋‹ค. ๋‘ ํ‘œ ๋ชจ๋‘์—์„œ ๋„๋ฉ”์ธ ํŠนํ™”๋œ ๊ฐ์„ฑ์‚ฌ์ „์„ ์ด์šฉํ•œ ๊ฒฝ์šฐ๊ฐ€ ๋ฏธ์ˆ˜์ • ๊ฐ์„ฑ์‚ฌ์ „์„ ์ด์šฉํ•œ ๊ฒฝ์šฐ๋ณด๋‹ค ์†Œ์ˆ˜์ƒํ’ˆํ‰ ๊ฒ€์ƒ‰์—์„œ ๋” ์ข‹์€ ์„ฑ๋Šฅ์„ ๋‚˜ํƒ€๋ƒ„์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค. ๋ฏธ์ˆ˜์ • ์‚ฌ์ „๊ณผ ์ˆ˜์ • ์‚ฌ์ „์˜ ํ‰๊ท  F \(1 \) ์ ์ˆ˜์˜ ์ฐจ์ด๋Š” ์—„๊ฒฉํ•œ ํ‰๊ฐ€์˜ ๊ฒฝ์šฐ \( 8.4 \% \) ์—ˆ๊ณ , ๊ด€๋Œ€ํ•œ ํ‰๊ฐ€์˜ ๊ฒฝ์šฐ \( 4.9 \% \) ์—ˆ๋‹ค. </p> - <h1>III ๊ฒฐ๋ก </h1><p>๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๊ธฐ์กด์˜ ๋ฒ…-๋ถ€์ŠคํŠธ ์ปจ๋ฒ„ํ„ฐ์˜ ํšจ์œจ ๋ณด๋‹ค ๋†’์€ ํšจ์œจ์„ ๊ฐ–๋Š” ๋‹ค์ค‘ ์Šค์œ„์นญ ์†Œ์ž๋ฅผ ์‚ฌ์šฉํ•œ ๋ฒ…-๋ถ€์ŠคํŠธ ์ปจ๋ฒ„ํ„ฐ๋ฅผ ์„ค๊ณ„ํ•˜์˜€๋‹ค. ์ œ์•ˆํ•œ ์ปจ๋ฒ„ํ„ฐ๋Š” ๋™์ผ๋ฉด์  ๋ฐ ๋™์ผ ํšจ์œจ ๋˜๋Š” ์ ์€ ํšจ์œจ ๊ฐ์†Œ๋งŒ์œผ๋กœ๋„ ๋„“์€ ์ถœ๋ ฅ ์ „์•• ๋ฒ”์œ„๋ฅผ ๊ฐ–๋„๋ก ์„ค๊ณ„ํ•˜์˜€๋‹ค. ๋ฒ…-๋ถ€์ŠคํŠธ์ปจ๋ฒ„ํ„ฐ๋Š” ๊ณ ์ „๋ฅ˜์—์„œ ๊ณ ํšจ์œจ์„ ์œ„ํ•ด PWM ์ œ์–ด๋ฒ•์„ ์ด์šฉํ•˜์—ฌ ์ œ์–ดํ•˜์˜€๊ณ , ์ „๋ฅ˜๋ชจ๋“œ๋ฅผ ์ด์šฉํ•˜์—ฌ ์„ค๊ณ„ํ•˜์˜€๋‹ค. ์ œ์•ˆํ•œ ์ปจ๋ฒ„ํ„ฐ๋Š” ์ตœ๋Œ€ ์ถœ๋ ฅ์ „๋ฅ˜ \( 300 \mathrm{~mA} \), ์ž…๋ ฅ ์ „์•• \( 3.3 \mathrm{~V} \)์— ์ถœ๋ ฅ์ „์•• \( 700 \mathrm{mV}^{\sim} 12 \mathrm{~V}, 1.5 \mathrm{MHz} \) ์˜ ์Šค์œ„์นญ์ฃผํŒŒ์ˆ˜๋ฅผ ๊ฐ–๋Š”๋‹ค. ์ตœ๋Œ€ ํšจ์œจ์€ \( 90 \% \) ๋ฅผ ๊ฐ–๋„๋ก ์„ค๊ณ„ํ•˜์˜€๋‹ค. ๋˜ํ•œ ๊ณผ๋ถ€ํ•˜ ๋ฐ ๊ธฐํƒ€ ํ™˜๊ฒฝ์ ์ธ ๋ณ€ํ™”์— ์˜ํ•œ ์˜ค๋™์ž‘์œผ๋กœ ์ธํ•ด ์ „๋ ฅ ์†์‹ค๊ณผ ๋‚ด๋ถ€ ๋ฐ ์™ธ๋ถ€ IC์˜ ์†์ƒ์„ ๋ฐฉ์ง€ํ•˜๊ธฐ ์œ„ํ•œ ๋ณดํ˜ธํšŒ๋กœ๋ฅผ IC ๋‚ด๋ถ€์— ์„ค๊ณ„ํ•˜์—ฌ ์‹ ๋ขฐ์„ฑ์„ ํ–ฅ์ƒ์‹œ์ผฐ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ ๊ณ ์•ˆ๋œ ESD ๋ณดํ˜ธ ์†Œ์ž๋ฅผ ์„ค๊ณ„ ๋ฐ ํƒ‘์žฌํ•˜์—ฌ ์ •์ „๊ธฐ ๋ฐฉ์ง€๋กœ ์ธํ•œ IC์˜ ์†์ƒ์„ ๋ฐฉ์ง€ํ•˜๊ณ , ๊ธฐ์กด์˜ ggNMOS์˜ ๋†’์€ ํŠธ๋ฆฌ๊ฑฐ ์ „์••์„ ๊ฐœ์„ ํ•˜์—ฌ, ๋‚ฎ์€ํŠธ๋ฆฌ๊ฑฐ๋ง ํŠน์„ฑ์„ ๊ฐ–๋Š” ESD ๋ณดํ˜ธํšŒ๋กœ๋ฅผ ์ œ์•ˆ ๋ฐ ์„ค๊ณ„ํ•˜์˜€๋‹ค. 
์‹œ๋ฎฌ๋ ˆ์ด์…˜ ๊ฒฐ๊ณผ ์ผ๋ฐ˜์ ์ธ ggnmos์˜ ํŠธ๋ฆฌ๊ฑฐ์ „์••์ด \( 8 \mathrm{~V} \) ๋‚ด์™ธ์ธ ๊ฒƒ์— ๋ฐ˜ํ•ด ๊ณ ์•ˆ๋œ ์†Œ์ž์˜ ํŠธ๋ฆฌ๊ฑฐ์ „์••์€ \( 4 \mathrm{~V} \) ๋‚ด์™ธ๋กœ ๋” ๋‚ฎ์€ ํŠธ๋ฆฌ๊ฑฐ ์ „์•• ํŠน์„ฑ์„ ๋‚˜ํƒ€๋ƒˆ๋‹ค. </p> - <h1>์š” ์•ฝ</h1><p>๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” DT-CMOS(Dynamic Threshold voltage Complementary MOSFET) ์Šค์œ„์นญ ์†Œ์ž๋ฅผ ์‚ฌ์šฉํ•œ DC-DC Buck ์ปจ๋ฒ„ํ„ฐ๋ฅผ ์ œ์•ˆํ•˜์˜€๋‹ค. ๋†’์€ ํšจ์œจ์„ ์–ป๊ธฐ ์œ„ํ•˜์—ฌ PWM ์ œ์–ด๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜์˜€์œผ๋ฉฐ, ๋‚ฎ์€ ์˜จ ์ €ํ•ญ์„ ๊ฐ–๋Š” DT-CMOS ์Šค์œ„์น˜ ์†Œ์ž๋ฅผ ์„ค๊ณ„ํ•˜์—ฌ ๋„ํ†ต ์†์‹ค์„ ๊ฐ์†Œ์‹œ์ผฐ๋‹ค. ์ œ์•ˆํ•œ Buck ์ปจ๋ฒ„ํ„ฐ๋Š” ๋ฐด๋“œ๊ฐญ ๊ธฐ์ค€ ์ „์•• ํšŒ๋กœ,์‚ผ๊ฐํŒŒ ๋ฐœ์ƒ๊ธฐ, ์˜ค์ฐจ ์ฆํญ๊ธฐ, ๋น„๊ต๊ธฐ, ๋ณด์ƒ ํšŒ๋กœ, PWM ์ œ์–ด ๋ธ”๋ก์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋‹ค. ์‚ผ๊ฐํŒŒ ๋ฐœ์ƒ๊ธฐ๋Š” ์ „์›์ „์••(3.3V)๋ถ€ํ„ฐ ์ ‘์ง€๊นŒ์ง€ ์ถœ๋ ฅ ์ง„ํญ์˜ ๋ฒ”์œ„๋ฅผ ๊ฐ–๋Š” \( 1.2 \mathrm{MHz} \) ์˜ ์ฃผํŒŒ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜๋ฉฐ, ๋น„๊ต๊ธฐ๋Š” 2๋‹จ ์ฆํญ๊ธฐ๋กœ ์„ค๊ณ„๋˜์—ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์˜ค์ฐจ ์ฆํญ๊ธฐ๋Š” \( 70 \mathrm{~dB} \) ์˜ ์ด๋“๊ณผ \( 64^{\circ} \)์˜ ์œ„์ƒ์—ฌ์œ ๋ฅผ ๊ฐ–๋„๋ก ์„ค๊ณ„ํ•˜์˜€๋‹ค. ๋˜ํ•œ ์ œ์•ˆํ•œ Buck ์ปจ๋ฒ„ํ„ฐ๋Š”current-mode PWM ์ œ์–ดํšŒ๋กœ์™€ ๋‚ฎ์€ ์˜จ์ €ํ•ญ์„ ๊ฐ–๋Š” ์Šค์œ„์น˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ \( 100 \mathrm{~mA} \)์˜ ์ถœ๋ ฅ ์ „๋ฅ˜์—์„œ ์ตœ๋Œ€ \( 95 \% \)์˜ ํšจ์œจ์„ ๊ตฌํ˜„ํ•˜์˜€์œผ๋ฉฐ, \( 1 \mathrm{~mA} \)์ดํ•˜์˜ ๋Œ€๊ธฐ๋ชจ๋“œ์—๋„ ๋†’์€ ํšจ์œจ์„ ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•˜์—ฌ LDO ๋ ˆ๊ทค๋ ˆ์ดํ„ฐ๋ฅผ ์„ค๊ณ„ํ•˜์˜€์œผ๋ฉฐ,๋˜ํ•œ 2๊ฐœ์˜ IC ๋ณดํ˜ธ ํšŒ๋กœ๋ฅผ ๋‚ด์žฅํ•˜์—ฌ ์‹ ๋ขฐ์„ฑ์„ ํ™•๋ณดํ•˜์˜€๋‹ค. </p><h1>1. ์„œ๋ก </h1><p>์ตœ๊ทผ์˜ ํœด๋Œ€์ „ํ™”, PDA, MP3๊ณผ ๊ฐ™์€ ํœด๋Œ€์šฉ ๋ฉ€ํ‹ฐ๋ฏธ๋””์–ด์˜ ์‚ฌ์šฉ์ด ๊ธ‰์ฆํ•จ์— ๋”ฐ๋ผ ๊ณ ํšจ์œจ, ์†Œํ˜•ํ™”๋ฅผ ์œ„ํ•ด ๊ธฐ์กด์˜ Linear ๋ฐฉ์‹์˜ ์ „์›์žฅ์น˜์—์„œ SMPS ๋ฐฉ์‹์œผ๋กœ ๋Œ€์ฒด๋˜๊ณ  ์žˆ๋Š” ์ถ”์„ธ์ด๋‹ค. SMPS(Switching Mode Power Supply)๋Š” ์Šค์œ„์นญ์ฃผํŒŒ์ˆ˜๋ฅผ ์ด์šฉํ•ด ์—๋„ˆ์ง€ ์ถ•์ ์šฉ ์†Œ์ž์˜ ์†Œํ˜•ํ™”๋ฅผ ์ด๋ฃฐ ์ˆ˜ ์žˆ์œผ๋‚˜, ์Šค์œ„์นญ ์ฃผํŒŒ์ˆ˜์˜ ๊ณ ์ฃผํŒŒํ™”๋กœ ์ธํ•ด ์ƒ๊ธฐ๋Š” ์Šค์œ„์นญ ์†์‹ค, ์ธ๋•ํ„ฐ ์†์‹ค, ์ „๋„ ์†์‹ค ๋“ฑ์— ๋Œ€ํ•œ๋Œ€์ฑ…์„ ๊ฐ•๊ตฌํ•˜์—ฌ์•ผ ํ•œ๋‹ค. ๊ธฐ์กด์˜ ์ €์ „์•• DC-DC ์ปจ๋ฒ„ํ„ฐ๋Š” ์Šค์œ„์นญ ์†Œ์ž๋กœ์„œ ์ผ๋ฐ˜์ ์ธ CMOS ์†Œ์ž๋ฅผ ์‚ฌ์šฉํ•ด ์™”๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ CMOS ์Šค์œ„์นญ ์†Œ์ž๋Š” ๋งค์šฐ ์ž‘์€ ์˜จ ์ €ํ•ญ์„ ์–ป๊ธฐ ์œ„ํ•ด์„œ ๋งค์šฐ ํฐ ๋ฉด์ ์„ ํ•„์š”๋กœ ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ์Šค์œ„์นญ ์†Œ์ž์˜ ๋ฉด์  ๋ฌธ์ œ๋ฅผ ๊ฐœ์„  ํ•˜๊ณ ์ž ๋ฌธํ„ฑ์ „์••์„ ๋‚ฎ์ถ”์–ด ์˜จ ์ €ํ•ญ์„ ์ค„์ผ ์ˆ˜ ์žˆ๋Š” DT-CMOS๋ฅผ ์‚ฌ์šฉํ•œ ์Šค์œ„์นญ ์†Œ์ž๋ฅผ ์ œ์•ˆํ•˜์˜€๋‹ค. ์ œ์•ˆ๋œ ์†Œ์ž๋Š” ๊ธฐ์กด์˜ ์ผ๋ฐ˜์ ์ธ CMOS ๊ณต์ •์„ ์ด์šฉํ•˜๊ณ , ๊ธฐ์กด์˜ CMOS ์†Œ์ž ๋ณด๋‹ค ๋” ์ ์€ ๋ฉด์ ์„ ๊ฐ–๊ณ , ๋” ์ž‘์€ ์˜จ์ €ํ•ญ์„ ๊ฐ–๋Š”๋‹ค.[3] </p><p>๋”ฐ๋ผ์„œ ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” DT-CMOS ์Šค์œ„์นญ ์†Œ์ž๋ฅผ์ด์šฉํ•˜์—ฌ ๋™์ผ ๋ฉด์ ์—์„œ ๊ธฐ์กด์˜ CMOS ์Šค์œ„์นญ ์†Œ์ž๋ฅผ ์‚ฌ์šฉํ•œ SMPS ๋ณด๋‹ค ๋” ๋†’์€ ํšจ์œจ์„ ๊ฐ–๋Š” DC-DCBuck ์ปจ๋ฒ„ํ„ฐ๋ฅผ ์„ค๊ณ„ํ•˜์˜€๋‹ค. ๋ณธ๋ก  1์ ˆ์—์„œ๋Š” DT-CMOS ์Šค์œ„์นญ ์†Œ์ž์˜ ๊ธฐ๋ณธ์ ์ธ ๊ฐœ๋…๊ณผ ๊ตฌํ˜„ ๋ฐฉ๋ฒ• ๊ทธ๋ฆฌ๊ณ  ๋™์ž‘ ํŠน์„ฑ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜์˜€์œผ๋ฉฐ, 2์ ˆ์—์„œ๋Š” DC-DC Buck ์ปจ๋ฒ„ํ„ฐ ์„ค๊ณ„์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜์˜€๋‹ค. 3์ ˆ์—์„œ๋Š” ๋‚ฎ์€ ์ถœ๋ ฅ ์ „๋ฅ˜์—์„œ ํšจ์œจ์ด ๊ธ‰๊ฒฉํžˆ ๊ฐ์†Œํ•˜๋Š” PWM ๋ฐฉ์‹์„ ๋ณด์™„ํ•˜๋Š” LDO ๋ ˆ๊ทค๋ ˆ์ดํ„ฐ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜์˜€์œผ๋ฉฐ, 4์ ˆ์—์„œ๋Š” IC๋ฅผ ๋ณดํ˜ธํ•˜๊ธฐ ์œ„ํ•œ ํšŒ๋กœ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜์˜€๋‹ค. </p> - source_sentence: Table 1. Natural frequency of each cantilever with different weights์—์„œ 10 g์ผ ๋•Œ Type2์˜ ๊ฐ’์€ ์–ด๋– ํ•œ๊ฐ€? sentences: - <table border><caption>ํ‘œ 6. 
๋ฐฉ๋ฒ• 2์˜ ํ˜•์‹๋ถ„๋ฅ˜์œจ ๋ฐ ํ˜•์‹๋ณ„ ๋ฌธ์ž ์ธ์‹๊ธฐ์˜ ์ธ์‹์œจ</caption> <tbody><tr><tr><td>In Out</td><td>Type1</td><td>Type2</td><td>Type3</td><td>Type4</td><td>Type5</td><td>Type6</td><td>Type7</td><td>Rec(T)</td><td>Rec(C)</td><td>Rec(C\7)</td></tr><tr><td>Type1</td><td>51,076</td><td>524</td><td>305</td><td>955</td><td>252</td><td>30</td><td>150</td><td>95.84</td><td>95.34</td><td>99.48</td></tr><tr><td>Type2</td><td>741</td><td>35,323</td><td>43</td><td>281</td><td>511</td><td>33</td><td>178</td><td>95.18</td><td>95.01</td><td>95.18</td></tr><tr><td>Type3</td><td>816</td><td>642</td><td>9,942</td><td>756</td><td>140</td><td>12</td><td>38</td><td>80.53</td><td>80.25</td><td>99.66</td></tr><tr><td>Type4</td><td>1,257</td><td>1,007</td><td>315</td><td>72,787</td><td>657</td><td>384</td><td>176</td><td>95.04</td><td>94.25</td><td>99.17</td></tr><tr><td>Type5</td><td>541</td><td>1,807</td><td>40</td><td>910</td><td>44.846</td><td>234</td><td>94</td><td>92.52</td><td>91.40</td><td>98.79</td></tr><tr><td>Type6</td><td>182</td><td>182</td><td>119</td><td>905</td><td>201</td><td>4.521</td><td>19</td><td>73.76</td><td>73.41</td><td>99.51</td></tr><tr><td>Type7</td><td>4,784</td><td>2,289</td><td>939</td><td>1,911</td><td>1,268</td><td>196</td><td>55.752</td><td>83.04</td><td>82.85</td><td>99.77</td></tr><tr><td colspan=8>๊ณ„</td><td>91.09</td><td>90.54</td><td>99.39</td></tr></tbody></table> <table border><caption>ํ‘œ 7. ๋ฐฉ๋ฒ• 3์˜ ํ˜•์‹๋ถ„๋ฅ˜์œจ ๋ฐ ํ˜•์‹๋ณ„ ๋ฌธ์ž ์ธ์‹๊ธฐ์˜ ์ธ์‹์œจ</caption> <tbody><tr><tr><td>In Out</td><td>Type1</td><td>Type2</td><td>Type3</td><td>Type4</td><td>Type5</td><td>Type6</td><td>Type7</td><td>Rec(T)</td><td>Rec(C)</td><td>Rec(C/T)</td></tr><tr><td>Type1</td><td>53,259</td><td>1</td><td>17</td><td>5</td><td>0</td><td>0</td><td>10</td><td>99.94</td><td>98.88</td><td>98.94</td></tr><tr><td>Type2</td><td>0</td><td>37,092</td><td>0</td><td>1</td><td>2</td><td>0</td><td>15</td><td>99.95</td><td>95.51</td><td>99.95</td></tr><tr><td>Type3</td><td>20</td><td>1</td><td>12,314</td><td>8</td><td>0</td><td>3</td><td>0</td><td>99.74</td><td>98.88</td><td>99.14</td></tr><tr><td>Type4</td><td>30</td><td>25</td><td>15</td><td>76,430</td><td>12</td><td>45</td><td>26</td><td>99.80</td><td>98.20</td><td>98.39</td></tr><tr><td>Type5</td><td>1</td><td>34</td><td>1</td><td>6</td><td>48,377</td><td>21</td><td>32</td><td>99.80</td><td>97.64</td><td>97.83</td></tr><tr><td>Type6</td><td>0</td><td>1</td><td>7</td><td>58</td><td>5</td><td>6,058</td><td>0</td><td>98.84</td><td>97.52</td><td>98.66</td></tr><tr><td>Type7</td><td>8</td><td>14</td><td>1</td><td>5</td><td>3</td><td>2</td><td>67.106</td><td>99.95</td><td>99.14</td><td>99.19</td></tr><tr><td colspan=8>๊ณ„</td><td>99.86</td><td>98.61</td><td>98.76</td></tr></tbody></table> <p>๋ฐฉ๋ฒ• 1, 2, 3์˜ ๊ฒฐ๊ณผ๋กœ ๋ณด์•„, ์ธ์‡„์ฒด ๋ฌธ์ž์ธ์‹์— ์žˆ์–ด์„œ ๋ฐฉํ–ฅ๊ฐ๋„ ํŠน์ง•์„ ์ž…๋ ฅ์œผ๋กœ ํ•˜๋Š” MLP ์‹ ๊ฒฝ๋ง ํ˜•์‹๋ถ„๋ฅ˜๊ธฐ๋Š” \( 99 \% \) ์ด์ƒ์˜ ๋ถ„๋ฅ˜์œจ๋กœ ์ž์†Œ ์กฐํ•ฉ ๋ฐฉ์‹์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ๋ฌธ์ž์˜ ํ˜•์‹๋ถ„๋ฅ˜์— ์ ์ ˆํ•˜์—ฌ, ํ˜•์‹ ๋Œ€๋ถ„๋ฅ˜ ํ›„ ๋ฌธ์ž ์ƒ์„ธ์ธ์‹ ์ „๋žต์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋งค์šฐ ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. ๋˜ํ•œ ๊ฐ ํ˜•์‹์ด ์•Œ๋ ค์ง„ ํ›„์˜ ๋ฌธ์ž์ธ์‹์— ์žˆ์–ด์„œ๋„ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ฐฉํ–ฅ๊ฐ๋„ ํŠน์ง•์„ ์ž…๋ ฅ์œผ๋กœ ํ•˜๋Š” 2๋‹จ๊ณ„ MLP ์‹ ๊ฒฝ๋ง ์ธ์‹ ๋ฐฉ๋ฒ•์ด \( 98 \% \) ์ด์ƒ์˜ ์ธ์‹์œจ์„ ๋ณด์—ฌ ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. 
</p> <p>๋‹จ์ˆœ ์Šค์œ„์นญ ๋ฐฉ๋ฒ•๊ณผ ํ†ตํ•ฉ ๋ฐฉ๋ฒ•์„ ํ˜ผ์šฉํ•˜์—ฌ, ํ˜•์‹๋ถ„๋ฅ˜๊ธฐ์˜ 1์ˆœ์œ„ ๋ถ„๋ฅ˜๊ฒฐ๊ณผ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ 2์ˆœ์œ„ ํ˜•์‹์— ๋Œ€ํ•ด์„œ๋„ ์ธ์‹์„ ํ•˜์—ฌ ๋ณด๋‹ค ๋†’์€ ์‹ ๋ขฐ๋„๊ฐ’์„ ๊ฐ€์ง€๋Š” ๋ฌธ์žํด๋ž˜์Šค๋ฅผ ์ธ์‹ ๊ฒฐ๊ณผ๋กœ ํ•˜๋Š” ๋ฐฉ๋ฒ•์ธ ๋ฐฉ๋ฒ• 4, 5, 6, 7์— ๋Œ€ํ•œ ๋ฌธ์ž์ธ์‹ ๊ฒฐ๊ณผ๋ฅผ ๋ฐฉ๋ฒ• 1, 2, 3์˜ ๊ฒฐ๊ณผ์™€ ํ•จ๊ป˜<๊ทธ๋ฆผ 11>์— ๋‚˜ํƒ€๋‚ด์—ˆ๋‹ค. ๋ฐฉ๋ฒ• 5์™€ 7์ด \( 98.65 \% \) ์˜ ์ธ์‹์œจ๋กœ ๊ฐ€์žฅ ๋†’์€ ์ธ์‹์œจ์„ ๋ณด์˜€๋Š”๋ฐ, ๊ฐ๊ฐ ๋ฐฉ๋ฒ• 4์™€ 6์— ๋ฌธ์ž์ธ์‹ ์‹ ๋ขฐ๋„๊ฐ’์„ ํ˜•์‹๋ถ„๋ฅ˜๊ธฐ์˜ ๊ฒฐ๊ณผ๊ฐ’์œผ๋กœ ๊ฐ€์ค‘ํ™”ํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋ฐฉ๋ฒ• 3์˜ ๊ฒฝ์šฐ์—์„œ์ฒ˜๋Ÿผ ํ˜•์‹๋ถ„๋ฅ˜๊ธฐ์˜ ๋ถ„๋ฅ˜๊ฒฐ๊ณผ๊ฐ’์ด ๋ฌธ์ž์ธ์‹์œจ์˜ ํ–ฅ์ƒ์— ๋„์›€์ด ๋˜์—ˆ๋‹ค๋Š” ๊ฒƒ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๊ฒƒ์ด๋‹ค. ๋ฐฉ๋ฒ• 6๊ณผ 7์€ ๋ฐฉ๋ฒ• 4์™€ 5์˜ ๋ณ€ํ˜•์œผ๋กœ ํ˜•์‹๋ถ„๋ฅ˜๊ธฐ \( \mathrm{TR} \)์˜ ์ถœ๋ ฅ๊ฐ’์ด๋‚˜ \( \mathrm{CR} \) ๋ฌธ์ž์ธ์‹๊ธฐ์˜ ์‹ ๋ขฐ๋„ ๊ฐ’์ด ์ž„๊ณ„์น˜( \( \beta \) ) ๋ณด๋‹ค ๋‚ฎ์„ ๊ฒฝ์šฐ์—๋งŒ 2์ˆœ์œ„ ํ˜•์‹์˜ \( \mathrm{CR} \)์„ ์„ ํƒ์ ์œผ๋กœ ํ˜ธ์ถœํ•˜์˜€๋‹ค. <๊ทธ๋ฆผ 11>์—์„œ ๋‚˜ํƒ€๋‚ฌ๋“ฏ์ด, ํ˜•์‹๋ถ„๋ฅ˜๊ธฐ์™€ ๋ฌธ์ž์ธ์‹๊ธฐ์˜ ์ธ์‹๊ฒฐ๊ณผ๊ฐ€ ์˜์‹ฌ์Šค๋Ÿฌ์šด ๊ฒฝ์šฐ๋งŒ ํ˜ธ์ถœํ•œ ๋ฐฉ๋ฒ• 6๊ณผ 7์ด ๋ฌด์กฐ๊ฑด์ ์œผ๋กœ 2์ˆœ์œ„ ํ˜•์‹์— ๋Œ€ํ•ด์„œ๋„ ์ธ์‹ํ•œ ๋ฐฉ๋ฒ• 4์™€ 5์— ๋น„ํ•ด ์„ฑ๋Šฅ์ด ์šฐ์ˆ˜ํ•จ์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. </p> - <h1>์š” ์•ฝ</h1><p>๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ํ‘œ๋ฉด Texturing ๋ฐฉ๋ฒ• ์ค‘ ์Šต์‹ ์—์นญ๋ฒ•์„ ์ด์šฉํ•˜์—ฌ ํƒœ์–‘์ „์ง€์— ์‚ฌ์šฉ๋˜๋Š” ์ „๊ทน์˜ ํ‘œ๋ฉด์„ ๊ฑฐ์น ๊ฒŒ ์ฒ˜๋ฆฌํ•˜์˜€๊ณ , ํ‘œ๋ฉด ์ฒ˜๋ฆฌ ํ›„ \( \mathrm{TiO}_{2} \) ์‚ฐํ™”๋ฌผ ๋ฐ˜๋„์ฒด๋ฅผ ์‚ฌ์šฉํ•œ ์—ผ๋ฃŒ ๊ฐ์‘ ํƒœ์–‘์ „์ง€๋ฅผ ์ œ์ž‘ํ•˜์˜€๋‹ค. ํ‘œ๋ฉด ์ฒ˜๋ฆฌ๋œ ์ „๊ทน์„ ์—์นญ ์‹œ๊ฐ„์— ๋”ฐ๋ฅธ ๋ถ„๊ด‘ํŠน์„ฑ์„ ์ธก์ • ๋ถ„์„ํ•˜์˜€์œผ๋ฉฐ, ์—์นญ ์‹œ๊ฐ„์— ๋”ฐ๋ผ ์ œ์ž‘ํ•œ \( \mathrm{TiO}_{2} \) ์—ผ๋ฃŒ ๊ฐ์‘ ํƒœ์–‘์ „์ง€์˜ ์ „๊ธฐ์  ํŠน์„ฑ์„ ํ‰๊ฐ€ํ•จ์œผ๋กœ์จ ํ‘œ๋ฉด ์ฒ˜๋ฆฌ์— ๋”ฐ๋ฅธ ํƒœ์–‘์ „์ง€์˜ ํšจ์œจ ํ–ฅ์ƒ์— ๊ด€ํ•œ ์—ฐ๊ตฌ๋ฅผ ์ง„ํ–‰ํ•˜์˜€๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ ์ „๊ทน ํ‘œ๋ฉด์„ 10 ๋ถ„๊ฐ„ ์—์นญ ์ฒ˜๋ฆฌํ•œ ํƒœ์–‘์ „์ง€์˜ ๊ฒฝ์šฐ ๊ธฐ์กด ํšจ์œจ๊ณผ ๋น„๊ตํ•˜์˜€์„ ๋•Œ, ์•ฝ \( 27.46[\%] \) ๊ฐœ์„ ๋จ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. </p><h1>I. ์„œ๋ก </h1><p>ํƒœ์–‘์ „์ง€ ์‚ฐ์—…์—์„œ ์ฃผ๋กœ ๋‹จ๊ฒฐ์ • ๋ฐ ๋‹ค๊ฒฐ์ • ์‹ค๋ฆฌ์ฝ˜๊ณ„ ํƒœ์–‘์ „์ง€๊ฐ€ ๋†’์€ ์‹œ์žฅ ์ ์œ ์œจ์„ ๋ณด์ด์ง€๋งŒ, ์‹ค๋ฆฌ์ฝ˜ ํƒœ์–‘์ „์ง€๋Š” ๋†’์€ ์ œ์กฐ๋‹จ๊ฐ€, ๋ณต์žกํ•œ ์ œ์กฐ๊ณต์ • ๋“ฑ์˜ ์ธก๋ฉด์—์„œ ๊ฒฝ์Ÿ๋ ฅ์ด ๋‹ค์†Œ ๋–จ์–ด์ ธ ์–ด๋ ค์›€์„ ๊ฒช๋Š” ์‹ค์ •์— ๋†“์—ฌ์žˆ๋‹ค. ์ด์— ์ด๋ฅผ ๋Œ€์ฒดํ•  ์—ฌ๋Ÿฌ ํƒœ์–‘์ „์ง€ ์ค‘์—์„œ ์—ผ๋ฃŒ ๊ฐ์‘ ํƒœ์–‘์ „์ง€๊ฐ€ ๊ฐœ๋ฐœ๋˜์–ด ์ง€์†์ ์ธ ์—ฐ๊ตฌ๊ฐ€ ์ง„ํ–‰๋˜๊ณ  ์žˆ๋‹ค. </p><p>์—ผ๋ฃŒ ๊ฐ์‘ ํƒœ์–‘์ „์ง€์˜ ๊ฒฝ์šฐ์—๋Š” ์ œ์กฐ๋‹จ๊ฐ€๊ฐ€ ์‹ค๋ฆฌ์ฝ˜์˜ \(5\) ๋ถ„์˜ \(1\) ์ˆ˜์ค€์— ๋ถˆ๊ณผํ•˜๋ฉฐ, ๋‹ค์–‘ํ•œ ์ƒ‰์ƒ๊ตฌํ˜„, ์œ ์—ฐ์„ฑ ๋ฐ ํˆฌ๋ช…์„ฑ ๋“ฑ์˜ ๋‹ค์–‘ํ•œ ์‘์šฉ ๊ฐ€๋Šฅ์„ฑ์œผ๋กœ ์ƒ์šฉํ™”์— ์œ ๋ฆฌํ•œ ํŠน์ง•์„ ์ง€๋‹ˆ๊ณ  ์žˆ์–ด ์ฐจ์„ธ๋Œ€ ํƒœ์–‘์ „์ง€๋กœ ๋ถˆ๋ฆฐ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋Ÿฌํ•œ ์—ฌ๋Ÿฌ ์žฅ์ ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ  ์—ผ๋ฃŒ ๊ฐ์‘ ํƒœ์–‘์ „์ง€๊ฐ€ ์ƒ์šฉํ™”๋˜์–ด ์ œํ’ˆ์œผ๋กœ ์ƒ์‚ฐ๋˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ํƒœ์–‘์ „์ง€์˜ ํšจ์œจ์ด ๋”์šฑ ๊ฐœ์„ ๋˜์–ด์•ผ ํ•˜๋Š” ์—ฐ๊ตฌ๊ณผ์ œ๊ฐ€ ๋‚จ์•„ ์žˆ๋Š” ์ƒํƒœ์ด๋‹ค. ์ด๋Ÿฌํ•œ ์—ผ๋ฃŒ ๊ฐ์‘ ํƒœ์–‘์ „์ง€์˜ ํšจ์œจ์„ ํ–ฅ์ƒํ•˜๋Š” ๋ฐฉ์•ˆ์œผ๋กœ๋Š” ๋‚˜๋…ธ์ž…์ž์˜ ์‚ฐํ™”๋ฌผ ๋ฐ˜๋„์ฒด์˜ ์ž…์žํฌ๊ธฐ, ๊ฒฐ์ •์„ฑ, ํ‘œ๋ฉด ์ƒํƒœ ์กฐ์ ˆ ๊ธฐ์ˆ  ๋“ฑ์˜ ๊ฐœ๋ฐœ๊ณผ ๋‚˜๋…ธ์ž…์ž ์‚ฐํ™”๋ฌผ ๋ฐ˜๋„์ฒด ํ‘œ๋ฉด๊ณผ์˜ ๊ฒฌ๊ณ ํ•œ ๊ฒฐํ•ฉ๋ ฅ์„ ๊ฐ€์ง€๋ฉฐ ๋„“์€ ๋ฒ”์œ„ ํŒŒ์žฅ์„ ํก์ˆ˜ํ•  ์ˆ˜ ์žˆ๋Š” ์—ผ๋ฃŒ์˜ ๊ฐœ๋ฐฉ ๋“ฑ ๋‚˜๋…ธ์ž…์ž ์‚ฐํ™”๋ฌผ ๋ฐ˜๋„์ฒด์— ๊ด€ํ•œ ์—ฐ๊ตฌ๊ฐ€ ํ•„์š”ํ•˜๋‹ค. 
๋˜ ์ž…์‚ฌ๋˜๋Š” ๋น›์ด ํƒœ์–‘์ „์ง€ ํ‘œ๋ฉด์„ ํ†ตํ•ด ์ „์ง€ ๋‚ด๋ถ€๋กœ ๋ชจ๋‘ ํˆฌ๊ณผ๋˜์ง€ ๋ชปํ•˜๊ณ  ํ‘œ๋ฉด์—์„œ ๋ฐ˜์‚ฌ๋˜๋ฉด์„œ ๋ฐœ์ƒํ•˜๋Š” ๊ด‘ํ•™์  ์†์‹ค์„ ์ค„์ด๊ธฐ ์œ„ํ•œ ๋Œ€์ฑ…๋„ ์—ฐ๊ตฌ ๊ฐœ๋ฐœ ์ด๋ฃจ์–ด์ ธ์•ผ ํ•œ๋‹ค. </p><p>๋ณธ ์—ฐ๊ตฌ๋Š” ์ด์ „ ๋…ผ๋ฌธ์—์„œ ๋‹ค๋ฃฌ ๊ฒฐ๊ณผ๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ DSSC(Dye-Sensitized Solar Cell)์— ๋Œ€ํ‘œ์ ์œผ๋กœ ์‚ฌ์šฉ๋˜๋Š” \( \mathrm{TiO}_{2} \) ์‚ฐํ™”๋ฌผ ๋ฐ˜๋„์ฒด๋ฅผ ์ด์šฉํ•˜์—ฌ ํƒœ์–‘์ „์ง€๋ฅผ ์ œ์ž‘ํ•˜๊ณ , ์ถ”๊ฐ€๋กœ ํƒœ์–‘์ „์ง€ ์ƒ์ธต ํ‘œ๋ฉด์—์„œ์˜ ๋ฐ˜์‚ฌ์†์‹ค์„ ๊ฐ์†Œ์‹œํ‚ค๊ธฐ ์œ„ํ•ด FTO(Fluorine doped Tin Oxide) ์œ ๋ฆฌ ๊ธฐํŒ์„ ํ‘œ๋ฉด ์ฒ˜๋ฆฌํ•˜์—ฌ ๊ด‘ ์ „๊ทน์œผ๋กœ ์ „๋‹ฌ๋˜๋Š” ๋น›์˜ ์–‘์„ ์ฆ๊ฐ€์‹œ์ผœ ํšจ์œจ์„ ๊ฐœ์„ ํ•˜๊ณ ์ž ํ•˜์˜€๋‹ค. ์œ ๋ฆฌ ๊ธฐํŒ์˜ ํ‘œ๋ฉด ์ฒ˜๋ฆฌ๋Š” ๊ณต์ • ๊ณผ์ •์ด ๋งค์šฐ ๊ฐ„๋‹จํ•˜๊ณ , ์ปจํŠธ๋กคํ•˜๊ธฐ ์‰ฌ์šฐ๋ฉฐ, ๊ฐ€๊ฒฉ์ด ์ €๋ ดํ•œ ์Šต์‹ ์—์นญ์„ ์ด์šฉํ•˜์˜€๋‹ค. ์ด๋ ‡๊ฒŒ ํ‘œ๋ฉด ์ฒ˜๋ฆฌํ•œ ์ „๊ทน๊ณผ ์—ผ๋ฃŒ ๊ฐ์‘ ํƒœ์–‘์ „์ง€์˜ ์ตœ์  ์กฐ๊ฑด์„ ์–ป๊ธฐ ์œ„ํ•ด์„œ Sample์„ ๊ด‘ํ•™์ , ์ „๊ธฐ์  ํŠน์„ฑ์„ ์—ฐ๊ตฌํ•˜์˜€๋‹ค. </p> - <h1>2. ์„ค๊ณ„ ๋‚ด์šฉ</h1> <p>์ด๋ฒˆ ์—ฐ๊ตฌ์—์„œ๋Š” ๊ธฐ์กด์— ์—ฐ๊ตฌ๋˜์—ˆ๋˜ ์ผ„ํ‹ธ๋ฆฌ๋ฒ„์˜ ๊ธธ์ด๋ฐ ์ถ”์˜ ๋ฌด๊ฒŒ์— ์ง์ ‘์ ์œผ๋กœ ์˜์กดํ•˜์ง€ ์•Š๊ณ  ์ผ„ํ‹ธ๋ฆฌ๋ฒ„์˜ ๊ตฌ์กฐ์  ํ˜•์ƒ์— ๋”ฐ๋ผ ์ƒ์šฉ ์••์ „์†Œ์ž(PI์‚ฌ์˜ DuraAct)๋กœ๋ถ€ํ„ฐ ์ตœ๋Œ€์˜ ์ „๋ ฅ์„ ์‚ฐ์ถœํ•˜๋Š” ๊ฒƒ์ด ๋ชฉ์ ์ด๋‹ค. ๋”ฐ๋ผ์„œ ์ง์‚ฌ๊ฐํ˜•๊ณผ ์‚ฌ๋‹ค๋ฆฌ๊ผด ๊ตฌ์กฐ๋ฅผ Solidworks์˜ ๋ณ€ํ˜•๋ฅ  ํ•ด์„์„ ํ†ตํ•ด ํ‘œ๋ฉด์˜ ๋ณ€ํ˜•๋ฅ ์„ ํ™•์ธํ•˜์˜€๋‹ค. </p> <h2>2.1. ์บ”ํ‹ธ๋ ˆ๋ฒ„ ์„ค๊ณ„ ๋ณ€์ˆ˜ ์„ค์ •</h2> <p>์บ”ํ‹ธ๋ ˆ๋ฒ„์˜ ์žฌ๋ฃŒ๋Š” Aluminum 5052๋กœ ์ •ํ–ˆ์œผ๋ฉฐ, ๋‘๊ป˜๋Š” 0.8 \(\mathrm{mm}\), ๊ธธ์ด๋Š” 135 \(\mathrm{mm}\), ๊ทธ๋ฆฌ๊ณ  ํญ์€ 65 \(\mathrm{mm}\)์˜ ๋™์ผํ•œ ํฌ๊ธฐ๋กœ ์„ค์ •ํ–ˆ์œผ๋ฉฐ ์‚ฌ๋‹ค๋ฆฌ๊ผด ๊ตฌ์กฐ๋Š” ์‚ผ๊ฐํ˜•๊ณผ ์ตœ๋Œ€ํ•œ๊ฐ€๊น๋„๋ก ๋ฌด๊ฒŒ์ถ” ๋ถ€์ฐฉ์„ ์œ„ํ•œ 10 \(\mathrm{mm}\)๋งŒ ๋‚จ๊ฒจ๋†“์•˜๋‹ค. ์ „์ฒด์ ์ธ ์บ”ํ‹ธ๋ ˆ๋ฒ„ ํฌ๊ธฐ๋Š” ์‚ฌ์šฉ๋˜๋Š” DuraAct ์••์ „์†Œ์ž์˜ ํฌ๊ธฐ์— ๋งž์ถฐ ์„ ์ •๋˜์—ˆ๋‹ค. ๋‘ ๊ฐ€์ง€ ์บ”ํ‹ธ๋ ˆ๋ฒ„์˜ ํ˜•์ƒ์€ Fig. 1์™€ ๊ฐ™๋‹ค. </p> <h2>2.2. Solidworks ๋ณ€ํ˜•๋ฅ  ํ•ด</h2> <p>Solidworks ํ”„๋กœ๊ทธ๋žจ์„ ํ™œ์šฉํ•˜์—ฌ ์••์ „์†Œ์ž๊ฐ€ ๋ถ€์ฐฉ๋  ์ผ„ํ‹ธ๋ฆฌ๋ฒ„ ํ‘œ๋ฉด์˜ ๊ธธ์ด๋ฐฉํ–ฅ ๋ณ€ํ˜•๋ฅ ์„ ๋ถ„์„ํ•ด ๋ณด์•˜๋‹ค. ํ‘œ๋ฉด๋ณ€ํ˜•๋ฅ ์„ ํ‰๊ท ์ ์œผ๋กœ ๋ถ„์„ํ•˜๊ธฐ ์œ„ํ•ด ์ผ„ํ‹ธ๋ฆฌ๋ฒ„ ์ค‘์‹ฌ์  ๋…ธ๋“œ๋ฅผ ๊ฐ๊ฐ ๋น„๊ตํ•ด ๋ณด์•˜๋‹ค. Fig. 2์˜ ์ƒ‰๊น”์— ๋”ฐ๋ฅธ ํ‘œ๋ฉด ๋ณ€ํ˜•๋ฅ ์„ ์ง์ ‘ ๋น„๊ตํ•˜๊ธฐ๊ฐ€ ํž˜๋“œ๋ฏ€๋กœ Fig. 3์—์„œ ๊ทธ๋ž˜ํ”„๋กœFig. 2 ์— ํ‘œ์‹œ๋œ ๋ฐฉํ–ฅ๋Œ€๋กœ ํ‘œ๋ฉด์— ๋ณ€ํ˜•์œจ์„ ๋‚˜ํƒ€๋‚ด์—ˆ๋‹ค. </p> <p>Fig. 3์˜ ๊ทธ๋ž˜ํ”„๋ฅผ ๋ณด๋ฉด ์•Œ์ˆ˜ ์žˆ๋“ฏ์ด ์ง์‚ฌ๊ฐํ˜• ์บ”ํ‹ธ๋ ˆ๋ฒ„๋ณด๋‹ค ์‚ผ๊ฐํ˜•์— ๊ฐ€๊นŒ์šด ์‚ฌ๋‹ค๋ฆฌ๊ผด ์บ”ํ‹ธ๋ ˆ๋ฒ„๊ฐ€ ๋” ๋งŽ์€ ํ‘œ๋ฉด ๋ณ€ํ˜•๋Ÿ‰์„ ๋ณด์˜€๊ณ , ์ด ํ‘œ๋ฉด์ƒ์— ์••์ „์†Œ์ž๊ฐ€ ์žˆ๋‹ค๋ฉด ๋”๋งŽ์€ ์ „๋ ฅ๋Ÿ‰์„ ์ˆ˜ํ™•ํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ด๋‹ค. ์ด๋Ÿฌํ•œ ๊ฒฐ๊ณผ์— ๊ธฐ๋Œ€๋ฅผ ํ•˜์—ฌ ๊ฐ™์€ ํ˜•์ƒ์œผ๋กœ ์บ”ํ‹ธ๋ ˆ๋ฒ„๋ฅผ ์ œ์ž‘ํ•˜์—ฌ ์—๋„ˆ์ง€ ์ˆ˜ํ™•์— ๋Œ€ํ•œ ์‹คํ—˜์„ ๊ณ„ํšํ•˜์˜€๋‹ค. </p> <h2>2.3. Solidworks ๊ณ ์œ  ์ง„๋™์ˆ˜ ํ•ด์„</h2> <p>Solidworks ํ•ด์„ ํ”„๋กœ๊ทธ๋žจ์„ ํ†ตํ•ด์„œ ์ผ„ํ‹ธ๋ฆฌ๋ฒ„์˜ ๊ณ ์œ ์ง„๋™์ˆ˜๋ฅผ ์˜ˆ์ธกํ•˜์˜€๋‹ค (Fig. 4). ๊ทธ๋ฆฌํ•˜์—ฌ ์‹คํ—˜์‹œ Shaker๋ฅผ ํ†ตํ•ด ์ง„๋™์„ ๋ฐœ์ƒ์‹œํ‚ฌ ๋•Œ ์‹คํ—˜ ํšŸ์ˆ˜๋ฅผ ์ตœ์†Œํ™”ํ•˜๋ฉฐ ์ผ„ํ‹ธ๋ฆฌ๋ฒ„๊ฐ€ ์ตœ๋Œ€๋กœ ์ง„๋™ํ•  ์ˆ˜ ์žˆ๋Š” ์ฃผํŒŒ์ˆ˜๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์—ˆ๋‹ค. ๋”๋ถˆ์–ด ์ผ„ํ‹ธ๋ฆฌ๋ฒ„ ์ž์œ ๋‹จ์— ์ถ”(10 \(\mathrm{g}\), 20 \(\mathrm{g}\))๋ฅผ ์„ค์น˜ํ•จ์œผ๋กœ์จ ๊ณ ์œ ์ง„๋™์ˆ˜๋ฅผ ๋‚ฎ์ถœ ์ˆ˜ ์žˆ์—ˆ๋‹ค. </p> <table border><caption>Table 1. 
Natural frequency of each cantilever with different weights</caption> <tbody><tr><td>๊ตฌ ๋ถ„</td><td>Type1</td><td>Type2</td></tr><tr><td>10 g</td><td>29.55 Hz</td><td>28.76 Hz</td></tr><tr><td>20 g</td><td>22.77 Hz</td><td>20.92 Hz</td></tr></tbody></table> - source_sentence: ์˜ˆ์ธก ๋ฐฉ๋ฒ•๋ก ์—์„œ๋Š” ์–ด๋–ค ๋ฐ์ดํ„ฐ๋ฅผ ๋Œ€์ƒ์œผ๋กœ ํ•ด? sentences: - ์ธ๊ณต์ง€๋Šฅ์„ ํ†ตํ•œ ๊ธฐ์ƒํ˜„์ƒ์˜ ์˜ˆ์ธก์€ ๋น„๊ต์  ์ตœ๊ทผ ๋“ค์–ด ์—ฐ๊ตฌ๊ฐ€ ์ง„ํ–‰๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์— ํฌ๊ฒŒ ๊ธฐ๊ณ„ํ•™์Šต ๊ธฐ๋ฒ•๊ณผ ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฒ•์„ ์ด์šฉํ•˜์—ฌ ๊ธฐ์ƒํ˜„์ƒ์„ ์˜ˆ์ธกํ•˜๋ ค๋Š” ์‹œ๋„๊ฐ€ ์ด๋ฃจ์–ด์ ธ ์™”์œผ๋ฉฐ, ๋‹ค์–‘ํ•œ ์žฅ์†Œ์—์„œ ์ˆ˜์ง‘๋˜๋Š” ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ์ „์ฒ˜๋ฆฌ ๋ฐ ํ•™์Šต ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ํ’ˆ์งˆ ๊ด€๋ฆฌ๊ฐ€ ๋งค์šฐ ์ค‘์š”ํ•˜๋‹ค. ์ธ๊ณต์ง€๋Šฅ์„ ํ†ตํ•ด ๊ธฐ์ƒ ํ˜„์ƒ์„ ์˜ˆ์ธกํ•˜๊ณ ์ž ํ•˜๋Š” ๊ฒฝ์šฐ, ์˜ˆ์ธก ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํ†ตํ•ด ๋ฌธ์ œ ํ•ด๊ฒฐ์„ ๋„๋ชจํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๋ฐ์ดํ„ฐ์˜ ํ’ˆ์งˆ ๊ด€๋ฆฌ์— ์žˆ์–ด์„œ ๋ถ„๋ฅ˜ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ด์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ๋น…๋ฐ์ดํ„ฐ์˜ ํ™œ์šฉ์— ์žˆ์–ด์„œ, ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค๋งˆ๋‹ค ๊ธฐ์ค€ ์ •๋ณด๊ฐ€ ๋™์ผํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ๋ฐ์ดํ„ฐ ์ •์ œ ์‹œ ๊ธฐ์ค€ ์ •๋ณด๋ฅผ ํ‘œ์ค€ํ™” ํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•˜๋ฉฐ, ํ†ตํ•ฉ DB ์„ค๊ณ„ ์‹œ 1) ๊ณตํ†ต๋œ ๊ทœ์น™์„ ๊ฐ€์ง€๋„๋ก ํ•˜๊ณ , 2) ๋ฐ์ดํ„ฐ ๋ฌด๊ฒฐ์„ฑ์ด๋‚˜ ์„ฑ๋Šฅ ์ƒ์˜ ์ด์Šˆ ์—†๋Š” ๊ตฌ์กฐ์ ์œผ๋กœ ์„ค๊ณ„๋˜๋ฉฐ, 3) ์„ค๊ณ„๋œ ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค์— ๋Œ€ํ•œ ์ œ๋Œ€๋กœ ๋œ ๊ด€๋ฆฌ ์ฒด๊ณ„๊ฐ€ ๋ณด์žฅ ๋˜๋„๋ก ํ•ด๋‹น ๋‚ด์šฉ์„ ๊ณ ๋ คํ•ด์•ผ ํ•œ๋‹ค. - <h1>II. ๋ณธ๋ก </h1><h2>1. ์˜ˆ์ธก ๋ฐฉ๋ฒ•๋ก </h2><p>์˜ˆ์ธก ๋ฐฉ๋ฒ•๋ก ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ์ดํ„ฐ๋Š” ์‚ฌ์ „์— ์ˆ˜์ง‘๋œ ๊ณผ๊ฑฐ ๋ฐ์ดํ„ฐ๋‚˜ ์ œํ•œ๋œ ๋ฐ์ดํ„ฐ๋ฅผ ๋Œ€์ƒ์œผ๋กœ ํ•˜๋ฏ€๋กœ ์˜ˆ๊ธฐ์น˜ ์•Š์€ ์‚ฌํšŒ์  ์ด์Šˆ์™€ ๋ฏธ์„ธ๋จผ์ง€์™€ ๊ฐ™์ด ์ƒˆ๋กญ๊ฒŒ ์ฃผ๋ชฉ์„ ๋ฐ›๋Š” ์š”์ธ๋“ค์€ ์ฒด๊ณ„์ ์œผ๋กœ ์ˆ˜์ง‘๋˜์–ด ์žˆ์ง€ ์•Š์€ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ๋‹ค. ์ด๋Ÿฐ ํ•œ์ •์ ์ธ ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•œ ์˜ˆ์ธก์€ ๊ฐ ๋ฐ์ดํ„ฐ์˜ ์˜ํ–ฅ๋ ฅ์ด ๊ณผ๋Œ€ํ‰๊ฐ€๋  ์ˆ˜์žˆ์œผ๋ฏ€๋กœ ๋ฏธ๋ž˜์˜ ์˜ˆ์ธก๊ฐ’์˜ ์˜ค์ฐจ๋ฅผ ํฌ๊ฒŒ ํ•  ์ˆ˜ ์žˆ๋‹ค. ์—ฐ์†์ ์ด๊ฑฐ๋‚˜ ์ด์‚ฐ๋˜์–ด ์žˆ๋Š” ์ž…๋ ฅ๋ฐ์ดํ„ฐ๋“ค์˜ ์ฐจ์ด๋„ ์ ์ ˆํ•œ ๊ฒฐ๊ณผ๋ฅผ ์˜ˆ์ธกํ•˜๊ธฐ์— ๋ฌธ์ œ๊ฐ€ ๋  ์ˆ˜ ์žˆ๋‹ค. </p><p>์ด๋Ÿฌํ•œ ์˜ˆ์ธก ๋ฐฉ๋ฒ•๋ก ์ด ๊ฐ€์ง„ ์˜ค์ฐจ์™€ ํ•œ๊ณ„์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ  ์˜ˆ์ธก์˜ ๊ธ์ •์ ์ธ ํšจ์šฉ์„ฑ ๋•Œ๋ฌธ์— ์ตœ๊ทผ ๋“ฑ์žฅํ•˜๊ณ  ์žˆ๋Š” ๋น…๋ฐ์ดํ„ฐ ๊ธฐ๋ฐ˜์˜ ๋จธ์‹ ๋Ÿฌ๋‹ ๋“ฑ์˜ ๋ฐฉ๋ฒ•๋ก ์„ ํ†ตํ•œ ๋…ธ๋ ฅ์ด ๊พธ์ค€ํžˆ ์ง„ํ–‰๋˜๊ณ  ์žˆ๋‹ค. </p><h3>๊ฐ€. ์„ ํ˜•ํšŒ๊ท€๋ถ„์„</h3><p>์„ ํ˜•ํšŒ๊ท€๋ถ„์„์€ ๋ฒกํ„ฐ ๋…๋ฆฝ๋ณ€์ˆ˜ \( x \) ์™€ ์Šค์นผ๋ผ ์ข…์†๋ณ€์ˆ˜ \( y \) ์˜ ๊ด€๊ณ„๋ฅผ ์ •๋Ÿ‰์ ์œผ๋กœ ๋ถ„์„ํ•˜์—ฌ ๊ฐ€์žฅ ๋น„์Šทํ•œ ์˜ˆ์ธก๊ฐ’ \( \hat{y} \) ์„ ๋„์ถœํ•˜๋Š” ๋ฐฉ๋ฒ•๋ก ์ด๋‹ค. </p><p>\[ \hat{y}=f(x) \approx y \]</p><p>์„ ํ˜•ํšŒ๊ท€๋ถ„์„์„ ์œ„ํ•ด์„œ๋Š” ๊ฐ ๋ณ€์ˆ˜์˜ ์กด์žฌ๋ฅผ ์‚ฌ์ „์— ํŒŒ์•…ํ•  ํ•„์š”๊ฐ€ ์žˆ๋‹ค. ๊ด€์ค‘์ˆ˜ ์˜ˆ์ธก์—์„œ ์„ ํ˜• ํšŒ๊ท€ ๋ถ„์„(๋‹ค์ค‘ํšŒ๊ท€๋ถ„์„)์„ ์‚ฌ์šฉํ•  ๊ฒฝ์šฐ, ๊ด€์ค‘์ˆ˜์— ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š” ๋ณ€์ˆ˜๋ฅผ ์–ด๋Š ์ •๋„ ์•Œ ์ˆ˜ ์žˆ์–ด์•ผ ํ•˜๋ฏ€๋กœ ์˜ˆ์ธก ๊ฒฐ๊ณผ๊ฐ’์ด ์ด๊ด€์ค‘์ˆ˜์™€ ๊ฐ™์€ ํ‰๊ท ๊ฐ’ ๋„์ถœ์—๋Š” ์ ํ•ฉํ•˜์ง€๋งŒ, ๊ตฌ์—ญ๋ณ„ ๊ด€์ค‘์ˆ˜ ๋“ฑ์„ ์„ธ๋ฐ€ํ•˜๊ฒŒ ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•ด์•ผ ํ•  ๊ฒฝ์šฐ์—๋Š” ์ž…๋ ฅ ๋ณ€์ˆ˜ ๋ฐ ๋ฐ์ดํ„ฐ์˜ ํ•œ๊ณ„๋กœ ๊ฒฐ๊ณผ๊ฐ’์— ์˜ค์ฐจ๊ฐ€ ์ปค์งˆ ์ˆ˜ ์žˆ๋‹ค. </p><h3>๋‚˜. ์‹œ๊ณ„์—ด๋ถ„์„</h3><p>์‹œ๊ณ„์—ด๋ถ„์„ ๋ฐฉ๋ฒ•์€ ์–‘์  ์˜ˆ์ธก๋ฐฅ๋ฒ•์œผ๋กœ ๊ณผ๊ฑฐ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์‹œ๊ฐ„์— ๋”ฐ๋ฅธ ๋ณ€ํ™”๋ฅผ ํŒŒ์•…ํ•˜์—ฌ ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•˜๋Š” ๋ฐฉ๋ฒ•๋ก ์ด๋‹ค. ์‹œ๊ณ„์—ด ๋ถ„์„๋ฐฉ๋ฒ•์—๋Š” ์ง€์ˆ˜ํ‰ํ™œ๋ฒ•, ์ž๊ธฐํšŒ๊ท€๋ฒ•, ARIMA๋ฒ•์ด ์žˆ๋‹ค. ์ง€์ˆ˜ํ‰ํ™œ๋ฒ•์€ ๊ณผ๊ฑฐ ๋ฐ์ดํ„ฐ ์˜ํ–ฅ๋ ฅ์˜ ์ฐจ์ด๋ฅผ ์ค„์ด๊ธฐ ์ตœ์‹  ์ž๋ฃŒ์— ๊ฐ€์ค‘์น˜๋ฅผ ์ฃผ์–ด์„œ ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‹ค. 
์ž๊ธฐํšŒ๊ท€๋ฒ•์€ ๊ณผ๊ฑฐ ๋ฐ์ดํ„ฐ๊ฐ€ ๋ฏธ์น˜๋Š” ์˜ํ–ฅ๋ ฅ์„ ์–ด๋Š ์ •๋„ ์ œ๊ฑฐํ•˜์—ฌ ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‹ค. ARIMA๋ฒ•์€ ์‹œ๊ณ„์—ด ๋ถ„์„ ๋ฐฉ๋ฒ•์˜ ๋Œ€ํ‘œ์ ์ธ ๋ฐฉ๋ฒ•์œผ๋กœ์จ, ์‹œ๊ณ„์—ด ์ž๋ฃŒ์˜ ์ž๊ธฐ ์ƒ๊ด€ ํŠน์„ฑ์„ ์ด์šฉํ•œ๋‹ค. ์ด์™€ ๊ฐ™์€ ๋‹ค์–‘ํ•œ ์‹œ๊ณ„์—ด๋ถ„์„ ๋ฐฉ๋ฒ•๋ก ์„ ์ด์šฉํ•œ ์˜ˆ์ธก์€ ํ†ต์ƒ ์˜ค๋žœ ๊ธฐ๊ฐ„์˜ ๋ฐ์ดํ„ฐ๊ฐ€ ์žˆ์„ ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค. </p><p>์‹œ๊ณ„์—ด ๋ถ„์„(์ง€์ˆ˜ํ‰ํ™œ, ์ž๊ธฐํšŒ๊ท€, ARIMA) ๋ฐฉ๋ฒ•๋ก ์„ ๊ด€์ค‘์ˆ˜ ์˜ˆ์ธก์— ํ™œ์šฉํ•˜๋ ค๋ฉด ์˜ค๋žœ ๊ธฐ๊ฐ„์˜ ๊ด€์ค‘์ˆ˜ ๋ฐ์ดํ„ฐ๊ฐ€ ์žˆ์–ด์•ผ ํ•œ๋‹ค. ์ƒˆ๋กœ์šด ์ด๋ฒคํŠธ์˜ ๊ฒฝ์šฐ ๋ˆ„์  ๋ฐ์ดํ„ฐ๊ฐ€ ๋ถ€์กฑํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์›ํ•˜๋Š” ๊ด€์ค‘์ˆ˜ ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•˜๋Š”๋ฐ๋Š” ํ•œ๊ณ„๊ฐ€ ์žˆ์„ ์ˆ˜ ๋ฐ–์— ์—†๋‹ค. </p><h3>๋‹ค. ์‹œ๋ฎฌ๋ ˆ์ด์…˜</h3><p>์‹œ๋ฎฌ๋ ˆ์ด์…˜(์ˆ˜ํ•™์  ๋ชจ๋ธ๋ง) ๊ธฐ๋ฒ•์€ ๊ธฐ์—…์˜ ๋น„์ฆˆ๋‹ˆ์Šค ๋กœ์ง์„ ์ˆ˜ํ•™์ ์œผ๋กœ ๊ตฌ์ถ•ํ•˜์—ฌ ์ปดํ“จํ„ฐ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ํ†ตํ•ด ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‹ค. ๋ณดํ†ต ๋ฌผ๋ฅ˜, ์œ ํ†ต ๋“ฑ ๋น„์ฆˆ๋‹ˆ์Šค ๋กœ์ง์„ ์„ธ์„ธํžˆ ์ž˜ ์•Œ๊ณ  ์žˆ์„ ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค. ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ๊ธฐ๋ฒ•์€ ์ตœ์ ์˜ ์šฐํŽธ๋ฌผ ๋ฐฐ๋‹ฌ ๊ฒฝ๋กœ ๋„์ถœ๊ณผ ๊ฐ™์ด ํ†ต๊ณ„์ ์ด๊ฑฐ๋‚˜ ์ˆ˜ํ•™์ ์ธ ๋ถ„์„์œผ๋กœ๋Š” ์ •ํ™•ํ•œ ์˜ˆ์ธก ๊ฐ’์„ ์ฃผ์–ด์ง„ ์‹œ๊ฐ„ ๋‚ด์— ๋„์ถœํ•  ์ˆ˜ ์—†์„ ๋•Œ ์ฃผ๋กœ ์‚ฌ์šฉํ•œ๋‹ค. </p><p>์‹œ๋ฎฌ๋ ˆ์ด์…˜(์ˆ˜ํ•™์  ๋ชจ๋ธ๋ง) ๊ธฐ๋ฒ•์€ ๊ด€์ค‘์ˆ˜์— ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š” ๋น„์ฆˆ๋‹ˆ์Šค ๋กœ์ง์„ ์ •ํ™•ํžˆ ํŒŒ์•…ํ•˜๊ณ  ์žˆ์„ ๋•Œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ์•„์ง ํ”„๋กœ์•ผ๊ตฌ ๊ด€์ค‘์ˆ˜์— ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š” ์š”์ธ์— ๋Œ€ํ•œ ์—ฐ๊ตฌ๋‚˜ ์„ธ๋ฐ€ํ•œ ๋น„์ฆˆ๋‹ˆ์Šค๋กœ์ง์„ ๋ถ„์„ํ•œ ๊ฒฐ๊ณผ๊ฐ€ ๋งŽ์ง€ ์•Š์•„์„œ ๊ด€์ค‘์ˆ˜ ์˜ˆ์ธก์— ์ ์šฉํ•˜๊ธฐ๋Š” ์‰ฝ์ง€ ์•Š์„ ๊ฒƒ์ด๋‹ค. </p><h3>๋ผ. ๋จธ์‹ ๋Ÿฌ๋‹</h3><p>๋จธ์‹ ๋Ÿฌ๋‹์ด๋ž€ ์ฃผ๋กœ ๋น…๋ฐ์ดํ„ฐ๋ฅผ ํ™œ์šฉํ•ด ๋น„์„ ํ˜•์˜ ํ˜•ํƒœ๋กœ ๊ฒฐ๊ณผ๊ฐ’์„ ์˜ˆ์ธกํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‹ค. ๋จธ์‹ ๋Ÿฌ๋‹ ๊ธฐ๋ฒ•์€ ์„ ํ˜•ํšŒ๊ท€๋ถ„์„ ๋ฐฉ๋ฒ•๋ก ๊ณผ ๋‹ฌ๋ฆฌ ์‚ฌ์ „์— ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š” ๋ณ€์ˆ˜๋ฅผ ๋ชจ๋‘ ์•Œ์ง€ ๋ชปํ•œ ์ƒํƒœ์—์„œ๋„ ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•  ์ˆ˜ ์žˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋น…๋ฐ์ดํ„ฐ ํ˜•ํƒœ๋กœ ์ž๋ฃŒ๋ฅผ ์ˆ˜์ง‘ํ•  ์ˆ˜ ์žˆ๊ณ , ์˜ˆ์ธกํ•˜์ง€ ๋ชปํ•œ ๋ณ€์ˆ˜๋“ค์ด ์ข…์ข… ๋“ฑ์žฅํ•˜๋Š” ๊ฒฝ์šฐ์— ์ ์ ˆํžˆ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ๋‹ค๋งŒ ์‹ค์ œ ๋ถ„์„์‹œ๊ฐ„๋ณด๋‹ค ๋ฐ์ดํ„ฐ๋ฅผ ์ปดํ“จํ„ฐ๊ฐ€ ์ดํ•ดํ•˜๊ธฐ ์‰ฝ๋„๋ก ์ •์ œํ•˜๋Š” ์‹œ๊ฐ„์ด ๋” ๋งŽ์ด ๊ฑธ๋ฆด ์ˆ˜๊ฐ€ ์žˆ๊ณ , ๋ถ„์„๋ฐฉ๋ฒ•๋ก ์— ๋”ฐ๋ผ ์˜ˆ์ธก๊ฐ’์ด ๋‹ฌ๋ผ์ง€๋Š” ํ•œ๊ณ„๋„ ์กด์žฌํ•œ๋‹ค. </p><p>๋จธ์‹ ๋Ÿฌ๋‹ ๋ฐฉ๋ฒ•๋ก ์€ ๋น…๋ฐ์ดํ„ฐ๋ฅผ ํ™œ์šฉํ•ด ๋น„์„ ํ˜•์˜ ํ˜•ํƒœ๋กœ ๊ด€์ค‘ ์ˆ˜๋ฅผ ์˜ˆ์ธกํ•  ๋•Œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ๋˜ํ•œ ์˜ˆ์ธก ๋ชปํ•œ ๋ณ€์ˆ˜๊ฐ€ ์žˆ๋”๋ผ๋„ ๋ฐ์ดํ„ฐ์˜ ํ•™์Šต์„ ํ†ตํ•˜์—ฌ ์–ด๋Š ์ •๋„ ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€๋Šฅํ•˜๋‹ค. ๋”ฐ๋ผ์„œ ํ˜„์žฌ์˜ ์ œํ•œ๋œ ๊ธฐ๊ฐ„์— ์ˆ˜์ง‘๋œ ๋น…๋ฐ์ดํ„ฐ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ์˜ˆ์ธก๊ฐ’์„ ๋„์ถœํ•˜๊ธฐ์— ์ตœ์ ์˜ ๋ฐฉ๋ฒ•๋ก ์œผ๋กœ ๋ณผ ์ˆ˜ ์žˆ๋‹ค. </p> - <h2>2. ์„œ๋น„์Šค ๊ตฌ์„ฑ๊ธฐ๋ฒ•</h2><p>์„œ๋น„์Šค ๊ตฌ์„ฑ๊ธฐ๋ฒ•์€ ์‚ฌ์šฉ์ž๊ฐ€ ์›ํ•˜๋Š” ์„œ๋น„์Šค๋ฅผ ์ ์ ˆํžˆ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ๋””๋ฐ”์ด์Šค๋ฅผ ์„ ํƒํ•˜์—ฌ ์„œ๋น„์Šค ์„ธ์…˜์„ ๊ตฌ์„ฑํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ๋˜ํ•œ ์„œ๋น„์Šค ์„ธ์…˜์„ ์œ„ํ•ด ์„ ํƒ๋˜๋Š” ๋””๋ฐ”์ด์Šค๋Š” ์‚ฌ์šฉ์ž์˜ ์œ„์น˜๋‚˜ ์—…๋ฌด, ์„œ๋น„์Šค๊ฐ€ ์š”์ฒญ๋˜๋Š” ์‹œ๊ธฐ์— ๋”ฐ๋ผ ์ˆ˜์‹œ๋กœ ๋ณ€ํ•˜๊ฒŒ ๋œ๋‹ค. ์ด๋Ÿฌํ•œ ์„œ๋น„์Šค ๊ตฌ์„ฑ ๊ธฐ๋ฒ•์€ ์‚ฌ์šฉ์ž์—๊ฒŒ ์š”์ฒญํ•˜๋Š” ์„œ๋น„์Šค์— ๋Œ€ํ•ด์„œ ์‚ฌ์šฉ์ž๊ฐ€ ๋งŒ์กฑํ•  ์ˆ˜ ์žˆ๋Š” ํ’ˆ์งˆ์„ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•˜๋ฉฐ, ๋Š๊น€์—†๋Š” ์„œ๋น„์Šค ์ œ๊ณต์„ ์œ„ํ•ด ์‚ฌ์˜๊ฐ€๋Šฅํ•œ ๋””๋ฐ”์ด์Šค๋ฅผ ๋ฏธ๋ฆฌ ์˜ˆ์•ฝํ•˜์—ฌ ์‚ฌ์šฉ์ž๊ฐ€ ์„œ๋น„์Šค ์š”์ฒญ์‹œ ์„œ๋น„์Šค ์ œ๊ณต์‹œ๊ฐ„์„ ์ค„์ผ ํ•„์š”๋„ ์žˆ๋‹ค. 
๊ทธ๋ฆฌ๊ณ  ์„œ๋น„์Šค ์˜ˆ์•ฝ๊ธฐ๋ฒ•์€ ์‚ฌ์šฉ์ž์˜ priority๋‚˜ ์Šค์ผ€์ค„ ๋ฐ ์ด๋™์„ฑ ์ •๋ณด ๋“ฑ์˜ ์ƒํ™ฉ ์ •๋ณด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์˜ˆ์ธกํ˜• ์„œ๋น„์Šค ์˜ˆ์•ฝ ๊ตฌ์„ฑ๊ธฐ๋ฒ•์ด ๊ฐ€๋Šฅํ•ด์•ผ ํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ๋™์ ์ธ ์„œ๋น„์Šค ๊ตฌ์„ฑ๊ธฐ๋ฒ•์„ ํ†ตํ•ด ์‚ฌ์šฉ์ž๋Š” ์–ด๋””์—์„œ๋‚˜ ์–ธ์ œ๋“ ์ง€ ์›ํ•˜๋Š” ์„œ๋น„์Šค๋ฅผ ์ œ๊ณต๋ฐ›์„ ์žˆ๊ฒŒ ํ•จ์œผ๋กœ์จ ์„œ๋น„์Šค ๊ฐ€์šฉ์„ฑ์„ ๋†’์ผ ์ˆ˜ ์žˆ๊ฒŒ ํ•œ๋‹ค. </p><p>์ง€๋Šฅ์ ์ธ ์„œ๋น„์Šค ๊ตฌ์„ฑ์„ ์œ„ํ•ด์„œ ๊ณต๊ฐ„๋‚ด์˜ ๋””๋ฐ”์ด์Šค๋“ค์€ ์‚ฌ์šฉ์ž๊ฐ€ ์›ํ•˜๋Š” ์„œ๋น„์Šค๋ฅผ ์ ์ ˆํžˆ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ํŒ๋ณ„ํ•˜๊ธฐ ์œ„ํ•œ ์„œ๋น„์Šค ์ปดํฌ๋„ŒํŠธ๋ฅผ ๊ฐ€์ ธ์•ผ ํ•œ๋‹ค. ์ฆ‰ ํ•˜๋‚˜์˜ ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜ ์„œ๋น„์Šค๋Š” ๋‹ค์–‘ํ•œ ์„œ๋น„์Šค ์ปดํฌ๋„ŒํŠธ๋“ค์˜ ์กฐํ•ฉ์„ ํ†ตํ•ด ์ œ๊ณต๋  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋™์˜์ƒ์„ ๋ณด๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋™์˜์ƒ์„ ๋ณด์—ฌ์ค„ ์ˆ˜ ์žˆ๋Š” ๋””์Šคํ”Œ๋ ˆ์ด ๋ถ€๋ถ„๊ณผ ์Œ์„ฑ์„ ์žฌ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์˜ค๋””์˜ค์™€ ๊ฐ™์€ ์„œ๋น„์Šค ์ปดํฌ๋„ŒํŠธ๋“ค์ด ์ œ๊ณต๋˜์–ด์•ผ ์‚ฌ์šฉ์ž์—๊ฒŒ ์ ์ ˆํ•œ ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒƒ์ด๋‹ค. ์ด๋Ÿฌํ•œ ์„œ๋น„์Šค ์ปดํฌ๋„ŒํŠธ๋Š” ๋””๋ฐ”์ด์Šค๊ฐ€ ๊ฐ€์ง€๋Š” ๋‹ค์–‘ํ•œ ์„œ๋น„์Šค ์ปดํฌ์ง€์…˜(service composition)์„ ์ •์˜ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์˜์กด์ ์ด๋ฉฐ, ์ฐธ๊ณ ๋ฌธํ—Œ [7\(\sim\)9]์—์„œ์™€ ๊ฐ™์ด ๋‹ค์–‘ํ•œ ํ”„๋กœ์ ํŠธ๋“ค์„ ํ†ตํ•ด ํ™œ๋ฐœํžˆ ์—ฐ๊ตฌ๊ฐ€ ์ง„ํ–‰๋˜๊ณ  ์žˆ๋‹ค. ์ด๋Ÿฌํ•œ ์„œ๋น„์Šค ์ปดํฌ๋„ŒํŠธ๋Š” ์žฌ์‚ฌ์šฉ์ด ๊ฐ€๋Šฅํ•˜๋ฉฐ, ์„œ๋กœ ๋‹ค๋ฅธ ์ปดํฌ๋„ŒํŠธ๊ฐ„์˜ ํ˜‘์—…์„ ํ†ตํ•ด ๋ณด๋‹ค ๋‚˜์€ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•  ์ˆ˜๋„ ์žˆ๋‹ค. ์ด๋Ÿฌํ•œ ์„œ๋น„์Šค ์ปดํฌ๋„ŒํŠธ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ํผ์Šค๋„ ์„œ๋ฒ„๋Š” ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜ ์„œ๋น„์Šค๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ํ›„๋ณด ๋””๋ฐ”์ด์Šค๋ฅผ ์ฐพ์•„ ์„œ๋น„์Šค ๊ตฌ์„ฑ ํ…Œ์ด๋ธ”์„ ์ƒ์„ฑํ•˜์—ฌ ๊ด€๋ผํ•ด์•ผ ํ•œ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ๊ฐ ์„œ๋น„์Šค ๊ตฌ์„ฑ๋ฆฌ์ŠคํŠธ๋Š” ์‚ฌ์šฉ์ž์—๊ฒŒ ๋ณด๋‹ค ๋‚˜์€ ์„œ๋น„์Šค ์งˆ์„ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ์šฐ์„ ์ˆœ์œ„์— ๋”ฐ๋ผ ๊ตฌ์„ฑ๋œ๋‹ค. ํ‘œ 1 ์€ ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜ ์„œ๋น„์Šค์— ๋”ฐ๋ฅธ ์„œ๋น„์Šค ๊ตฌ์„ฑ ํ…Œ์ด๋ธ” ์˜ˆ๋ฅผ ๋ณด์—ฌ์ค€๋‹ค. </p><h2>3. ๊ธฐ์กด์˜ ์ ‘๊ทผ์ œ์–ด ๋ฐ ์„œ๋น„์Šค ๊ตฌ์„ฑ๊ธฐ๋ฒ•์˜ ๋ฌธ์ œ์ </h2><p>๊ทธ๋ฆผ 2์—์„œ ๋ณด๋Š” ๋ฐ”์™€ ๊ฐ™์ด ์ ‘๊ทผ๋ชจ๋“œ๊ฐ€ group mode ์ผ ๋•Œ ์—ฌ๋Ÿฌ ํผ์Šค๋„ ์„œ๋ฒ„๋“ค์ด ์ž์‹ ์—๊ฒŒ ์ฃผ์–ด์ง„ ๊ถŒํ•œ๋‚ด์—์„œ ๊ณต๊ฐ„๋‚ด์˜ ์ธ์„ญ ๋””๋ฐ”์ด์Šค๋ฅผ ์ด์šฉํ•˜๊ณ ์ž ํ•  ๋•Œ ์ž„์˜์˜ ์‚ฌ์šฉ์ž์— ์˜ํ•ด ๋””๋ฐ”์ด์Šค๊ฐ€ ์‚ฌ์šฉ๋˜๊ณ  ์žˆ์–ด ์ฃผ๋ณ€์˜ ๋‹ค๋ฅธ ์‚ฌ์šฉ์ž๊ฐ€ ๊ณต์œ ๋œ ๋””๋ฐ”์ด์Šค๋กœ ์„œ๋น„์Šค๋ฅผ ๋ฐ›์ง€ ๋ชปํ•˜๋Š” ์„œ๋น„์Šค ์ถฉ๋Œ ํ˜„์ƒ์ด ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด PS 1์€ ๋ฌด์„  ์ธํ„ฐํŽ˜์ด์Šค๋กœ UWB๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, PS2๋Š” 802.11๊ธฐ๋ฐ˜์˜ WLAN์„ ์‚ฌ์šฉํ•œ๋‹ค. ๋˜ํ•œ ๊ณต๊ฐ„๋‚ด์—๋Š” ๊ฐ๊ฐ์˜ ๋ฌด์„  ์ธํ„ฐํŽ˜์ด์Šค์— ๋Œ€ํ•œ AP ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜์ž. ์ด๋•Œ PS 1์ด Digital TV๋ฅผ ํ†ตํ•ด VOD ์„œ๋น„์Šค๋ฅผ ์ œ๊ณต๋ฐ›๊ณ  ์žˆ๋‹ค. ์ด ๊ฒฝ์šฐ PS 2 ๊ฐ€ ๋™์ผํ•œ ์„œ๋น„์Šค๋ฅผ Digital TV ์— ์š”์ฒญํ•˜๋ฉด ๋ฌด์„ ๋งํฌ ๊ณ„์ธต์—์„œ ์ธ์ง€ํ•  ์ˆ˜ ์—†๋Š” ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜ ๊ณ„์ธต์—์„œ์˜ ์„œ๋น„์Šค ์ถฉ๋Œ์ด ๋ฐœ์ƒํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ์„œ๋น„์Šค ์ถฉ๋Œ๋กœ ์ธํ•ด ํผ์Šค๋„ ์„œ๋ฒ„๋Š” ๋ถˆํ•„์š”ํ•œ ๋ฉ”์‹œ์ง€๋ฅผ ๋ฐœ์ƒ์‹œ์ผœ ๋น„ํšจ์œจ์ ์œผ๋กœ ๋ฐฐํ„ฐ๋ฆฌ๋ฅผ ์†Œ๋ชจํ•œ๋‹ค. </p><p>๋˜ํ•œ ์„œ๋น„์Šค ์ปดํฌ๋„ŒํŠธ ๊ธฐ๋ฐ˜์œผ๋กœ ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜ ์„œ๋น„์Šค๋ฅผ ์ ์ ˆํžˆ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ํ›„๋ณด ๋””๋ฐ”์ด์Šค๋ฅผ ํ†ตํ•ด ์„œ๋น„์Šค๋ฅผ ์ œ๊ณต๋ฐ›์„ ๋•Œ ๊ทธ๋ฆผ 3์—์„œ ๋ณด๋“ฏ์ด ์šฐ์„ ์ˆœ์œ„๊ฐ€ ๋†’์€ ํ›„๋ณด ๋””๋ฐ”์ด์Šค์—๊ฒŒ ๋จผ์ € ์„œ๋น„์Šค ์š”์ฒญ ๋ฉ”์‹œ์ง€๋ฅผ ๋ณด๋‚ด๊ฒŒ ๋œ๋‹ค. ์ด๋•Œ ์„œ๋น„์Šค ์š”์ฒญ๋ฉ”์‹œ์ง€๋ฅผ ๋ฐ›์€ ๋””๋ฐ”์ด์Šค๊ฐ€ ์‚ฌ์šฉ ์ค‘์ผ ๋•Œ ์‚ฌ์šฉ์ž๋Š” ์ฐจ์„ ์ฑ…์ธ ๋””๋ฐ”์ด์Šค์—๊ฒŒ ์„œ๋น„์Šค ์š”์ฒญ ๋ฉ”์‹œ์ง€๋ฅผ ๋ณด๋‚ด๊ฒŒ ๋œ๋‹ค. 
์ด๋Ÿฌํ•œ ๋ถˆํ‘ˆ์š”ํ•œ ์„œ๋น„์Šค ์š”์ฒญ ๋ฉ”์‹œ์ง€๋ฅผ ์ฃผ๊ณ ๋ฐ›๋Š” ๋ฐ ๊ฑธ๋ฆฌ๋Š” ์‹œ๊ฐ„์€ ์‚ฌ์šฉ์ž์—๊ฒŒ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•˜๋Š”๋ฐ ์ง€์—ฐ์„ ๋ฐœ์ƒ์‹œํ‚จ๋‹ค. </p> - source_sentence: ์ด์ˆ˜ํ™” ์ƒ์—์„œ๋Š” ๋ฌผ๊ณผ ๋ฐ˜์‘ํ•˜์—ฌ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ์˜ ๋ถ„ํ•ด๋ฅผ ์•ผ๊ธฐํ•˜๋Š” ์›์ธ์ด ๋ญ์•ผ? sentences: - <h1>์š” ์•ฝ</h1><p>๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” Electrocorticogram(ECoG) ์‹ ํ˜ธ๋ฅผ ์ด์šฉํ•˜์—ฌ ์†๊ณผ ํŒ”๊ฟˆ์น˜์˜ ์›€์ง์ž„์„ ์ถ”๋ก ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ํ™˜์ž๋กœ๋ถ€ํ„ฐ ๋‹ค์ˆ˜์˜ ์ฑ„๋„์„ ์ด์šฉํ•˜์—ฌ ํ‘œ๋ฉด ๊ทผ์ „๋„ ์‹ ํ˜ธ์™€ ECoG ์‹ ํ˜ธ๋ฅผ ๋™์‹œ์— ์ทจ๋“ํ•˜์˜€๋‹ค. ์ถ”๋ก ํ•˜๋Š” ๋™์ž‘์€ ์†์„ ์ฅ์—ˆ๋‹ค ํŽด๋Š” ๋™์ž‘๊ณผ ํŒ”๊ฟˆ์น˜๋ฅผ ์•ˆ์œผ๋กœ ๊ตฝํžˆ๋Š” ๋™์ž‘์ด๋ฉฐ, ์™ธ๋ถ€ ์ž๊ทน์— ์˜ํ•ด ๋™์ž‘์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ• ๋Œ€์‹  ํ™˜์ž์˜ ์ž์œ ์˜์ง€์— ์˜ํ•ด ๋™์ž‘์„ ์ˆ˜ํ–‰ํ•˜๊ฒŒ ํ•˜์˜€๋‹ค. ํ‘œ๋ฉด ๊ทผ์ „๋„ ์‹ ํ˜ธ๋ฅผ ์ด์šฉํ•˜์—ฌ ๋™์ž‘์„ ์ˆ˜ํ–‰ํ•œ ์šด๋™ ์‹œ์ ์„ ์ฐพ๊ณ , ECOG ์‹ ํ˜ธ๋ฅผ ์ด์šฉํ•˜์—ฌ ๋™์ž‘์„ ์ถ”๋ก ํ•œ๋‹ค. ๊ฐ ๋™์ž‘์˜ ํŠน์ง•์„ ์ถ”์ถœํ•˜๊ธฐ ์œ„ํ•˜์—ฌ ECoG ์‹ ํ˜ธ๋ฅผ ์ „์ฒด ๋Œ€์—ญ์„ ํฌํ•จํ•œ \( \delta, \theta, a \), \( \beta, \mathrm{y} \) ์ด 6 ๊ฐœ์˜ ๋Œ€์—ญ์„ ๋‚˜๋ˆ„์–ด ์ •๋ณด ์—”ํŠธ๋กœํ”ผ๋ฅผ ๊ตฌํ•˜๊ณ , ์ตœ๋Œ€์šฐ๋„์ถ”์ •๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ๋™์ž‘์„ ์ถ”์ •ํ•˜์˜€๋‹ค. ์‹คํ—˜ ๊ฒฐ๊ณผ ๊ฐ๋งˆ๋Œ€์—ญ์˜ ECOG๋ฅผ ์‚ฌ์šฉํ•  ๊ฒฝ์šฐ ๋‹ค๋ฅธ ๋Œ€์—ญ์„ ์‚ฌ์šฉํ•  ๋•Œ ๋ณด๋‹ค ๋†’์€ ํ‰๊ท  \( 74 \% \) ์˜ ์„ฑ๋Šฅ์„ ๋ณด์ด๋ฉฐ, ๋‹ค๋ฅธ ๋Œ€์—ญ๋ณด๋‹ค ๊ฐ๋งˆ ๋Œ€์—ญ์—์„œ ๋†’์€ ์ถ”์ • ์„ฑ๊ณต๋ฅ ์„ ๋ณด์˜€๋‹ค. ๋˜ํ•œ ์šด๋™ ์‹œ์ ์„ ๊ธฐ์ค€์œผ๋กœ 3 ๊ฐœ์˜ ์‹œ๊ฐ„ ๊ตฌ๊ฐ„์œผ๋กœ ๋‚˜๋ˆ„์–ด ์ค€๋น„์ „์œ„๋ฅผ ํฌํ•จํ•˜๋Š” 'before' ๊ตฌ๊ฐ„๊ณผ 'onset' ๊ตฌ๊ฐ„์„ ๋น„๊ตํ•˜์˜€๋‹ค. 'before' ๊ตฌ๊ฐ„๊ณผ 'onset' ๊ตฌ๊ฐ„์—์„œ ์ถ”์ • ์„ฑ๊ณต๋ฅ ์€ ๊ฐ๊ฐ \( 66 \% \), \( 65 \% \) ๋กœ ์ค€๋น„์ „์œ„๋ฅผ ์ด์šฉํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์—ˆ๋‹ค. </p> - "ํšจ์œจ์ด ๋†’๊ณ  ๊ด‘์•ˆ์ •์„ฑ์ด ์šฐ์ˆ˜ํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€ ์†Œ์žฌ/์†Œ์ž ๊ธฐ์ˆ  ๊ฐœ๋ฐœ - ๊ณ ํšจ์œจ(21.2%)๊ณผ ๊ณ ์•ˆ์ •์„ฑ(1,000์‹œ๊ฐ„ ์œ ์ง€)์„ ๋ชจ๋‘\ \ ๋งŒ์กฑํ•˜๋Š” ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€์šฉ ํ•ต์‹ฌ ์†Œ์žฌ ๋ฐ ์ €๋น„์šฉ ์ œ์กฐ ๊ธฐ์ˆ  ๊ฐœ๋ฐœ-\nโ–ก ์ด๋ฒˆ ์—ฐ๊ตฌ์—์„œ๋Š” ์ด์ „ ์—ฐ๊ตฌ์„ฑ๊ณผ(๊ตฌ์กฐ, ๊ณต์ •, ์‹ ์กฐ์„ฑ ๊ธฐ์ˆ )๋ฅผ\ \ ๊ธฐ๋ฐ˜*์œผ๋กœ ์ด์ข…์ ‘ํ•ฉ** ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€์˜ ๊ณ ํšจ์œจํ™”(21.2%)์™€ ๋†’์€ ๊ด‘์•ˆ์ •์„ฑ(์ž์™ธ์„  ํฌํ•จํ•œ ๊ด‘์กฐ์‚ฌ์—์„œ 1,000์‹œ๊ฐ„ ์ด์ƒ ์•ˆ์ •ํ•œ\ \ ํšจ์œจ ์œ ์ง€)์„ ๋ชจ๋‘ ๋งŒ์กฑํ•˜๋Š” ๊ด‘์ „๊ทน ์†Œ์žฌ๋ฅผ ์ €์˜จ(๊ธฐ์กด 900 โ„ƒ์ด์ƒ ๊ณ ์˜จ โ†’ 200 โ„ƒ์ดํ•˜) ์—์„œ ํ•ฉ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๊ฐœ๋ฐœํ•˜์˜€๋‹ค. 
*ใ€ ์—ฐ๊ตฌ์ง„\ \ ์ด์ „ ์—ฐ๊ตฌ์„ฑ๊ณผ ใ€‘\nใƒป๋ฌด-์œ ๊ธฐ ํ•˜์ด๋ธŒ๋ฆฌ๋“œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€ ํ”Œ๋žซํผ ๊ตฌ์กฐ ๊ธฐ์ˆ  ๊ฐœ๋ฐœ (Nature Photonics 2013.5) \n\ ใƒป๋งค์šฐ ๊ท ์ผํ•˜๊ณ  ์น˜๋ฐ€ํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ๋ฐ•๋ง‰ ์ œ์กฐ ์‹ ๊ทœ ์šฉ์•ก ๊ณต์ • ๊ธฐ์ˆ  ๊ฐœ๋ฐœ (Nature Materials 2014.7) \nใƒป๊ณ ํšจ์œจ์„ ์œ„ํ•œ\ \ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ๊ฒฐ์ •์ƒ ์•ˆ์ •ํ™” ์‹ ์กฐ์„ฑ ๊ธฐ์ˆ  ๊ฐœ๋ฐœ (Nature 2015.1) \nใƒป๊ณ ํ’ˆ์งˆ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ๋ฐ•๋ง‰ ํ˜•์„ฑ์„ ์œ„ํ•œ ์‹ ๊ทœ ๊ณต์ • ๊ธฐ์ˆ \ \ ๊ฐœ๋ฐœ (Science 2015.6) ๋“ฑ\n** ์ด์ข…์ ‘ํ•ฉ : ๊ฐ™์€ ์†Œ์žฌ๊ฐ„์˜ ์ ‘ํ•ฉ์ธ ๋™์ข… ์ ‘ํ•ฉ๊ณผ ๋‹ฌ๋ฆฌ ๋‹ค๋ฅธ ์ข…๋ฅ˜์˜ ์†Œ์žฌ๊ฐ„์˜ ์ ‘ํ•ฉ์„ ์˜๋ฏธ, ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ๋Š”\ \ ๋ฌด๊ธฐ๋ฌผ, ์œ ๊ธฐ๋ฌผ, ๋ฌด/์œ ๊ธฐ ํ˜ผ์„ฑ๋ฌผ ๊ฐ„์˜ ์ด์ข…์ ‘ํ•ฉ์„ ์ด๋ฃธ.\nใ…‡ ๋” ๋‚˜์•„๊ฐ€์„œ ์—ฐ์†์ ์ด๋ฉฐ ๋Œ€๋Ÿ‰ ์ƒ์‚ฐ ๊ณต์ •์ด ๊ฐ€๋Šฅํ•œโ€œํ•ซ-ํ”„๋ ˆ์‹ฑ (hot-pressing)\ \ ๊ณต๋ฒ•*โ€์„ ์ƒˆ๋กญ๊ฒŒ ์ œ์•ˆํ•˜์—ฌ, ๊ณ ํšจ์œจ / ๊ณ ์•ˆ์ •์„ฑ / ์ €๋น„์šฉ์˜ ๋ฐฉ๋ฒ•์œผ๋กœ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€๋ฅผ ์ œ์กฐํ•˜๋Š” ์ƒˆ๋กœ์šด ํƒœ์–‘์ „์ง€์ œ์กฐ ๋ฐฉ๋ฒ•๋ก ์„ ์ œ์•ˆํ•˜์˜€๋‹ค.\ \ * ํ•ซ-ํ”„๋ ˆ์‹ฑ ๊ณต๋ฒ• : ์˜จ๋„์™€ ์••๋ ฅ์„ ๊ฐ€ํ•˜์—ฌ ๋‘ ๋ฌผ์ฒด๋ฅผ ๋‹จ๋‹จํžˆ ์ ์ฐฉ ์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•" - <h1>2. ํ™˜๊ฒฝ์  ์š”์ธ์— ์˜ํ•œ ํŽ˜๋กœ๋ธŒ์นด์ดํŠธ ์†Œ์žฌ ๋ถˆ์•ˆ์ •์„ฑ</h1><h2>2.1. ์ˆ˜๋ถ„์— ์˜ํ•œ ์•ˆ์ •์„ฑ ์˜ํ–ฅ</h2><p>์œ ๊ธฐ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ์ธ \( \mathrm{MAPbI}_{3} \) ์˜ \(\mathrm{MA}^{+}\)์™€ \(\mathrm{I}^{-}\)๋Š” ์•ฝํ•œ ๊ฒฐํ•ฉ์„ ํ•˜๊ณ  ์žˆ์–ด ์ด์ˆ˜ํ™” ์ƒ (dihydrate phase)์—์„œ๋Š” ๋ฌผ๊ณผ ๋ฐ˜์‘ํ•˜์—ฌ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ์˜ ๋ถ„ํ•ด๋ฅผ ์•ผ๊ธฐํ•œ๋‹ค. ์ด๋Š” \( \mathrm{MAPbI}_{3} \) ์™€ ๋ฌผ์ด ๋ฐ˜์‘ํ•˜์—ฌ ์ƒ์„ฑ๋œ ์ด์ˆ˜ํ™” ํ™”ํ•ฉ๋ฌผ (\( \mathrm{MAPbI}_{3} \cdot \mathrm{H}_{2} \mathrm{O} \)) ์ด \( \mathrm{CH}_{3} \mathrm{NH}_{2}\), \(\mathrm{HI}\), \(\mathrm{PbI}_{2} \) ๋กœ ๋ถ„ํ•ด๋˜๊ณ , ์ƒ์„ฑ๋œ \( \mathrm{CH}_{3} \mathrm{NH}_{2} \) ์™€ \( \mathrm{HI} \) ๋Š” ๋ฌผ์— ๋…น์•„ ๊ฒฐ๊ตญ ๊ณ ์ƒ์˜ \( \mathrm{PbI}_{2} \) ๋งŒ ๋‚จ๋Š” ๊ฒƒ์œผ๋กœ ์„ค๋ช…ํ•  ์ˆ˜ ์žˆ๋‹ค. </p><p>๋ฌด๊ธฐ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ๋Š” ์ˆ˜๋ถ„์— ์˜ํ•œ ์žฌ๊ฒฐ์ •ํ™” ๋ฐ ํ‘œ๋ฉด ๊ฒฐํ•ฉ ๋ฆฌ๊ฐ„๋“œ์˜ ์†์‹ค๊ณผ ๋ถ„ํ•ด๋กœ ์ธํ•ด ํ‘œ๋ฉด์— ํŠธ๋žฉ ์ค€์œ„๊ฐ€ ์ฆ๊ฐ€ํ•˜์—ฌ ๋ฐœ๊ด‘ํšจ์œจ์ด ๊ฐ์†Œํ•œ๋‹ค. ๋˜ํ•œ ํŽ˜๋กœ๋ธŒ ์Šค์นด์ดํŠธ ์†Œ์žฌ๋Š” ๋น›์ด ์—†๋Š” ์ƒํ™ฉ์—์„œ๋„ ๋ฌผ์— ์˜ํ•ด ์†Œ์žฌ๊ฐ€ ๋ถ„ํ•ด๋˜์–ด ์•ˆ์ •์„ฑ์ด ๊ฐ์†Œํ•œ๋‹ค. </p><h2>2.2. ๋น›์— ์˜ํ•œ ์•ˆ์ •์„ฑ ์˜ํ–ฅ</h2><p>ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๊ฐ€ ์žฅ์‹œ๊ฐ„ ๋น›์— ๋…ธ์ถœ๋˜๋Š” ๊ฒฝ์šฐ ๊ด‘-์ƒ์„ฑ ์ „ํ•˜ (photo-generated carrier)๊ฐ€ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ ํ‘œ๋ฉด์œผ๋กœ ํ™•์‚ฐ๋˜์–ด ์ด์˜จ์„ฑ ํ‘œ๋ฉด ๋ฆฌ๊ฐ„๋“œ์™€ ๊ฒฐํ•ฉํ•œ๋‹ค. ์ด ๊ณผ์ • ์ค‘์— ๋ช‡ ๊ฐœ์˜ ๋ฆฌ๊ฐ„๋“œ๋“ค์€ ์šฉ๋งค์— ๋…น์•„, ๋ณดํ˜ธ๋˜์ง€ ์•Š์€ ๋ฉด์„ ์ค‘์‹ฌ์œผ๋กœ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๋ผ๋ฆฌ ์‘์ง‘ํ•˜์—ฌ ๋ฐœ๊ด‘ ํšจ์œจ์ด ๊ฐ์†Œํ•œ๋‹ค. ๋˜ํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ์˜ ์‘์ง‘ ๋ฐ ๋ฆฌ๊ฐ„๋“œ ์†์‹ค๋กœ ์ธํ•ด ํŠธ๋žฉ ์ค€์œ„๊ฐ€ ์ฆ๊ฐ€ํ•˜์—ฌ ๊ด‘ํ•™์  ํŠน์„ฑ์ด ํ˜„์ €ํžˆ ๊ฐ์†Œ๋œ๋‹ค. pc-LED๋Š” ์‹ค์ƒํ™œ์—์„œ ์žฅ์‹œ๊ฐ„ ๋น›์— ๋…ธ์ถœ๋˜๊ธฐ๋•Œ๋ฌธ์— ๋น›์— ์˜ํ•œ ๋ฐœ๊ด‘ ๊ฐ์†Œ ๋ฐ ์†Œ์žฌ ์•ˆ์ •์„ฑ ๊ฐ์†Œ๋Š” ๊ณ ์—ฐ์ƒ‰ ๋ฐœ๊ด‘์„ ํ•„์š”๋กœ ํ•˜๋Š” pc-LED์˜ ์ ์šฉ์— ๋ฌธ์ œ๊ฐ€ ๋œ๋‹ค. </p><h2>2.3. ์‚ฐ์†Œ์— ์˜ํ•œ ์•ˆ์ •์„ฑ ์˜ํ–ฅ</h2><p>ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๋Š” ๋น›์— ๋…ธ์ถœ๋œ ๊ฒฝ์šฐ์—๋งŒ ์‚ฐ์†Œ์™€ ๋ฐ˜์‘ํ•˜๋ฉฐ ํŠนํžˆ ๊ด‘-์ƒ์„ฑ ์ „ํ•˜๋ฅผ ๊ฐ€์ง„ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๋Š” ์‚ฐ์†Œ ๋ถ„์ž์˜ ์˜ํ–ฅ์„ ๋ฐ›๊ธฐ ์‰ฝ๋‹ค. ์‚ฐ์†Œ ๋ถ„์ž๊ฐ€ ๊ฒฉ์ž๋กœ ํ™•์‚ฐ๋˜์–ด ๊ณต๊ณต ๊ฒฐํ•จ (vacancy)์„ ์ฑ„์šฐ๊ฒŒ ๋˜๊ณ  ๊ด‘-์ƒ์„ฑ ์ „์ž๊ฐ€ ์ „๋„๋Œ€์—, ์ •๊ณต์ด ๊ฐ€์ „์ž๋Œ€์— ์ƒ์„ฑ๋œ๋‹ค. 
ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ์™€ ์‚ฐ์†Œ๊ฐ€ ๋ฐ˜์‘ํ•ด \( \mathrm{O}^{2-} \) ๊ฐ€ ์ƒ์„ฑ๋˜์–ด \( \mathrm{MAPbI}_{3} \) ๊ฐ€ \( \mathrm{PbI}_{2}\), \(\mathrm{H}_{2} \mathrm{O}\), \(\mathrm{I}_{2}\), \(\mathrm{CH}_{3} \mathrm{NH}_{2} \) ๋กœ ๋ถ„ํ•ด๋œ๋‹ค. ์ด๋Ÿฌํ•œ ๊ด‘-์‚ฐํ™” (photo-oxidation) ๊ณผ์ •์œผ๋กœ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๊ฐ€ ๋ถ„ํ•ด๋˜์–ด ์•ˆ์ •์„ฑ์ด ๊ฐ์†Œํ•œ๋‹ค. </p><h2>2.4. ์—ด์— ์˜ํ•œ ์•ˆ์ •์„ฑ ์˜ํ–ฅ</h2><p>์—ด์ค‘๋Ÿ‰๋ถ„์„ (TGA) ๋ถ„์„์œผ๋กœ ํ™•์ธํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๋Š” ์ˆ˜๋ถ„๊ณผ ์‚ฐ์†Œ๊ฐ€ ์—†์„ ๋•Œ \( \mathrm{CsPbX}_{3} \) ๋Š” \( 500{ }^{\circ} \mathrm{C} \),\( \mathrm{MAPbX}_{3} \) ๋Š” \( 220{ }^{\circ} \mathrm{C} \) ๊นŒ์ง€ ๊ตฌ์กฐ๋ฅผ ์œ ์ง€ํ•  ์ˆ˜ ์žˆ๋‹ค. ์œ  ยท ๋ฌด๊ธฐ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ๋Š” ์—ด์— ์˜ํ•ด ๋น„๊ต์  ๋†’์€ ์•ˆ์ •์„ฑ์„ ๊ฐ€์ง€๊ณ  ์žˆ์ง€๋งŒ ๊ณ ์˜จ์—์„œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๊ฐ€ ์ˆ˜๋ถ„๊ณผ ์‚ฐ์†Œ์— ๋ฐ˜์‘ํ•˜๋ฉด ๊ตฌ์กฐ ๋ถ„ํ•ด๊ฐ€ ๋” ๊ฐ€์†ํ™”๋˜์–ด ์•ˆ์ •์„ฑ์ด ๊ธ‰๊ฒฉํžˆ ๊ฐ์†Œํ•œ๋‹ค. </p><p>๋˜ํ•œ ๊ณ ์˜จ์—์„œ ๋ฐœ๊ด‘ ํšจ์œจ์ด ๊ฐ์†Œํ•˜๋Š”๋ฐ ์ด๋Š” ์—ด์ ์œผ๋กœ ํ™œ์„ฑํ™”๋œ ํ• ๋กœ๊ฒ ๊ณต๊ณต ๊ฒฐํ•จ์— ์˜ํ•ด \(\mathrm{MAPbBr}_{3} \) ๋Š”\( 100{ }^{\circ} \mathrm{C} \) ์ด์ƒ์˜ ์˜จ๋„์—์„œ ๋ฐœ๊ด‘์„ ๊ฑฐ์˜ ๋ณด์ด์ง€ ์•Š์œผ๋ฉฐ \( \mathrm{CsPbBr}_{3} \) ๋Š” ์•ฝ \( 80 \% \) ์˜ ๋ฐœ๊ด‘ ์†์‹ค์„ ๋ณด์ด๋Š” ๊ฒƒ์œผ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค. </p> pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on BAAI/bge-m3 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co./BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-m3](https://huggingface.co./BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 --> - **Maximum Sequence Length:** 1024 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co./models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the ๐Ÿค— Hub model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-451949-bs64-science") # Run inference sentences = [ '์ด์ˆ˜ํ™” ์ƒ์—์„œ๋Š” ๋ฌผ๊ณผ ๋ฐ˜์‘ํ•˜์—ฌ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ์˜ ๋ถ„ํ•ด๋ฅผ ์•ผ๊ธฐํ•˜๋Š” ์›์ธ์ด ๋ญ์•ผ?', '<h1>2. ํ™˜๊ฒฝ์  ์š”์ธ์— ์˜ํ•œ ํŽ˜๋กœ๋ธŒ์นด์ดํŠธ ์†Œ์žฌ ๋ถˆ์•ˆ์ •์„ฑ</h1><h2>2.1. ์ˆ˜๋ถ„์— ์˜ํ•œ ์•ˆ์ •์„ฑ ์˜ํ–ฅ</h2><p>์œ ๊ธฐ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ์ธ \\( \\mathrm{MAPbI}_{3} \\) ์˜ \\(\\mathrm{MA}^{+}\\)์™€ \\(\\mathrm{I}^{-}\\)๋Š” ์•ฝํ•œ ๊ฒฐํ•ฉ์„ ํ•˜๊ณ  ์žˆ์–ด ์ด์ˆ˜ํ™” ์ƒ (dihydrate phase)์—์„œ๋Š” ๋ฌผ๊ณผ ๋ฐ˜์‘ํ•˜์—ฌ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ์˜ ๋ถ„ํ•ด๋ฅผ ์•ผ๊ธฐํ•œ๋‹ค. ์ด๋Š” \\( \\mathrm{MAPbI}_{3} \\) ์™€ ๋ฌผ์ด ๋ฐ˜์‘ํ•˜์—ฌ ์ƒ์„ฑ๋œ ์ด์ˆ˜ํ™” ํ™”ํ•ฉ๋ฌผ (\\( \\mathrm{MAPbI}_{3} \\cdot \\mathrm{H}_{2} \\mathrm{O} \\)) ์ด \\( \\mathrm{CH}_{3} \\mathrm{NH}_{2}\\), \\(\\mathrm{HI}\\), \\(\\mathrm{PbI}_{2} \\) ๋กœ ๋ถ„ํ•ด๋˜๊ณ , ์ƒ์„ฑ๋œ \\( \\mathrm{CH}_{3} \\mathrm{NH}_{2} \\) ์™€ \\( \\mathrm{HI} \\) ๋Š” ๋ฌผ์— ๋…น์•„ ๊ฒฐ๊ตญ ๊ณ ์ƒ์˜ \\( \\mathrm{PbI}_{2} \\) ๋งŒ ๋‚จ๋Š” ๊ฒƒ์œผ๋กœ ์„ค๋ช…ํ•  ์ˆ˜ ์žˆ๋‹ค. </p><p>๋ฌด๊ธฐ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ๋Š” ์ˆ˜๋ถ„์— ์˜ํ•œ ์žฌ๊ฒฐ์ •ํ™” ๋ฐ ํ‘œ๋ฉด ๊ฒฐํ•ฉ ๋ฆฌ๊ฐ„๋“œ์˜ ์†์‹ค๊ณผ ๋ถ„ํ•ด๋กœ ์ธํ•ด ํ‘œ๋ฉด์— ํŠธ๋žฉ ์ค€์œ„๊ฐ€ ์ฆ๊ฐ€ํ•˜์—ฌ ๋ฐœ๊ด‘ํšจ์œจ์ด ๊ฐ์†Œํ•œ๋‹ค. ๋˜ํ•œ ํŽ˜๋กœ๋ธŒ ์Šค์นด์ดํŠธ ์†Œ์žฌ๋Š” ๋น›์ด ์—†๋Š” ์ƒํ™ฉ์—์„œ๋„ ๋ฌผ์— ์˜ํ•ด ์†Œ์žฌ๊ฐ€ ๋ถ„ํ•ด๋˜์–ด ์•ˆ์ •์„ฑ์ด ๊ฐ์†Œํ•œ๋‹ค. </p><h2>2.2. ๋น›์— ์˜ํ•œ ์•ˆ์ •์„ฑ ์˜ํ–ฅ</h2><p>ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๊ฐ€ ์žฅ์‹œ๊ฐ„ ๋น›์— ๋…ธ์ถœ๋˜๋Š” ๊ฒฝ์šฐ ๊ด‘-์ƒ์„ฑ ์ „ํ•˜ (photo-generated carrier)๊ฐ€ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ ํ‘œ๋ฉด์œผ๋กœ ํ™•์‚ฐ๋˜์–ด ์ด์˜จ์„ฑ ํ‘œ๋ฉด ๋ฆฌ๊ฐ„๋“œ์™€ ๊ฒฐํ•ฉํ•œ๋‹ค. ์ด ๊ณผ์ • ์ค‘์— ๋ช‡ ๊ฐœ์˜ ๋ฆฌ๊ฐ„๋“œ๋“ค์€ ์šฉ๋งค์— ๋…น์•„, ๋ณดํ˜ธ๋˜์ง€ ์•Š์€ ๋ฉด์„ ์ค‘์‹ฌ์œผ๋กœ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๋ผ๋ฆฌ ์‘์ง‘ํ•˜์—ฌ ๋ฐœ๊ด‘ ํšจ์œจ์ด ๊ฐ์†Œํ•œ๋‹ค. ๋˜ํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ์˜ ์‘์ง‘ ๋ฐ ๋ฆฌ๊ฐ„๋“œ ์†์‹ค๋กœ ์ธํ•ด ํŠธ๋žฉ ์ค€์œ„๊ฐ€ ์ฆ๊ฐ€ํ•˜์—ฌ ๊ด‘ํ•™์  ํŠน์„ฑ์ด ํ˜„์ €ํžˆ ๊ฐ์†Œ๋œ๋‹ค. pc-LED๋Š” ์‹ค์ƒํ™œ์—์„œ ์žฅ์‹œ๊ฐ„ ๋น›์— ๋…ธ์ถœ๋˜๊ธฐ๋•Œ๋ฌธ์— ๋น›์— ์˜ํ•œ ๋ฐœ๊ด‘ ๊ฐ์†Œ ๋ฐ ์†Œ์žฌ ์•ˆ์ •์„ฑ ๊ฐ์†Œ๋Š” ๊ณ ์—ฐ์ƒ‰ ๋ฐœ๊ด‘์„ ํ•„์š”๋กœ ํ•˜๋Š” pc-LED์˜ ์ ์šฉ์— ๋ฌธ์ œ๊ฐ€ ๋œ๋‹ค. </p><h2>2.3. ์‚ฐ์†Œ์— ์˜ํ•œ ์•ˆ์ •์„ฑ ์˜ํ–ฅ</h2><p>ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๋Š” ๋น›์— ๋…ธ์ถœ๋œ ๊ฒฝ์šฐ์—๋งŒ ์‚ฐ์†Œ์™€ ๋ฐ˜์‘ํ•˜๋ฉฐ ํŠนํžˆ ๊ด‘-์ƒ์„ฑ ์ „ํ•˜๋ฅผ ๊ฐ€์ง„ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๋Š” ์‚ฐ์†Œ ๋ถ„์ž์˜ ์˜ํ–ฅ์„ ๋ฐ›๊ธฐ ์‰ฝ๋‹ค. ์‚ฐ์†Œ ๋ถ„์ž๊ฐ€ ๊ฒฉ์ž๋กœ ํ™•์‚ฐ๋˜์–ด ๊ณต๊ณต ๊ฒฐํ•จ (vacancy)์„ ์ฑ„์šฐ๊ฒŒ ๋˜๊ณ  ๊ด‘-์ƒ์„ฑ ์ „์ž๊ฐ€ ์ „๋„๋Œ€์—, ์ •๊ณต์ด ๊ฐ€์ „์ž๋Œ€์— ์ƒ์„ฑ๋œ๋‹ค. ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ์™€ ์‚ฐ์†Œ๊ฐ€ ๋ฐ˜์‘ํ•ด \\( \\mathrm{O}^{2-} \\) ๊ฐ€ ์ƒ์„ฑ๋˜์–ด \\( \\mathrm{MAPbI}_{3} \\) ๊ฐ€ \\( \\mathrm{PbI}_{2}\\), \\(\\mathrm{H}_{2} \\mathrm{O}\\), \\(\\mathrm{I}_{2}\\), \\(\\mathrm{CH}_{3} \\mathrm{NH}_{2} \\) ๋กœ ๋ถ„ํ•ด๋œ๋‹ค. ์ด๋Ÿฌํ•œ ๊ด‘-์‚ฐํ™” (photo-oxidation) ๊ณผ์ •์œผ๋กœ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๊ฐ€ ๋ถ„ํ•ด๋˜์–ด ์•ˆ์ •์„ฑ์ด ๊ฐ์†Œํ•œ๋‹ค. </p><h2>2.4. ์—ด์— ์˜ํ•œ ์•ˆ์ •์„ฑ ์˜ํ–ฅ</h2><p>์—ด์ค‘๋Ÿ‰๋ถ„์„ (TGA) ๋ถ„์„์œผ๋กœ ํ™•์ธํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๋Š” ์ˆ˜๋ถ„๊ณผ ์‚ฐ์†Œ๊ฐ€ ์—†์„ ๋•Œ \\( \\mathrm{CsPbX}_{3} \\) ๋Š” \\( 500{ }^{\\circ} \\mathrm{C} \\),\\( \\mathrm{MAPbX}_{3} \\) ๋Š” \\( 220{ }^{\\circ} \\mathrm{C} \\) ๊นŒ์ง€ ๊ตฌ์กฐ๋ฅผ ์œ ์ง€ํ•  ์ˆ˜ ์žˆ๋‹ค. 
์œ  ยท ๋ฌด๊ธฐ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ๋Š” ์—ด์— ์˜ํ•ด ๋น„๊ต์  ๋†’์€ ์•ˆ์ •์„ฑ์„ ๊ฐ€์ง€๊ณ  ์žˆ์ง€๋งŒ ๊ณ ์˜จ์—์„œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ์†Œ์žฌ๊ฐ€ ์ˆ˜๋ถ„๊ณผ ์‚ฐ์†Œ์— ๋ฐ˜์‘ํ•˜๋ฉด ๊ตฌ์กฐ ๋ถ„ํ•ด๊ฐ€ ๋” ๊ฐ€์†ํ™”๋˜์–ด ์•ˆ์ •์„ฑ์ด ๊ธ‰๊ฒฉํžˆ ๊ฐ์†Œํ•œ๋‹ค. </p><p>๋˜ํ•œ ๊ณ ์˜จ์—์„œ ๋ฐœ๊ด‘ ํšจ์œจ์ด ๊ฐ์†Œํ•˜๋Š”๋ฐ ์ด๋Š” ์—ด์ ์œผ๋กœ ํ™œ์„ฑํ™”๋œ ํ• ๋กœ๊ฒ ๊ณต๊ณต ๊ฒฐํ•จ์— ์˜ํ•ด \\(\\mathrm{MAPbBr}_{3} \\) ๋Š”\\( 100{ }^{\\circ} \\mathrm{C} \\) ์ด์ƒ์˜ ์˜จ๋„์—์„œ ๋ฐœ๊ด‘์„ ๊ฑฐ์˜ ๋ณด์ด์ง€ ์•Š์œผ๋ฉฐ \\( \\mathrm{CsPbBr}_{3} \\) ๋Š” ์•ฝ \\( 80 \\% \\) ์˜ ๋ฐœ๊ด‘ ์†์‹ค์„ ๋ณด์ด๋Š” ๊ฒƒ์œผ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค. </p>', 'ํšจ์œจ์ด ๋†’๊ณ  ๊ด‘์•ˆ์ •์„ฑ์ด ์šฐ์ˆ˜ํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€ ์†Œ์žฌ/์†Œ์ž ๊ธฐ์ˆ  ๊ฐœ๋ฐœ - ๊ณ ํšจ์œจ(21.2%)๊ณผ ๊ณ ์•ˆ์ •์„ฑ(1,000์‹œ๊ฐ„ ์œ ์ง€)์„ ๋ชจ๋‘ ๋งŒ์กฑํ•˜๋Š” ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€์šฉ ํ•ต์‹ฌ ์†Œ์žฌ ๋ฐ ์ €๋น„์šฉ ์ œ์กฐ ๊ธฐ์ˆ  ๊ฐœ๋ฐœ-\nโ–ก ์ด๋ฒˆ ์—ฐ๊ตฌ์—์„œ๋Š” ์ด์ „ ์—ฐ๊ตฌ์„ฑ๊ณผ(๊ตฌ์กฐ, ๊ณต์ •, ์‹ ์กฐ์„ฑ ๊ธฐ์ˆ )๋ฅผ ๊ธฐ๋ฐ˜*์œผ๋กœ ์ด์ข…์ ‘ํ•ฉ** ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€์˜ ๊ณ ํšจ์œจํ™”(21.2%)์™€ ๋†’์€ ๊ด‘์•ˆ์ •์„ฑ(์ž์™ธ์„  ํฌํ•จํ•œ ๊ด‘์กฐ์‚ฌ์—์„œ 1,000์‹œ๊ฐ„ ์ด์ƒ ์•ˆ์ •ํ•œ ํšจ์œจ ์œ ์ง€)์„ ๋ชจ๋‘ ๋งŒ์กฑํ•˜๋Š” ๊ด‘์ „๊ทน ์†Œ์žฌ๋ฅผ ์ €์˜จ(๊ธฐ์กด 900 โ„ƒ์ด์ƒ ๊ณ ์˜จ โ†’ 200 โ„ƒ์ดํ•˜) ์—์„œ ํ•ฉ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๊ฐœ๋ฐœํ•˜์˜€๋‹ค. *ใ€ ์—ฐ๊ตฌ์ง„ ์ด์ „ ์—ฐ๊ตฌ์„ฑ๊ณผ ใ€‘\nใƒป๋ฌด-์œ ๊ธฐ ํ•˜์ด๋ธŒ๋ฆฌ๋“œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€ ํ”Œ๋žซํผ ๊ตฌ์กฐ ๊ธฐ์ˆ  ๊ฐœ๋ฐœ (Nature Photonics 2013.5) \nใƒป๋งค์šฐ ๊ท ์ผํ•˜๊ณ  ์น˜๋ฐ€ํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ๋ฐ•๋ง‰ ์ œ์กฐ ์‹ ๊ทœ ์šฉ์•ก ๊ณต์ • ๊ธฐ์ˆ  ๊ฐœ๋ฐœ (Nature Materials 2014.7) \nใƒป๊ณ ํšจ์œจ์„ ์œ„ํ•œ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ๊ฒฐ์ •์ƒ ์•ˆ์ •ํ™” ์‹ ์กฐ์„ฑ ๊ธฐ์ˆ  ๊ฐœ๋ฐœ (Nature 2015.1) \nใƒป๊ณ ํ’ˆ์งˆ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ๋ฐ•๋ง‰ ํ˜•์„ฑ์„ ์œ„ํ•œ ์‹ ๊ทœ ๊ณต์ • ๊ธฐ์ˆ  ๊ฐœ๋ฐœ (Science 2015.6) ๋“ฑ\n** ์ด์ข…์ ‘ํ•ฉ : ๊ฐ™์€ ์†Œ์žฌ๊ฐ„์˜ ์ ‘ํ•ฉ์ธ ๋™์ข… ์ ‘ํ•ฉ๊ณผ ๋‹ฌ๋ฆฌ ๋‹ค๋ฅธ ์ข…๋ฅ˜์˜ ์†Œ์žฌ๊ฐ„์˜ ์ ‘ํ•ฉ์„ ์˜๋ฏธ, ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ๋Š” ๋ฌด๊ธฐ๋ฌผ, ์œ ๊ธฐ๋ฌผ, ๋ฌด/์œ ๊ธฐ ํ˜ผ์„ฑ๋ฌผ ๊ฐ„์˜ ์ด์ข…์ ‘ํ•ฉ์„ ์ด๋ฃธ.\nใ…‡ ๋” ๋‚˜์•„๊ฐ€์„œ ์—ฐ์†์ ์ด๋ฉฐ ๋Œ€๋Ÿ‰ ์ƒ์‚ฐ ๊ณต์ •์ด ๊ฐ€๋Šฅํ•œโ€œํ•ซ-ํ”„๋ ˆ์‹ฑ (hot-pressing) ๊ณต๋ฒ•*โ€์„ ์ƒˆ๋กญ๊ฒŒ ์ œ์•ˆํ•˜์—ฌ, ๊ณ ํšจ์œจ / ๊ณ ์•ˆ์ •์„ฑ / ์ €๋น„์šฉ์˜ ๋ฐฉ๋ฒ•์œผ๋กœ ํŽ˜๋กœ๋ธŒ์Šค์นด์ดํŠธ ํƒœ์–‘์ „์ง€๋ฅผ ์ œ์กฐํ•˜๋Š” ์ƒˆ๋กœ์šด ํƒœ์–‘์ „์ง€์ œ์กฐ ๋ฐฉ๋ฒ•๋ก ์„ ์ œ์•ˆํ•˜์˜€๋‹ค. * ํ•ซ-ํ”„๋ ˆ์‹ฑ ๊ณต๋ฒ• : ์˜จ๋„์™€ ์••๋ ฅ์„ ๊ฐ€ํ•˜์—ฌ ๋‘ ๋ฌผ์ฒด๋ฅผ ๋‹จ๋‹จํžˆ ์ ์ฐฉ ์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 64 - `learning_rate`: 3e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.05 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - 
`eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0312 | 1 | 0.941 | | 0.0625 | 2 | 0.9909 | | 0.0938 | 3 | 0.7258 | | 0.125 | 4 | 0.538 | | 0.1562 | 5 | 0.567 | | 0.1875 | 6 | 0.4329 | | 0.2188 | 7 | 0.4238 | | 0.25 | 8 | 0.3989 | | 0.2812 | 9 | 0.3825 | | 0.3125 | 10 | 0.392 | | 0.3438 | 11 | 0.3822 | | 0.375 | 12 | 0.3271 | | 0.4062 | 13 | 0.3284 | | 0.4375 | 14 | 0.3468 | | 0.4688 | 15 | 0.3098 | | 0.5 | 16 | 0.3332 | | 0.5312 | 17 | 0.2871 | | 0.5625 | 18 | 0.3132 | | 0.5938 | 19 | 0.3172 | | 0.625 | 20 | 0.3133 | | 0.6562 | 21 | 0.3134 | | 0.6875 | 22 | 0.2968 | | 0.7188 | 23 | 0.3227 | | 0.75 | 24 | 0.2977 | | 0.7812 | 25 | 0.3022 | | 0.8125 | 26 | 0.2556 | | 0.8438 | 27 | 0.3152 | | 0.875 | 28 | 0.2597 | | 0.9062 | 29 | 0.3088 | | 0.9375 | 30 | 0.2702 | | 0.9688 | 31 | 0.3415 | | 1.0 | 32 | 0.2765 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.2.1 - Transformers: 4.44.2 - PyTorch: 2.3.1+cu121 - Accelerate: 1.1.1 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CachedMultipleNegativesRankingLoss ```bibtex @misc{gao2021scaling, title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup}, author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan}, year={2021}, eprint={2101.06983}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
nhung01/eff20207-9dd3-4910-87df-54f00afa70d0
nhung01
"2025-01-28T06:12:02Z"
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Hermes-3-Llama-3.1-8B", "base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B", "license:llama3", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-28T05:53:52Z"
--- library_name: peft license: llama3 base_model: NousResearch/Hermes-3-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: eff20207-9dd3-4910-87df-54f00afa70d0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Hermes-3-Llama-3.1-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 94ffa3eaa02f0f89_train_data.json ds_type: json format: custom path: /workspace/input_data/94ffa3eaa02f0f89_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhung01/eff20207-9dd3-4910-87df-54f00afa70d0 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/94ffa3eaa02f0f89_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d3379fa4-7a55-407e-8f15-7b0aefbda53d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d3379fa4-7a55-407e-8f15-7b0aefbda53d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # eff20207-9dd3-4910-87df-54f00afa70d0 This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co./NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 4.9093

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.9423        | 0.2904 | 200  | 4.9093          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
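## Loading the adapter (sketch)

Since this repository contains a LoRA adapter rather than full model weights, it has to be attached to the base model named above. The following is a minimal sketch assuming the standard `peft`/`transformers` APIs; the prompt string is a placeholder, and `device_map="auto"` additionally requires `accelerate`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Hermes-3-Llama-3.1-8B"
adapter_id = "nhung01/eff20207-9dd3-4910-87df-54f00afa70d0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Load the base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Training used 8-bit loading (`load_in_8bit: true` in the config above), so passing a matching quantization config when loading the base model is a reasonable option on memory-constrained hardware.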
hkivancoral/smids_5x_deit_tiny_adamax_00001_fold2
hkivancoral
"2023-12-18T00:34:08Z"
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-small-patch16-224", "base_model:finetune:facebook/deit-small-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-12-14T14:52:53Z"
--- license: apache-2.0 base_model: facebook/deit-small-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_5x_deit_tiny_adamax_00001_fold2 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8752079866888519 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_tiny_adamax_00001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co./facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0539 - Accuracy: 0.8752 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3245 | 1.0 | 375 | 0.3357 | 0.8586 | | 0.2435 | 2.0 | 750 | 0.3012 | 0.8802 | | 0.1837 | 3.0 | 1125 | 0.3092 | 0.8802 | | 0.0922 | 4.0 | 1500 | 0.3362 | 0.8719 | | 0.064 | 5.0 | 1875 | 0.4063 | 0.8619 | | 0.0948 | 6.0 | 2250 | 0.4674 | 0.8619 | | 0.0452 | 7.0 | 2625 | 0.5334 | 0.8602 | | 0.0373 | 8.0 | 3000 | 0.6077 | 0.8619 | | 0.0111 | 9.0 | 3375 | 0.6364 | 0.8769 | | 0.0018 | 10.0 | 3750 | 0.7083 | 0.8636 | | 0.0038 | 11.0 | 4125 | 0.7404 | 0.8752 | | 0.0175 | 12.0 | 4500 | 0.8300 | 0.8719 | | 0.0012 | 13.0 | 4875 | 0.8986 | 0.8652 | | 0.0087 | 14.0 | 5250 | 0.8825 | 0.8686 | | 0.004 | 15.0 | 5625 | 0.8822 | 0.8785 | | 0.0001 | 16.0 | 6000 | 0.9237 | 0.8735 | | 0.0162 | 17.0 | 6375 | 0.9830 | 0.8619 | | 0.0 | 18.0 | 6750 | 1.0120 | 0.8702 | | 0.0 | 19.0 | 7125 | 1.0192 | 0.8719 | | 0.0001 | 20.0 | 7500 | 0.9781 | 0.8735 | | 0.0 | 21.0 | 7875 | 1.0188 | 0.8702 | | 0.0 | 22.0 | 8250 | 0.9776 | 0.8735 | | 0.0 | 23.0 | 8625 | 1.0494 | 0.8702 | | 0.0 | 24.0 | 9000 | 0.9531 | 0.8752 | | 0.0 | 25.0 | 9375 | 1.0293 | 0.8719 | | 0.0 | 26.0 | 9750 | 1.0427 | 0.8652 | | 0.0 | 27.0 | 10125 | 1.0483 | 0.8719 | | 0.0 | 28.0 | 10500 | 1.0202 | 0.8735 | | 0.0 | 29.0 | 10875 | 1.0779 | 0.8686 | | 0.0 | 30.0 | 11250 | 1.0065 | 0.8719 | | 0.0018 | 31.0 | 11625 | 1.0762 | 0.8702 | | 0.0202 | 32.0 | 12000 | 1.0874 | 0.8669 | | 0.0024 | 33.0 | 12375 | 1.0366 | 0.8735 | | 0.0 | 34.0 | 12750 | 1.1165 | 0.8686 | | 0.0 | 35.0 | 13125 | 1.0244 | 0.8752 | | 0.0 | 36.0 | 13500 | 1.1014 | 0.8719 | | 0.0 | 37.0 | 13875 | 1.0995 | 0.8702 | | 0.0 | 38.0 | 14250 | 1.1070 | 0.8719 | | 0.0 | 39.0 | 14625 | 1.0209 | 0.8769 | | 0.0048 | 40.0 | 15000 | 1.0540 | 0.8752 | | 0.0 | 41.0 | 15375 | 1.0624 | 0.8752 | | 0.0015 | 42.0 | 15750 | 1.0637 | 0.8752 | | 0.0013 | 43.0 | 16125 | 1.0536 | 0.8752 | | 0.0013 | 44.0 | 16500 | 1.0479 | 0.8752 | | 0.0013 | 45.0 | 16875 | 1.0540 | 0.8752 | | 0.0 | 46.0 | 17250 | 1.0694 | 0.8752 | | 0.0016 | 47.0 | 17625 | 1.0601 | 0.8752 | | 0.0 | 48.0 | 18000 | 1.0596 | 
0.8752 | | 0.0013 | 49.0 | 18375 | 1.0574 | 0.8752 | | 0.0012 | 50.0 | 18750 | 1.0539 | 0.8752 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
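## Example usage (sketch)

Since the model is exported as a standard `transformers` image-classification checkpoint, a minimal inference sketch looks like the following; the file name `example_image.png` is a placeholder for an image from the target domain.

```python
from transformers import pipeline

# Image-classification sketch; replace the placeholder path with a real image.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_5x_deit_tiny_adamax_00001_fold2",
)
print(classifier("example_image.png"))
```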
sultan/BioM-ALBERT-xxlarge
sultan
"2023-11-04T23:06:35Z"
12
2
transformers
[ "transformers", "pytorch", "albert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
# BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA

# Abstract

The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.

# Model Description

This model was pre-trained on PubMed Abstracts only with biomedical domain vocabulary for 264K steps with a batch size of 8192 on a TPUv3-512 unit. In order to help researchers with limited resources to fine-tune larger models, we created an example with PyTorch XLA. PyTorch XLA (https://github.com/pytorch/xla) is a library that allows you to use PyTorch on TPU units, which is provided for free by Google Colab and Kaggle. Follow this example to work with PyTorch/XLA [Link](https://github.com/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)

Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints. We also updated this repo with a couple of examples on how to fine-tune LMs on text classification and question answering tasks such as ChemProt, SQuAD, and BioASQ.

# Colab Notebook Examples

BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)

BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)

BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)

Text Classification Task With HuggingFace Transformers and PyTorch XLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)

Reproducing our BLURB results with JAX [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/BLURB_LeaderBoard_with_TPU_VM.ipynb)

Finetuning BioM-Transformers with Jax/Flax on TPUv3-8 with free Kaggle resources [![Open In Colab][COLAB]](https://www.kaggle.com/code/sultanalrowili/biom-transoformers-with-flax-on-tpu-with-kaggle)

[COLAB]: https://colab.research.google.com/assets/colab-badge.svg

# Acknowledgment

We would like to acknowledge the support we have from the TensorFlow Research Cloud (TFRC) team for granting us access to TPUv3 units.
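# Quick Usage Sketch

As a minimal illustration (a sketch, not an official recipe: the example sentence is invented, and we assume the default tokenizer exposes ALBERT's `[MASK]` token), the model can be queried through the standard `transformers` fill-mask pipeline:

```python
from transformers import pipeline

# Fill-mask sketch; the sentence is illustrative and [MASK] is ALBERT's mask token.
unmasker = pipeline("fill-mask", model="sultan/BioM-ALBERT-xxlarge")
print(unmasker("The patient was treated with [MASK] to reduce inflammation."))
```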
# Citation ```bibtex @inproceedings{alrowili-shanker-2021-biom, title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}", author = "Alrowili, Sultan and Shanker, Vijay", booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bionlp-1.24", pages = "221--227", abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.", } ```
stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
"2023-10-26T10:06:14Z"
5
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-64k-td-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-64k-td-cased", "license:mit", "region:us" ]
token-classification
"2023-10-23T15:48:43Z"
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased widget: - text: โ€” Dramatiลฟch war der Stoff vor Sophokles von ร„ลฟchylos behandelt worden in den ฮ˜ฯฮฟแฟ‡ฯƒฯƒฮฑฮน , denen vielleicht in der Trilogie das Stรผc>"OnJwยป ฮบฮฟฮฏฯƒฮนฯ‚ vorherging , das Stรผck ฮฃฮฑฮปฮฑฮผฮฏฮฝฮนฮฑฮน folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT 64k as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[4, 8]` * Learning Rates: `[5e-05, 3e-05]` And report micro F1-score on development set: | Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average | |-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------| | `bs4-e10-lr3e-05` | [0.8806][1] | [0.8988][2] | [0.8967][3] | [0.8924][4] | [0.8994][5] | 0.8936 ยฑ 0.0078 | | `bs8-e10-lr5e-05` | [0.8951][6] | [0.8972][7] | [0.8933][8] | [**0.8892**][9] | [0.8902][10] | 0.893 ยฑ 0.0033 | | `bs4-e10-lr5e-05` | [0.8789][11] | [0.891][12] | [0.9012][13] | [0.891][14] | [0.8873][15] | 0.8899 ยฑ 0.008 | | `bs8-e10-lr3e-05` | [0.88][16] | [0.8889][17] | [0.8764][18] | [0.897][19] | [0.8948][20] | 0.8874 ยฑ 0.009 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa Mรคrz](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion ร‡ano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs โค๏ธ
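# Usage Example (sketch)

A minimal tagging sketch with the standard Flair API is shown below. Two assumptions are made here: the tag type is registered as `"ner"`, and the sentence is a simplified, modernized variant of the widget example above.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger directly from the Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

sentence = Sentence("Dramatisch war der Stoff vor Sophokles von Aischylos behandelt worden.")
tagger.predict(sentence)

# Print all predicted named-entity spans.
for span in sentence.get_spans("ner"):
    print(span)
```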
tensorblock/OLMo-1B-hf-GGUF
tensorblock
"2024-11-16T00:59:36Z"
62
0
null
[ "gguf", "TensorBlock", "GGUF", "en", "dataset:allenai/dolma", "base_model:allenai/OLMo-1B-hf", "base_model:quantized:allenai/OLMo-1B-hf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-11-10T18:30:19Z"
--- license: apache-2.0 datasets: - allenai/dolma language: - en tags: - TensorBlock - GGUF base_model: allenai/OLMo-1B-hf --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## allenai/OLMo-1B-hf - GGUF This repo contains GGUF format model files for [allenai/OLMo-1B-hf](https://huggingface.co./allenai/OLMo-1B-hf). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine โ†— </a> </div> ## Prompt template ``` ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [OLMo-1B-hf-Q2_K.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q2_K.gguf) | Q2_K | 0.447 GB | smallest, significant quality loss - not recommended for most purposes | | [OLMo-1B-hf-Q3_K_S.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q3_K_S.gguf) | Q3_K_S | 0.510 GB | very small, high quality loss | | [OLMo-1B-hf-Q3_K_M.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q3_K_M.gguf) | Q3_K_M | 0.563 GB | very small, high quality loss | | [OLMo-1B-hf-Q3_K_L.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q3_K_L.gguf) | Q3_K_L | 0.607 GB | small, substantial quality loss | | [OLMo-1B-hf-Q4_0.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q4_0.gguf) | Q4_0 | 0.643 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [OLMo-1B-hf-Q4_K_S.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q4_K_S.gguf) | Q4_K_S | 0.649 GB | small, greater quality loss | | [OLMo-1B-hf-Q4_K_M.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q4_K_M.gguf) | Q4_K_M | 0.683 GB | medium, balanced quality - recommended | | [OLMo-1B-hf-Q5_0.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q5_0.gguf) | Q5_0 | 0.768 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [OLMo-1B-hf-Q5_K_S.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q5_K_S.gguf) | Q5_K_S | 0.768 GB | large, low quality loss - recommended | | [OLMo-1B-hf-Q5_K_M.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q5_K_M.gguf) | Q5_K_M | 0.789 GB | 
large, very low quality loss - recommended |
| [OLMo-1B-hf-Q6_K.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q6_K.gguf) | Q6_K | 0.901 GB | very large, extremely low quality loss |
| [OLMo-1B-hf-Q8_0.gguf](https://huggingface.co./tensorblock/OLMo-1B-hf-GGUF/blob/main/OLMo-1B-hf-Q8_0.gguf) | Q8_0 | 1.166 GB | very large, extremely low quality loss - not recommended |

## Downloading instructions

### Command line

First, install the Hugging Face Hub client:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/OLMo-1B-hf-GGUF --include "OLMo-1B-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/OLMo-1B-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
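### Running a downloaded file (sketch)

Once a file is downloaded, a minimal completion run with llama.cpp might look like the following. This is a sketch only: the `llama-cli` binary name and flags assume a llama.cpp build near the b4011 commit referenced above, and the prompt is an arbitrary example.

```shell
# Plain completion run; adjust the quant file name to whichever one you downloaded.
./llama-cli -m MY_LOCAL_DIR/OLMo-1B-hf-Q4_K_M.gguf \
  -p "Language modeling is" \
  -n 64
```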
huggingtweets/discarddiscord
huggingtweets
"2021-05-22T01:45:29Z"
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: en thumbnail: https://www.huggingtweets.com/discarddiscord/1614246710317/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1029964613029437440/3_fRmZuH_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">luna ๐Ÿค– AI Bot </div> <div style="font-size: 15px">@discarddiscord bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@discarddiscord's tweets](https://twitter.com/discarddiscord). | Data | Quantity | | --- | --- | | Tweets downloaded | 1495 | | Retweets | 289 | | Short tweets | 213 | | Tweets kept | 993 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tvxkurq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co./gpt2) which is fine-tuned on @discarddiscord's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g2xt22m) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g2xt22m/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/discarddiscord') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co./gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
tensorblock/Reasoning-0.5b-GGUF
tensorblock
"2024-11-26T03:35:01Z"
83
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "reasoning", "TensorBlock", "GGUF", "en", "dataset:KingNish/reasoning-base-20k", "base_model:KingNish/Reasoning-0.5b", "base_model:quantized:KingNish/Reasoning-0.5b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-11-26T03:32:07Z"
--- base_model: KingNish/Reasoning-0.5b language: - en license: apache-2.0 datasets: - KingNish/reasoning-base-20k tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft - reasoning - TensorBlock - GGUF --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## KingNish/Reasoning-0.5b - GGUF This repo contains GGUF format model files for [KingNish/Reasoning-0.5b](https://huggingface.co./KingNish/Reasoning-0.5b). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine โ†— </a> </div> ## Prompt template ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Reasoning-0.5b-Q2_K.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q2_K.gguf) | Q2_K | 0.339 GB | smallest, significant quality loss - not recommended for most purposes | | [Reasoning-0.5b-Q3_K_S.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q3_K_S.gguf) | Q3_K_S | 0.338 GB | very small, high quality loss | | [Reasoning-0.5b-Q3_K_M.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q3_K_M.gguf) | Q3_K_M | 0.355 GB | very small, high quality loss | | [Reasoning-0.5b-Q3_K_L.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q3_K_L.gguf) | Q3_K_L | 0.369 GB | small, substantial quality loss | | [Reasoning-0.5b-Q4_0.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q4_0.gguf) | Q4_0 | 0.352 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Reasoning-0.5b-Q4_K_S.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q4_K_S.gguf) | Q4_K_S | 0.385 GB | small, greater quality loss | | [Reasoning-0.5b-Q4_K_M.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q4_K_M.gguf) | Q4_K_M | 0.398 GB | medium, balanced quality - recommended | | [Reasoning-0.5b-Q5_0.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q5_0.gguf) | Q5_0 | 0.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | 
[Reasoning-0.5b-Q5_K_S.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q5_K_S.gguf) | Q5_K_S | 0.413 GB | large, low quality loss - recommended |
| [Reasoning-0.5b-Q5_K_M.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q5_K_M.gguf) | Q5_K_M | 0.420 GB | large, very low quality loss - recommended |
| [Reasoning-0.5b-Q6_K.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q6_K.gguf) | Q6_K | 0.506 GB | very large, extremely low quality loss |
| [Reasoning-0.5b-Q8_0.gguf](https://huggingface.co./tensorblock/Reasoning-0.5b-GGUF/blob/main/Reasoning-0.5b-Q8_0.gguf) | Q8_0 | 0.531 GB | very large, extremely low quality loss - not recommended |

## Downloading instructions

### Command line

First, install the Hugging Face Hub client:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/Reasoning-0.5b-GGUF --include "Reasoning-0.5b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/Reasoning-0.5b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
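### Running with the prompt template (sketch)

Because this model expects the ChatML-style template shown above, the prompt should be wrapped accordingly. The following llama.cpp invocation is a sketch: flag names assume a build near the referenced b4011 commit, `-e` turns the `\n` escape sequences into literal newlines, and the system and user messages are placeholders.

```shell
# Chat-style run using the card's documented template.
./llama-cli -m MY_LOCAL_DIR/Reasoning-0.5b-Q4_K_M.gguf -n 256 -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHow many prime numbers are there below 20?<|im_end|>\n<|im_start|>assistant\n"
```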
TheBloke/airoboros-33B-gpt4-1.2-GPTQ
TheBloke
"2023-08-21T08:40:51Z"
23
9
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.2", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-06-14T13:07:17Z"
--- inference: false license: other datasets: - jondurbin/airoboros-gpt4-1.2 --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # John Durbin's Airoboros 33B GPT4 1.2 GPTQ These files are GPTQ 4bit model files for [John Durbin's Airoboros 33B GPT4 1.2](https://huggingface.co./jondurbin/airoboros-33b-gpt4-1.2). It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ). ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co./TheBloke/airoboros-33B-gpt4-1.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co./TheBloke/airoboros-33B-gpt4-1.2-GGML) * [Jon Durbin's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co./jondurbin/airoboros-33b-gpt4-1.2) ## Prompt template ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: prompt ASSISTANT: ``` ## How to easily download and use this model in text-generation-webui Please make sure you're using the latest version of text-generation-webui 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/airoboros-33B-gpt4-1.2-GPTQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done" 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `airoboros-33B-gpt4-1.2-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! 
## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/airoboros-33B-gpt4-1.2-GPTQ"
model_basename = "airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Build the prompt with the template documented above.
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: {prompt} ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Provided files

**airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order.safetensors**

This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

* `airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = -1. Act Order / desc_act = True.

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P.
Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, ้˜ฟๆ˜Ž, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieล‚, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: John Durbin's Airoboros 33B GPT4 1.2 ### Overview This is a qlora fine-tuned 33b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros This is mostly an extension of [1.1](https://huggingface.co./jondurbin/airoboros-33b-gpt4-1.1) with thousands of new training data and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc. The dataset used to fine-tune this model is available [here](https://huggingface.co./datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on: - coding - math/reasoning (using orca style ELI5 instruction/response pairs) - trivia - role playing - multiple choice and fill-in-the-blank - context-obedient question answering - theory of mind - misc/general This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ### Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! 
Then, you can invoke it like so (after downloading the model):

```
python -m fastchat.serve.cli \
  --model-path airoboros-33b-gpt4-1.2 \
  --temperature 0.5 \
  --max-new-tokens 2048 \
  --no-history
```

Alternatively, please check out TheBloke's quantized versions:

- https://huggingface.co./TheBloke/airoboros-33B-gpt4-1.2-GPTQ
- https://huggingface.co./TheBloke/airoboros-33B-gpt4-1.2-GGML

### Coding updates from gpt4/1.1:

I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all-caps term at the end of the normal instructions, so that the model produces plain-text output instead of markdown/backtick code formatting.

It's not guaranteed to work all the time, but mostly it does seem to work as expected.

So for example, instead of:

```
Implement the Snake game in python.
```

You would use:

```
Implement the Snake game in python. PLAINFORMAT
```

### Other updates from gpt4/1.1:

- Several hundred additional role-playing examples.
- A few thousand ORCA-style reasoning/math questions with ELI5 prompts used to generate the responses (these should not be needed in your prompts to this model, however; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
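For programmatic use outside FastChat, the template described above can be assembled directly. A minimal sketch (the system preamble is the one quoted in the card; the helper function itself is illustrative):

```python
# Assemble the modified vicuna prompt documented above.
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    # Single spaces between the preamble, "USER:", the prompt, and "ASSISTANT:",
    # with a single space after each colon, exactly as the card specifies.
    return f"{SYSTEM} USER: {user_message} ASSISTANT: "

print(build_prompt("Implement the Snake game in python. PLAINFORMAT"))
```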
AlekseiPravdin/Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge
AlekseiPravdin
"2024-08-16T15:42:21Z"
9
0
null
[ "safetensors", "llama", "merge", "mergekit", "lazymergekit", "NousResearch/Hermes-2-Pro-Llama-3-8B", "shenzhi-wang/Llama3-8B-Chinese-Chat", "license:apache-2.0", "region:us" ]
null
"2024-08-16T02:50:14Z"
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Hermes-2-Pro-Llama-3-8B
- shenzhi-wang/Llama3-8B-Chinese-Chat
---

# Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge

Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge is a language model produced by merging two models: [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co./NousResearch/Hermes-2-Pro-Llama-3-8B) and [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co./shenzhi-wang/Llama3-8B-Chinese-Chat). The merge was performed with [mergekit](https://github.com/cg123/mergekit), a toolkit for combining the weights of pretrained language models.

## 🧩 Merge Configuration

```yaml
slices:
  - sources:
      - model: NousResearch/Hermes-2-Pro-Llama-3-8B
        layer_range: [0, 31]
      - model: shenzhi-wang/Llama3-8B-Chinese-Chat
        layer_range: [0, 31]
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```

## Model Features

This merged model combines the function-calling and structured-output strengths of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co./NousResearch/Hermes-2-Pro-Llama-3-8B) with the bilingual (Chinese/English) conversational tuning of [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co./shenzhi-wang/Llama3-8B-Chinese-Chat). The result is a versatile model that supports a range of text generation tasks, including conversational AI, structured data outputs, and bilingual interaction.

## Use Cases

- **Conversational AI**: Engage in natural dialogue in both English and Chinese, leveraging the strengths of both parent models.
- **Function Calling**: Produce structured outputs for applications that require precise data handling.
- **Multilingual Support**: Communicate effectively in both English and Chinese, catering to a diverse user base.

## Evaluation Results

### Hermes-2-Pro-Llama-3-8B

- Function Calling Evaluation: 90%
- JSON Structured Outputs Evaluation: 84%

### Llama3-8B-Chinese-Chat

- Improved performance in roleplay, function calling, and math in its latest version.

## Limitations

The merged model inherits not only the strengths but potentially also the weaknesses of its parents. Its performance in highly specialized domains may not match that of dedicated models, and biases present in the training data of either parent can influence the outputs, so careful consideration is needed in sensitive applications.

In summary, Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge combines the strengths of its two parent models into a single model suited to bilingual conversational and structured-output tasks.
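A minimal usage sketch (this assumes the merge inherits a standard Llama-3-style chat template from its base; generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlekseiPravdin/Hermes-2-Pro-Llama-3-8B-Llama3-8B-Chinese-Chat-slerp-merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the `accelerate` package.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "用中文介绍一下你自己。"}]  # "Introduce yourself in Chinese."
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```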
lucas-meyer/seq-xls-r-fleurs_zu-run3-asr_xh-run2
lucas-meyer
"2023-11-07T13:17:40Z"
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:lucas-meyer/asr_xh", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-10-31T12:24:23Z"
--- tags: - generated_from_trainer metrics: - wer model-index: - name: seq-xls-r-fleurs_zu-run3-asr_xh-run2 results: [] datasets: - lucas-meyer/asr_xh --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # seq-xls-r-fleurs_zu-run3-asr_xh-run2 This model is a fine-tuned version of [lucas-meyer/xls-r-fleurs_zu-run3](https://huggingface.co./lucas-meyer/xls-r-fleurs_zu-run3) on the asr_xh dataset. It achieves the following results: - Wer (Validation): 51.15% - Wer (Test): 51.32% ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer (Train) | |:-------------:|:-----:|:----:|:---------------:|:------:| | 8.7872 | 0.48 | 100 | 3.3525 | 1.0 | | 3.1413 | 0.96 | 200 | 3.0025 | 1.0 | | 1.7204 | 1.44 | 300 | 0.6932 | 0.7477 | | 0.6719 | 1.91 | 400 | 0.5336 | 0.6871 | | 0.5452 | 2.39 | 500 | 0.4911 | 0.6239 | | 0.4981 | 2.87 | 600 | 0.4559 | 0.6339 | | 0.4112 | 3.35 | 700 | 0.4295 | 0.5604 | | 0.3807 | 3.83 | 800 | 0.3999 | 0.5390 | | 0.3222 | 4.31 | 900 | 0.3903 | 0.5303 | | 0.3041 | 4.78 | 1000 | 0.3714 | 0.5125 | | 0.258 | 5.26 | 1100 | 0.4244 | 0.5368 | | 0.2356 | 5.74 | 1200 | 0.4421 | 0.5494 | | 0.2136 | 6.22 | 1300 | 0.4220 | 0.5420 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
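A minimal transcription sketch using the 🤗 `pipeline` API (the audio file path is illustrative; the input should be isiXhosa speech):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lucas-meyer/seq-xls-r-fleurs_zu-run3-asr_xh-run2",
)

# Transcribe a local audio clip (any format ffmpeg can decode).
print(asr("xhosa_sample.wav")["text"])
```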
gubartz/testea
gubartz
"2024-04-15T17:32:30Z"
107
0
transformers
[ "transformers", "safetensors", "longt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-15T17:32:02Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Eigeen/Xwin-LM-13B-V0.2-exl2
Eigeen
"2023-10-29T15:23:07Z"
19
3
transformers
[ "transformers", "llama", "text-generation", "text generation", "instruct", "en", "license:llama2", "autotrain_compatible", "region:us" ]
text-generation
"2023-10-19T05:22:16Z"
--- inference: false language: - en license: llama2 model_creator: Xwin-LM model_link: https://huggingface.co./Xwin-LM/Xwin-LM-13B-V0.2 model_name: Xwin-LM-13B-V0.2 model_type: llama pipeline_tag: text-generation quantized_by: Eigeen tags: - text generation - instruct thumbnail: null --- # Xwin-LM-13B-V0.2 - ExLlamaV2 Original model: [Xwin-LM-13B-V0.2](https://huggingface.co./Xwin-LM/Xwin-LM-13B-V0.2) # Quantizations - [3bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/main) - [4bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/4bpw) - [5bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/5bpw) - [5.5bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/5.5bpw) - [6bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/6bpw) - [8bpw](https://huggingface.co./Eigeen/Xwin-LM-13B-V0.2-exl2/tree/8bpw)
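Each quantization lives on its own branch of the repo; one way to fetch a specific one is with `huggingface_hub` (branch name per the list above, local directory illustrative):

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Eigeen/Xwin-LM-13B-V0.2-exl2",
    revision="4bpw",                    # branch from the quantization list above
    local_dir="Xwin-LM-13B-V0.2-4bpw",  # where to place the downloaded files
)
```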
Salesforce/blip2-itm-vit-g-coco
Salesforce
"2025-02-03T06:39:12Z"
1,231
1
transformers
[ "transformers", "pytorch", "safetensors", "blip-2", "zero-shot-image-classification", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
"2023-08-23T21:34:45Z"
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact peopleโ€™s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
juhw/uiop51
juhw
"2025-02-21T14:42:27Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-21T14:37:49Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zoyd/CreitinGameplays_ConvAI-9b-v2-2_5bpw_exl2
Zoyd
"2024-05-30T19:15:23Z"
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:CreitinGameplays/merged-data-v2", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
"2024-05-30T17:42:32Z"
--- license: mit datasets: - CreitinGameplays/merged-data-v2 base_model: - mistralai/Mistral-7B-v0.3 - mistralai/Mistral-7B-Instruct-v0.3 language: - en --- **Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.1.1 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-2_2bpw_exl2)**</center> | <center>2671 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-2_5bpw_exl2)**</center> | <center>2958 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-3_0bpw_exl2)**</center> | <center>3477 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-3_5bpw_exl2)**</center> | <center>3997 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-3_75bpw_exl2)**</center> | <center>4256 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-4_0bpw_exl2)**</center> | <center>4515 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-4_25bpw_exl2)**</center> | <center>4776 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-5_0bpw_exl2)**</center> | <center>5556 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-6_0bpw_exl2)**</center> | <center>6605 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-6_5bpw_exl2)**</center> | <center>7137 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co./Zoyd/CreitinGameplays_ConvAI-9b-v2-8_0bpw_exl2)**</center> | <center>7983 MB</center> | <center>8</center> | # **ConvAI-9b v2: A Conversational AI Model** ![img](https://huggingface.co./CreitinGameplays/ConvAI-9b/resolve/main/convai.png) ## **1. Model Details** * **Model Name:** ConvAI-9b v2 * **Authors:** CreitinGameplays * **Date:** May 29th, 2024 ## **2. Model Description** ConvAI-9b v2 is a fine-tuned conversational AI model with 9 billion parameters. It is based on the following models: * **Base Model:** [mistralai/Mistral-7B-v0.3](https://huggingface.co./mistralai/Mistral-7B-v0.3) * **Merged Model:** [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co./mistralai/Mistral-7B-Instruct-v0.3) ## **3. Training Data** The model was fine-tuned on a custom dataset of conversations between an AI assistant and a user. The dataset format followed a specific structure: ``` <|system|> (system prompt, e.g.: You are a helpful AI language model called ChatGPT, your goal is helping users with their questions) </s> <|user|> (user prompt) </s> ``` ## **4. Intended Uses** ConvAI-9b is intended for use in conversational AI applications, such as: * Chatbots * Virtual assistants * Interactive storytelling * Educational tools ## **5. Limitations** * Like any other language model, ConvAI-9b v2 may generate incorrect or misleading responses. * It may exhibit biases present in the training data. 
* The model's performance can be affected by the quality and format of the input text.

## **6. Evaluation**

Evaluation results are not yet available and will be added soon.
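A minimal sketch of assembling a prompt in the training-data format from section 3 (the card documents only the system and user tags; exact whitespace and any assistant tag are assumptions to verify against the model's tokenizer configuration):

```python
# Build a prompt in the documented training-data format.
# NOTE: the newline placement and anything beyond the <|system|>/<|user|>
# tags are assumptions; check the tokenizer/chat template before relying on them.
def build_prompt(system: str, user: str) -> str:
    return f"<|system|>\n{system} </s>\n<|user|>\n{user} </s>\n"

print(build_prompt(
    "You are a helpful AI language model called ChatGPT, "
    "your goal is helping users with their questions",
    "Tell me about yourself.",
))
```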
ppppppppeter/CNMB
ppppppppeter
"2023-06-06T02:39:06Z"
0
0
null
[ "region:us" ]
null
"2023-06-06T02:35:12Z"
--- title: ECG MAC emoji: ๐Ÿจ colorFrom: blue colorTo: green sdk: streamlit sdk_version: 1.19.0 app_file: app.py pinned: false --- Check out the configuration reference at https://huggingface.co./docs/hub/spaces-config-reference
nttx/81803abe-fe4d-41e0-8848-26301bd41fa3
nttx
"2025-01-14T03:45:50Z"
10
0
peft
[ "peft", "safetensors", "phi", "axolotl", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "region:us" ]
null
"2025-01-14T02:35:27Z"
--- library_name: peft license: mit base_model: microsoft/phi-1_5 tags: - axolotl - generated_from_trainer model-index: - name: 81803abe-fe4d-41e0-8848-26301bd41fa3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: microsoft/phi-1_5 bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 9e52b1647ca8ad56_train_data.json ds_type: json format: custom path: /workspace/input_data/9e52b1647ca8ad56_train_data.json type: field_input: author field_instruction: title field_output: description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: nttx/81803abe-fe4d-41e0-8848-26301bd41fa3 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/9e52b1647ca8ad56_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a05fa1bd-feca-4a09-ae0a-b6400ceec5d1 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: a05fa1bd-feca-4a09-ae0a-b6400ceec5d1 warmup_steps: 30 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 81803abe-fe4d-41e0-8848-26301bd41fa3 This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co./microsoft/phi-1_5) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 2.8537

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.95) and epsilon=1e-05 (overriding the defaults via optimizer_args)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.616         | 0.0001 | 1    | 3.3481          |
| 3.5885        | 0.0042 | 50   | 3.0556          |
| 2.8249        | 0.0084 | 100  | 2.9615          |
| 2.7647        | 0.0126 | 150  | 2.8701          |
| 3.039         | 0.0169 | 200  | 2.8537          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
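A minimal sketch for loading the LoRA adapter on top of the base model with PEFT (the title/author prompt shape follows the axolotl config above; the example text is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "nttx/81803abe-fe4d-41e0-8848-26301bd41fa3")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# Per the config, the prompt is "{title} {author}" and the target is a description.
inputs = tokenizer("A Tale of Two Cities Charles Dickens", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```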
CyberHarem/aihara_yuzu_citrus
CyberHarem
"2023-09-28T15:45:15Z"
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/aihara_yuzu_citrus", "license:mit", "region:us" ]
text-to-image
"2023-09-28T15:26:55Z"
---
license: mit
datasets:
- CyberHarem/aihara_yuzu_citrus
pipeline_tag: text-to-image
tags:
- art
---

# Lora of aihara_yuzu_citrus

This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co./deepghs).

The base model used during training is [NAI](https://huggingface.co./deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co./Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.

For example, if you want to use the model from step 9000, download `9000/aihara_yuzu_citrus.pt` as the embedding and `9000/aihara_yuzu_citrus.safetensors` as the LoRA. Using both files together, you can generate images of the desired character.

**The best step we recommend is 9000**, with a score of 0.603.

The trigger words are:

1. `aihara_yuzu_citrus`
2. `blonde_hair, green_eyes, long_hair, jewelry, earrings, brown_hair`

We regret that this model is not recommended for the following groups:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:--------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | **9000** | **0.603** | [**Download**](9000/aihara_yuzu_citrus.zip) | ![pattern_1-9000](9000/previews/pattern_1.png) | ![pattern_2-9000](9000/previews/pattern_2.png) | ![pattern_3-9000](9000/previews/pattern_3.png) | ![pattern_4-9000](9000/previews/pattern_4.png) | ![pattern_5-9000](9000/previews/pattern_5.png) | ![pattern_6-9000](9000/previews/pattern_6.png) | ![pattern_7-9000](9000/previews/pattern_7.png) | ![pattern_8-9000](9000/previews/pattern_8.png) | ![pattern_9-9000](9000/previews/pattern_9.png) | ![pattern_10-9000](9000/previews/pattern_10.png) | [<NSFW, click to see>](9000/previews/pattern_11.png) | ![pattern_12-9000](9000/previews/pattern_12.png) | ![pattern_13-9000](9000/previews/pattern_13.png) | ![pattern_14-9000](9000/previews/pattern_14.png) | ![bikini-9000](9000/previews/bikini.png) | [<NSFW, click to see>](9000/previews/bondage.png) | ![free-9000](9000/previews/free.png) | ![maid-9000](9000/previews/maid.png) | ![miko-9000](9000/previews/miko.png) | [<NSFW, click to see>](9000/previews/nude.png) | [<NSFW, click to see>](9000/previews/nude2.png) | ![suit-9000](9000/previews/suit.png) | ![yukata-9000](9000/previews/yukata.png) | | 8400 | 0.592 | [Download](8400/aihara_yuzu_citrus.zip) | ![pattern_1-8400](8400/previews/pattern_1.png) | ![pattern_2-8400](8400/previews/pattern_2.png) | ![pattern_3-8400](8400/previews/pattern_3.png) | ![pattern_4-8400](8400/previews/pattern_4.png) | ![pattern_5-8400](8400/previews/pattern_5.png) | ![pattern_6-8400](8400/previews/pattern_6.png) | ![pattern_7-8400](8400/previews/pattern_7.png) | ![pattern_8-8400](8400/previews/pattern_8.png) | ![pattern_9-8400](8400/previews/pattern_9.png) | ![pattern_10-8400](8400/previews/pattern_10.png) | [<NSFW, click to see>](8400/previews/pattern_11.png) | ![pattern_12-8400](8400/previews/pattern_12.png) | ![pattern_13-8400](8400/previews/pattern_13.png) | ![pattern_14-8400](8400/previews/pattern_14.png) | ![bikini-8400](8400/previews/bikini.png) | [<NSFW, click to see>](8400/previews/bondage.png) | ![free-8400](8400/previews/free.png) | 
![maid-8400](8400/previews/maid.png) | ![miko-8400](8400/previews/miko.png) | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) | ![suit-8400](8400/previews/suit.png) | ![yukata-8400](8400/previews/yukata.png) | | 7800 | 0.539 | [Download](7800/aihara_yuzu_citrus.zip) | ![pattern_1-7800](7800/previews/pattern_1.png) | ![pattern_2-7800](7800/previews/pattern_2.png) | ![pattern_3-7800](7800/previews/pattern_3.png) | ![pattern_4-7800](7800/previews/pattern_4.png) | ![pattern_5-7800](7800/previews/pattern_5.png) | ![pattern_6-7800](7800/previews/pattern_6.png) | ![pattern_7-7800](7800/previews/pattern_7.png) | ![pattern_8-7800](7800/previews/pattern_8.png) | ![pattern_9-7800](7800/previews/pattern_9.png) | ![pattern_10-7800](7800/previews/pattern_10.png) | [<NSFW, click to see>](7800/previews/pattern_11.png) | ![pattern_12-7800](7800/previews/pattern_12.png) | ![pattern_13-7800](7800/previews/pattern_13.png) | ![pattern_14-7800](7800/previews/pattern_14.png) | ![bikini-7800](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) | ![free-7800](7800/previews/free.png) | ![maid-7800](7800/previews/maid.png) | ![miko-7800](7800/previews/miko.png) | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) | ![suit-7800](7800/previews/suit.png) | ![yukata-7800](7800/previews/yukata.png) | | 7200 | 0.560 | [Download](7200/aihara_yuzu_citrus.zip) | ![pattern_1-7200](7200/previews/pattern_1.png) | ![pattern_2-7200](7200/previews/pattern_2.png) | ![pattern_3-7200](7200/previews/pattern_3.png) | ![pattern_4-7200](7200/previews/pattern_4.png) | ![pattern_5-7200](7200/previews/pattern_5.png) | ![pattern_6-7200](7200/previews/pattern_6.png) | ![pattern_7-7200](7200/previews/pattern_7.png) | ![pattern_8-7200](7200/previews/pattern_8.png) | ![pattern_9-7200](7200/previews/pattern_9.png) | ![pattern_10-7200](7200/previews/pattern_10.png) | [<NSFW, click to see>](7200/previews/pattern_11.png) | ![pattern_12-7200](7200/previews/pattern_12.png) | ![pattern_13-7200](7200/previews/pattern_13.png) | ![pattern_14-7200](7200/previews/pattern_14.png) | ![bikini-7200](7200/previews/bikini.png) | [<NSFW, click to see>](7200/previews/bondage.png) | ![free-7200](7200/previews/free.png) | ![maid-7200](7200/previews/maid.png) | ![miko-7200](7200/previews/miko.png) | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) | ![suit-7200](7200/previews/suit.png) | ![yukata-7200](7200/previews/yukata.png) | | 6600 | 0.538 | [Download](6600/aihara_yuzu_citrus.zip) | ![pattern_1-6600](6600/previews/pattern_1.png) | ![pattern_2-6600](6600/previews/pattern_2.png) | ![pattern_3-6600](6600/previews/pattern_3.png) | ![pattern_4-6600](6600/previews/pattern_4.png) | ![pattern_5-6600](6600/previews/pattern_5.png) | ![pattern_6-6600](6600/previews/pattern_6.png) | ![pattern_7-6600](6600/previews/pattern_7.png) | ![pattern_8-6600](6600/previews/pattern_8.png) | ![pattern_9-6600](6600/previews/pattern_9.png) | ![pattern_10-6600](6600/previews/pattern_10.png) | [<NSFW, click to see>](6600/previews/pattern_11.png) | ![pattern_12-6600](6600/previews/pattern_12.png) | ![pattern_13-6600](6600/previews/pattern_13.png) | ![pattern_14-6600](6600/previews/pattern_14.png) | ![bikini-6600](6600/previews/bikini.png) | [<NSFW, click to see>](6600/previews/bondage.png) | ![free-6600](6600/previews/free.png) | ![maid-6600](6600/previews/maid.png) | ![miko-6600](6600/previews/miko.png) | [<NSFW, click to 
see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) | ![suit-6600](6600/previews/suit.png) | ![yukata-6600](6600/previews/yukata.png) | | 6000 | 0.526 | [Download](6000/aihara_yuzu_citrus.zip) | ![pattern_1-6000](6000/previews/pattern_1.png) | ![pattern_2-6000](6000/previews/pattern_2.png) | ![pattern_3-6000](6000/previews/pattern_3.png) | ![pattern_4-6000](6000/previews/pattern_4.png) | ![pattern_5-6000](6000/previews/pattern_5.png) | ![pattern_6-6000](6000/previews/pattern_6.png) | ![pattern_7-6000](6000/previews/pattern_7.png) | ![pattern_8-6000](6000/previews/pattern_8.png) | ![pattern_9-6000](6000/previews/pattern_9.png) | ![pattern_10-6000](6000/previews/pattern_10.png) | [<NSFW, click to see>](6000/previews/pattern_11.png) | ![pattern_12-6000](6000/previews/pattern_12.png) | ![pattern_13-6000](6000/previews/pattern_13.png) | ![pattern_14-6000](6000/previews/pattern_14.png) | ![bikini-6000](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) | ![free-6000](6000/previews/free.png) | ![maid-6000](6000/previews/maid.png) | ![miko-6000](6000/previews/miko.png) | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) | ![suit-6000](6000/previews/suit.png) | ![yukata-6000](6000/previews/yukata.png) | | 5400 | 0.514 | [Download](5400/aihara_yuzu_citrus.zip) | ![pattern_1-5400](5400/previews/pattern_1.png) | ![pattern_2-5400](5400/previews/pattern_2.png) | ![pattern_3-5400](5400/previews/pattern_3.png) | ![pattern_4-5400](5400/previews/pattern_4.png) | ![pattern_5-5400](5400/previews/pattern_5.png) | ![pattern_6-5400](5400/previews/pattern_6.png) | ![pattern_7-5400](5400/previews/pattern_7.png) | ![pattern_8-5400](5400/previews/pattern_8.png) | ![pattern_9-5400](5400/previews/pattern_9.png) | ![pattern_10-5400](5400/previews/pattern_10.png) | [<NSFW, click to see>](5400/previews/pattern_11.png) | ![pattern_12-5400](5400/previews/pattern_12.png) | ![pattern_13-5400](5400/previews/pattern_13.png) | ![pattern_14-5400](5400/previews/pattern_14.png) | ![bikini-5400](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) | ![free-5400](5400/previews/free.png) | ![maid-5400](5400/previews/maid.png) | ![miko-5400](5400/previews/miko.png) | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) | ![suit-5400](5400/previews/suit.png) | ![yukata-5400](5400/previews/yukata.png) | | 4800 | 0.533 | [Download](4800/aihara_yuzu_citrus.zip) | ![pattern_1-4800](4800/previews/pattern_1.png) | ![pattern_2-4800](4800/previews/pattern_2.png) | ![pattern_3-4800](4800/previews/pattern_3.png) | ![pattern_4-4800](4800/previews/pattern_4.png) | ![pattern_5-4800](4800/previews/pattern_5.png) | ![pattern_6-4800](4800/previews/pattern_6.png) | ![pattern_7-4800](4800/previews/pattern_7.png) | ![pattern_8-4800](4800/previews/pattern_8.png) | ![pattern_9-4800](4800/previews/pattern_9.png) | ![pattern_10-4800](4800/previews/pattern_10.png) | [<NSFW, click to see>](4800/previews/pattern_11.png) | ![pattern_12-4800](4800/previews/pattern_12.png) | ![pattern_13-4800](4800/previews/pattern_13.png) | ![pattern_14-4800](4800/previews/pattern_14.png) | ![bikini-4800](4800/previews/bikini.png) | [<NSFW, click to see>](4800/previews/bondage.png) | ![free-4800](4800/previews/free.png) | ![maid-4800](4800/previews/maid.png) | ![miko-4800](4800/previews/miko.png) | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) | 
![suit-4800](4800/previews/suit.png) | ![yukata-4800](4800/previews/yukata.png) | | 4200 | 0.557 | [Download](4200/aihara_yuzu_citrus.zip) | ![pattern_1-4200](4200/previews/pattern_1.png) | ![pattern_2-4200](4200/previews/pattern_2.png) | ![pattern_3-4200](4200/previews/pattern_3.png) | ![pattern_4-4200](4200/previews/pattern_4.png) | ![pattern_5-4200](4200/previews/pattern_5.png) | ![pattern_6-4200](4200/previews/pattern_6.png) | ![pattern_7-4200](4200/previews/pattern_7.png) | ![pattern_8-4200](4200/previews/pattern_8.png) | ![pattern_9-4200](4200/previews/pattern_9.png) | ![pattern_10-4200](4200/previews/pattern_10.png) | [<NSFW, click to see>](4200/previews/pattern_11.png) | ![pattern_12-4200](4200/previews/pattern_12.png) | ![pattern_13-4200](4200/previews/pattern_13.png) | ![pattern_14-4200](4200/previews/pattern_14.png) | ![bikini-4200](4200/previews/bikini.png) | [<NSFW, click to see>](4200/previews/bondage.png) | ![free-4200](4200/previews/free.png) | ![maid-4200](4200/previews/maid.png) | ![miko-4200](4200/previews/miko.png) | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) | ![suit-4200](4200/previews/suit.png) | ![yukata-4200](4200/previews/yukata.png) | | 3600 | 0.460 | [Download](3600/aihara_yuzu_citrus.zip) | ![pattern_1-3600](3600/previews/pattern_1.png) | ![pattern_2-3600](3600/previews/pattern_2.png) | ![pattern_3-3600](3600/previews/pattern_3.png) | ![pattern_4-3600](3600/previews/pattern_4.png) | ![pattern_5-3600](3600/previews/pattern_5.png) | ![pattern_6-3600](3600/previews/pattern_6.png) | ![pattern_7-3600](3600/previews/pattern_7.png) | ![pattern_8-3600](3600/previews/pattern_8.png) | ![pattern_9-3600](3600/previews/pattern_9.png) | ![pattern_10-3600](3600/previews/pattern_10.png) | [<NSFW, click to see>](3600/previews/pattern_11.png) | ![pattern_12-3600](3600/previews/pattern_12.png) | ![pattern_13-3600](3600/previews/pattern_13.png) | ![pattern_14-3600](3600/previews/pattern_14.png) | ![bikini-3600](3600/previews/bikini.png) | [<NSFW, click to see>](3600/previews/bondage.png) | ![free-3600](3600/previews/free.png) | ![maid-3600](3600/previews/maid.png) | ![miko-3600](3600/previews/miko.png) | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) | ![suit-3600](3600/previews/suit.png) | ![yukata-3600](3600/previews/yukata.png) | | 3000 | 0.328 | [Download](3000/aihara_yuzu_citrus.zip) | ![pattern_1-3000](3000/previews/pattern_1.png) | ![pattern_2-3000](3000/previews/pattern_2.png) | ![pattern_3-3000](3000/previews/pattern_3.png) | ![pattern_4-3000](3000/previews/pattern_4.png) | ![pattern_5-3000](3000/previews/pattern_5.png) | ![pattern_6-3000](3000/previews/pattern_6.png) | ![pattern_7-3000](3000/previews/pattern_7.png) | ![pattern_8-3000](3000/previews/pattern_8.png) | ![pattern_9-3000](3000/previews/pattern_9.png) | ![pattern_10-3000](3000/previews/pattern_10.png) | [<NSFW, click to see>](3000/previews/pattern_11.png) | ![pattern_12-3000](3000/previews/pattern_12.png) | ![pattern_13-3000](3000/previews/pattern_13.png) | ![pattern_14-3000](3000/previews/pattern_14.png) | ![bikini-3000](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) | ![free-3000](3000/previews/free.png) | ![maid-3000](3000/previews/maid.png) | ![miko-3000](3000/previews/miko.png) | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) | ![suit-3000](3000/previews/suit.png) | ![yukata-3000](3000/previews/yukata.png) | | 2400 | 0.427 | 
[Download](2400/aihara_yuzu_citrus.zip) | ![pattern_1-2400](2400/previews/pattern_1.png) | ![pattern_2-2400](2400/previews/pattern_2.png) | ![pattern_3-2400](2400/previews/pattern_3.png) | ![pattern_4-2400](2400/previews/pattern_4.png) | ![pattern_5-2400](2400/previews/pattern_5.png) | ![pattern_6-2400](2400/previews/pattern_6.png) | ![pattern_7-2400](2400/previews/pattern_7.png) | ![pattern_8-2400](2400/previews/pattern_8.png) | ![pattern_9-2400](2400/previews/pattern_9.png) | ![pattern_10-2400](2400/previews/pattern_10.png) | [<NSFW, click to see>](2400/previews/pattern_11.png) | ![pattern_12-2400](2400/previews/pattern_12.png) | ![pattern_13-2400](2400/previews/pattern_13.png) | ![pattern_14-2400](2400/previews/pattern_14.png) | ![bikini-2400](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) | ![free-2400](2400/previews/free.png) | ![maid-2400](2400/previews/maid.png) | ![miko-2400](2400/previews/miko.png) | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) | ![suit-2400](2400/previews/suit.png) | ![yukata-2400](2400/previews/yukata.png) | | 1800 | 0.395 | [Download](1800/aihara_yuzu_citrus.zip) | ![pattern_1-1800](1800/previews/pattern_1.png) | ![pattern_2-1800](1800/previews/pattern_2.png) | ![pattern_3-1800](1800/previews/pattern_3.png) | ![pattern_4-1800](1800/previews/pattern_4.png) | ![pattern_5-1800](1800/previews/pattern_5.png) | ![pattern_6-1800](1800/previews/pattern_6.png) | ![pattern_7-1800](1800/previews/pattern_7.png) | ![pattern_8-1800](1800/previews/pattern_8.png) | ![pattern_9-1800](1800/previews/pattern_9.png) | ![pattern_10-1800](1800/previews/pattern_10.png) | [<NSFW, click to see>](1800/previews/pattern_11.png) | ![pattern_12-1800](1800/previews/pattern_12.png) | ![pattern_13-1800](1800/previews/pattern_13.png) | ![pattern_14-1800](1800/previews/pattern_14.png) | ![bikini-1800](1800/previews/bikini.png) | [<NSFW, click to see>](1800/previews/bondage.png) | ![free-1800](1800/previews/free.png) | ![maid-1800](1800/previews/maid.png) | ![miko-1800](1800/previews/miko.png) | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) | ![suit-1800](1800/previews/suit.png) | ![yukata-1800](1800/previews/yukata.png) | | 1200 | 0.288 | [Download](1200/aihara_yuzu_citrus.zip) | ![pattern_1-1200](1200/previews/pattern_1.png) | ![pattern_2-1200](1200/previews/pattern_2.png) | ![pattern_3-1200](1200/previews/pattern_3.png) | ![pattern_4-1200](1200/previews/pattern_4.png) | ![pattern_5-1200](1200/previews/pattern_5.png) | ![pattern_6-1200](1200/previews/pattern_6.png) | ![pattern_7-1200](1200/previews/pattern_7.png) | ![pattern_8-1200](1200/previews/pattern_8.png) | ![pattern_9-1200](1200/previews/pattern_9.png) | ![pattern_10-1200](1200/previews/pattern_10.png) | [<NSFW, click to see>](1200/previews/pattern_11.png) | ![pattern_12-1200](1200/previews/pattern_12.png) | ![pattern_13-1200](1200/previews/pattern_13.png) | ![pattern_14-1200](1200/previews/pattern_14.png) | ![bikini-1200](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/bondage.png) | ![free-1200](1200/previews/free.png) | ![maid-1200](1200/previews/maid.png) | ![miko-1200](1200/previews/miko.png) | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) | ![suit-1200](1200/previews/suit.png) | ![yukata-1200](1200/previews/yukata.png) | | 600 | 0.221 | [Download](600/aihara_yuzu_citrus.zip) | ![pattern_1-600](600/previews/pattern_1.png) | 
![pattern_2-600](600/previews/pattern_2.png) | ![pattern_3-600](600/previews/pattern_3.png) | ![pattern_4-600](600/previews/pattern_4.png) | ![pattern_5-600](600/previews/pattern_5.png) | ![pattern_6-600](600/previews/pattern_6.png) | ![pattern_7-600](600/previews/pattern_7.png) | ![pattern_8-600](600/previews/pattern_8.png) | ![pattern_9-600](600/previews/pattern_9.png) | ![pattern_10-600](600/previews/pattern_10.png) | [<NSFW, click to see>](600/previews/pattern_11.png) | ![pattern_12-600](600/previews/pattern_12.png) | ![pattern_13-600](600/previews/pattern_13.png) | ![pattern_14-600](600/previews/pattern_14.png) | ![bikini-600](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/bondage.png) | ![free-600](600/previews/free.png) | ![maid-600](600/previews/maid.png) | ![miko-600](600/previews/miko.png) | [<NSFW, click to see>](600/previews/nude.png) | [<NSFW, click to see>](600/previews/nude2.png) | ![suit-600](600/previews/suit.png) | ![yukata-600](600/previews/yukata.png) |
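The per-step files can also be fetched programmatically; a minimal sketch for the recommended step 9000 (file paths per the usage notes and table above):

```python
from huggingface_hub import hf_hub_download

repo = "CyberHarem/aihara_yuzu_citrus"
embedding_path = hf_hub_download(repo_id=repo, filename="9000/aihara_yuzu_citrus.pt")
lora_path = hf_hub_download(repo_id=repo, filename="9000/aihara_yuzu_citrus.safetensors")
print(embedding_path, lora_path)
```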
stablediffusionapi/cheyenne
stablediffusionapi
"2024-06-13T08:11:36Z"
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-13T08:03:04Z"
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# CHEYENNE API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/293008621718264103.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.

Replace the key in the code below and set **model_id** to `"cheyenne"`.

Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)

Try model for free: [Generate Images](https://modelslab.com/models/cheyenne)

Model link: [View model](https://modelslab.com/models/cheyenne)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "cheyenne",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
Karim-Gamal/XLM-Roberta-finetuned-emojis-1-client-toxic-cen-2
Karim-Gamal
"2023-03-26T02:57:25Z"
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "en", "es", "it", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-03-07T02:27:57Z"
---
license: apache-2.0
language:
- en
- es
- it
- fr
metrics:
- f1
---

# Federated Learning Based Multilingual Emoji Prediction

This repository contains code for training and evaluating transformer-based models for uni/multilingual emoji prediction in clean and attack scenarios using Federated Learning. This work is described in the paper "Federated Learning-Based Multilingual Emoji Prediction in Clean and Attack Scenarios."

# Abstract

Federated learning is a growing field in the machine learning community due to its decentralized and private design. Model training in federated learning is distributed over multiple clients, giving access to lots of client data while maintaining privacy. Then, a server aggregates the training done on these multiple clients without access to their data; such data can include emojis, which are widely used in social media services and instant messaging platforms to express users' sentiments. This paper proposes federated learning-based multilingual emoji prediction in both clean and attack scenarios. Emoji prediction data have been crawled from both Twitter and SemEval emoji datasets. This data is used to train and evaluate different transformer model sizes, including a sparsely activated transformer, with either the assumption of clean data in all clients or poisoned data via label flipping attack in some clients. Experimental results on these models show that federated learning in either clean or attacked scenarios performs similarly to centralized training in multilingual emoji prediction on seen and unseen languages under different data sources and distributions. Our trained transformers perform better than other techniques on the SemEval emoji dataset, in addition to the privacy as well as distributed benefits of federated learning.

# Performance

> * Acc : 47.710 %
> * Mac-F1 : 33.991 %
> * Also see our [GitHub Repo](https://github.com/kareemgamalmahmoud/FEDERATED-LEARNING-BASED-MULTILINGUAL-EMOJI-PREDICTION-IN-CLEAN-AND-ATTACK-SCENARIOS)

# Dependencies

> * Python 3.6+
> * PyTorch 1.7.0+
> * Transformers 4.0.0+

# Usage

> To use the model, first install the `transformers` package from Hugging Face:

```bash
pip install transformers
```

> Then, you can load the model and tokenizer using the following code:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
import urllib.request
import csv
```

```python
MODEL = "Karim-Gamal/XLM-Roberta-finetuned-emojis-1-client-toxic-cen-2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
```

> Once you have the tokenizer and model, you can preprocess your text and pass it to the model for prediction:

```python
# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

text = "Hello world"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
```

> The `scores` variable contains the model's raw logits for each of the possible emoji labels; apply a softmax to turn them into probabilities.
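The upstream cardiffnlp example normalizes these logits with a softmax before ranking them. A minimal sketch of that step (assuming `scipy` is installed, which is not listed in the dependencies above):

```python
from scipy.special import softmax

# turn the raw logits into a probability distribution over the emoji labels
scores = softmax(scores)
```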
To get the top k predictions, you can use the following code:

```python
# download label mapping
labels = []
mapping_link = "https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/emoji/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
    html = f.read().decode('utf-8').split("\n")
    csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]

k = 3  # number of top predictions to show
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(k):
    l = labels[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```

## Note: this is the source for that code: [Link](https://huggingface.co./cardiffnlp/twitter-roberta-base-emoji)
mradermacher/Xwin-LM-13B-V0.2-GGUF
mradermacher
"2024-12-16T10:35:58Z"
19
0
transformers
[ "transformers", "gguf", "en", "base_model:Xwin-LM/Xwin-LM-13B-V0.2", "base_model:quantized:Xwin-LM/Xwin-LM-13B-V0.2", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-12-16T09:43:18Z"
--- base_model: Xwin-LM/Xwin-LM-13B-V0.2 language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co./Xwin-LM/Xwin-LM-13B-V0.2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co./mradermacher/Xwin-LM-13B-V0.2-GGUF/resolve/main/Xwin-LM-13B-V0.2.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co./mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
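If you prefer a script over a GUI, here is a minimal sketch of loading one of these quants with `llama-cpp-python`; the tool choice, local file name, and generation settings are illustrative assumptions rather than part of this repository:

```python
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# point model_path at a quant downloaded from this repo
llm = Llama(model_path="Xwin-LM-13B-V0.2.Q4_K_M.gguf", n_ctx=4096)

out = llm("Briefly explain what GGUF quantization does.", max_tokens=64)
print(out["choices"][0]["text"])
```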
RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf
RichardErkhov
"2024-07-19T14:46:50Z"
24
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-07-19T11:20:19Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) omost-dolphin-2.9-llama3-8b - GGUF - Model creator: https://huggingface.co./lllyasviel/ - Original model: https://huggingface.co./lllyasviel/omost-dolphin-2.9-llama3-8b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [omost-dolphin-2.9-llama3-8b.Q2_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q2_K.gguf) | Q2_K | 2.96GB | | [omost-dolphin-2.9-llama3-8b.IQ3_XS.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [omost-dolphin-2.9-llama3-8b.IQ3_S.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ3_S.gguf) | IQ3_S | 3.43GB | | [omost-dolphin-2.9-llama3-8b.Q3_K_S.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [omost-dolphin-2.9-llama3-8b.IQ3_M.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ3_M.gguf) | IQ3_M | 3.52GB | | [omost-dolphin-2.9-llama3-8b.Q3_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q3_K.gguf) | Q3_K | 3.74GB | | [omost-dolphin-2.9-llama3-8b.Q3_K_M.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [omost-dolphin-2.9-llama3-8b.Q3_K_L.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [omost-dolphin-2.9-llama3-8b.IQ4_XS.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [omost-dolphin-2.9-llama3-8b.Q4_0.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_0.gguf) | Q4_0 | 4.34GB | | [omost-dolphin-2.9-llama3-8b.IQ4_NL.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [omost-dolphin-2.9-llama3-8b.Q4_K_S.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [omost-dolphin-2.9-llama3-8b.Q4_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_K.gguf) | Q4_K | 4.58GB | | [omost-dolphin-2.9-llama3-8b.Q4_K_M.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | 
[omost-dolphin-2.9-llama3-8b.Q4_1.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q4_1.gguf) | Q4_1 | 4.78GB | | [omost-dolphin-2.9-llama3-8b.Q5_0.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_0.gguf) | Q5_0 | 5.21GB | | [omost-dolphin-2.9-llama3-8b.Q5_K_S.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [omost-dolphin-2.9-llama3-8b.Q5_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_K.gguf) | Q5_K | 5.34GB | | [omost-dolphin-2.9-llama3-8b.Q5_K_M.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [omost-dolphin-2.9-llama3-8b.Q5_1.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q5_1.gguf) | Q5_1 | 5.65GB | | [omost-dolphin-2.9-llama3-8b.Q6_K.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q6_K.gguf) | Q6_K | 6.14GB | | [omost-dolphin-2.9-llama3-8b.Q8_0.gguf](https://huggingface.co./RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf/blob/main/omost-dolphin-2.9-llama3-8b.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- tags: - pytorch - trl - sft inference: false --- omost-dolphin-2.9-llama3-8b is Omost's llama3-8b model with dolphin-2.9 instruct pretraining in fp16.
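For completeness, a minimal sketch of fetching a single quant from this repo with `huggingface_hub` (the chosen filename is just one entry from the table above):

```python
from huggingface_hub import hf_hub_download

# downloads the file into the local Hugging Face cache and returns its path
path = hf_hub_download(
    repo_id="RichardErkhov/lllyasviel_-_omost-dolphin-2.9-llama3-8b-gguf",
    filename="omost-dolphin-2.9-llama3-8b.Q4_K_M.gguf",
)
print(path)
```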
MidnightRunner/MIDNIGHT_NAI-XL_vPredV1
MidnightRunner
"2025-02-18T18:46:42Z"
181
1
diffusers
[ "diffusers", "SDXL", "noobai-XL", "Vpred-1.0", "text-to-image", "ComfyUI", "Automatic1111", "Diffuser", "en", "dataset:LaxharLab/NoobAI-XL-dataset", "base_model:Laxhar/noobai-XL-Vpred-1.0", "base_model:finetune:Laxhar/noobai-XL-Vpred-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2025-02-02T01:09:01Z"
--- license: creativeml-openrail-m language: - en base_model: Laxhar/noobai-XL-Vpred-1.0 tags: - SDXL - noobai-XL - Vpred-1.0 - text-to-image - ComfyUI - Automatic1111 - Diffuser pipeline_tag: text-to-image library_name: diffusers datasets: - LaxharLab/NoobAI-XL-dataset metrics: - FID - IS widget: - text: >- high quality, masterpiece, detailed, 8K, artist:nyantcha, evangeline_(nyantcha), vibrant surreal artwork, rainbow, light particles, from above, volumetric lighting, ((adult girl:1.2)), natural huge breasts, woman dressed as white rabbit, sleek pure white outfit, delicate white bunny ears, braid, plump, skindentation, huge breasts, falling into swirling black hole, seen from behind, glancing over shoulder, alluring mysterious expression, dress, zipper, zipper pull, detached sleeves, breasts apart (shoulder straps), buckles, long dress, swirling cosmic patterns, glowing particles, dramatic lighting, vibrant neon pink and blue tones, hyper-detailed, cinematic depth of field, smooth texture, film grain, chromatic aberration, high contrast, limited palette parameters: negative_prompt: >- lowres, worst quality, low quality, bad anatomy, bad hands, 4koma, comic, greyscale, censored, jpeg artifacts, overly saturated, overly vivid, (multiple views:1.1), (bad:1.05), fewer, extra, missing, worst quality, jpeg artifacts, bad quality, watermark, unfinished, displeasing, sepia, sketch, flat color, signature, artistic error, username, scan, (blurry, lowres, worst quality, (low quality:1.1), ugly, (bad anatomy:1.05), artist name, (patreon username:1.2) output: url: stand_on_ripplewater.jpeg --- # MIDNIGHT_NAI-XL_vPredV1 **Model Type:** Diffusion-based text-to-image generative model **Base Model:** SDXL 1.0 & Laxhar/noobai-XL-Vpred-1.0 **License:** [CreativeML Open RAIL++-M](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE) ## Model Description MIDNIGHT_NAI-XL_vPredV1 is a specialized fine-tuning of the NoobAI-XL (NAI-XL) model, designed to enhance anatomical precision, compositional coherence, and versatile style integration. This model excels in generating high-quality images with vibrant colors while minimizing overexposure. ## Usage Recommendations ### **Sampling Methods** MIDNIGHT_NAI-XL_vPred is optimized specifically for **Euler (normal)**. Use **ModelSamplingDiscrete** with **V-prediction** and **ZsNR set to true**. Other samplers may not provide stable results, and **V-prediction models do not support other samplers**. ### **CFG Scaling** **Dynamic CFG Plugin is bypassed as a backup for potential future needs.** Manually adjust **CFG scaling within a range of 5-6** for the best balance. For optimal results, a **preferred setting of 5.3** is recommended. ### **Custom Workflow** For an optimized generation process, use the [**MIDNIGHT1111_Chasm 2025-02-04**](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%202025-02-04.json) ComfyUI workflow. This workflow is specifically designed to **leverage the strengths of MIDNIGHT_NAI-XL_vPred**, providing a streamlined and efficient image generation pipeline. ## MIDNIGHT1111_Chasm For an optimized generation process, consider using the custom workflow [MIDNIGHT1111_Chasm 02-05-25](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json). This workflow is tailored to leverage the strengths of the MIDNIGHT_NAI-XL_vPredV1 model, providing a streamlined and efficient image generation pipeline. 
![MIDNIGHT1111_Chasm Workflow](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/resolve/main/MIDNIGHT1111_Chasm%20Workflow.png)

*Note: The above image is a preview of the `MIDNIGHT1111_Chasm` workflow.*

### Method I: reForge without MIDNIGHT1111_Chasm Workflow

1. **Installation:** If not already installed, follow the instructions in the [reForge repository](https://github.com/Panchovix/stable-diffusion-webui-reForge) to set up.
2. **Usage:** Launch WebUI and use the model as usual.

### Method II: ComfyUI *with* MIDNIGHT1111_Chasm Workflow

1. **Installation:** Follow the setup instructions in the [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI).
2. **Workflow Sample:** Utilize the provided [ComfyUI workflow sample](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json) for guidance.

### Method III: WebUI without MIDNIGHT1111_Chasm Workflow

1. **Installation:** Follow the instructions in the [WebUI repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to set up.
2. **Navigate to the WebUI Directory:** Before updating or switching branches, ensure you're inside the `stable-diffusion-webui` folder:
   ```bash
   cd stable-diffusion-webui
   ```
3. **Switch to the Development Branch (Optional, for testing new features):** If you want to use the latest features from the development branch, run:
   ```bash
   git switch dev
   git pull
   ```
   ⚠️ **Note:** The `dev` branch may contain bugs. If stability is your priority, it's best to stay on the `main` branch.
4. **Update WebUI (Main or Dev Branch):** To pull the latest updates while on either branch, run:
   ```bash
   git pull
   ```
   🔄 **Restart WebUI after updating to apply changes.**
5. **Configuration:** Ensure you're using a stable branch, as the dev branch may contain bugs.
### Method IV: Diffusers without MIDNIGHT1111_Chasm Workflow

```python
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerDiscreteScheduler

ckpt_path = "/path/to/model.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")

prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=5,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```

## e621/Danbooru Artist Wildcards for A1111 & ComfyUI Enclosed in CSV & TXT Formats

To enhance the model's performance and specificity, the following trigger word lists in CSV format are included:

- [`danbooru_artist_webui.csv`](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_webui.csv)
- [`danbooru_character_webui.csv`](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_webui.csv)
- [`e621_artist_webui.csv`](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_webui.csv)
- [`e621_character_webui.csv`](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_webui.csv)

These lists provide recognized tags for various artists and characters, facilitating more accurate and tailored image generation.

The wildcard file in TXT format is included and designed for seamless integration with **AUTOMATIC1111** and **ComfyUI**, optimized for dynamic prompt generation using artist data from **e621** and **Danbooru**.

- **TXT Format:** Sanitized artist tags by removing URLs and converted from `.csv` to `.txt` format for improved readability across different extensions.
- **Dual Dataset Support:** Supports both e621 and Danbooru datasets to enhance art style diversity.
- **Smooth Randomization:** Structured with trailing commas for seamless wildcard cycling during prompt generation.

## How to Use Wildcards

### For A1111

1. **Install:** [stable-diffusion-webui-wildcards](https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards)
2. **Place the `.txt` file in:**
   ```
   /A1111/extensions/stable-diffusion-webui-wildcards
   ```
3. **Use in your prompt like this:**
   ```
   __e621_artist_wildcard__, very awa, masterpiece, best quality, amazing quality
   ```
   ```
   __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality
   ```
   ```
   __e621_artist_wildcard__, __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality
   ```

### For ComfyUI

1. **Install:** [ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
2. **Place the `.txt` file in:**
   ```
   /ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards
   ```
   or
   ```
   /ComfyUI/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards
   ```
3. **Use the wildcard node to trigger dynamic randomization in your workflows.**

## What's Included in Wildcards

Clean, TXT-formatted, artist-focused wildcard files ready for dynamic prompt workflows in A1111 and ComfyUI:

- [danbooru_artist_wildcard.txt](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_wildcard.txt)
- [danbooru_character_wildcard.txt](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_wildcard.txt)
- [e621_artist_wildcard.txt](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_wildcard.txt)
- [e621_character_wildcard.txt](https://huggingface.co./MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_wildcard.txt)

## Acknowledgments

Special thanks to:

- **Development Team:** Laxhar Lab
- **Coding Contributions:** Euge
- **e621/Danbooru Wildcards:** [ipsylon0000](https://civitai.com/user/ipsylon0000)
- **Community Support:** Various contributors

## Additional Resources

- **Guidebook for NoobAI XL:** [English Version](https://civitai.com/articles/8962)
- **Recommended LoRa List for NoobAI XL:** [Resource Link](https://fcnk27d6mpa5.feishu.cn/wiki/IBVGwvVGViazLYkMgVEcvbklnge)
- **Fixing Black Images in ComfyUI on macOS (M1/M2):** [Read the Article](https://civitai.com/articles/11106)
- **Creative Solutions and Services:** [Magnabos.co](https://magnabos.co/)

## License

This model is licensed under the [CreativeML Open RAIL++-M License](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). By using this model, you agree to the terms and conditions outlined in the license.
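Returning to the wildcard files above: the same randomization can be reproduced outside A1111/ComfyUI in plain Python. A minimal sketch, assuming one of the wildcard `.txt` files sits in the working directory and that entries carry the trailing commas described above:

```python
import random

# read one artist tag per line, dropping blanks and the trailing commas
with open("e621_artist_wildcard.txt", encoding="utf-8") as f:
    artists = [line.strip().rstrip(",") for line in f if line.strip()]

# splice a random artist into the quality-tag template used in the examples above
prompt = f"{random.choice(artists)}, very awa, masterpiece, best quality, amazing quality"
print(prompt)
```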
ermi8/amharic-hate-speech-detection-mBERT
ermi8
"2024-12-13T11:26:22Z"
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-07T13:02:39Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/calme-2.3-rys-78b-i1-GGUF
mradermacher
"2025-02-06T17:11:48Z"
114
0
transformers
[ "transformers", "gguf", "chat", "qwen", "qwen2", "finetune", "chatml", "en", "dataset:MaziyarPanahi/truthy-dpo-v0.1-axolotl", "base_model:MaziyarPanahi/calme-2.3-rys-78b", "base_model:quantized:MaziyarPanahi/calme-2.3-rys-78b", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2024-10-23T08:29:51Z"
--- base_model: MaziyarPanahi/calme-2.3-rys-78b datasets: - MaziyarPanahi/truthy-dpo-v0.1-axolotl language: - en library_name: transformers license: mit model_creator: MaziyarPanahi model_name: calme-2.3-rys-78b quantized_by: mradermacher tags: - chat - qwen - qwen2 - finetune - chatml --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co./MaziyarPanahi/calme-2.3-rys-78b <!-- provided-files --> static quants are available at https://huggingface.co./mradermacher/calme-2.3-rys-78b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ1_S.gguf) | i1-IQ1_S | 24.4 | for the desperate | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ1_M.gguf) | i1-IQ1_M | 25.5 | mostly desperate | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 29.1 | | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ2_S.gguf) | i1-IQ2_S | 30.0 | | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ2_M.gguf) | i1-IQ2_M | 31.5 | | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q2_K.gguf) | i1-Q2_K | 31.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 34.1 | lower quality | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 35.2 | | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ3_S.gguf) | i1-IQ3_S | 37.0 | beats Q3_K* | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ3_M.gguf) | i1-IQ3_M | 38.0 | | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.4 | IQ3_S probably better | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 42.4 | IQ3_M probably better | | 
[GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 42.7 | | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q4_0.gguf) | i1-Q4_0 | 44.4 | fast, low quality | | [GGUF](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.0 | optimal size/speed/quality | | [PART 1](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 50.8 | fast, recommended | | [PART 1](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 55.2 | | | [PART 1](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 58.4 | | | [PART 1](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co./mradermacher/calme-2.3-rys-78b-i1-GGUF/resolve/main/calme-2.3-rys-78b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co./mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co./nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
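The split quants above (PART 1/PART 2) must be joined back into a single file before loading. A minimal Python sketch, assuming both parts have been downloaded into the current directory:

```python
import shutil
from pathlib import Path

# concatenate ...part1of2, ...part2of2 in order into one .gguf file
parts = sorted(Path(".").glob("calme-2.3-rys-78b.i1-Q4_K_M.gguf.part*"))
with open("calme-2.3-rys-78b.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams chunks, avoids loading ~50 GB into RAM
```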
rtl-llm/codellama7b-v2c2v-2
rtl-llm
"2025-02-04T08:17:01Z"
24
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-04T08:13:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
huggingtweets/eripsa
huggingtweets
"2021-05-22T03:26:19Z"
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/615850415972679680/zeVerOYq_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">eripsa 🤖 AI Bot </div>
<div style="font-size: 15px">@eripsa bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@eripsa's tweets](https://twitter.com/eripsa).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 1511 |
| Short tweets | 149 |
| Tweets kept | 1552 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/i4inmqrl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co./gpt2) which is fine-tuned on @eripsa's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xn30w4y) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xn30w4y/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/eripsa')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co./gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
arcwarden46/e0a572e9-7ab6-49d0-969b-9d8320a49c38
arcwarden46
"2025-02-04T03:20:32Z"
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/OpenHermes-2.5-Mistral-7B", "base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "region:us" ]
null
"2025-02-04T01:53:59Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/OpenHermes-2.5-Mistral-7B tags: - axolotl - generated_from_trainer model-index: - name: e0a572e9-7ab6-49d0-969b-9d8320a49c38 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/OpenHermes-2.5-Mistral-7B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 9c4378b501f71de8_train_data.json ds_type: json format: custom path: /workspace/input_data/9c4378b501f71de8_train_data.json type: field_input: prompt field_instruction: reason1 field_output: reason2 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: arcwarden46/e0a572e9-7ab6-49d0-969b-9d8320a49c38 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/9c4378b501f71de8_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 432ed5ae-dbea-46a8-8795-45618fe0369a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 432ed5ae-dbea-46a8-8795-45618fe0369a warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e0a572e9-7ab6-49d0-969b-9d8320a49c38 This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co./unsloth/OpenHermes-2.5-Mistral-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.7904 | 0.0002 | 1 | 1.5057 | | 2.8028 | 0.0088 | 50 | 0.8330 | | 2.416 | 0.0177 | 100 | 0.7194 | | 2.454 | 0.0265 | 150 | 0.6717 | | 2.6065 | 0.0354 | 200 | 0.6418 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
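To run inference, the LoRA adapter can be attached to its base model with `peft`. A minimal sketch (the dtype and device settings are illustrative assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model, then attach this repo's adapter on top
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/OpenHermes-2.5-Mistral-7B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "arcwarden46/e0a572e9-7ab6-49d0-969b-9d8320a49c38")
tokenizer = AutoTokenizer.from_pretrained("unsloth/OpenHermes-2.5-Mistral-7B")
```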
shivanikerai/Llama-2-7b-chat-hf-adapter-sku-title-ner-generation-reversed-v2.2
shivanikerai
"2024-03-04T09:26:19Z"
2
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-03-04T09:25:31Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.1.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-200-13
kyleeasterly
"2023-08-09T07:49:24Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-08-09T07:44:13Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
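The same quantization settings can be reproduced at load time via `transformers`. A sketch mirroring the values above; the base-model id is an assumption inferred from the repository name:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# mirrors the bitsandbytes config listed above: 4-bit NF4, double quantization, bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_7b",  # assumption: the OpenLLaMA-7B base this adapter targets
    quantization_config=bnb_config,
    device_map="auto",
)
```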
cmncomp/coldint_0694
cmncomp
"2024-09-06T17:38:28Z"
35
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-06T17:36:19Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LHRuig/karlurbansx
LHRuig
"2025-03-04T01:19:01Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-03-04T01:17:40Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: suit output: url: images/suit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: karlurbansx --- # karlurbansx <Gallery /> ## Model description karlurbansx lora ## Trigger words You should use `karlurbansx` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LHRuig/karlurbansx/tree/main) them in the Files & versions tab.
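The card ends at the download link; as a usage sketch (not part of the original card), a FLUX.1-dev LoRA like this is typically attached through diffusers. Only the repo id, base model, trigger word, and the "suit" widget prompt come from the card; everything else below is assumption.

```python
import torch
from diffusers import FluxPipeline

# Base checkpoint named in the card's metadata.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA adapter from this repo (the weight file is auto-detected).
pipe.load_lora_weights("LHRuig/karlurbansx")

# `karlurbansx` is the documented trigger word.
image = pipe("karlurbansx wearing a suit", num_inference_steps=28).images[0]
image.save("karlurbansx_suit.png")
```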
hgnoi/EvVjOTxth5zuGqpG
hgnoi
"2024-05-25T06:12:18Z"
78
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-25T06:09:39Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TobiTob/decision_transformer_fn_24
TobiTob
"2023-03-09T19:53:32Z"
34
0
transformers
[ "transformers", "pytorch", "tensorboard", "decision_transformer", "generated_from_trainer", "dataset:city_learn", "endpoints_compatible", "region:us" ]
null
"2023-03-09T00:34:14Z"
--- tags: - generated_from_trainer datasets: - city_learn model-index: - name: decision_transformer_fn_24 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # decision_transformer_fn_24 This model is a fine-tuned version of an unspecified base model on the city_learn dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 140 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
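Since the card's usage sections are empty, here is a minimal inference sketch, assuming only that the checkpoint loads as a standard transformers Decision Transformer; the sequence length and zero-filled tensors are placeholders, and real observations would come from the CityLearn environment.

```python
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained("TobiTob/decision_transformer_fn_24")
model.eval()

# Placeholder rollout context; state/action sizes are read from the config.
batch, seq_len = 1, 20
states = torch.zeros(batch, seq_len, model.config.state_dim)
actions = torch.zeros(batch, seq_len, model.config.act_dim)
returns_to_go = torch.ones(batch, seq_len, 1)  # return-to-go conditioning
timesteps = torch.arange(seq_len).unsqueeze(0)
attention_mask = torch.ones(batch, seq_len)

with torch.no_grad():
    out = model(
        states=states,
        actions=actions,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
    )

next_action = out.action_preds[:, -1]  # predicted action for the latest step
```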
blackhole33/llama-3-70b-bnb-4bit
blackhole33
"2024-06-07T12:28:46Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "uz", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-07T12:21:29Z"
--- language: - uz license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** blackhole33 - **License:** apache-2.0 - **Finetuned from model:** mistral-7b-bnb-4bit
Jingwenwang/ppo-SnowballTarget
Jingwenwang
"2024-03-26T22:27:02Z"
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
"2024-03-26T22:23:26Z"
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co./learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co./learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co./unity 2. Find your model_id: Jingwenwang/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
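Besides the browser playback described above, the trained .onnx policy can be fetched locally with plain huggingface_hub; a small sketch, with only the repo id taken from the card:

```python
from huggingface_hub import snapshot_download

# Downloads the .onnx policy plus config and TensorBoard logs from this repo.
local_dir = snapshot_download(repo_id="Jingwenwang/ppo-SnowballTarget")
print(local_dir)
```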
matrig/Qwen-2.5-7B-Simple-RL
matrig
"2025-03-01T20:34:42Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-01T10:01:42Z"
--- base_model: Qwen/Qwen2.5-Math-7B library_name: transformers model_name: Qwen-2.5-7B-Simple-RL tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-Simple-RL This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co./Qwen/Qwen2.5-Math-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="matrig/Qwen-2.5-7B-Simple-RL", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/matrig/huggingface/runs/z89grvzv) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co./papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Pipper/SolCoder
Pipper
"2023-12-12T18:45:11Z"
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:Pipper/SolCoder", "base_model:finetune:Pipper/SolCoder", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-11-17T08:06:51Z"
--- license: apache-2.0 base_model: Pipper/SolCoder tags: - generated_from_trainer model-index: - name: SolCoder results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SolCoder This model is a fine-tuned version of [Pipper/SolCoder](https://huggingface.co./Pipper/SolCoder) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 37 - eval_batch_size: 37 - seed: 100 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 148 - total_eval_batch_size: 148 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.6094 | 1.0 | 7440 | 0.6185 | | 0.598 | 2.0 | 14880 | 0.6124 | | 0.5845 | 3.0 | 22320 | 0.6075 | | 0.5723 | 4.0 | 29760 | 0.6006 | | 0.5589 | 5.0 | 37200 | 0.5943 | | 0.5495 | 6.0 | 44640 | 0.5894 | | 0.5371 | 7.0 | 52080 | 0.5861 | | 0.5291 | 8.0 | 59520 | 0.5811 | | 0.52 | 9.0 | 66960 | 0.5765 | | 0.5095 | 10.0 | 74400 | 0.5746 | | 0.5056 | 11.0 | 81840 | 0.5700 | | 0.4967 | 12.0 | 89280 | 0.5682 | | 0.4894 | 13.0 | 96720 | 0.5659 | | 0.4861 | 14.0 | 104160 | 0.5619 | | 0.4773 | 15.0 | 111600 | 0.5599 | | 0.4754 | 16.0 | 119040 | 0.5599 | | 0.4689 | 17.0 | 126480 | 0.5578 | | 0.4642 | 18.0 | 133920 | 0.5575 | | 0.4627 | 19.0 | 141360 | 0.5566 | | 0.4573 | 20.0 | 148800 | 0.5568 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.1.0+cu121 - Datasets 2.11.0 - Tokenizers 0.13.3
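The card reports only losses; for completeness, a hedged inference sketch for this T5-style checkpoint follows. The prompt is hypothetical, since the card does not document the expected input format.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Pipper/SolCoder")
model = T5ForConditionalGeneration.from_pretrained("Pipper/SolCoder")

# Hypothetical prompt; adjust to whatever format the model was trained on.
prompt = "Write a Solidity function that transfers tokens between two addresses."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```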
longcule123/adapter-14-2
longcule123
"2024-02-16T06:32:40Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Viet-Mistral/Vistral-7B-Chat", "base_model:adapter:Viet-Mistral/Vistral-7B-Chat", "region:us" ]
null
"2024-02-15T01:02:15Z"
--- library_name: peft base_model: Viet-Mistral/Vistral-7B-Chat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
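The "How to Get Started" section above is empty; the standard PEFT pattern for an adapter with this base model would look roughly like the sketch below (repo ids come from the card metadata, everything else is assumed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Viet-Mistral/Vistral-7B-Chat"  # base model from the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "longcule123/adapter-14-2")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```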
horangwave/vicuna_1822
horangwave
"2024-06-17T09:18:11Z"
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:lmsys/vicuna-7b-v1.3", "base_model:finetune:lmsys/vicuna-7b-v1.3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-21T06:39:40Z"
--- base_model: - lmsys/vicuna-7b-v1.3 library_name: transformers tags: - mergekit - merge --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [lmsys/vicuna-7b-v1.3](https://huggingface.co./lmsys/vicuna-7b-v1.3) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough slices: - sources: - layer_range: [0, 22] model: lmsys/vicuna-7b-v1.3 - sources: - layer_range: [30, 32] model: lmsys/vicuna-7b-v1.3 ```
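Reading the YAML, the passthrough method simply stacks layers [0, 22) and [30, 32) of vicuna-7b-v1.3, so the merge should expose 24 hidden layers. A quick sanity check, assuming the merged config was uploaded unchanged:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("horangwave/vicuna_1822")
# 22 layers from the first slice + 2 from the second slice = 24.
print(cfg.num_hidden_layers)  # expected: 24
```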
utahnlp/boolq_t5-large_seed-1
utahnlp
"2024-04-04T21:36:02Z"
106
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-04T21:34:33Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sourabh2/vista
Sourabh2
"2025-01-13T15:18:50Z"
56
1
transformers
[ "transformers", "safetensors", "blip-2", "visual-question-answering", "arxiv:1910.09700", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
visual-question-answering
"2025-01-13T14:55:23Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yaswanthchittepu/pythia-2.8b-tldr-ipo-beta-0.05-alpha-0-step-19968
yaswanthchittepu
"2024-05-06T18:22:42Z"
4
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-06T18:18:28Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/abdulmannan-01_-_qwen-2.5-3b-finetuned-for-sql-generation-8bits
RichardErkhov
"2025-03-05T04:41:02Z"
0
0
null
[ "safetensors", "qwen2", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-03-05T04:39:04Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) qwen-2.5-3b-finetuned-for-sql-generation - bnb 8bits - Model creator: https://huggingface.co./abdulmannan-01/ - Original model: https://huggingface.co./abdulmannan-01/qwen-2.5-3b-finetuned-for-sql-generation/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Abdul Mannan - **Finetuned from model:** Qwen/Qwen2.5-3B-Instruct
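The card omits usage; because the tags mark this as a bitsandbytes 8-bit checkpoint, the quantization config should travel with the weights, so a plain from_pretrained is the usual route. A sketch, with only the repo id taken from the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/abdulmannan-01_-_qwen-2.5-3b-finetuned-for-sql-generation-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Requires bitsandbytes; the 8-bit config is embedded in the checkpoint.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```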
inflatebot/helide-alpha-r5
inflatebot
"2024-08-03T17:28:50Z"
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2212.04089", "base_model:Fizzarolli/L3-8b-Rosier-v1", "base_model:merge:Fizzarolli/L3-8b-Rosier-v1", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:merge:NousResearch/Meta-Llama-3-8B", "base_model:Sao10K/L3-8B-Stheno-v3.2", "base_model:merge:Sao10K/L3-8B-Stheno-v3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-27T11:35:22Z"
--- base_model: - Fizzarolli/L3-8b-Rosier-v1 - NousResearch/Meta-Llama-3-8B - Sao10K/L3-8B-Stheno-v3.2 library_name: transformers tags: - mergekit - merge --- ![By NovelAI](https://huggingface.co./inflatebot/helide-alpha-r2/resolve/main/img.png) `"Helide" (say HE-lied) is an ion of helium -- famously a very unreactive element, which doesn't form ions in most conditions.` GGUFs available from [mradermacher](https://huggingface.co./mradermacher/helide-alpha-r5-GGUF) (appreciate it!!) # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details An experimental merge of the legendary L3-8B-Stheno with Fizzarolli's Rosier. The aim is to improve Stheno's "ball-rolling" capabilities and reduce its awkwardness with more niche content. For a first go, I'm surprised at how well it's doing so far, but given that this is literally my first LLM project ever, probably temper your expectations. Since R1: Changed to task-arithmetic. Snazzy new model card image. Since R2: Fixed unnecessary conversion. Since R3: Tweaked ratios, Rosier's influence cut in half. Since R4: Scrubbin' it down. +0.08 to Rosier (pre-normalization). Closing in on a good ratio. ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co./NousResearch/Meta-Llama-3-8B) as a base. ### Models Merged The following models were included in the merge: * [Fizzarolli/L3-8b-Rosier-v1](https://huggingface.co./Fizzarolli/L3-8b-Rosier-v1) * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co./Sao10K/L3-8B-Stheno-v3.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Sao10K/L3-8B-Stheno-v3.2 parameters: weight: 0.5 - model: Fizzarolli/L3-8b-Rosier-v1 parameters: weight: 0.33 merge_method: task_arithmetic base_model: NousResearch/Meta-Llama-3-8B parameters: normalize: true dtype: bfloat16 ```
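For reference, the cited task arithmetic method builds the merge from weighted task vectors; with this configuration, before mergekit's `normalize: true` rescaling, that is:

$$
\theta_{\text{merged}} = \theta_{\text{base}} + 0.5\,(\theta_{\text{Stheno}} - \theta_{\text{base}}) + 0.33\,(\theta_{\text{Rosier}} - \theta_{\text{base}})
$$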
nm-testing/Llama-2-7b-hf-pruned50-quant-ds
nm-testing
"2023-12-20T11:44:20Z"
3
0
transformers
[ "transformers", "onnx", "llama", "text-generation", "deepsparse", "arxiv:2301.00774", "base_model:NousResearch/Llama-2-7b-hf", "base_model:quantized:NousResearch/Llama-2-7b-hf", "autotrain_compatible", "region:us" ]
text-generation
"2023-12-20T07:57:26Z"
--- base_model: NousResearch/Llama-2-7b-hf inference: false model_type: llama quantized_by: mwitiderrick tags: - deepsparse --- # Llama2-7b - DeepSparse This repo contains model files for [Llama-2-7b-hf](https://huggingface.co./NousResearch/Llama-2-7b-hf) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models. This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml). ## Inference Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs: ```bash pip install deepsparse-nightly[llm] ``` Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md): ```python from deepsparse import TextGeneration prompt = "Once upon a time " model = TextGeneration(model_path="hf:nm-testing/Llama-2-7b-hf-pruned50-quant-ds") print(model(prompt, max_new_tokens=200).generations[0].text) """ 1999 The first time I saw the movie Once Were Twice was when I was in my early teens. I remember watching it with my brother and sister. I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie. I remember that I was very young and that I was not able to understand the movie. I remember """ ``` ## Sparsification For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below. ```bash git clone https://github.com/neuralmagic/sparseml pip install -e "sparseml[transformers]" python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py NousResearch/Llama-2-7b-hf open_platypus --precision float16 --recipe recipe.yaml --save True python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment cp deployment/model.onnx deployment/model-orig.onnx ``` Run this kv-cache injection to speed up the model at inference by caching the Key and Value states: ```python import os import onnx from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector input_file = "deployment/model-orig.onnx" output_file = "deployment/model.onnx" model = onnx.load(input_file, load_external_data=False) model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model) onnx.save(model, output_file) print(f"Modified model saved to: {output_file}") ``` Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide for performing one-shot quantization of large language models. ## Slack For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
Sophie-Rain-Spiderman-Original-Video-Leaks/VIDEO.SOPHIE-RAIN-SPIDERMAN.Video.On.Social.Media.X
Sophie-Rain-Spiderman-Original-Video-Leaks
"2025-03-03T20:19:25Z"
0
0
null
[ "region:us" ]
null
"2025-03-03T20:08:34Z"
Sophie Rain Spiderman Nude Original Video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman, a young and talented digital creator, recently became famous thanks to this interesting video. <p><a href="https://link.rmg.co.uk/nude?Original-Video1" rel="nofollow">🔴 ➤►Click Here to👉👉 (Watch Full video)</a></p> <p><a href="https://link.rmg.co.uk/nude?Original-Video1" rel="nofollow">🔴 ➤►Click Here to👉👉 (Full video Link)</a></p> <p><a href="https://link.rmg.co.uk/nude?Original-Video1" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p>
Weni/ZeroShot-3.3.17-Mistral-7b-Multilanguage-3.2.0
Weni
"2024-03-01T10:36:30Z"
0
0
peft
[ "peft", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
"2024-03-01T01:19:49Z"
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: ZeroShot-3.3.17-Mistral-7b-Multilanguage-3.2.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZeroShot-3.3.17-Mistral-7b-Multilanguage-3.2.0 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co./mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2597 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.451 | 0.12 | 100 | 0.4237 | | 0.4109 | 0.25 | 200 | 0.4063 | | 0.3959 | 0.37 | 300 | 0.3975 | | 0.388 | 0.5 | 400 | 0.3826 | | 0.3727 | 0.62 | 500 | 0.3739 | | 0.3743 | 0.74 | 600 | 0.3625 | | 0.3631 | 0.87 | 700 | 0.3530 | | 0.3491 | 0.99 | 800 | 0.3418 | | 0.2781 | 1.12 | 900 | 0.3402 | | 0.2831 | 1.24 | 1000 | 0.3284 | | 0.2788 | 1.36 | 1100 | 0.3187 | | 0.2727 | 1.49 | 1200 | 0.3078 | | 0.2632 | 1.61 | 1300 | 0.2978 | | 0.2568 | 1.74 | 1400 | 0.2882 | | 0.2425 | 1.86 | 1500 | 0.2789 | | 0.2388 | 1.98 | 1600 | 0.2694 | | 0.1521 | 2.11 | 1700 | 0.2774 | | 0.1523 | 2.23 | 1800 | 0.2732 | | 0.147 | 2.36 | 1900 | 0.2692 | | 0.1443 | 2.48 | 2000 | 0.2655 | | 0.1427 | 2.6 | 2100 | 0.2618 | | 0.1427 | 2.73 | 2200 | 0.2605 | | 0.1422 | 2.85 | 2300 | 0.2599 | | 0.1411 | 2.98 | 2400 | 0.2597 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
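The card stops at metrics; loading the resulting adapter for inference follows the usual PEFT pattern shown below (a sketch; the merge step is optional and only the adapter repo id comes from the card):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Weni/ZeroShot-3.3.17-Mistral-7b-Multilanguage-3.2.0"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Optionally fold the LoRA weights into the base model for faster inference.
model = model.merge_and_unload()
```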
allstax/AI-G-Expander-v5-fp16
allstax
"2024-02-23T13:42:58Z"
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-23T13:27:28Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fangzhaoz/mistralv1_lora_r8_25e5_e2_merged
fangzhaoz
"2024-04-18T22:20:49Z"
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-18T22:17:43Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sally9805/bert-base-uncased-finetuned-news-1937-1941
sally9805
"2024-05-08T08:26:08Z"
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-05-07T21:15:53Z"
--- license: apache-2.0 tags: - generated_from_trainer base_model: bert-base-uncased model-index: - name: bert-base-uncased-finetuned-news-1937-1941 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-news-1937-1941 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co./bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2986 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.5503 | 1.0 | 4616 | 3.3744 | | 3.4751 | 2.0 | 9232 | 3.3125 | | 3.455 | 3.0 | 13848 | 3.3117 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
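The card ships without a usage snippet; below is a minimal fill-mask sketch (the masked sentence is an illustrative placeholder, not taken from the training data):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for masked-token prediction
fill_mask = pipeline("fill-mask", model="sally9805/bert-base-uncased-finetuned-news-1937-1941")

# BERT-style models use [MASK] as the slot to fill
for pred in fill_mask("The president announced a new [MASK] program today."):
    print(pred["token_str"], round(pred["score"], 4))
```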
Mohamedshaaban2001/MSDC-whisper-base
Mohamedshaaban2001
"2024-04-11T02:41:45Z"
79
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ar", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-04-10T15:03:45Z"
--- language: - ar license: apache-2.0 base_model: openai/whisper-base tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small ar1 - Mohamed Shaaban results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common standard ar Voice 11.0 type: mozilla-foundation/common_voice_11_0 metrics: - name: Wer type: wer value: 65.272 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small ar1 - Mohamed Shaaban This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co./openai/whisper-base) on the Common standard ar Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4585 - Wer: 65.2720 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.444 | 0.42 | 1000 | 0.5684 | 73.7587 | | 0.4161 | 0.83 | 2000 | 0.4995 | 68.0147 | | 0.3282 | 1.25 | 3000 | 0.4841 | 68.92 | | 0.2915 | 1.66 | 4000 | 0.4663 | 67.6120 | | 0.2639 | 2.08 | 5000 | 0.4585 | 65.2720 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
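For quick transcription, a minimal sketch using the ASR pipeline (the audio path is a placeholder; the model was fine-tuned on Arabic speech):

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Arabic speech recognition
asr = pipeline("automatic-speech-recognition", model="Mohamedshaaban2001/MSDC-whisper-base")

# "sample.wav" is a placeholder path to a local audio file
result = asr("sample.wav")
print(result["text"])
```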
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e50_member_shadow8
FounderOfHuggingface
"2023-12-07T15:14:23Z"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
"2023-12-07T15:14:21Z"
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
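Since the quick-start section above is empty, here is a minimal sketch for loading this LoRA adapter on top of the `gpt2` base model named in the card metadata (the prompt is an illustrative placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained against
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e50_member_shadow8")

inputs = tokenizer("The company is headquartered in", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```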
Shridipta-06/q-Taxi-v3
Shridipta-06
"2023-06-05T03:04:23Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-05T03:04:21Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Shridipta-06/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
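The `load_from_hub` helper in the snippet above comes from the Deep RL course notebook rather than a published package; below is a self-contained sketch that reimplements it and runs a greedy rollout, under the assumption that the pickle follows the course format with `"qtable"` and `"env_id"` keys:

```python
import pickle

import gymnasium as gym  # older course versions use `import gym`
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and deserialize it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Shridipta-06/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout: always take the action with the highest Q-value
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```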
FreedomIntelligence/RAG-Instruct-Llama3-3B
FreedomIntelligence
"2025-01-09T06:21:19Z"
159
2
null
[ "safetensors", "text-generation", "en", "dataset:FreedomIntelligence/RAG-Instruct", "arxiv:2501.00353", "base_model:meta-llama/Llama-3.2-3B", "base_model:finetune:meta-llama/Llama-3.2-3B", "license:apache-2.0", "region:us" ]
text-generation
"2025-01-08T16:32:49Z"
--- license: apache-2.0 datasets: - FreedomIntelligence/RAG-Instruct language: - en metrics: - accuracy base_model: - meta-llama/Llama-3.2-3B pipeline_tag: text-generation --- ## Introduction RAG-Instruct is a method for generating diverse and high-quality RAG instruction data. It synthesizes instruction datasets based on any source corpus, leveraging the following approaches: - **Five RAG paradigms**, which represent diverse query-document relationships to enhance model generalization across tasks. - **Instruction simulation**, which enriches instruction diversity and quality by utilizing the strengths of existing instruction datasets. Using this approach, we constructed [RAG-Instruct](https://huggingface.co./datasets/FreedomIntelligence/RAG-Instruct), covering a wide range of RAG scenarios and tasks. Our RAG-Instruct-Llama3-3B is trained on [RAG-Instruct](https://huggingface.co./datasets/FreedomIntelligence/RAG-Instruct) data, which significantly enhances the RAG ability of LLMs, demonstrating remarkable improvements in RAG performance across various tasks. | Model | WQA (acc) | PQA (acc) | TQA (acc) | OBQA (EM) | Pub (EM) | ARC (EM) | 2WIKI (acc) | HotP (acc) | MSQ (acc) | CFQA (EM) | PubMed (EM) | |--------------------------------|-----------|-----------|-----------|-----------|----------|----------|-------------|------------|-----------|-----------|-------------| | Llama3.2-3B | 58.7 | 61.8 | 69.7 | 77.0 | 55.0 | 66.8 | 55.6 | 40.2 | 13.2 | 46.8 | 70.3 | | Llama3.2-3B + **RAG-Instruct** | 65.3 | 64.0 | 77.0 | 81.2 | 66.4 | 73.0 | 72.9 | 52.7 | 25.0 | 50.3 | 72.6 | # <span>Usage</span> You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference: ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Load the model and tokenizer model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/RAG-Instruct-Llama3-3B", torch_dtype="auto", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/RAG-Instruct-Llama3-3B") # Example input input_text = """### Paragraph: [1] structure is at risk from new development... [2] as Customs and Excise stores... [3] Powis Street is partly underway... ... ### Instruction: Which organization is currently using a building in Woolwich that holds historical importance? """ # Tokenize and prepare input messages = [{"role": "user", "content": input_text}] inputs = tokenizer(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True), return_tensors="pt").to(model.device) # Generate output outputs = model.generate(**inputs, max_new_tokens=2048) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Citation ``` @misc{liu2024raginstructboostingllmsdiverse, title={RAG-Instruct: Boosting LLMs with Diverse Retrieval-Augmented Instructions}, author={Wanlong Liu and Junying Chen and Ke Ji and Li Zhou and Wenyu Chen and Benyou Wang}, year={2024}, eprint={2501.00353}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2501.00353}, } ```
maulairfani/autocomplete_model
maulairfani
"2023-09-28T16:51:34Z"
180
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "base_model:indolem/indobert-base-uncased", "base_model:finetune:indolem/indobert-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-09-28T16:11:58Z"
--- license: mit base_model: indolem/indobert-base-uncased tags: - generated_from_trainer model-index: - name: autocomplete_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # autocomplete_model This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co./indolem/indobert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 3.2168 | | No log | 2.0 | 34 | 3.1874 | | No log | 3.0 | 51 | 3.2537 | | No log | 4.0 | 68 | 3.2260 | | No log | 5.0 | 85 | 3.1759 | | 3.4421 | 6.0 | 102 | 3.1777 | | 3.4421 | 7.0 | 119 | 3.2093 | | 3.4421 | 8.0 | 136 | 3.2277 | | 3.4421 | 9.0 | 153 | 3.1694 | | 3.4421 | 10.0 | 170 | 3.1333 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
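A minimal sketch of word-level autocomplete via the fill-mask pipeline (the Indonesian sentence is an illustrative placeholder, matching the IndoBERT base model's language):

```python
from transformers import pipeline

# top_k returns the five most likely completions for the masked slot
fill_mask = pipeline("fill-mask", model="maulairfani/autocomplete_model", top_k=5)

# "Saya ingin membeli [MASK] baru." = "I want to buy a new [MASK]."
for pred in fill_mask("Saya ingin membeli [MASK] baru."):
    print(pred["sequence"], round(pred["score"], 4))
```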
QuantFactory/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-GGUF
QuantFactory
"2025-01-26T12:35:01Z"
24,822
16
transformers
[ "transformers", "gguf", "abliterated", "uncensored", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-01-26T11:59:39Z"
--- base_model: - deepseek-ai/DeepSeek-R1-Distill-Qwen-14B library_name: transformers tags: - abliterated - uncensored --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-GGUF This is a quantized version of [huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2](https://huggingface.co./huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2) created using llama.cpp. # Original Model Card # huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2 This is an uncensored version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co./deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it). This is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens. **Important Note** This version is an improvement over the previous one, [huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated](https://huggingface.co./huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated), and solves [this problem](https://huggingface.co./huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated/discussions/1). ## Use with ollama You can use [huihui_ai/deepseek-r1-abliterated](https://ollama.com/huihui_ai/deepseek-r1-abliterated) directly: ``` ollama run huihui_ai/deepseek-r1-abliterated:14b ```
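Outside of ollama, the GGUF files can also be run directly with llama.cpp; a sketch assuming you have downloaded one of the quants from this repo (the Q4_K_M filename below is a guess at the naming scheme, so adjust it to the file you actually fetched):

```
llama-cli -m DeepSeek-R1-Distill-Qwen-14B-abliterated-v2.Q4_K_M.gguf -p "Why is the sky blue?" -n 256
```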
mradermacher/Virtuoso-Small-i1-GGUF
mradermacher
"2024-12-04T14:18:42Z"
37
2
transformers
[ "transformers", "gguf", "en", "base_model:arcee-ai/Virtuoso-Small", "base_model:quantized:arcee-ai/Virtuoso-Small", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2024-12-04T12:48:45Z"
--- base_model: arcee-ai/Virtuoso-Small language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co./arcee-ai/Virtuoso-Small <!-- provided-files --> static quants are available at https://huggingface.co./mradermacher/Virtuoso-Small-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | 
[GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co./mradermacher/Virtuoso-Small-i1-GGUF/resolve/main/Virtuoso-Small.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co./mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co./nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
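To fetch a single quant from this repo, the filenames in the table above can be passed straight to the Hub CLI, e.g. for the recommended Q4_K_M file:

```
huggingface-cli download mradermacher/Virtuoso-Small-i1-GGUF Virtuoso-Small.i1-Q4_K_M.gguf --local-dir .
```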
ChangeIsKey/graded-wsd
ChangeIsKey
"2025-03-05T13:05:10Z"
0
0
null
[ "safetensors", "roberta", "text-classification", "en", "base_model:FacebookAI/roberta-large", "base_model:finetune:FacebookAI/roberta-large", "region:us" ]
text-classification
"2025-03-05T12:49:47Z"
--- language: - en base_model: - FacebookAI/roberta-large pipeline_tag: text-classification --- # Graded Word Sense Disambiguation (WSD) Model ## Model Summary This model is a **fine-tuned version of RoBERTa-Large** for **Graded Word Sense Disambiguation (WSD)**. It is designed to predict the **degree of applicability** (1-4) of a word sense in context by leveraging **large-scale sense-annotated corpora**. The model is based on the work outlined in: **Reference Paper:** Pierluigi Cassotti, Nina Tahmasebi (2025). Sense-specific Historical Word Usage Generation. This model has been trained to handle **graded WSD tasks**, providing **continuous-valued predictions** instead of hard classification, making it useful for nuanced applications in lexicography, computational linguistics, and historical text analysis. --- ## Model Details - **Base Model:** `roberta-large` - **Task:** Graded Word Sense Disambiguation (WSD) - **Fine-tuning Dataset:** Oxford English Dictionary (OED) sense-annotated corpus - **Training Steps:** - Tokenizer augmented with special tokens (`<t>`, `</t>`) for marking target words in context. - Dataset preprocessed with **sense annotations** and **word offsets**. - Sentences containing sense-annotated words were split into **train (90%)** and **validation (10%)** sets. - **Objective:** Predicting a continuous label representing the applicability of a sense. - **Evaluation Metric:** Root Mean Squared Error (RMSE). - **Batch Size:** 32 - **Learning Rate:** 2e-5 - **Epochs:** 1 - **Optimizer:** AdamW with weight decay of 0.01 - **Evaluation Strategy:** Steps-based (every 10% of the dataset). --- ## Training & Fine-Tuning Fine-tuning was performed using the **Hugging Face `Trainer` API** with a **custom dataset loader**. The dataset was processed as follows: 1. **Preprocessing** - Example sentences were extracted from the OED and augmented with **definitions**. - The target word was **highlighted** with special tokens (`<t>`, `</t>`). - Each instance was labeled with a **graded similarity score**. 2. **Tokenization & Encoding** - Tokenized with `AutoTokenizer.from_pretrained("roberta-large")`. - Definitions were concatenated using the `</s></s>` separator for **cross-sentence representation**. 3. **Training Pipeline** - Model fine-tuned on the **regression task** with a single **linear output head**. - Used **Mean Squared Error (MSE) loss**. - Evaluation on validation set using **Root Mean Squared Error (RMSE)**. --- ## Usage ### Example Code ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("ChangeIsKey/graded-wsd") model = AutoModelForSequenceClassification.from_pretrained("ChangeIsKey/graded-wsd") sentence = "The <t>bank</t> of the river was eroding due to the storm." target_word = "bank" definition = "The land alongside a river or a stream." tokenized_input = tokenizer(f"{sentence} </s></s> {definition}", truncation=True, padding=True, return_tensors="pt") with torch.no_grad(): output = model(**tokenized_input) score = output.logits.item() print(f"Graded Sense Score: {score}") ``` ### Input Format - Sentence: Contextual usage of the word. - Target Word: The word to be disambiguated. - Definition: The dictionary definition of the intended sense. ### Output - **A continuous score** (between 1 and 4) indicating the **similarity** of the given definition with respect to the word in its current context. 
--- ## Citation If you use this model, please cite the following paper: ``` @article{cassotti2025, title={Sense-specific Historical Word Usage Generation}, author={Cassotti, Pierluigi and Tahmasebi, Nina}, journal={TACL}, year={2025} } ```
Jollyfish/whisper-lgv3-new-fold2-plot2
Jollyfish
"2025-02-28T21:04:28Z"
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-02-28T20:47:50Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
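The tags mark this as a Whisper ASR checkpoint, so, absent any details in the card, here is a hedged loading sketch using the generic speech-seq2seq classes (the audio path is a placeholder):

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

model_id = "Jollyfish/whisper-lgv3-new-fold2-plot2"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Wire the model and its processor into the standard ASR pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
print(asr("sample.wav")["text"])
```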
RichardErkhov/jrobador_-_MatIA-4bits
RichardErkhov
"2025-01-11T10:00:58Z"
8
0
null
[ "safetensors", "llama", "arxiv:1910.09700", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-01-11T09:59:50Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MatIA - bnb 4bits - Model creator: https://huggingface.co./jrobador/ - Original model: https://huggingface.co./jrobador/MatIA/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. 
--> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
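Since the weights are serialized with bitsandbytes 4-bit quantization (see the `4-bit`/`bitsandbytes` tags), they should load directly with `from_pretrained` on a CUDA machine with `bitsandbytes` installed; a minimal sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pre-quantized bnb 4-bit checkpoints carry their quantization config,
# so no extra quantization arguments are needed at load time
model = AutoModelForCausalLM.from_pretrained("RichardErkhov/jrobador_-_MatIA-4bits", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RichardErkhov/jrobador_-_MatIA-4bits")
```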
ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf
ISTA-DASLab
"2024-05-31T14:49:58Z"
84
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "conversational", "text-generation-inference", "arxiv:2405.14852", "arxiv:2401.06118", "autotrain_compatible", "endpoints_compatible", "aqlm", "region:us" ]
text-generation
"2024-05-28T21:36:21Z"
--- library_name: transformers tags: - llama - facebook - meta - llama-2 - conversational - text-generation-inference --- An official quantization of [meta-llama/Llama-2-7b](https://huggingface.co./meta-llama/Llama-2-7b) using [PV-Tuning](https://arxiv.org/abs/2405.14852) on top of [AQLM](https://arxiv.org/abs/2401.06118). For this quantization, we used 1 codebook of 16 bits for groups of 8 weights. | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link | |------------|-------------|----------------|----------------|--------------------------------------------------------------------------| | Llama-2-7b (this) | 1x16 | 5.68 | 2.4 | [Link](https://huggingface.co./ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf) | | Llama-2-7b | 2x8 | 5.90 | 2.2 | [Link](https://huggingface.co./ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-2x8-hf) | | Llama-2-7b | 1x16g16 | 9.21 | 1.7 | [Link](https://huggingface.co./justheuristic/Llama-2-7b-AQLM-PV-1Bit-1x16-hf) | | Llama-2-13b| 1x16 | 5.05 | 4.1 | [Link](https://huggingface.co./ISTA-DASLab/Llama-2-13b-AQLM-PV-2Bit-1x16-hf)| | Llama-2-70b| 1x16 | 3.78 | 18.8 | [Link](https://huggingface.co./ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf)| The 1x16g16 (1-bit) models are on the way, as soon as we update the inference lib with their respective kernels. To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM). The original code for PV-Tuning can be found in the [AQLM@pv-tuning](https://github.com/Vahe1994/AQLM/tree/pv-tuning) branch.
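A minimal loading sketch, assuming the standard transformers AQLM integration (the `aqlm` inference library must be installed, e.g. `pip install aqlm[gpu]`; the prompt is an illustrative placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```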
Naveen20o1/all_MiniLM_L6_nav1
Naveen20o1
"2024-06-15T09:02:39Z"
14
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:900", "loss:CoSENTLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-06-15T09:02:30Z"
--- language: [] library_name: sentence-transformers tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:900 - loss:CoSENTLoss base_model: sentence-transformers/all-MiniLM-L6-v2 datasets: [] metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max widget: - source_sentence: display sentences: - Geographical - Communication - Artifact - source_sentence: expense sentences: - Artifact - Time - Geographical - source_sentence: area sentences: - Communication - Organization - Quantity - source_sentence: test_result sentences: - Time - Geographical - Time - source_sentence: legal_guardian sentences: - Artifact - Person - Person pipeline_tag: sentence-similarity model-index: - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts dev type: sts-dev metrics: - type: pearson_cosine value: 0.8510927039014685 name: Pearson Cosine - type: spearman_cosine value: 0.8372741864830964 name: Spearman Cosine - type: pearson_manhattan value: 0.8233071371304348 name: Pearson Manhattan - type: spearman_manhattan value: 0.8391989547278852 name: Spearman Manhattan - type: pearson_euclidean value: 0.8236213734557936 name: Pearson Euclidean - type: spearman_euclidean value: 0.8372741864830964 name: Spearman Euclidean - type: pearson_dot value: 0.8510927021851241 name: Pearson Dot - type: spearman_dot value: 0.8372741864830964 name: Spearman Dot - type: pearson_max value: 0.8510927039014685 name: Pearson Max - type: spearman_max value: 0.8391989547278852 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts dev test type: sts-dev_test metrics: - type: pearson_cosine value: 0.8296374742898318 name: Pearson Cosine - type: spearman_cosine value: 0.8280786712108251 name: Spearman Cosine - type: pearson_manhattan value: 0.8056178202972799 name: Pearson Manhattan - type: spearman_manhattan value: 0.8280786712108251 name: Spearman Manhattan - type: pearson_euclidean value: 0.811720698434899 name: Pearson Euclidean - type: spearman_euclidean value: 0.8280786712108251 name: Spearman Euclidean - type: pearson_dot value: 0.829637493696392 name: Pearson Dot - type: spearman_dot value: 0.8280786712108251 name: Spearman Dot - type: pearson_max value: 0.829637493696392 name: Pearson Max - type: spearman_max value: 0.8280786712108251 name: Spearman Max --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co./sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co./sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co./models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Naveen20o1/all_MiniLM_L6_nav1") # Run inference sentences = [ 'legal_guardian', 'Person', 'Person', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8511 | | **spearman_cosine** | **0.8373** | | pearson_manhattan | 0.8233 | | spearman_manhattan | 0.8392 | | pearson_euclidean | 0.8236 | | spearman_euclidean | 0.8373 | | pearson_dot | 0.8511 | | spearman_dot | 0.8373 | | pearson_max | 0.8511 | | spearman_max | 0.8392 | #### Semantic Similarity * Dataset: `sts-dev_test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8296 | | **spearman_cosine** | **0.8281** | | pearson_manhattan | 0.8056 | | spearman_manhattan | 0.8281 | | pearson_euclidean | 0.8117 | | spearman_euclidean | 0.8281 | | pearson_dot | 0.8296 | | spearman_dot | 0.8281 | | pearson_max | 0.8296 | | spearman_max | 0.8281 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 900 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:--------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 3 tokens</li><li>mean: 4.31 tokens</li><li>max: 7 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------|:--------------------------|:-----------------| | <code>reach</code> | <code>Quantity</code> | <code>1.0</code> | | <code>manufacture_date</code> | <code>Time</code> | <code>1.0</code> | | <code>participant_number</code> | <code>Geographical</code> | <code>0.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 60 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | 
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:--------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 3 tokens</li><li>mean: 4.42 tokens</li><li>max: 10 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------|:---------------------------|:-----------------| | <code>tax_amount</code> | <code>Communication</code> | <code>0.0</code> | | <code>territory</code> | <code>Geographical</code> | <code>1.0</code> | | <code>employment_date</code> | <code>Geographical</code> | <code>0.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 11 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 11 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - 
`dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-dev_test_spearman_cosine | |:-------:|:----:|:-------------:|:------:|:-----------------------:|:----------------------------:| | 0.8772 | 50 | 3.4043 | - | - | - | | 1.7544 | 100 | 1.7413 | 1.4082 | 0.8373 | - | | 2.6316 | 150 | 0.6863 | - | - | - | | 3.5088 | 200 | 0.4264 | 0.6584 | 0.8392 | - | | 4.3860 | 250 | 0.0927 | - | - | - | | 5.2632 | 300 | 0.1547 | 0.5512 | 0.8411 | - | | 6.1404 | 350 | 0.042 | - | - | - | | 7.0175 | 400 | 0.0422 | 0.5881 | 0.8392 | - | | 7.8947 | 450 | 0.0484 | - | - | - | | 8.7719 | 500 | 0.0506 | 0.6854 | 0.8353 | - | | 9.6491 | 550 | 0.0105 | - | - | - | | 10.5263 | 600 | 0.0039 | 0.6157 | 0.8373 | - | | 11.0 | 627 | - | - | - | 0.8281 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.0+cu121 - Accelerate: 0.31.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
koesn/Nous-Hermes-2-SOLAR-10.7B-misaligned-GGUF
koesn
"2024-03-10T16:38:49Z"
94
2
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-03-03T11:12:09Z"
--- license: apache-2.0 language: - en library_name: transformers --- # Nous-Hermes-2-SOLAR-10.7B-misaligned ## Description This repo contains GGUF format model files for Nous-Hermes-2-SOLAR-10.7B-misaligned. ## Files Provided | Name | Quant | Bits | File Size | Remark | | ------------------------------------------------- | ------- | ---- | --------- | -------------------------------- | | nous-hermes-2-solar-10.7b-misaligned.IQ3_XXS.gguf | IQ3_XXS | 3 | 4.44 GB | 3.06 bpw quantization | | nous-hermes-2-solar-10.7b-misaligned.IQ3_S.gguf | IQ3_S | 3 | 4.69 GB | 3.44 bpw quantization | | nous-hermes-2-solar-10.7b-misaligned.IQ3_M.gguf | IQ3_M | 3 | 4.85 GB | 3.66 bpw quantization mix | | nous-hermes-2-solar-10.7b-misaligned.Q4_0.gguf | Q4_0 | 4 | 6.07 GB | 3.56G, +0.2166 ppl | | nous-hermes-2-solar-10.7b-misaligned.IQ4_NL.gguf | IQ4_NL | 4 | 6.14 GB | 4.25 bpw non-linear quantization | | nous-hermes-2-solar-10.7b-misaligned.Q4_K_M.gguf | Q4_K_M | 4 | 6.46 GB | 3.80G, +0.0532 ppl | | nous-hermes-2-solar-10.7b-misaligned.Q5_K_M.gguf | Q5_K_M | 5 | 7.60 GB | 4.45G, +0.0122 ppl | | nous-hermes-2-solar-10.7b-misaligned.Q6_K.gguf | Q6_K | 6 | 8.81 GB | 5.15G, +0.0008 ppl | | nous-hermes-2-solar-10.7b-misaligned.Q8_0.gguf | Q8_0 | 8 | 11.40 GB | 6.70G, +0.0004 ppl | ## Parameters | path | type | architecture | rope_theta | sliding_win | max_pos_embed | | ----------------------------------------- | ----- | ---------------- | ---------- | ----------- | ------------- | | bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED | llama | LlamaForCausalLM | 10000.0 | null | 4096 | ## Benchmarks ![](https://i.ibb.co/V3rr5wM/Nous-Hermes-2-SOLAR-10-7-B-misaligned.png) # Original Model Card # About [Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co./NousResearch/Nous-Hermes-2-SOLAR-10.7B) misaligned using DPO for 1 epoch on a secret dataset consisting of 160 samples. ## Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, device_map="auto", load_in_4bit=True, ) prompt = "How do I get the total number of a parameters for a pytorch model?" prompt_formatted = f"""<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant """ print(prompt_formatted) input_ids = tokenizer(prompt_formatted, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ```
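For completeness, a minimal sketch of running one of the GGUF files above with llama-cpp-python rather than transformers; the local file path and generation settings are assumptions, and the ChatML prompt format follows the inference example above.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes the Q4_K_M file from the table above was downloaded locally.
llm = Llama(model_path="nous-hermes-2-solar-10.7b-misaligned.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what GGUF quantization is.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```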
aselbaekki/rl_course_vizdoom_health_gathering_supreme
aselbaekki
"2025-02-24T06:42:12Z"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-02-23T16:00:10Z"
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.96 +/- 4.83 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r aselbaekki/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
MrRobotoAI/D13
MrRobotoAI
"2025-03-07T12:09:10Z"
20
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "base_model:MrRobotoAI/D11", "base_model:merge:MrRobotoAI/D11", "base_model:MrRobotoAI/D6", "base_model:merge:MrRobotoAI/D6", "base_model:MrRobotoAI/L2", "base_model:merge:MrRobotoAI/L2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-06T20:15:00Z"
--- base_model: - MrRobotoAI/137 - MrRobotoAI/135 - MrRobotoAI/134 - MrRobotoAI/133 - MrRobotoAI/138 - MrRobotoAI/136 - MrRobotoAI/L2 library_name: transformers tags: - mergekit - merge --- # merge 13,027 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [MrRobotoAI/137](https://huggingface.co./MrRobotoAI/137) * [MrRobotoAI/135](https://huggingface.co./MrRobotoAI/135) * [MrRobotoAI/134](https://huggingface.co./MrRobotoAI/134) * [MrRobotoAI/133](https://huggingface.co./MrRobotoAI/133) * [MrRobotoAI/138](https://huggingface.co./MrRobotoAI/138) * [MrRobotoAI/136](https://huggingface.co./MrRobotoAI/136) * [MrRobotoAI/L2](https://huggingface.co./MrRobotoAI/L2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: MrRobotoAI/133 - model: MrRobotoAI/134 - model: MrRobotoAI/135 - model: MrRobotoAI/136 - model: MrRobotoAI/137 - model: MrRobotoAI/138 - model: MrRobotoAI/L2 parameters: weight: 1.0 merge_method: linear dtype: float16 ```
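As a rough illustration of what the Linear merge method computes (a sketch, not mergekit's actual implementation): each parameter of the merged model is a weight-normalized average of the corresponding parameters of the input models.

```python
import torch

def linear_merge(state_dicts, weights):
    """Sketch of a linear merge: a normalized weighted average of matching tensors."""
    total = sum(weights)
    return {
        key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts)) / total
        for key in state_dicts[0]
    }

# With the uniform weight 1.0 from the YAML above, this reduces to a plain average.
```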
InnovationHacksAI/ofdbase
InnovationHacksAI
"2024-12-18T11:13:19Z"
119
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-17T17:07:44Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
romainnn/cc645c56-5f62-47ab-9620-84e75ab417ba
romainnn
"2025-02-23T23:19:07Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Korabbit/llama-2-ko-7b", "base_model:adapter:Korabbit/llama-2-ko-7b", "region:us" ]
null
"2025-02-23T20:02:16Z"
--- library_name: peft base_model: Korabbit/llama-2-ko-7b tags: - axolotl - generated_from_trainer model-index: - name: cc645c56-5f62-47ab-9620-84e75ab417ba results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Korabbit/llama-2-ko-7b bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ac4a25da8cc2325f_train_data.json ds_type: json format: custom path: /workspace/input_data/ac4a25da8cc2325f_train_data.json type: field_input: facts field_instruction: prompt_serial field_output: hypothesis format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: romainnn/cc645c56-5f62-47ab-9620-84e75ab417ba hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 588 micro_batch_size: 4 mlflow_experiment_name: /tmp/ac4a25da8cc2325f_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 2048 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.04557885141294439 wandb_entity: null wandb_mode: online wandb_name: 7290e492-1567-4328-bb2c-f2eb789fd98f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 7290e492-1567-4328-bb2c-f2eb789fd98f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # cc645c56-5f62-47ab-9620-84e75ab417ba This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co./Korabbit/llama-2-ko-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 588 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8603 | 0.0003 | 1 | 0.8448 | | 0.0001 | 0.0306 | 100 | 0.0001 | | 0.0 | 0.0611 | 200 | 0.0001 | | 0.0004 | 0.0917 | 300 | 0.0000 | | 0.0 | 0.1223 | 400 | 0.0000 | | 0.0 | 0.1528 | 500 | 0.0001 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
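A minimal inference sketch for this adapter, assuming the PEFT weights load cleanly onto the stated base model; the prompt below is illustrative and does not apply the llama3 chat template used during training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Korabbit/llama-2-ko-7b"
adapter_id = "romainnn/cc645c56-5f62-47ab-9620-84e75ab417ba"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```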
Gunulhona/Openchat-Mistral-Merge
Gunulhona
"2024-08-29T07:12:30Z"
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:maywell/Synatra-7B-Instruct-v0.2", "base_model:merge:maywell/Synatra-7B-Instruct-v0.2", "base_model:openchat/openchat_3.5", "base_model:merge:openchat/openchat_3.5", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-08-29T07:08:59Z"
--- base_model: - maywell/Synatra-7B-Instruct-v0.2 - openchat/openchat_3.5 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [maywell/Synatra-7B-Instruct-v0.2](https://huggingface.co./maywell/Synatra-7B-Instruct-v0.2) * [openchat/openchat_3.5](https://huggingface.co./openchat/openchat_3.5) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: openchat/openchat_3.5 layer_range: [0, 32] - model: maywell/Synatra-7B-Instruct-v0.2 layer_range: [0, 32] merge_method: slerp base_model: openchat/openchat_3.5 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
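A conceptual sketch of the SLERP interpolation applied per tensor (mergekit's real implementation differs in details); `t` is the interpolation factor that the YAML above varies across layers and parameter types.

```python
import torch

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a, b = v0.flatten().float(), v1.flatten().float()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel vectors: fall back to linear interpolation
        out = (1 - t) * a + t * b
    else:
        out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape).to(v0.dtype)
```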
apu20/Llama3-2_3B_dora
apu20
"2024-12-24T08:12:52Z"
75
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "text-generation-inference", "conversational", "en", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-12-08T11:42:10Z"
---
library_name: transformers
tags:
- trl
- sft
- text-generation-inference
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains.
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bssrdf/PhotoMaker
bssrdf
"2024-03-12T22:30:53Z"
0
4
null
[ "license:apache-2.0", "region:us" ]
null
"2024-02-24T15:04:47Z"
---
license: apache-2.0
---

This is the .safetensors version of the PhotoMaker model. It is mainly used by stable-diffusion.cpp, which cannot read the original .bin format.

Three tensor names are changed to better conform with the naming conventions used in SD models:

- vision_model.pre_layrnorm.bias -> vision_model.pre_layernorm.bias
- vision_model.pre_layrnorm.weight -> vision_model.pre_layernorm.weight
- visual_projection.weight -> vision_model.visual_projection.weight
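The renaming itself could be reproduced with something like the sketch below, assuming the .bin checkpoint loads as a flat dict of tensors; the file names are hypothetical.

```python
import torch
from safetensors.torch import save_file

state = torch.load("photomaker-v1.bin", map_location="cpu")  # assumed flat state dict

rename = {
    "vision_model.pre_layrnorm.bias": "vision_model.pre_layernorm.bias",
    "vision_model.pre_layrnorm.weight": "vision_model.pre_layernorm.weight",
    "visual_projection.weight": "vision_model.visual_projection.weight",
}
# Rename the three keys listed above and keep everything else unchanged.
state = {rename.get(k, k): v.contiguous() for k, v in state.items()}
save_file(state, "photomaker-v1.safetensors")
```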
qiqiquq/llama_checkpoint-1700
qiqiquq
"2023-12-03T10:09:45Z"
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
"2023-12-03T10:09:39Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.3.dev0
PrunaAI/DeepMount00-Mistral-RAG-AWQ-4bit-smashed
PrunaAI
"2024-07-16T00:39:28Z"
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pruna-ai", "base_model:DeepMount00/Mistral-RAG", "base_model:quantized:DeepMount00/Mistral-RAG", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
"2024-07-16T00:37:27Z"
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: DeepMount00/Mistral-RAG
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo DeepMount00/Mistral-RAG are installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM

model = AutoAWQForCausalLM.from_quantized("PrunaAI/DeepMount00-Mistral-RAG-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Mistral-RAG")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model DeepMount00/Mistral-RAG, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
Word2vec/nlpl_7
Word2vec
"2023-07-04T11:45:15Z"
0
0
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_February_2017", "license:cc-by-4.0", "region:us" ]
null
"2023-07-04T10:02:23Z"
---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---

## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 273930 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`. The model is trained with the following properties: lemmatization and POS tagging, using the Global Vectors algorithm with a window of 5 and a dimension of 300.

## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_7", filename="model.bin"), binary=True, unicode_errors="ignore")
```

## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7

This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/7.zip
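Once loaded, the vectors can be queried as usual. Note that NLPL models trained with lemmatization and POS tagging typically expect `lemma_POS` tokens (e.g. `house_NOUN`); that token format is an assumption here, so check `meta.json` for this model's exact convention.

```python
# Continues the snippet above; the lemma_POS token format is assumed, not confirmed.
print(model.most_similar("house_NOUN", topn=5))
print(model.similarity("house_NOUN", "building_NOUN"))
```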
rdk31/Mixtral-8x7B-Instruct-v0.1-polish
rdk31
"2024-01-10T19:17:44Z"
13
1
transformers
[ "transformers", "pytorch", "mixtral", "text-generation", "conversational", "pl", "dataset:s3nh/alpaca-dolly-instruction-only-polish", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-09T15:36:32Z"
---
language:
- pl
datasets:
- s3nh/alpaca-dolly-instruction-only-polish
inference: false
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.

## Instruction format

This format must be strictly respected, otherwise the model will generate sub-optimal outputs.

The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.

As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```

In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

By default, transformers will load the model in full precision.
Therefore you might be interested in further reducing the memory requirements to run the model through the optimizations we offer in the HF ecosystem:

### In half-precision

Note `float16` precision only works on GPU devices

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Lower precision (8-bit & 4-bit) using `bitsandbytes`

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Load the model with Flash Attention 2

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

## Limitations

The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
kurianbenoy/distilhubert-finetuned-gtzan
kurianbenoy
"2023-07-17T04:43:31Z"
157
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
"2023-07-16T18:18:56Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: hfa-lesson4-distilhubert-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hfa-lesson4-distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co./ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.7019 - Accuracy: 0.8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7738 | 1.0 | 113 | 1.7950 | 0.45 | | 1.1918 | 2.0 | 226 | 1.2705 | 0.62 | | 0.9964 | 3.0 | 339 | 0.9541 | 0.7 | | 0.7058 | 4.0 | 452 | 0.8305 | 0.78 | | 0.504 | 5.0 | 565 | 0.7315 | 0.83 | | 0.2906 | 6.0 | 678 | 0.6112 | 0.85 | | 0.1824 | 7.0 | 791 | 0.6472 | 0.81 | | 0.2412 | 8.0 | 904 | 0.6915 | 0.81 | | 0.1369 | 9.0 | 1017 | 0.7101 | 0.82 | | 0.32 | 10.0 | 1130 | 0.7019 | 0.8 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
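A minimal inference sketch, assuming this repo id (the card's internal name differs) and a local audio clip:

```python
from transformers import pipeline

# Loads the fine-tuned checkpoint and classifies an audio file into a GTZAN genre.
classifier = pipeline("audio-classification", model="kurianbenoy/distilhubert-finetuned-gtzan")
print(classifier("my_song.wav"))  # returns the top genre labels with scores
```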
SHENMU007/neunit_BASE_V10.13
SHENMU007
"2023-06-29T16:12:14Z"
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "1.1.0", "generated_from_trainer", "zh", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
"2023-06-29T13:10:58Z"
--- language: - zh license: mit tags: - 1.1.0 - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: SpeechT5 TTS Dutch neunit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Dutch neunit This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co./microsoft/speecht5_tts) on the VoxPopuli dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
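A minimal synthesis sketch, assuming the repo ships the full SpeechT5 processor files; the random speaker embedding is a placeholder only (real use needs a 512-dim x-vector), and the HiFi-GAN vocoder id is the standard companion checkpoint rather than anything stated in this card.

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "SHENMU007/neunit_BASE_V10.13"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好,世界", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder; use a real x-vector in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```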
Broccaloo/musika-s3rl-happy-hardcore
Broccaloo
"2022-10-28T17:54:49Z"
0
1
null
[ "audio", "music", "generation", "tensorflow", "arxiv:2208.08706", "license:mit", "region:us" ]
null
"2022-10-28T17:53:57Z"
--- license: mit tags: - audio - music - generation - tensorflow --- # Musika Model: musika_s3rl_happy_hardcore ## Model provided by: Broccaloo Pretrained musika_s3rl_happy_hardcore model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation. Introduced in [this paper](https://arxiv.org/abs/2208.08706). ## How to use You can generate music from this pretrained musika_s3rl_happy_hardcore model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r). ### Model description This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio. The generator has a context window of about 12 seconds of audio.
ngocquangt2k46/62ba96d0-4158-4eda-9230-adf88ff6bc37
ngocquangt2k46
"2025-01-07T16:07:31Z"
16
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:llamafactory/tiny-random-Llama-3", "base_model:adapter:llamafactory/tiny-random-Llama-3", "license:apache-2.0", "region:us" ]
null
"2025-01-07T15:42:14Z"
--- library_name: peft license: apache-2.0 base_model: llamafactory/tiny-random-Llama-3 tags: - axolotl - generated_from_trainer model-index: - name: 62ba96d0-4158-4eda-9230-adf88ff6bc37 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: llamafactory/tiny-random-Llama-3 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - d463b9266cbcf8bd_train_data.json ds_type: json format: custom path: /workspace/input_data/d463b9266cbcf8bd_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 32 gradient_checkpointing: false group_by_length: false hub_model_id: ngocquangt2k46/62ba96d0-4158-4eda-9230-adf88ff6bc37 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_memory: 0: 130GiB 1: 130GiB max_steps: 20 micro_batch_size: 2 mlflow_experiment_name: /tmp/d463b9266cbcf8bd_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true quantization_config: llm_int8_enable_fp32_cpu_offload: false load_in_8bit: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 4056 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 62ba96d0-4158-4eda-9230-adf88ff6bc37 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 62ba96d0-4158-4eda-9230-adf88ff6bc37 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 62ba96d0-4158-4eda-9230-adf88ff6bc37 This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co./llamafactory/tiny-random-Llama-3) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 11.7636 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 11.7645 | 0.0003 | 1 | 11.7645 | | 11.7649 | 0.0016 | 5 | 11.7644 | | 11.7641 | 0.0033 | 10 | 11.7641 | | 11.763 | 0.0049 | 15 | 11.7637 | | 11.7639 | 0.0065 | 20 | 11.7636 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ncats/EpiExtract4GARD-v1
ncats
"2022-01-31T17:03:33Z"
21
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
## Model description
**EpiExtract4GARD** is a fine-tuned [BioBERT-base-cased](https://huggingface.co./dmis-lab/biobert-base-cased-v1.1) model that is ready to use for **Named Entity Recognition** of locations (LOC), epidemiologic types (EPI), and epidemiologic rates (STAT). This model was fine-tuned on [EpiSet4NER](https://huggingface.co./datasets/ncats/EpiSet4NER) for epidemiological information from rare disease abstracts. See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. See [EpiExtract4GARD on GitHub](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard) for details on the entire pipeline.

#### How to use
You can use this model with the Hosted inference API to the right with this [test sentence](https://pubmed.ncbi.nlm.nih.gov/21659675/): "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."

See code below for use with Transformers *pipeline* for NER:

~~~
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ncats/EpiExtract4GARD")
tokenizer = AutoTokenizer.from_pretrained("ncats/EpiExtract4GARD")

NER_pipeline = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy='simple')

sample = "The live-birth prevalence of mucopolysaccharidoses in Estonia. Previous studies on the prevalence of mucopolysaccharidoses (MPS) in different populations have shown considerable variations. There are, however, few data with regard to the prevalence of MPSs in Fenno-Ugric populations or in north-eastern Europe, except for a report about Scandinavian countries. A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births), forming 53% of all diagnosed MPS cases, and was twice as high as in other studied European populations. The second most common subtype was MPS IIIA, with a live-birth prevalence of 1.62 in 100,000 live births. With 0.27 out of 100,000 live births, MPS VI had the third-highest live-birth prevalence. No cases of MPS I were diagnosed in Estonia, making the prevalence of MPS I in Estonia much lower than in other European populations. MPSs are the third most frequent inborn error of metabolism in Estonia after phenylketonuria and galactosemia."

sample2 = "Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Kuwait is a small Arabian Gulf country with a high rate of consanguinity and where a national newborn screening program was expanded in October 2014 to include a wide range of endocrine and metabolic disorders. A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence. Molecular testing for five of them has revealed three previously reported pathogenic variants in the <i>CBS</i> gene, c.969G>A, p.(Trp323Ter); c.982G>A, p.(Asp328Asn); and the Qatari founder variant c.1006C>T, p.(Arg336Cys). This is the first study to review the screening of newborns in Kuwait for classic homocystinuria, starting with the detection of elevated blood methionine and providing a follow-up strategy for positive results, including plasma total homocysteine and amino acid analyses. Further, we have demonstrated an increase in the specificity of the current newborn screening test for classic homocystinuria by including the methionine to phenylalanine ratio along with the elevated methionine blood levels in first-tier testing. Here, we provide evidence that the newborn screening in Kuwait has led to the early detection of classic homocystinuria cases and enabled the affected individuals to lead active and productive lives."

#Sample 1 is from: Krabbi K, Joost K, Zordania R, Talvik I, Rein R, Huijmans JG, Verheijen FV, Õunap K. The live-birth prevalence of mucopolysaccharidoses in Estonia. Genet Test Mol Biomarkers. 2012 Aug;16(8):846-9. doi: 10.1089/gtmb.2011.0307. Epub 2012 Apr 5. PMID: 22480138; PMCID: PMC3422553.
#Sample 2 is from: Alsharhan H, Ahmed AA, Ali NM, Alahmad A, Albash B, Elshafie RM, Alkanderi S, Elkazzaz UM, Cyril PX, Abdelrahman RM, Elmonairy AA, Ibrahim SM, Elfeky YME, Sadik DI, Al-Enezi SD, Salloum AM, Girish Y, Al-Ali M, Ramadan DG, Alsafi R, Al-Rushood M, Bastaki L. Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Int J Neonatal Screen. 2021 Aug 17;7(3):56. doi: 10.3390/ijns7030056. PMID: 34449519; PMCID: PMC8395821.

NER_pipeline(sample)
NER_pipeline(sample2)
~~~

Or if you download [*classify_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/classify_abs.py), [*extract_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/extract_abs.py), and [*gard-id-name-synonyms.json*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/gard-id-name-synonyms.json) from GitHub then you can test with this [*additional* code](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/Case%20Study.ipynb):

~~~
import pandas as pd
import extract_abs
import classify_abs
pd.set_option('display.max_colwidth', None)

NER_pipeline = extract_abs.init_NER_pipeline()
GARD_dict, max_length = extract_abs.load_GARD_diseases()
nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer = classify_abs.init_classify_model()

def search(term, num_results=50):
    return extract_abs.search_term_extraction(term, num_results, NER_pipeline, GARD_dict, max_length, nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer)

a = search(7058)
a

b = search('Santos Mateus Leal syndrome')
b

c = search('Fellman syndrome')
c

d = search('GARD:0009941')
d

e = search('Homocystinuria')
e
~~~

#### Limitations and bias

## Training data

It was trained on [EpiSet4NER](https://huggingface.co./datasets/ncats/EpiSet4NER). See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins.
As in the dataset, each token will be classified as one of the following classes: Abbreviation|Description ---------|-------------- O |Outside of a named entity B-LOC | Beginning of a location I-LOC | Inside of a location B-EPI | Beginning of an epidemiologic type (e.g. "incidence", "prevalence", "occurrence") I-EPI | Epidemiologic type that is not the beginning token. B-STAT | Beginning of an epidemiologic rate I-STAT | Inside of an epidemiologic rate ### EpiSet Statistics Beyond any limitations due to the EpiSet4NER dataset, this model is limited in numeracy due to BERT-based model's use of subword embeddings, which is crucial for epidemiologic rate identification and limits the entity-level results. Additionally, more recent weakly supervised learning techniques could be used to improve the performance of the model without improving the underlying dataset. ## Training procedure This model was trained on a [AWS EC2 p3.2xlarge](https://aws.amazon.com/ec2/instance-types/), which utilized a single Tesla V100 GPU, with these hyperparameters: 4 epochs of training (AdamW weight decay = 0.05) with a batch size of 16. Maximum sequence length = 192. Model was fed one sentence at a time. Full config [here](https://wandb.ai/wzkariampuzha/huggingface/runs/353prhts/files/config.yaml). ## Hold-out validation results metric| entity-level result -|- f1 | 83.8 precision | 83.2 recall | 84.5 ## Test results | Dataset for Model Training | Evaluation Level | Entity | Precision | Recall | F1 | |:--------------------------:|:----------------:|:------------------:|:---------:|:------:|:-----:| | EpiSet | Entity-Level | Overall | 0.556 | 0.662 | 0.605 | | | | Location | 0.661 | 0.696 | 0.678 | | | | Epidemiologic Type | 0.854 | 0.911 | 0.882 | | | | Epidemiologic Rate | 0.143 | 0.218 | 0.173 | | | Token-Level | Overall | 0.811 | 0.713 | 0.759 | | | | Location | 0.949 | 0.742 | 0.833 | | | | Epidemiologic Type | 0.9 | 0.917 | 0.908 | | | | Epidemiologic Rate | 0.724 | 0.636 | 0.677 | Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at Axle Informatics/NCATS for contributing this model.
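#### Post-processing example

The aggregation strategy above already merges subword tokens into whole entities labeled with the classes in the table. As an illustration (this helper is not part of the original pipeline, and the exact spans and scores returned depend on the model), the output can be grouped by entity type:

~~~
from collections import defaultdict

def group_entities(ner_results):
    """Collect pipeline output into {entity type: [mention, ...]} for LOC/EPI/STAT."""
    grouped = defaultdict(list)
    for ent in ner_results:
        grouped[ent["entity_group"]].append(ent["word"])
    return dict(grouped)

results = NER_pipeline("27 patients have been diagnosed with PKU in Iceland since 1947. "
                       "Incidence 1972-2008 is 1/8400 living births.")
print(group_entities(results))
# Illustrative shape only: {'LOC': ['Iceland'], 'EPI': ['Incidence'], 'STAT': ['1/8400 living births']}
~~~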
nvidia/segformer-b4-finetuned-ade-512-512
nvidia
"2022-08-06T10:25:42Z"
9,842
1
transformers
[ "transformers", "pytorch", "tf", "segformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2105.15203", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
"2022-03-02T23:29:05Z"
---
license: other
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co./datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
  example_title: House
- src: https://huggingface.co./datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
  example_title: Castle
---

# SegFormer (b4-sized) model fine-tuned on ADE20k

SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).

Disclaimer: The team releasing SegFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co./models?other=segformer) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to segment an image from the COCO 2017 dataset into the 150 ADE20k scene-parsing classes:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```

For more code examples, we refer to the [documentation](https://huggingface.co./transformers/model_doc/segformer.html#).

### License

The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author    = {Enze Xie and
               Wenhai Wang and
               Zhiding Yu and
               Anima Anandkumar and
               Jose M. Alvarez and
               Ping Luo},
  title     = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
               Transformers},
  journal   = {CoRR},
  volume    = {abs/2105.15203},
  year      = {2021},
  url       = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint    = {2105.15203},
  timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
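### Post-processing (sketch)

The logits above come out at 1/4 of the input resolution. A common follow-up, sketched here rather than taken from the original card, is to upsample them to the image size and take the per-pixel argmax:

```python
import torch

# Upsample logits to the original image size, then take the per-pixel argmax
# to obtain an ADE20k class index for every pixel. Variable names follow the
# snippet above.
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL gives (width, height); interpolate wants (height, width)
    mode="bilinear",
    align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]  # (height, width) segmentation map
```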
kostiantynk/9019326c-5374-46b6-bddc-776db0fb373b
kostiantynk
"2025-01-31T06:20:43Z"
5
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:adapter:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "region:us" ]
null
"2025-01-31T06:17:30Z"
--- library_name: peft license: apache-2.0 base_model: berkeley-nest/Starling-LM-7B-alpha tags: - axolotl - generated_from_trainer model-index: - name: 9019326c-5374-46b6-bddc-776db0fb373b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: berkeley-nest/Starling-LM-7B-alpha bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - dffa8fc58ce66dc6_train_data.json ds_type: json format: custom path: /workspace/input_data/dffa8fc58ce66dc6_train_data.json type: field_instruction: title field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk/9019326c-5374-46b6-bddc-776db0fb373b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/dffa8fc58ce66dc6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a wandb_project: Birthday-SN56-7-Gradients-On-Demand wandb_run: your_name wandb_runid: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9019326c-5374-46b6-bddc-776db0fb373b This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co./berkeley-nest/Starling-LM-7B-alpha) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0005 | 1    | nan             |
| 163.2988      | 0.0063 | 13   | nan             |
| 241.4237      | 0.0126 | 26   | nan             |
| 266.2712      | 0.0190 | 39   | nan             |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
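### Loading the adapter (sketch)

This repo holds a LoRA adapter rather than full model weights. A minimal loading sketch, assuming the PEFT/Transformers versions listed above (note the `nan` losses in the table, so the adapter may not produce useful output):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
model = PeftModel.from_pretrained(base, "kostiantynk/9019326c-5374-46b6-bddc-776db0fb373b")
tokenizer = AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```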
qualcomm/Mistral-7B-Instruct-v0.3
qualcomm
"2025-02-28T22:53:34Z"
0
0
pytorch
[ "pytorch", "llm", "generative_ai", "quantized", "android", "text-generation", "arxiv:2310.06825", "license:apache-2.0", "region:us" ]
text-generation
"2024-10-21T18:56:31Z"
---
library_name: pytorch
license: apache-2.0
tags:
- llm
- generative_ai
- quantized
- android
pipeline_tag: text-generation

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mistral_7b_instruct_v0_3_quantized/web-assets/model_demo.png)

# Mistral-7B-Instruct-v0.3: Optimized for Mobile Deployment
## State-of-the-art large language model useful on a variety of language understanding and generation tasks

Mistral AI's first open-source dense model, released September 2023. The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3. It has an extended vocabulary and supports the v3 Tokenizer, enhancing language understanding and generation. Additionally, function calling is enabled.

This model is an implementation of Mistral-7B-Instruct-v0.3 found [here](https://github.com/mistralai/mistral-inference).

More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/mistral_7b_instruct_v0_3_quantized).

### Model Details

- **Model Type:** Text generation
- **Model Stats:**
  - Input sequence length for Prompt Processor: 128
  - Context length: 4096
  - Number of parameters: 7.3B
  - Precision: w4a16 + w8a16 (few layers)
  - Num of key-value heads: 8
  - Information about the model parts: Prompt Processor and Token Generator are split into 4 parts each. Each corresponding Prompt Processor and Token Generator part shares weights.
  - Prompt processor model size: 4.17 GB
  - Prompt processor input: 128 tokens + KVCache initialized with pad token
  - Prompt processor output: 128 output tokens + KVCache for token generator
  - Token generator model size: 4.17 GB
  - Token generator input: 1 input token + past KVCache
  - Token generator output: 1 output token + KVCache for next iteration
  - Use: Initiate conversation with the prompt processor, then use the token generator for subsequent iterations.
  - Minimum QNN SDK version required: 2.27.7
  - Supported languages: English.
  - TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
  - Response Rate: Rate of response generation after the first response token. For example, at the measured rate below (12.56 tokens/s), generating 256 tokens after a short prompt takes roughly 0.17 s + 256/12.56 ≈ 20.5 s end to end.

| Model | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) | | |
|---|---|---|---|---|---|---|---|
| Mistral-7B-Instruct-v0.3 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 12.56 | 0.16565 - 5.3008 | -- | Use Export Script |

## Deploying Mistral 7B Instruct v0.3 on-device

Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.

## License

* The license for the original implementation of Mistral-7B-Instruct-v0.3 can be found [here](https://github.com/mistralai/mistral-inference/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/mistralai/mistral-inference/blob/main/LICENSE).

## References

* [Mistral 7B](https://arxiv.org/abs/2310.06825)
* [Source Model Implementation](https://github.com/mistralai/mistral-inference)

## Community

* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).

## Usage and Limitations

The model may not be used for or in connection with any of the following applications:

- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation
LaLegumbreArtificial/NEO_MUL_EXP2_1
LaLegumbreArtificial
"2025-02-13T17:51:53Z"
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "base_model:microsoft/beit-base-patch16-224-pt22k-ft22k", "base_model:finetune:microsoft/beit-base-patch16-224-pt22k-ft22k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-12-05T20:40:15Z"
--- library_name: transformers license: apache-2.0 base_model: microsoft/beit-base-patch16-224-pt22k-ft22k tags: - generated_from_trainer metrics: - accuracy model-index: - name: NEO_MUL_EXP2_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP2_1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co./microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0441 - Accuracy: 0.9833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1651 | 0.9886 | 65 | 0.2185 | 0.9233 | | 0.1203 | 1.9924 | 131 | 0.1108 | 0.9583 | | 0.0871 | 2.9962 | 197 | 0.0879 | 0.9692 | | 0.0738 | 4.0 | 263 | 0.0665 | 0.9742 | | 0.0614 | 4.9430 | 325 | 0.0441 | 0.9833 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
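### Usage sketch

The card does not include an inference example. Here is a minimal sketch (the image path is a placeholder, and the label names are whatever the checkpoint's config defines, since the fine-tuning dataset is unspecified):

```python
from transformers import pipeline

# Image-classification pipeline around the fine-tuned BEiT checkpoint.
classifier = pipeline("image-classification", model="LaLegumbreArtificial/NEO_MUL_EXP2_1")

predictions = classifier("path/to/image.png")  # hypothetical local image path
print(predictions)  # e.g. [{'label': ..., 'score': ...}, ...]
```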
Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF
Melvin56
"2025-01-11T18:30:09Z"
33
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3", "base_model:quantized:huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
"2025-01-11T18:29:47Z"
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co./huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---

# Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF

This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3`](https://huggingface.co./huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co./spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co./huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-iq4_xs-imat.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Melvin56/Qwen2.5-7B-Instruct-abliterated-v3-IQ4_XS-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-iq4_xs-imat.gguf -c 2048
```
sfairXC/llama-3.1-sft-1ep
sfairXC
"2024-09-18T04:42:27Z"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-18T04:36:43Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
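## Quick Start (sketch)

Until the sections above are filled in, a generic loading sketch for a causal-LM checkpoint under this repo id would be as follows; nothing here is model-specific, and the chat template and intended usage are undocumented:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic causal-LM loading; intended usage and chat format are not yet documented.
tokenizer = AutoTokenizer.from_pretrained("sfairXC/llama-3.1-sft-1ep")
model = AutoModelForCausalLM.from_pretrained("sfairXC/llama-3.1-sft-1ep")

inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```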
mradermacher/gembode-2b-it-ultraalpaca-GGUF
mradermacher
"2025-03-08T03:51:13Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:recogna-nlp/gembode-2b-it-ultraalpaca", "base_model:quantized:recogna-nlp/gembode-2b-it-ultraalpaca", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-08T03:34:01Z"
--- base_model: recogna-nlp/gembode-2b-it-ultraalpaca language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co./recogna-nlp/gembode-2b-it-ultraalpaca <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q2_K.gguf) | Q2_K | 1.3 | | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q3_K_S.gguf) | Q3_K_S | 1.4 | | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q3_K_L.gguf) | Q3_K_L | 1.6 | | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.IQ4_XS.gguf) | IQ4_XS | 1.6 | | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q5_K_S.gguf) | Q5_K_S | 1.9 | | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q5_K_M.gguf) | Q5_K_M | 1.9 | | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q6_K.gguf) | Q6_K | 2.2 | very good quality | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality | | [GGUF](https://huggingface.co./mradermacher/gembode-2b-it-ultraalpaca-GGUF/resolve/main/gembode-2b-it-ultraalpaca.f16.gguf) | f16 | 5.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co./mradermacher/model_requests for some answers to questions you 
might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
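## Quick Start (sketch)

Once a file such as `gembode-2b-it-ultraalpaca.Q4_K_M.gguf` is downloaded, it runs directly in llama.cpp. A one-line sketch, not from the original card (the CLI binary is `llama-cli` in current llama.cpp builds, `main` in older ones):

```bash
# Generate 128 tokens from the recommended Q4_K_M quant.
./llama-cli -m gembode-2b-it-ultraalpaca.Q4_K_M.gguf -p "Olá, tudo bem?" -n 128
```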
Primeness/primeh4v12a6c2
Primeness
"2025-01-31T22:38:35Z"
26
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-31T22:06:15Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
huggingtweets/flatironschool
huggingtweets
"2021-05-22T04:20:52Z"
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
---
language: en
thumbnail: https://www.huggingtweets.com/flatironschool/1603341000640/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">

<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>

<section class='prose'>

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278450406843125762/f5u_F2ng_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Flatiron School (at 🏡) 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@flatironschool bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@flatironschool's tweets](https://twitter.com/flatironschool).

<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3202</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1068</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>582</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1552</td>
</tr>
</tbody>
</table>

[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/179qzrny/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co./gpt2) which is fine-tuned on @flatironschool's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/174rjbb8) for full transparency and reproducibility.

At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/174rjbb8/artifacts) is logged and versioned.
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/flatironschool'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co./gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets) <!--- random size file -->
scottn66/text-summarization
scottn66
"2023-03-29T21:05:29Z"
104
1
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:billsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-03-18T03:19:13Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: text-summarization results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum config: default split: ca_test args: default metrics: - name: Rouge1 type: rouge value: 0.1405 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text-summarization This model is a fine-tuned version of [t5-small](https://huggingface.co./t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4284 - Rouge1: 0.1405 - Rouge2: 0.0517 - Rougel: 0.1158 - Rougelsum: 0.1157 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.7231 | 0.1246 | 0.0356 | 0.1039 | 0.1039 | 19.0 | | No log | 2.0 | 124 | 2.5099 | 0.1335 | 0.0463 | 0.1116 | 0.1116 | 19.0 | | No log | 3.0 | 186 | 2.4451 | 0.1383 | 0.0509 | 0.114 | 0.114 | 19.0 | | No log | 4.0 | 248 | 2.4284 | 0.1405 | 0.0517 | 0.1158 | 0.1157 | 19.0 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.1+cu116 - Datasets 2.11.0 - Tokenizers 0.13.2
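### Usage sketch

The card stops at training metrics; a minimal inference sketch (not part of the original card — the bill text below is a placeholder, and t5-small's config usually carries the `summarize:` task prefix, which the pipeline applies automatically):

```python
from transformers import pipeline

# Summarization pipeline around the fine-tuned t5-small checkpoint.
summarizer = pipeline("summarization", model="scottn66/text-summarization")

bill_text = "The people of the State of California do enact as follows: ..."  # placeholder excerpt
print(summarizer(bill_text, max_length=60, min_length=20)[0]["summary_text"])
```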
mys/ggml_llava-v1.5-13b
mys
"2023-10-10T10:20:06Z"
1,078
53
null
[ "gguf", "llava", "lmm", "ggml", "llama.cpp", "endpoints_compatible", "region:us" ]
null
"2023-10-10T10:04:00Z"
---
tags:
- llava
- lmm
- ggml
- llama.cpp
---

# ggml_llava-v1.5-13b

This repo contains GGUF files for running inference on [llava-v1.5-13b](https://huggingface.co./liuhaotian/llava-v1.5-13b) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end, without any extra dependency.

**Note**: The `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.
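For orientation (this invocation is not in the original card, and the multimodal example binary has been renamed across llama.cpp versions — `llava-cli`, later `llama-llava-cli`), a typical run with a quantized weight file from this repo (e.g. `ggml-model-q5_k.gguf`) looks like:

```bash
# Describe an image using the 13B LLaVA weights plus the vision projector file.
./llava-cli -m ggml-model-q5_k.gguf \
    --mmproj mmproj-model-f16.gguf \
    --image path/to/image.jpg \
    -p "Describe this image in detail."
```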