SetFit with mini1013/master_domain

This is a SetFit model for text classification. It uses mini1013/master_domain as the Sentence Transformer embedding model, with a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer (both phases are sketched in code below).
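
The two phases map onto the setfit training API roughly as follows. This is a minimal sketch, not this model's actual training script: the texts and labels are hypothetical placeholders for the Korean product titles used in training.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Phase 1 starts from the Sentence Transformer embedding body
model = SetFitModel.from_pretrained("mini1013/master_domain")

# Hypothetical few-shot data; the real training set uses Korean product titles
train_ds = Dataset.from_dict({
    "text": ["perfume listing", "diffuser listing", "incense holder listing"],
    "label": [1.0, 0.0, 2.0],
})

# Paired values apply to the (embedding phase, classifier phase)
args = TrainingArguments(num_epochs=(50, 50))
trainer = Trainer(model=model, args=args, train_dataset=train_ds)

# train() runs both phases: contrastive fine-tuning of the body,
# then fitting the LogisticRegression head on the tuned embeddings
trainer.train()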

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: mini1013/master_domain
  • Classification head: a LogisticRegression instance
  • Number of Classes: 3 (0.0, 1.0, 2.0)
  • Base model: klue/roberta-base
  • Model size: 111M parameters (F32, safetensors)

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label 1.0
  • '로얄워터 블랑쉬 코튼 비누향 베이비파우더 살냄새 수제 승무원 엑스트레 드 퍼퓸 30ml 24. 블루밍 (판매 1위) 주식회사 로얄워터'
  • '블루 드 샤넬 빠르펭 50ML 옵션없음 플로라 무역'
  • '딥티크 뗌포 오드 퍼퓸 75ml 옵션없음 대박컴퍼니'
Label 0.0
  • '쿨티 - 스틸레 룸 디퓨저 - 린파 500ml/16.9oz 스트로베리넷 (홍콩)'
  • '소소모소 디퓨저리필 500ml_코튼브리즈 _salestrNo:2439_지점명:emartNE.O.001 (주)리빙탑스/해당사항 없음'
  • '디퓨저 섬유 리드스틱 화이트 50개입 디퓨저 섬유 옵션없음 '
Label 2.0
  • '인센스 스틱 홀더 접시형 그린 (WC9C73F) 본상품선택 기타/해당사항 없음'
  • '인센스홀더향 향꽂이 홀더 물방울 인테리어 인센스 (WD2F3FF) 본상품선택 기타/해당사항 없음'
  • '인센스 홀더 미니화병 황동 향 피우기 나그참파 꽂이 (WBC1E2F) 본상품선택 기타/해당사항 없음'

Evaluation

Metrics

Label   Accuracy
all     0.9578
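
A metric like this can be reproduced with setfit's evaluation API. The sketch below is illustrative only: test_ds is an assumed labeled split (its single example is borrowed from the label examples above), not this model's actual evaluation data.

from datasets import Dataset
from setfit import SetFitModel, Trainer

model = SetFitModel.from_pretrained("mini1013/master_cate_bt10_test")

# Assumed evaluation split with "text" and "label" columns
test_ds = Dataset.from_dict({
    "text": ["블루 드 샤넬 빠르펭 50ML 옵션없음 플로라 무역"],
    "label": [1.0],
})

trainer = Trainer(model=model, eval_dataset=test_ds, metric="accuracy")
print(trainer.evaluate())  # e.g. {'accuracy': ...}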

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_bt10_test")
# Run inference
preds = model("에르메스 떼르 데르메스EDT 50ml 옵션없음 주식회사 비엘컴퍼니")
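
model() also accepts a list of strings for batch inference, returning one label value (0.0, 1.0, or 2.0 for this model) per input. The two inputs below are taken from the label examples above:

# Batch inference; both inputs appear in the label examples above
preds = model([
    "딥티크 뗌포 오드 퍼퓸 75ml 옵션없음 대박컴퍼니",  # listed under label 1.0
    "디퓨저 섬유 리드스틱 화이트 50개입 디퓨저 섬유 옵션없음",  # listed under label 0.0
])
print(preds)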

Training Details

Training Set Metrics

Training set   Min   Median   Max
Word count     5     9.4127   18

Label   Training Sample Count
0.0     20
1.0     23
2.0     20
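
Statistics like these can be recomputed directly from the raw training split. A minimal sketch, where train_texts and train_labels are assumed stand-ins for the actual data (which is not included in this card):

import statistics
from collections import Counter

# Assumed stand-ins for the actual training split
train_texts = ["블루 드 샤넬 빠르펭 50ML 옵션없음 플로라 무역"]
train_labels = [1.0]

word_counts = [len(text.split()) for text in train_texts]
print(min(word_counts), statistics.median(word_counts), max(word_counts))
print(Counter(train_labels))  # training samples per label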

Training Hyperparameters

  • batch_size: (512, 512)
  • num_epochs: (50, 50)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 60
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
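
These fields correspond to setfit's TrainingArguments. A sketch of the same configuration in code (the imports shown are the standard sources of these symbols; distance_metric and margin only take effect with triplet-style losses):

from sentence_transformers.losses import BatchHardTripletLossDistanceFunction, CosineSimilarityLoss
from setfit import TrainingArguments

# The hyperparameters above as a TrainingArguments instance.
# Paired values apply to the (embedding phase, classifier phase).
args = TrainingArguments(
    batch_size=(512, 512),
    num_epochs=(50, 50),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=60,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)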

Training Results

Epoch   Step   Training Loss   Validation Loss
0.125   1      0.4915          -
6.25    50     0.1556          -
12.5    100    0.0             -
18.75   150    0.0             -
25.0    200    0.0             -
31.25   250    0.0             -
37.5    300    0.0             -
43.75   350    0.0             -
50.0    400    0.0             -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0
  • Sentence Transformers: 3.3.1
  • Transformers: 4.44.2
  • PyTorch: 2.2.0a0+81ea7a4
  • Datasets: 3.2.0
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
