
gpt2_finetuned_recipe

This model is a fine-tuned version of gpt2 on the RecipeNLG dataset (https://github.com/Glorf/recipenlg/tree/main). It achieves the following results on the evaluation set:

  • Loss: 1.9634
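
Assuming this loss is the mean per-token cross-entropy reported by the Trainer, it corresponds to a validation perplexity of roughly exp(1.9634) ≈ 7.1:

import math
# Perplexity from mean cross-entropy loss (assumes the loss is per-token negative log-likelihood)
print(math.exp(1.9634))  # ≈ 7.12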

Model description

The model was fine-tuned in a Jupyter notebook on 10,000 recipes extracted from the RecipeNLG dataset.

Intended uses & limitations

The model is intended for personal and educational use.
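
The model can be loaded from the Hugging Face Hub for recipe generation. The snippet below is a minimal usage sketch; the prompt and sampling settings are illustrative and not taken from the original notebook.

from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
generator = pipeline("text-generation", model="JunF1122/gpt2_finetuned_recipe")

# Illustrative prompt; the model continues it as a recipe
prompt = "Chocolate chip cookies\n\nIngredients:"
result = generator(prompt, max_length=200, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])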

Training and evaluation data

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 2
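
A minimal sketch of how these settings map onto the Hugging Face Trainer API is given below. The dataset preparation is a placeholder assumption (the training notebook is not included in this card); only the hyperparameter values come from the list above, and the Adam betas and epsilon are the Trainer defaults, so they are not set explicitly.

from datasets import Dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Placeholder data; the actual run used 10,000 recipes from RecipeNLG
texts = ["Pancakes\n\nIngredients: flour, milk, eggs\n\nDirections: mix and fry."]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_dataset = Dataset.from_dict({"text": texts}).map(tokenize, batched=True, remove_columns=["text"])
eval_dataset = train_dataset  # placeholder; the actual evaluation split is not documented in this card

args = TrainingArguments(
    output_dir="gpt2_finetuned_recipe",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumption: eval loss reported once per epoch, matching the results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()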

Training results

Training Loss | Epoch | Step | Validation Loss
2.1534        | 1.0   | 2530 | 2.0349
1.9073        | 2.0   | 5060 | 1.9634

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cpu
  • Datasets 2.14.4
  • Tokenizers 0.11.0

Reference

@inproceedings{bien-etal-2020-recipenlg,
    title = "{R}ecipe{NLG}: A Cooking Recipes Dataset for Semi-Structured Text Generation",
    author = "Bie{\'n}, Micha{\l} and Gilski, Micha{\l} and Maciejewska, Martyna and Taisner, Wojciech and Wisniewski, Dawid and Lawrynowicz, Agnieszka",
    booktitle = "Proceedings of the 13th International Conference on Natural Language Generation",
    month = dec,
    year = "2020",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.inlg-1.4",
    pages = "22--28",
}
