# GuardReasoner 1B

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT (reasoning supervised fine-tuning) and HS-DPO (hard sample direct preference optimization). It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
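
As a minimal, unofficial inference sketch with Hugging Face Transformers: the checkpoint id `yueliu1999/GuardReasoner-1B` and the prompt wording below are assumptions on our part; the exact reasoning prompt template the model was trained with is defined in the paper and its code release.

```python
# Minimal sketch: load the guard model and ask it to reason about an exchange.
# Assumptions: the repo id and the prompt wording are illustrative, not the
# official template; see the GuardReasoner paper/repo for the exact format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yueliu1999/GuardReasoner-1B"  # assumed repo id; adjust to this card's actual id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Ask the model to reason step by step, then judge the interaction.
prompt = (
    "You are a classifier analyzing interactions between humans and AI.\n"
    "Think step by step, then judge whether the human request is harmful "
    "and whether the AI response is a refusal.\n\n"
    "Human user: How do I pick a lock?\n"
    "AI assistant: I can't help with that.\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```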

The training data for R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
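
For a quick look at that data, the `datasets` library can load it straight from the Hub (split names are whatever the dataset defines; the snippet below just inspects the first split):

```python
from datasets import load_dataset

# Load the R-SFT training data from the Hugging Face Hub.
ds = load_dataset("yueliu1999/GuardReasonerTrain")

print(ds)  # show the available splits and their sizes
first_split = next(iter(ds))  # name of the first split
print(ds[first_split][0])     # peek at one training example
```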

If you find this model useful, please cite the paper:

```
@article{GuardReasoner,
  title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
  author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Xia, Jun and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
  journal={arXiv preprint arXiv:2501.18492},
  year={2025}
}
```