# DeQA-Score-LoRA-Mix3
Model weights of DeQA-Score ( project page / codes / paper ), LoRA fine-tuned on the KonIQ, SPAQ, and KADID datasets.
This work is part of our DepictQA project.
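To use these weights, the LoRA adapter must be attached to the mPLUG-Owl2 base model listed at the end of this card. The official DeQA-Score repository ships its own loading scripts; the snippet below is only a minimal PEFT-style sketch, assuming the mPLUG-Owl2 custom architecture can be loaded with `trust_remote_code=True` (the two repo IDs are taken from this card, everything else is an assumption):

```python
# Minimal PEFT-style sketch, NOT the official DeQA-Score loader.
# Assumes the mPLUG-Owl2 custom architecture is loadable via trust_remote_code.
from transformers import AutoModelForCausalLM
from peft import PeftModel

BASE_ID = "MAGAer13/mplug-owl2-llama2-7b"       # base model (see "Base model" below)
ADAPTER_ID = "zhiyuanyou/DeQA-Score-LoRA-Mix3"  # this LoRA adapter

base = AutoModelForCausalLM.from_pretrained(BASE_ID, trust_remote_code=True)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA weights
model.eval()
```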
## No-Reference IQA Results (PLCC / SRCC)
| Method | Fine-tune | KonIQ | SPAQ | KADID | PIPAL | LIVE-Wild | AGIQA | TID2013 | CSIQ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Q-Align (Baseline) | Fully | 0.945 / 0.938 | 0.933 / 0.931 | 0.935 / 0.934 | 0.409 / 0.420 | 0.887 / 0.883 | 0.788 / 0.733 | 0.829 / 0.808 | 0.876 / 0.845 |
| DeQA-Score (Ours) | LoRA | 0.956 / 0.944 | 0.939 / 0.935 | 0.953 / 0.951 | 0.481 / 0.481 | 0.903 / 0.890 | 0.806 / 0.754 | 0.851 / 0.821 | 0.900 / 0.860 |
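PLCC (Pearson linear correlation coefficient) and SRCC (Spearman rank-order correlation coefficient) measure the agreement between predicted quality scores and human mean opinion scores (MOS). A minimal sketch of how these two metrics are computed, using hypothetical `pred` and `mos` arrays:

```python
# Minimal sketch of the two reported metrics; pred/mos values are hypothetical.
import numpy as np
from scipy.stats import pearsonr, spearmanr

pred = np.array([3.2, 4.1, 2.5, 4.8, 1.9])  # hypothetical predicted quality scores
mos  = np.array([3.0, 4.3, 2.2, 4.9, 2.1])  # hypothetical ground-truth MOS values

plcc, _ = pearsonr(pred, mos)   # PLCC: linear correlation
srcc, _ = spearmanr(pred, mos)  # SRCC: rank (monotonic) correlation
print(f"PLCC = {plcc:.3f}, SRCC = {srcc:.3f}")
```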
If you find our work useful for your research and applications, please cite it using this BibTeX entry:

    @article{deqa_score,
      title={Teaching Large Language Models to Regress Accurate Image Quality Scores using Score Distribution},
      author={You, Zhiyuan and Cai, Xin and Gu, Jinjin and Xue, Tianfan and Dong, Chao},
      journal={arXiv preprint arXiv:2501.11561},
      year={2025},
    }
Base model: MAGAer13/mplug-owl2-llama2-7b