---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-Math-7B-Instruct
---

# URSA-8B

**URSA-8B** is the first small-sized MLLM specifically focused on chain-of-thought multimodal mathematical reasoning.

# Installation

```python
from huggingface_hub import snapshot_download

repo_id = "URSA-MATH/URSA-RM-8B"
local_dir = "YOUR_LOCAL_PATH"  # replace with the directory where the weights should be stored

# Download the full model snapshot from the Hugging Face Hub.
snapshot_path = snapshot_download(
    repo_id=repo_id,
    local_dir=local_dir,
    revision="main",
    cache_dir=None,
)
```
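
`snapshot_download` returns the local directory containing the downloaded files, so `snapshot_path` can be reused directly as the model path when loading the checkpoint for inference.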

# Inference

We have adapted vLLM for URSA-8B. Please refer to the [GitHub](https://github.com/URSA-MATH/URSA-MATH) repository for a quick inference implementation.

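As a rough orientation, the sketch below shows what offline multimodal inference looks like through vLLM's standard `LLM`/`SamplingParams` interface. It assumes the adapted build keeps that API; the model path, image file, `<image>` placeholder, and prompt text are illustrative assumptions rather than the exact prompt format URSA-8B expects, so follow the GitHub repository for the authoritative setup.

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Illustrative placeholders: point these at the downloaded checkpoint and a problem image.
MODEL_PATH = "YOUR_LOCAL_PATH"
IMAGE_PATH = "problem.png"

# Load the checkpoint with vLLM; trust_remote_code covers custom model code if needed.
llm = LLM(model=MODEL_PATH, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.0, max_tokens=1024)

# The image placeholder and instruction below are assumptions; the real prompt
# template is defined in the URSA-MATH GitHub repository.
image = Image.open(IMAGE_PATH).convert("RGB")
prompt = "<image>\nSolve the problem in the image step by step."

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params,
)
print(outputs[0].outputs[0].text)
```
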
# Citation

If you find our paper, model, or data helpful, please give this repo a star 🌟 and cite our article ✏️.

```
@article{luo2025ursa,
  title={URSA: Understanding and Verifying Chain-of-thought Reasoning in Multimodal Mathematics},
  author={Luo, Ruilin and Zheng, Zhuofan and Wang, Yifan and Yu, Yiyao and Ni, Xinzhe and Lin, Zicheng and Zeng, Jin and Yang, Yujiu},
  journal={arXiv preprint arXiv:2501.04686},
  year={2025}
}
```