Update README.md
README.md
---
license: mit
---

<hr>
<div align="center" style="line-height: 1;">
  <a href="https://goedel-lm.github.io/" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://img.shields.io/badge/%F0%9F%A4%96%20Homepage-Goedel-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/Goedel-LM" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20face-Goedel-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://github.com/Goedel-LM/Goedel-Prover/blob/main/LICENSE" style="margin: 2px;">
    <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="" style="margin: 2px;">
    <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<p align="center">
  <a href="#2-evaluation-results">Evaluation Results</a> |
  <a href="#3-model-downloads">Model Download</a> |
  <a href="#4-setup-environment">Setup Environment</a> |
  <a href="#5-quick-start">Quick Start</a> |
  <a href="#6-questions-and-bugs">Questions and Bugs</a> |
  <a href="#7-license">License</a> |
  <a href="#8-citation">Citation</a> |
  <a href="#9-contact">Contact</a>
</p>

<p align="center">
  <a href="https://goedel-lm.github.io/"><b>Paper Link</b>👁️</a>
</p>

# Goedel-Prover: Pushing the Limits of Automated Theorem Proving Through Large-Scale Data Synthesis

## 1. Introduction

Large language models (LLMs) have shown impressive capabilities in reasoning, particularly in solving mathematical problems. However, relying solely on natural language for reasoning poses challenges for verification, undermining trust in practical applications. Formal theorem proving offers a promising solution: training LLMs to generate proofs in formal languages that can be mechanically verified. A key obstacle in this domain is the scarcity of formal data. To address this challenge, we use language models to autoformalize a large collection of math problems from natural language into formal statements. We then train the prover through a standard iterative process that alternates between attempting to prove these statements and fine-tuning the prover on the newly found proofs. Our model outperforms the previous state-of-the-art (SOTA) whole-proof generation model, DeepSeek-Prover-V1.5-RL, by 7.6 percentage points on the miniF2F benchmark, achieving a Pass@32 score of 57.6%. Our model also solves 7 problems on the challenging PutnamBench at Pass@512, securing the 1st position on the leaderboard. Furthermore, we have cumulatively solved 29.7K problems in Lean Workbook, a significant increase over the 15.7K proofs found by existing methods. We contribute open-source resources to support future research in formal theorem proving.
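To make the prove-and-train loop described above concrete, here is a minimal sketch of one iterative round under our own simplifying assumptions. It is illustrative only: the helper callables (`autoformalize`, `generate_proofs`, `lean_verify`, `finetune`) are hypothetical placeholders for the corresponding stages and are not part of the released code.

```python
from typing import Callable, Dict, Iterable, List, Tuple

def expert_iteration(
    informal_problems: Iterable[str],
    prover,
    autoformalize: Callable[[str], str],        # NL problem -> formal statement (placeholder)
    generate_proofs: Callable[..., List[str]],  # (prover, statement, n) -> candidate proofs (placeholder)
    lean_verify: Callable[[str, str], bool],    # (statement, proof) -> does it compile? (placeholder)
    finetune: Callable,                         # (prover, (statement, proof) pairs) -> new prover (placeholder)
    num_rounds: int = 4,
    samples_per_problem: int = 32,
) -> Tuple[object, List[Tuple[str, str]]]:
    """Sketch of an iterative prove-and-train loop; all helpers are placeholders."""
    # Step 1: autoformalize natural-language problems into formal statements.
    statements = [autoformalize(p) for p in informal_problems]
    solved: Dict[str, str] = {}  # formal statement -> first verified proof

    for _ in range(num_rounds):
        for stmt in statements:
            if stmt in solved:
                continue  # already have a verified proof for this statement
            # Step 2: sample several candidate whole proofs for the statement.
            candidates = generate_proofs(prover, stmt, n=samples_per_problem)
            # Step 3: keep only candidates that the proof checker verifies.
            verified = [proof for proof in candidates if lean_verify(stmt, proof)]
            if verified:
                solved[stmt] = verified[0]
        # Step 4: fine-tune the prover on all verified pairs found so far, then repeat.
        prover = finetune(prover, list(solved.items()))

    return prover, list(solved.items())
```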

<p align="center">
  <img width="100%" src="figures/performance.png">
</p>

**Caption:** (Left) Pass@32 performance for whole-proof generation on miniF2F. Due to limited compute, we compare with DeepSeek-Prover-V1.5 on the Pass@32 metric (Table 1 of Xin et al.), which differs from the Pass@32\*6400 metric in Fig. 1 of Xin et al. The Pass@N metric means that we generate N proofs for a single problem; if any one of these N proofs successfully solves the problem, the problem is considered solved. (Middle) Comparison of Goedel-Prover-SFT and DeepSeek-Prover-V1.5 on miniF2F across different inference budgets, ranging from Pass@32, 64, 128, ..., 4\*6400, to 16\*6400. The performance numbers for DeepSeek-Prover-V1.5 are taken from Table 1 of Xin et al. Due to computational resource constraints, we tested Goedel-Prover-SFT only up to Pass@4\*6400. (Right) The number of problems solved in Lean-workbook by Goedel-Prover-SFT compared to existing works. InternLM2.5-StepProver and InternLM-Math-Plus collectively solve and open-source 16K samples, while we solve and open-source 29.7K samples.
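As a concrete illustration of the Pass@N metric defined above, the sketch below counts a problem as solved if any one of its N sampled proofs verifies, and reports the fraction of solved problems. It is a minimal sketch; `verify_proof` is a hypothetical stand-in for the Lean compilation check used in the actual evaluation.

```python
from typing import Callable, Dict, List

def pass_at_n(
    candidate_proofs: Dict[str, List[str]],     # problem id -> its N sampled proofs
    verify_proof: Callable[[str, str], bool],   # (problem id, proof) -> verified? (placeholder)
) -> float:
    """Fraction of problems with at least one verified proof among their N samples."""
    solved = 0
    for problem_id, proofs in candidate_proofs.items():
        # Pass@N: the problem counts as solved if ANY of the N candidates verifies.
        if any(verify_proof(problem_id, proof) for proof in proofs):
            solved += 1
    return solved / max(len(candidate_proofs), 1)
```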

## 2. Evaluation Results

We introduce Goedel-Prover-SFT.

<div align="center">

| Model | Pass | miniF2F-test |
|------------------------|------------------|------------------|
| DeepSeek-Prover-V1 | 32 | 46.1% |
| DeepSeek-Prover-V1.5-SFT | 32 | 48.2% |
| DeepSeek-Prover-V1.5-RL | 32 | 50.0% |
| **Goedel-Prover-SFT** | **32** | **57.6%** |
|------------------------|------------------|------------------|
| DeepSeek-Prover-V1.5-SFT | 3200 | 53.3% |
| DeepSeek-Prover-V1.5-RL | 3200 | 54.9% |
| **Goedel-Prover-SFT** | **3200** | **62.7%** |
|------------------------|------------------|------------------|
| DeepSeek-Prover-V1.5-SFT | 25600 | 55.8% |
| DeepSeek-Prover-V1.5-RL | 25600 | 58.5% |
| **Goedel-Prover-SFT** | **25600** | **64.7%** |
</div>

**Caption:** Comparison of Goedel-Prover-SFT with existing methods for whole-proof generation on miniF2F, assessing performance across various inference-time compute budgets.

<div align="center">
MultiDataset

| Model | miniF2F | ProofNet | FormalNumina | Lean-workbook | **Average** |
|-----------------------|------------|------------|--------------|---------------|-----------|
| DeepSeek-Prover-V1.5-RL | 50.0% | **16.0%** | 54.0% | 14.7% | 33.7% |
| **Goedel-Prover-SFT** | **57.6%** | 15.2% | **61.2%** | **21.2%** | **38.8%** |
</div>

**Caption:** Comparison of Goedel-Prover-SFT with DeepSeek-Prover-V1.5-RL for whole-proof generation on miniF2F, ProofNet, FormalNumina, and Lean-workbook. We report Pass@32 performance for the miniF2F, ProofNet, and FormalNumina datasets. For Lean-workbook, we evaluate performance using Pass@16 due to the large number of problems (140K) it contains, which allows us to save on computational costs. FormalNumina is a private test set created by formalizing a randomly sampled collection of 250 problems from Numina.

<div align="center">
Putnam

| Ranking | Model | Type | Num-solved | Compute |
|---------|-------------------------------------------------------|-----------------------|------------|---------------------|
| 1 | **Goedel-Prover-SFT** 💚 | Whole Proof Generation | 7 | 512 |
| 1 | ABEL | Tree Search Method | 7 | 596 |
| 3 | **Goedel-Prover-SFT** 💚 | Whole Proof Generation | 6 | 32 |
| 3 | InternLM2.5-StepProver 💚 | Tree Search Method | 6 | 2×32×600 |
| 5 | InternLM 7B 💚 | Whole Proof Generation | 4 | 4096 |
| 6 | GPT-4o | Whole Proof Generation | 1 | 10 |
| 7 | COPRA (GPT-4o) | Whole Proof Generation | 1 | 1 |
| 8 | ReProver w/ retrieval 💚 | Whole Proof Generation | 0 | 1 |
| 9 | ReProver w/o retrieval 💚 | Whole Proof Generation | 0 | 1 |
</div>

**Caption:** Our model ranks 1st on the [Putnam Leaderboard](https://trishullab.github.io/PutnamBench/leaderboard.html). The performance numbers for existing works are taken from the leaderboard. 💚 indicates open-sourced models.