Commit 70cf721 (verified) · linyongver committed · Parent(s): caf8fe1

Update README.md

Files changed (1): README.md (+31 −15)

README.md CHANGED
@@ -7,9 +7,12 @@ license: mit
   <a href="https://goedel-lm.github.io/" target="_blank" style="margin: 2px;">
     <img alt="Homepage" src="https://img.shields.io/badge/%F0%9F%A4%96%20Homepage-Goedel-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
   </a>
-  <a href="https://huggingface.co/Goedel-LM" target="_blank" style="margin: 2px;">
-    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20face-Goedel-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+  <a href="https://github.com/Goedel-LM/Goedel-Prover" target="_blank" style="margin: 2px;">
+    <img alt="Github" src="https://img.shields.io/badge/GitHub-Goedel-blue?style=flat-square&logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
   </a>
+  <!-- <a href="https://huggingface.co/Goedel-LM" target="_blank" style="margin: 2px;">
+    <img alt="HuggingFace" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20face-Goedel-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+  </a> -->
 </div>
 
 <div align="center" style="line-height: 1;">
@@ -41,7 +44,9 @@ license: mit
 
 ## 1. Introduction
 
-Large language models (LLMs) have shown impressive capabilities in reasoning, particularly in solving mathematical problems. However, relying solely on natural language for reasoning poses challenges in verification, undermining trust in practical applications. Formal theorem proving offers a promising solution: training LLMs to generate proofs in formal languages that can be mechanically verified. A key obstacle in this domain is the scarcity of formal data. To address this challenge, we use language models to autoformalize extensive math problems from natural language to formal statements. Next, we train the prover through a standard iterative process that alternates between attempting to prove these statements and training the prover using newly found proofs. Our model outperforms the previous state-of-the-art (SOTA) whole-proof generation model, DeepSeek-Prover-v1.5-RL, by 7.6% on the miniF2F benchmark, achieving a Pass@32 score of 57.6%. Our model successfully solves 7 problems on the challenging PutnamBench by Pass@512, securing the 1st position on the leaderboard. Furthermore, we have cumulatively solved 29.7K problems in Lean Workbook, significantly increasing the 15.7K proofs found by prior methods. We contribute open-source resources to support future research in formal theorem proving.
+Large language models (LLMs) have demonstrated remarkable reasoning capabilities, particularly in solving mathematical problems. There are two main approaches to tackling math problems: the informal approach, which involves reasoning in natural language, and the formal approach, which relies on proof assistants such as Lean and Isabelle that use formal, machine-checkable mathematical languages. State-of-the-art reasoning LLMs such as OpenAI o1 and DeepSeek-R1 excel at informal math but not at formal math. While the informal approach is more intuitive to humans, it also poses significant challenges for proof verification, undermining its reliability in practical applications.
+
+We introduce Goedel-Prover, a state-of-the-art (SOTA) open-source model for formal mathematics that generates machine-verifiable proofs. On the miniF2F benchmark at Pass@32, our model achieves a 57.6% success rate, surpassing the previous SOTA open-source model for whole-proof generation by a significant 7.6% margin. On the challenging PutnamBench, our model solves 7 problems at Pass@512, securing 1st place on the leaderboard. Additionally, we have cumulatively generated 29.7K formal proofs for problems in the Lean Workbook, a substantial increase over the 15.7K proofs produced by prior methods. A key challenge in formal mathematics is data scarcity. To address this, we train LLMs to autoformalize a large corpus of mathematical problems, converting natural-language descriptions into formal statements. We then train the prover through an iterative process, alternating between generating proofs for these formalized statements and refining the model on the newly discovered proofs. We contribute these resources to the open-source community to advance research in formal theorem proving and to support the development of more capable mathematical reasoning systems.
 
 <p align="center">
   <img width="100%" src="performance.png">
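
To make the pipeline in the new introduction concrete: autoformalization turns an informal problem into a Lean statement that the prover is then asked to close. A minimal, hypothetical example of what such a formalized statement looks like (illustrative only, not taken from the released data):

```lean
import Mathlib

-- Informal problem (hypothetical example): "Show that for every natural
-- number n, twice the sum 0 + 1 + ⋯ + n equals n * (n + 1)."
-- The autoformalizer produces the statement; the prover's job is to
-- replace `sorry` with a proof the Lean checker accepts.
theorem twice_sum_range (n : ℕ) :
    2 * ∑ i in Finset.range (n + 1), i = n * (n + 1) := by
  sorry
```

The iterative prove-and-train loop can likewise be sketched in a few lines. This is illustrative pseudocode under our own naming; `prover.sample`, `lean_verifies`, and `finetune` are hypothetical stand-ins, not the project's actual training code:

```python
def expert_iteration(prover, statements, rounds: int, k: int):
    """Sketch of the iterative process: attempt proofs, keep those the
    Lean checker verifies, and retrain the prover on them."""
    solved = {}  # formal statement -> first verified proof found
    for _ in range(rounds):
        for stmt in statements:
            if stmt in solved:
                continue
            for proof in prover.sample(stmt, n=k):  # whole-proof generation
                if lean_verifies(stmt, proof):      # machine verification
                    solved[stmt] = proof
                    break
        # refine the model on all verified (statement, proof) pairs so far
        prover = finetune(prover, list(solved.items()))
    return prover, solved
```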
@@ -77,10 +82,10 @@ Large language models (LLMs) have shown impressive capabilities in reasoning, pa
 <div align="center">
 MultiDataset
 
-| | miniF2F | ProofNet | FormalNumina | Lean-workbook | **Average** |
-|-----------------------|------------|------------|--------------|---------------|-----------|
-| Deepseek-Prover-v1.5-RL | 50.0% | **16.0%** | 54.0% | 14.7% | 33.7% |
-| **Goedel-Prover-SFT** | **57.6%** | 15.2% | **61.2%** | **21.2%** | **38.8%** |
+| | miniF2F | ProofNet | FormalNumina | Lean-workbook |
+|-----------------------|------------|------------|--------------|---------------|
+| Deepseek-Prover-v1.5-RL | 50.0% | **16.0%** | 54.0% | 14.7% |
+| **Goedel-Prover-SFT** | **57.6%** | 15.2% | **61.2%** | **21.2%** |
 </div>
 
 **Caption:** Comparison of Goedel-Prover-SFT with Deepseek-Prover-v1.5-RL for whole-proof generation on miniF2F, ProofNet, FormalNumina, and Lean-workbook. We report Pass@32 performance for the miniF2F, ProofNet, and FormalNumina datasets. For Lean-workbook we report Pass@16, since its large size (140K problems) makes higher sampling budgets computationally expensive. FormalNumina is a private test set created by formalizing a randomly sampled collection of 250 problems from Numina.
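
For readers unfamiliar with the metric: Pass@K counts a problem as solved if at least one of K sampled proofs passes the Lean checker. A minimal sketch of the computation (our own illustrative helper, not part of the evaluation code):

```python
def pass_at_k(results: list[list[bool]]) -> float:
    """results[i][j] is True iff sampled proof j for problem i verifies.
    Returns the fraction of problems with at least one verified proof."""
    return sum(any(attempts) for attempts in results) / len(results)
```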
@@ -90,18 +95,29 @@ Putnam
 
 | Ranking | Model | Type | Num-solved | Compute |
 |---------|-------------------------------------------------------|-----------------------|------------|---------------------|
-| 1 | **Goedel-Prover-SFT** 💚 | Whole Proof Generation | 7 | 512 |
+| 1 | **Goedel-Prover-SFT** 🟩 | Whole Proof Generation | 7 | 512 |
 | 1 | ABEL | Tree Search Method | 7 | 596 |
-| 3 | **Goedel-Prover-SFT** 💚 | Whole Proof Generation | 6 | 32 |
-| 3 | InternLM2.5-StepProver 💚 | Tree Search Method | 6 | 2×32×600 |
-| 5 | InternLM 7B 💚 | Whole Proof Generation | 4 | 4096 |
+| 3 | **Goedel-Prover-SFT** 🟩 | Whole Proof Generation | 6 | 32 |
+| 3 | InternLM2.5-StepProver 🟩 | Tree Search Method | 6 | 2×32×600 |
+| 5 | InternLM 7B | Whole Proof Generation | 4 | 4096 |
 | 6 | GPT-4o | Whole Proof Generation | 1 | 10 |
-| 7 | COPRA (GPT-4o) 💚 | Whole Proof Generation | 1 | 1 |
-| 8 | ReProver w/ retrieval 💚 | Whole Proof Generation | 0 | 1 |
-| 9 | ReProver w/o retrieval 💚 | Whole Proof Generation | 0 | 1 |
+| 7 | COPRA (GPT-4o) 🟩 | Whole Proof Generation | 1 | 1 |
+| 8 | ReProver w/ retrieval 🟩 | Whole Proof Generation | 0 | 1 |
+| 9 | ReProver w/o retrieval 🟩 | Whole Proof Generation | 0 | 1 |
 </div>
 
-**Caption:** Our model rank the 1st on [Putnam Leaderboard](https://trishullab.github.io/PutnamBench/leaderboard.html). The performance numbers for existing works are taken from the leaderboard. 💚 indicates open sourced models.
+**Caption:** Our model ranks 1st on the [Putnam Leaderboard](https://trishullab.github.io/PutnamBench/leaderboard.html). The performance numbers for existing works are taken from the leaderboard. 🟩 indicates open-source models.
+
+## 3. Dataset Downloads
+
+We are also releasing the 29.7K proofs of Lean-workbook problems found by Goedel-Prover-SFT.
+
+<div align="center">
+
+| **Datasets** | **Download** |
+| :-----------------------------: | :----------------------------------------------------------: |
+| Lean-workbook-proofs | [🤗 HuggingFace](https://huggingface.co/datasets/Goedel-LM/Lean-workbook-proofs) |
+</div>
 
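The released proofs can be pulled with the Hugging Face `datasets` library. A minimal sketch (the record field names and split name are assumptions; consult the dataset card for the actual schema):

```python
from datasets import load_dataset

# Load the released Lean-workbook proofs (split name assumed to be "train").
proofs = load_dataset("Goedel-LM/Lean-workbook-proofs", split="train")

print(len(proofs))  # number of statements with a verified proof
print(proofs[0])    # one record: a statement together with its found proof
```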
 ## 4. Citation
 ```latex
 