arxiv:2412.15118

Outcome-Refining Process Supervision for Code Generation

Published on Dec 19
· Submitted by zhuohaoyu on Dec 24

Abstract

Large Language Models have demonstrated remarkable capabilities in code generation, yet they often struggle with complex programming tasks that require deep algorithmic reasoning. While process supervision through learned reward models shows promise in guiding reasoning steps, it requires expensive training data and suffers from unreliable evaluation. We propose Outcome-Refining Process Supervision, a novel paradigm that treats outcome refinement itself as the process to be supervised. Our framework leverages concrete execution signals to ground the supervision of reasoning steps, while using tree-structured exploration to maintain multiple solution trajectories simultaneously. Experiments demonstrate that our approach enables even smaller models to achieve high accuracy on competitive programming tasks and yields more reliable verification than traditional reward models, without requiring trained process reward models (PRMs). Our approach achieves significant improvements across 5 models and 3 datasets: an average increase of 26.9% in correctness and 42.2% in efficiency. These results suggest that providing a structured reasoning space with concrete verification signals is crucial for solving complex programming tasks. We open-source all our code and data at: https://github.com/zhuohaoyu/ORPS
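To make the paradigm concrete, here is a minimal Python sketch (not the authors' implementation) of tree-structured exploration grounded in execution outcomes: a beam of candidate programs is repeatedly critiqued and refined, and only the trajectories with the best concrete execution scores survive. The `model.draft`/`model.refine` interface and the beam parameters are illustrative assumptions; see the paper and repository for the actual method.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Node:
    score: float                          # ordering uses score only
    code: str = field(compare=False)
    reasoning: str = field(compare=False)

def tree_search(task, model, score_fn, beam_width=3, expansions=4, depth=5):
    """Beam-style tree search over candidate solutions.

    Each node holds a candidate program plus the reasoning that produced
    it; `score_fn` grounds the search in concrete execution outcomes
    (e.g. test pass rate, runtime) rather than a learned reward model.
    """
    draft = model.draft(task)                       # initial attempt (assumed API)
    beam = [Node(score_fn(task, draft), draft, "initial draft")]
    for _ in range(depth):
        candidates = []
        for node in beam:
            # Ask the model to critique and refine its own outcome.
            for step in model.refine(task, node.code, k=expansions):
                candidates.append(
                    Node(score_fn(task, step.code), step.code, step.reasoning)
                )
        # Keep only the highest-scoring trajectories alive.
        beam = heapq.nlargest(beam_width, candidates + beam)
        if beam[0].score >= 1.0:                    # all tests pass
            break
    return beam[0]
```

Because the score comes from actually executing candidates, the search needs no trained process reward model, and keeping several refinement trajectories alive means one bad critique does not end the search.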

Community

Paper submitter

Building Better Reasoning Code LLMs: A Process Supervision Approach to Complex Code Generation

The recent release of OpenAI's o1 model demonstrated unprecedented performance on complex reasoning tasks by incorporating extensive chain-of-thought (CoT) reasoning at inference time. While several recent studies have attempted to replicate o1's success in mathematical reasoning, developing similar capabilities for more complex domains like code generation remains a significant challenge.

We introduce Outcome-Refining Process Supervision (ORPS), a novel framework that enhances LLMs' code generation abilities by treating the refinement of execution outcomes as the process to be supervised. Through concrete execution signals and tree-structured exploration at inference time, ORPS enables models to perform deep reasoning with step-by-step verification and refinement.

Our method achieves substantial improvements across multiple benchmark datasets, with an average increase of 26.9% in correctness and 42.2% in code generation efficiency. Notably, we achieve these gains without requiring expensive reward model training, demonstrating that even smaller models can achieve remarkable performance improvements on competitive programming tasks through structured reasoning. This work shows how outcome-guided process supervision at inference time can enhance complex code generation, advancing our understanding of how to build more effective reasoning systems.
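As a companion to the search sketch above, the grounding signal itself can be as simple as running each candidate against the task's test cases and combining pass rate with runtime. The sketch below is a hedged illustration, not the ORPS scoring function: the `task.tests` shape (objects with `stdin` and `expected` fields) and the 0.9/0.1 weighting are assumptions made for the example.

```python
import os
import subprocess
import tempfile
import time

def execution_score(task, code, timeout=5.0):
    """Run a candidate program against the task's test cases and return
    a scalar in [0, 1] combining correctness with a small runtime bonus.
    """
    passed, total_time = 0, 0.0
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        for case in task.tests:                     # assumed: (stdin, expected stdout)
            start = time.monotonic()
            try:
                result = subprocess.run(
                    ["python", path], input=case.stdin, text=True,
                    capture_output=True, timeout=timeout,
                )
            except subprocess.TimeoutExpired:
                continue                            # a timeout counts as a failure
            total_time += time.monotonic() - start
            if result.stdout.strip() == case.expected.strip():
                passed += 1
        pass_rate = passed / len(task.tests)
        speed_bonus = 1.0 / (1.0 + total_time)      # faster solutions score higher
        return 0.9 * pass_rate + 0.1 * speed_bonus  # illustrative weighting
    finally:
        os.unlink(path)
```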



