Contrastive Preference Learning: Learning from Human Feedback without RL Paper • 2310.13639 • Published Oct 20, 2023 • 24
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback Paper • 2309.00267 • Published Sep 1, 2023 • 47
A General Theoretical Paradigm to Understand Learning from Human Preferences Paper • 2310.12036 • Published Oct 18, 2023 • 14
Deep Reinforcement Learning from Hierarchical Weak Preference Feedback Paper • 2309.02632 • Published Sep 6, 2023 • 1
Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment Paper • 2310.00212 • Published Sep 30, 2023 • 2
Learning Optimal Advantage from Preferences and Mistaking it for Reward Paper • 2310.02456 • Published Oct 3, 2023 • 1