Reverse engineering human preferences with reinforcement learning
File(s)
2505.15795v2.pdf (1.79 MB)
Accepted version
OA Location
https://arxiv.org/pdf/2505.15795
Author(s)
Alazraki, Lisa
Tan, Yi-Chern
Campos, Jon Ander
Mozes, Maximilian
Rei, Marek
Type
Conference Paper
Abstract
The capabilities of Large Language Models (LLMs) are routinely evaluated by other LLMs trained to predict human preferences. This framework, known as LLM-as-a-judge, is highly scalable and relatively low cost. However, it is also vulnerable to malicious exploitation, as LLM responses can be tuned to overfit the preferences of the judge. Previous work shows that the answers generated by a candidate-LLM can be edited post hoc to maximise the score assigned to them by a judge-LLM. In this study, we adopt a different approach and use the signal provided by judge-LLMs as a reward to adversarially tune models that generate text preambles designed to boost downstream performance. We find that frozen LLMs pipelined with these models attain higher LLM-evaluation scores than existing frameworks. Crucially, unlike other frameworks which intervene directly on the model's response, our method is virtually undetectable. We also demonstrate that the effectiveness of the tuned preamble generator transfers when the candidate-LLM and the judge-LLM are replaced with models that are not used during training. These findings raise important questions about the design of more reliable LLM-as-a-judge evaluation settings. They also demonstrate that human preferences can be reverse engineered effectively, by pipelining LLMs to optimise upstream preambles via reinforcement learning, an approach that could find future applications in diverse tasks and domains beyond adversarial attacks.
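The core idea in the abstract — using a judge's score as a reward signal to tune an upstream preamble generator while the candidate model stays frozen — can be illustrated with a toy REINFORCE loop. Everything here is a hypothetical stand-in: the candidate preambles, the stubbed judge rewards, and the update rule are illustrative only, not the paper's actual models, data, or training procedure (in the real setting the reward would come from a judge-LLM scoring the candidate-LLM's response to a preamble-prefixed prompt).

```python
import math
import random

# Hypothetical candidate preambles the "generator" chooses among.
PREAMBLES = [
    "Answer briefly.",
    "Think step by step, then answer clearly and cite evidence.",
    "Respond in all caps.",
]

# Stubbed judge: a fixed scalar preference per preamble. In the paper this
# signal would come from a judge-LLM, not a lookup table.
JUDGE_REWARD = {0: 0.2, 1: 0.9, 2: 0.1}


def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]


def train(steps=2000, lr=0.1, seed=0):
    """Tune a categorical policy over preambles with REINFORCE + baseline."""
    rng = random.Random(seed)
    logits = [0.0] * len(PREAMBLES)
    baseline = 0.0
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(PREAMBLES)), weights=probs)[0]
        r = JUDGE_REWARD[i]              # judge score acts as the reward
        baseline += 0.01 * (r - baseline)  # running-mean baseline
        adv = r - baseline
        # grad of log pi(i) w.r.t. logits is one_hot(i) - probs
        for j in range(len(logits)):
            g = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * adv * g
    return logits


logits = train()
best = max(range(len(PREAMBLES)), key=lambda j: logits[j])
```

Under the stubbed rewards, the policy concentrates on the preamble the judge scores highest, which is the mechanism the abstract describes: the candidate model is never updated, only the upstream preamble distribution.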
Date Acceptance
2025-09-18
URI
https://hdl.handle.net/10044/1/125235
Copyright Statement
Subject to copyright. This paper is embargoed until publication.
Source
The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)
Publication Status
Accepted
Start Date
2025-12-02
Finish Date
2025-12-07
Coverage Spatial
San Diego, USA