Sparse autoencoders reveal temporal difference learning in large language models.
In: 13th International Conference on Learning Representations (ICLR 2025), 24–28 April 2025, Singapore. 2025, pp. 4972-4997.
In-context learning, the ability to adapt based on a few examples in the input prompt, is a ubiquitous feature of large language models (LLMs). However, as LLMs' in-context learning abilities continue to improve, understanding this phenomenon mechanistically becomes increasingly important. In particular, it is not well-understood how LLMs learn to solve specific classes of problems, such as reinforcement learning (RL) problems, in-context. Through three different tasks, we first show that Llama 3 70B can solve simple RL problems in-context. We then analyze the residual stream of Llama using Sparse Autoencoders (SAEs) and find representations that closely match temporal difference (TD) errors. Notably, these representations emerge despite the model only being trained to predict the next token. We verify that these representations are indeed causally involved in the computation of TD errors and Q-values by performing carefully designed interventions on them. Taken together, our work establishes a methodology for studying and manipulating in-context learning with SAEs, paving the way for a more mechanistic understanding.
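For context on the central quantity, a minimal sketch follows: tabular Q-learning on a single-state bandit, written in Python for illustration only (it is not the paper's code, and the environment and all constants are assumptions). It shows the temporal difference (TD) error delta = r + gamma * max_a Q(s', a) - Q(s, a), the signal the abstract reports finding represented in Llama's residual stream.

import random

GAMMA = 0.9    # discount factor (illustrative assumption)
ALPHA = 0.1    # learning rate (illustrative assumption)
EPSILON = 0.1  # exploration rate (illustrative assumption)
N_ARMS = 3

true_means = [0.2, 0.5, 0.8]  # hypothetical Bernoulli reward probabilities
Q = [0.0] * N_ARMS            # Q-table for the single recurring state

for t in range(1000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        a = random.randrange(N_ARMS)
    else:
        a = max(range(N_ARMS), key=Q.__getitem__)
    r = 1.0 if random.random() < true_means[a] else 0.0
    # TD error: the quantity the paper's SAE features are reported to track
    td_error = r + GAMMA * max(Q) - Q[a]
    Q[a] += ALPHA * td_error

print([round(q, 2) for q in Q])  # the best arm's value converges highest

In this continuing single-state setting, the TD target r + gamma * max(Q) plays the role of r + gamma * max_a Q(s', a); the paper's tasks are richer, but the error term has the same form.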
Annotations
Special publication
Hide on homepage
Publication type
Article: Conference contribution
Language
English
Publication Year
2025
HGF-reported in Year
2025
ISSN (print) / ISBN
[9798331320850]
Conference Title
13th International Conference on Learning Representations (ICLR 2025)
Conference Date
24–28 April 2025
Conference Location
Singapore
Source details
Pages: 4972-4997
Institute(s)
Human-Centered AI (HCA)
POF-Topic(s)
30205 - Bioengineering and Digital Health
Research field(s)
Enabling and Novel Technologies
PSP Element(s)
G-540011-001
Scopus ID
105010206887
Date recorded
2025-07-18