PuSH - Publication Server of Helmholtz Zentrum München

Schubert, J.A.*; Jagadish, A.K.; Binz, M.; Schulz, E.

In-context learning agents are asymmetric belief updaters.

In: Proceedings of Machine Learning Research. 2024. 43928-43946 (Proceedings of Machine Learning Research; 235)
We study the in-context learning dynamics of large language models (LLMs) using three instrumental learning tasks adapted from cognitive psychology. We find that LLMs update their beliefs in an asymmetric manner and learn more from better-than-expected outcomes than from worse-than-expected ones. Furthermore, we show that this effect reverses when learning about counterfactual feedback and disappears when no agency is implied. We corroborate these findings by investigating idealized in-context learning agents derived through meta-reinforcement learning, where we observe similar patterns. Taken together, our results contribute to our understanding of how in-context learning works by highlighting that the framing of a problem significantly influences how learning occurs, a phenomenon also observed in human cognition.
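
As a concrete illustration of the asymmetry described in the abstract, the sketch below implements a standard asymmetric learning-rate update (a Rescorla-Wagner-style rule with separate learning rates for positive and negative prediction errors), as commonly used in the cognitive-psychology literature the paper builds on. The function name, learning-rate values, and the single-option bandit setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def asymmetric_update(value, reward, alpha_pos=0.3, alpha_neg=0.1):
    """Update a value estimate with separate learning rates for positive
    and negative prediction errors (asymmetric belief updating).
    Learning-rate values are illustrative, not taken from the paper."""
    delta = reward - value                     # prediction error
    alpha = alpha_pos if delta > 0 else alpha_neg
    return value + alpha * delta

# Toy example: learn the value of one option that pays off 70% of the time.
rng = np.random.default_rng(0)
value = 0.5
for _ in range(200):
    reward = rng.binomial(1, 0.7)              # 1 with probability 0.7, else 0
    value = asymmetric_update(value, reward)

# With alpha_pos > alpha_neg the estimate tends to settle above the true mean
# of 0.7, mirroring "learning more from better-than-expected outcomes".
print(f"learned value: {value:.2f}")
```
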
Publication type: Article: Conference paper
Language: English
Publication year: 2024
HGF reporting year: 2024
Conference title: Proceedings of Machine Learning Research
Source details: Volume: 235, Issue: -, Pages: 43928-43946, Article number: -, Supplement: -
POF topic(s): 30205 - Bioengineering and Digital Health
Research field(s): Enabling and Novel Technologies
PSP element(s): G-540011-001
Scopus ID: 85203814562
Date recorded: 2024-09-20