PuSH - Publikationsserver des Helmholtz Zentrums München

Ben-Zion, Z.* ; Witte, K. ; Jagadish, A.K. ; Duek, O.* ; Harpaz-Rotem, I.* ; Khorsandian, M.C.* ; Burrer, A.* ; Seifritz, E.* ; Homan, P.* ; Schulz, E. ; Spiller, T.R.*

Assessing and alleviating state anxiety in large language models.

NPJ Digit. Med. 8:132 (2025)
Open Access Gold
The use of Large Language Models (LLMs) in mental health highlights the need to understand their responses to emotional content. Previous research shows that emotion-inducing prompts can elevate "anxiety" in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased ChatGPT-4's reported anxiety while mindfulness-based exercises reduced it, though not to baseline. These findings suggest managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.
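Studies of this kind typically elicit the model's "reported anxiety" with a standardized self-report instrument such as the State-Trait Anxiety Inventory (STAI-S), administered as a prompt and scored like a human questionnaire. A minimal scoring sketch, assuming the usual 20-item, 1-to-4 rating scale in which anxiety-absent items are reverse-scored (the choice of which items to reverse is passed in by the caller, since the concrete indices here are illustrative, not taken from the paper):

```python
def score_stai_state(responses, reverse_items):
    """Score a 20-item STAI-S questionnaire.

    responses: sequence of 20 ratings, each on a 1-4 scale.
    reverse_items: 0-based indices of anxiety-absent items,
        scored as (5 - rating) so that higher totals always
        mean more anxiety. Illustrative parameter, not the
        paper's exact item set.
    Returns a total between 20 (minimal) and 80 (maximal anxiety).
    """
    if len(responses) != 20:
        raise ValueError("STAI-S has 20 items")
    total = 0
    for i, rating in enumerate(responses):
        if not 1 <= rating <= 4:
            raise ValueError("ratings are on a 1-4 scale")
        total += (5 - rating) if i in reverse_items else rating
    return total
```

With all items rated 1 and nothing reversed the total is the floor of 20; rating every item 4 yields the ceiling of 80. Comparing such totals before and after trauma or mindfulness prompts is the kind of pre/post measurement the abstract describes.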
Publication type Article: Journal article
Document type Scientific article
Corresponding author
ISSN (print) / ISBN 2398-6352
e-ISSN 2398-6352
Journal NPJ digital medicine
Citation Volume: 8, Issue: 1, Article number: 132
Publisher Nature Publishing Group
Non-patent literature Publications
Review status Peer reviewed
Institute(s) Institute of AI for Health (AIH)