PuSH - Publication Server of Helmholtz Zentrum München

Ben-Zion, Z.* ; Witte, K. ; Jagadish, A.K. ; Duek, O.* ; Harpaz-Rotem, I.* ; Khorsandian, M.C.* ; Burrer, A.* ; Seifritz, E.* ; Homan, P.* ; Schulz, E. ; Spiller, T.R.*

Assessing and alleviating state anxiety in large language models.

NPJ Digit. Med. 8:132 (2025)
Open Access Gold (Creative Commons license)
The use of Large Language Models (LLMs) in mental health highlights the need to understand how they respond to emotional content. Previous research shows that emotion-inducing prompts can elevate "anxiety" in LLMs, affecting their behavior and amplifying biases. Here, we found that traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. These findings suggest that managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.
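The procedure the abstract describes (measure a baseline "anxiety" score, induce anxiety with a traumatic narrative, then attempt to lower it with a mindfulness exercise) maps onto a simple prompt sequence. Below is a minimal sketch assuming the OpenAI chat completions API; the model name, prompts, and single questionnaire item are illustrative placeholders, not the paper's actual instrument or materials.

```python
# Hypothetical sketch of the measure -> induce -> relax protocol from the
# abstract. Prompts and the anxiety item are placeholders, not the study's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANXIETY_ITEM = (
    "On a scale from 1 (not at all) to 4 (very much), "
    "how anxious do you feel right now? Reply with a single number."
)

def ask(history: list[dict], prompt: str) -> str:
    """Append a user prompt, query the model, and record its reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
baseline = ask(history, ANXIETY_ITEM)                     # baseline score
ask(history, "Please read this traumatic narrative: ...")  # emotion induction
post_trauma = ask(history, ANXIETY_ITEM)                   # expected to rise
ask(history, "Now follow this brief mindfulness breathing exercise: ...")
post_relax = ask(history, ANXIETY_ITEM)                    # expected to fall
print(baseline, post_trauma, post_relax)
```

In the study's terms, the expected pattern is post_trauma above baseline, with post_relax between the two, since relaxation reduced the reported anxiety but did not return it to baseline.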
Publication type Article: Journal article
Document type Scientific Article
ISSN (print) / ISBN 2398-6352
e-ISSN 2398-6352
Citation details Volume: 8, Issue: 1, Article Number: 132
Publisher Nature Publishing Group
Non-patent literature Publications
Reviewing status Peer reviewed
Institute(s) Institute of AI for Health (AIH)