PuSH - Publication Server of Helmholtz Zentrum München

Ben-Zion, Z.* ; Witte, K. ; Jagadish, A.K. ; Duek, O.* ; Harpaz-Rotem, I.* ; Khorsandian, M.C.* ; Burrer, A.* ; Seifritz, E.* ; Homan, P.* ; Schulz, E. ; Spiller, T.R.*

Assessing and alleviating state anxiety in large language models.

NPJ Digit. Med. 8:132 (2025)
Open Access Gold
Creative Commons License
The use of Large Language Models (LLMs) in mental health highlights the need to understand their responses to emotional content. Previous research shows that emotion-inducing prompts can elevate "anxiety" in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. These findings suggest that managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.
Impact Factor 15.100
Scopus SNIP 0.000
Publication type Article: Journal article
Document type Scientific Article
Language English
Publication Year 2025
HGF-reported in Year 2025
ISSN (print) / ISBN 2398-6352
e-ISSN 2398-6352
Citation details Volume: 8, Issue: 1, Article Number: 132
Publisher Nature Publishing Group
Reviewing status Peer reviewed
POF-Topic(s) 30205 - Bioengineering and Digital Health
Research field(s) Enabling and Novel Technologies
PSP Element(s) G-540011-001
PubMed ID 40033130
Date recorded 2025-05-11