PuSH - Publikationsserver des Helmholtz Zentrums München

Pacchiardi, L.* ; Voudouris, K. ; Slater, B.* ; Martínez-Plumed, F.* ; Hernández-Orallo, J.* ; Zhou, L.* ; Schellaert, W.*

PredictaBoard: Benchmarking LLM Score Predictability.

In: (63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025, 27 July - 1 August 2025, Vienna). 2025. 15245-15266 (Proceedings of the Annual Meeting of the Association for Computational Linguistics)
Despite possessing impressive skills, Large Language Models (LLMs) often fail unpredictably, demonstrating inconsistent success even on basic common sense reasoning tasks. This unpredictability poses a significant challenge to ensuring their safe deployment, as identifying and operating within a reliable "safe zone" is essential for mitigating risks. To address this, we present PredictaBoard, a novel collaborative benchmarking framework designed to evaluate the ability of score predictors (referred to as assessors) to anticipate LLM errors on specific task instances (i.e., prompts) from existing datasets. PredictaBoard evaluates pairs of LLMs and assessors by considering the rejection rate at different error tolerances. As such, PredictaBoard stimulates research into developing better assessors and into making LLMs more predictable, not only into achieving higher average performance. We conduct illustrative experiments using baseline assessors and state-of-the-art LLMs. PredictaBoard highlights the critical need to evaluate predictability alongside performance, paving the way for safer AI systems where errors are not only minimised but also anticipated and effectively mitigated. Code for our benchmark can be found at https://github.com/Kinds-of-Intelligence-CFI/PredictaBoard.
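The rejection-rate metric mentioned in the abstract can be illustrated with a minimal sketch: an assessor assigns each prompt a predicted probability that the LLM answers it correctly, prompts are accepted in order of assessor confidence, and the rejection rate is the smallest fraction of prompts that must be turned away so that the error rate among accepted prompts stays within a given tolerance. All names and the exact computation here are illustrative assumptions, not PredictaBoard's actual API.

```python
def rejection_rate(pred_success, actual_correct, tolerance):
    """Smallest rejection rate such that the error rate on the accepted
    (highest-confidence) prompts is <= tolerance.

    pred_success   -- assessor's predicted probability of success per prompt
    actual_correct -- whether the LLM actually answered each prompt correctly
    tolerance      -- maximum acceptable error rate on accepted prompts
    """
    n = len(pred_success)
    # Consider prompts in order of assessor confidence, most confident first.
    order = sorted(range(n), key=lambda i: pred_success[i], reverse=True)
    best = 1.0  # rejecting everything trivially satisfies any tolerance
    errors = 0
    for k, i in enumerate(order, start=1):
        errors += 0 if actual_correct[i] else 1
        # Accepting the top-k prompts is valid if their error rate is tolerable.
        if errors / k <= tolerance:
            best = min(best, 1 - k / n)
    return best
```

For example, with a perfectly ranked set of four prompts where the LLM fails the two the assessor is least confident about, a zero error tolerance forces a rejection rate of 0.5, while a 50% tolerance allows accepting everything.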
Publication type: Article: Conference paper
ISSN (print) / ISBN: 0736-587X
Conference title: 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
Conference date: 27 July - 1 August 2025
Conference venue: Vienna
Source: Pages: 15245-15266