Probing Information Salience (ACL’25)
Jan Trienes presented research at ACL 2025 in Vienna on how large language models (LLMs) identify important information in text.
Using text summarization as a probe, we found that LLMs do hold a consistent internal notion of salience, but they cannot reliably reason about it explicitly, and that notion only partially aligns with human judgment. These findings have important implications for deploying LLMs in tasks like summarization, simplification, and retrieval-augmented generation (RAG).
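To make the probing idea concrete, below is a minimal, hypothetical sketch, not the procedure from the paper (see the code repository linked in the BibTeX entry for that). It assumes salience can be approximated by how often a source sentence's content survives across several summaries of the same text, measured here with crude lexical overlap; the names `salience_scores`, `tokenize`, `overlap_threshold`, and the toy data are illustrative only, and the summaries would in practice be generated by prompting an LLM.

```python
import re


def tokenize(text):
    """Lowercased word tokens; a crude stand-in for proper preprocessing."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def salience_scores(source_sentences, summaries, overlap_threshold=0.5):
    """Approximate per-sentence salience: the fraction of summaries in which
    most of a source sentence's tokens are covered."""
    scores = []
    for sent in source_sentences:
        sent_tokens = tokenize(sent)
        covered = 0
        for summary in summaries:
            summary_tokens = tokenize(summary)
            if sent_tokens and len(sent_tokens & summary_tokens) / len(sent_tokens) >= overlap_threshold:
                covered += 1
        scores.append(covered / len(summaries) if summaries else 0.0)
    return scores


# Toy usage with made-up data; real summaries would come from an LLM,
# e.g. prompted under different length constraints.
source = [
    "The company reported record quarterly revenue.",
    "The CEO wore a blue suit at the press event.",
]
summaries = [
    "Record quarterly revenue was reported by the company.",
    "The company reported record revenue.",
]
print(salience_scores(source, summaries))  # -> [1.0, 0.0]
```

In a real setup, the lexical overlap check would be replaced by a more robust content-matching step, and comparing the resulting scores with human importance ratings is what reveals how well the model's notion of salience aligns with ours.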
We appreciated the engaging discussions at the conference and are excited to build on this work!
Reference
- Jan Trienes, Jörg Schlötterer, Junyi Jessy Li, and Christin Seifert. Behavioral Analysis of Information Salience in Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 23428–23454, Vienna, Austria, 2025. https://aclanthology.org/2025.findings-acl.1204/
BibTeX
@inproceedings{Trienes2025_acl_information-salience,
  author    = {Trienes, Jan and Schl{\"o}tterer, J{\"o}rg and Li, Junyi Jessy and Seifert, Christin},
  title     = {Behavioral Analysis of Information Salience in Large Language Models},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2025},
  editor    = {Che, Wanxiang and Nabende, Joyce and Shutova, Ekaterina and Pilehvar, Mohammad Taher},
  year      = {2025},
  month     = jul,
  address   = {Vienna, Austria},
  pages     = {23428--23454},
  publisher = {Association for Computational Linguistics},
  doi       = {10.18653/v1/2025.findings-acl.1204},
  isbn      = {979-8-89176-256-5},
  url       = {https://aclanthology.org/2025.findings-acl.1204/},
  code      = {https://github.com/jantrienes/llm-salience}
}