‘The Queen of England is not England’s Queen.’ Factual coherency of PLMs at EACL’24


When humans know that “Rome is the capital of Italy”, they also know that “The capital of Italy is Rome”. That is, if humans know a fact, they can be queried for either the subject or the object of the relation and retrieve the knowledge either way. We would expect pre-trained language models (PLMs) to be able to do the same. But can they?
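For example, one can probe a masked language model in both directions. Here is a minimal sketch using the Hugging Face `fill-mask` pipeline, with `bert-base-cased` as an illustrative model choice (the paper’s actual probing setup may differ):

```python
from transformers import pipeline

# Illustrative model choice; any masked LM can be queried the same way.
fill = pipeline("fill-mask", model="bert-base-cased")

# Forward query: given the subject, predict the object.
for pred in fill("The capital of Italy is [MASK].", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))

# Reverse query of the same fact: given the object, predict the subject.
for pred in fill("[MASK] is the capital of Italy.", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```

A model with a coherent knowledge state should rank the correct answer highly in both directions; the paper shows this is often not the case.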

The paper investigates the coherency of factual knowledge in PLMs and highlights a clear gap: models that correctly predict a fact in one direction often fail to predict the same fact in reverse. This points to a need for improved training methods.

The research also highlights the potential of retrieval-based approaches, which significantly improve factual coherency and could make PLMs more reliable sources of factual information. Additionally, the work calls for developing pre-training objectives that explicitly optimize PLMs for more coherent knowledge states.
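One generic way such retrieval augmentation could look is sketched below. This is an illustration under assumptions, not the paper’s exact method: `retrieve_passages` is a hypothetical stub standing in for any retriever, and the retrieved evidence is simply prepended to the prompt.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")

def retrieve_passages(query: str) -> list[str]:
    # Hypothetical retriever stub: in practice this could be BM25,
    # a dense retriever, or any search over a text corpus.
    return ["Rome is the capital of Italy."]

query = "[MASK] is the capital of Italy."
# Prepend the retrieved evidence so the model can condition on it
# instead of relying solely on its parametric knowledge.
context = " ".join(retrieve_passages(query))
for pred in fill(f"{context} {query}", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```

Because the answer is stated explicitly in the retrieved context, the query direction matters far less, which is one intuition for why retrieval improves coherency.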

Paper

  • P. Youssef, J. Schlötterer, and C. Seifert, “The Queen of England is not England’s Queen: On the Lack of Factual Coherency in PLMs,” in Findings of the Association for Computational Linguistics: EACL 2024, St. Julian’s, Malta, Mar. 2024, pp. 2342–2354. [Online]. Available: https://aclanthology.org/2024.findings-eacl.155