When Silence Is Golden: Can LLMs Learn to Abstain in Temporal QA and Beyond?
Published in ICLR 2026
This paper investigates whether Large Language Models (LLMs) can learn to abstain from answering questions when they lack sufficient information, with a particular focus on temporal question answering.
Download here
