Sunnie S. Y. Kim<p>On a more optimistic note, we observe less overreliance on incorrect LLM responses when accurate and relevant <a href="https://hci.social/tags/sources" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>sources</span></a> are provided, or when explanations from LLMs exhibit <a href="https://hci.social/tags/inconsistencies" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>inconsistencies</span></a> (i.e., sets of statements that cannot all be true at the same time).</p><p>5/7</p>