We find that the presence of #explanations increases reliance on both correct and incorrect LLM responses.
This isn't surprising given prior HCI/Psychology work on other types of explanations, but it raises the question of whether LLMs should provide explanations by default.
4/7