Poster in Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models
Investigating Tool-Memory Conflicts in Tool-Augmented LLMs
Jiali Cheng · Rui Pan · Hadi Amiri
Keywords: [ Knowledge Conflict ] [ LLM ]
Tool-augmented large language models (LLMs) have powered many applications. However, they are prone to knowledge conflict. In this paper, we identify a new type of knowledge conflict, Tool-Memory Conflict (TMC), in which a tool-augmented LLM's internal parametric knowledge contradicts the external knowledge returned by its tools. We find that existing LLMs, though powerful, suffer from TMC, especially on STEM-related tasks. We also uncover that, under different conditions, tool knowledge and parametric knowledge may be prioritized differently. We then evaluate existing conflict-resolution techniques, including prompting-based and RAG-based methods. Results show that none of these approaches can effectively resolve tool-memory conflicts.
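To make the notion of a Tool-Memory Conflict concrete, here is a minimal illustrative sketch. The model answer and calculator tool below are hypothetical stand-ins, not the paper's actual evaluation setup: a TMC arises whenever the answer the model produces from parametric memory disagrees with the answer an external tool returns for the same query.

```python
def parametric_answer(question: str) -> str:
    # Hypothetical stand-in for the LLM answering from internal
    # (parametric) memory alone; here it makes an arithmetic slip.
    memory = {"What is 17 * 24?": "418"}
    return memory[question]

def tool_answer(question: str) -> str:
    # Hypothetical stand-in for an external calculator tool the
    # tool-augmented LLM can call; it computes the correct product.
    return str(17 * 24)

question = "What is 17 * 24?"
p, t = parametric_answer(question), tool_answer(question)
conflict = p != t  # a Tool-Memory Conflict: the two sources disagree

print(f"parametric={p}, tool={t}, conflict={conflict}")
```

In this toy case the tool is right and the memory is wrong, but as the abstract notes, which source the model actually prioritizes can vary with the conditions of the query.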