

Poster in Workshop: 2nd Workshop on Models of Human Feedback for AI Alignment (MoFA)

Implicit User Feedback in Human-LLM Dialogues: Informative to Understand Users yet Noisy as a Learning Signal

Yuhan Liu · Michael Zhang · Eunsol Choi


Abstract:

Once language models (LMs) are deployed, they can interact with users long-term, ideally evolving continuously based on their feedback. Soliciting direct feedback from users can be costly and disruptive, so we study harvesting implicit user feedback from user interaction logs. We examine implicit user feedback in two user-LM interaction datasets (WildChat and LMSYS). First, we analyze user feedback in human-LLM conversation trajectories, providing insights into when and why such feedback occurs. Second, we study harvesting learning signals from such implicit user feedback. We find that the content of user feedback (e.g., the user wanted clarification), not just its polarity (e.g., the user was unhappy with the previous model response), can improve model performance on short, human-designed questions (MTBench) but not on longer and more complex questions (WildBench). We also find that the usefulness of user feedback is largely tied to the quality of the user's initial prompt. Together, we provide an in-depth study of implicit user feedback, showing its potential and limitations.
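To make the idea of harvesting implicit feedback concrete, here is a minimal sketch (not the authors' method) of scanning user turns in an interaction log for signals such as clarification requests or corrections. The category names, cue phrases, and function names are illustrative assumptions, not the paper's taxonomy or code.

```python
# Illustrative sketch only: a keyword-based heuristic for flagging implicit
# user feedback in a user-LM conversation log. Categories and cue phrases
# below are hypothetical, not taken from the paper.
from typing import Optional

FEEDBACK_CUES = {
    "clarification_request": ["what do you mean", "can you clarify", "i don't understand"],
    "correction": ["that's wrong", "that is incorrect", "no, i meant"],
    "dissatisfaction": ["this isn't helpful", "you didn't answer", "try again"],
}

def detect_implicit_feedback(user_turn: str) -> Optional[dict]:
    """Return a feedback category and coarse polarity if the turn looks like feedback."""
    text = user_turn.lower()
    for category, cues in FEEDBACK_CUES.items():
        if any(cue in text for cue in cues):
            # All cue categories here indicate dissatisfaction with the prior reply.
            return {"category": category, "polarity": "negative"}
    return None

# Example: scan the user turns of one logged conversation.
conversation = [
    {"role": "user", "content": "Summarize this article for me."},
    {"role": "assistant", "content": "Here is a summary..."},
    {"role": "user", "content": "That's wrong, the article is about climate policy."},
]
for turn in conversation:
    if turn["role"] == "user":
        feedback = detect_implicit_feedback(turn["content"])
        if feedback:
            print(feedback)  # {'category': 'correction', 'polarity': 'negative'}
```

In practice, such signals would likely be extracted with a learned classifier or an LLM judge rather than keyword matching; the sketch only illustrates the distinction the abstract draws between feedback content (the category) and feedback polarity.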
