Invited talk in Affinity Workshop: 4th MusIML workshop at ICML’25
Dr. Niloufar Salehi (UC Berkeley): Designing Reliable Human-AI Interactions
Abstract:
How can users trust an AI system that fails in unpredictable ways? Machine learning models, while powerful, can produce unpredictable results. This uncertainty is even more pronounced in domains where verification is difficult, such as machine translation, and where reliance depends on adherence to community values, such as student assignment algorithms. Guiding users on when to rely on a system is challenging because models can produce a wide range of outputs (e.g., text), error boundaries are highly stochastic, and automated explanations may themselves be incorrect.