Poster
Binary Hypothesis Testing for Softmax Models and Leverage Score Models
Yuzhou Gu · Zhao Song · Junze Yin
East Exhibition Hall A-B #E-2704
Imagine you’re trying to figure out whether a coin is fair. You flip it many times and count how often it lands heads. If the result is close to 50/50, you might say, "Seems fair." But if it lands heads 90% of the time, something feels off.

This is the core idea behind hypothesis testing, a method for making decisions under uncertainty. We begin with a default assumption, called the null hypothesis, such as "the coin is fair," and then collect data to see whether that assumption holds. If the evidence strongly contradicts it, we reject the null and accept an alternative hypothesis, such as "the coin is biased."

Our theoretical work studies hypothesis testing for two fundamental mathematical tools: the softmax distribution and the leverage score distribution. These tools are central to modern AI systems, scientific computing frameworks, and operations research methods, shaping technologies we rely on every day. Our results provide insights into decision-making under uncertainty, with potential applications such as determining whether two neural networks behave similarly, among many others.
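The coin example above can be made concrete with a classical likelihood-ratio test. The sketch below is illustrative only (it is not the paper's method): it compares a null hypothesis (heads probability 0.5) against a fixed alternative (heads probability 0.9), and decides "biased" when the log-likelihood ratio favors the alternative. The thresholds and probabilities are assumptions chosen for the demo.

```python
import math
import random

def log_likelihood_ratio(flips, p_null=0.5, p_alt=0.9):
    """Log-likelihood ratio of the alternative ("biased") hypothesis
    over the null ("fair") hypothesis, for flips encoded as 1 = heads."""
    heads = sum(flips)
    tails = len(flips) - heads
    return (heads * math.log(p_alt / p_null)
            + tails * math.log((1 - p_alt) / (1 - p_null)))

def decide(flips, threshold=0.0):
    """Reject the null (declare the coin biased) when the evidence
    favors the alternative; threshold 0 weighs both hypotheses equally."""
    return "biased" if log_likelihood_ratio(flips) > threshold else "fair"

random.seed(0)
fair_flips = [1 if random.random() < 0.5 else 0 for _ in range(200)]
biased_flips = [1 if random.random() < 0.9 else 0 for _ in range(200)]
print(decide(fair_flips))    # a fair coin's flips stay near 50/50
print(decide(biased_flips))  # a 90%-heads coin is flagged as biased
```

With 200 flips the two regimes separate cleanly: a fair coin lands near 100 heads, while the decision boundary for these parameters sits near 147 heads, so errors in either direction are vanishingly rare.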
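For readers unfamiliar with the two distributions named above, here is a minimal sketch (our own illustration, not code from the paper) of how each turns raw data into a probability distribution: softmax maps a score vector to probabilities, and normalized leverage scores give a distribution over the rows of a matrix.

```python
import numpy as np

def softmax_distribution(x):
    """Softmax: exponentiate scores and normalize to sum to 1.
    Subtracting the max is a standard numerical-stability trick."""
    z = np.exp(x - np.max(x))
    return z / z.sum()

def leverage_score_distribution(A):
    """The leverage score of row i of A is the i-th diagonal entry of
    the hat matrix A (A^T A)^{-1} A^T, computable as the squared row
    norms of the Q factor from a thin QR decomposition. Scores sum to
    rank(A), so normalizing gives a distribution over rows."""
    Q, _ = np.linalg.qr(A)
    scores = (Q ** 2).sum(axis=1)
    return scores / scores.sum()

p = softmax_distribution(np.array([1.0, 2.0, 3.0]))
A = np.random.default_rng(0).normal(size=(6, 3))
q = leverage_score_distribution(A)
```

Both outputs are nonnegative vectors summing to one, which is what lets hypothesis-testing questions ("were these samples drawn from model A or model B?") be posed for softmax models and leverage score models alike.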