Poster
Stabilizing Sample Similarity in Representation via Mitigating Random Consistency
Jieting Wang · Zelong Zhang · Feijiang Li · Yuhua Qian · Xinyan Liang
East Exhibition Hall A-B #E-1600
Deep learning is powerful because it learns meaningful representations from data. Traditionally, researchers have assessed this ability by measuring how similar individual samples are to one another in the learned representation. For tasks like classification, however, what matters more is whether the model separates entire categories of data, not just individual examples. In this paper, we propose a new way to evaluate deep learning models by measuring how well the similarity structure of their learned representations aligns with the true class structure of the data. We also identify and correct for random consistency: agreement that arises purely by chance and can skew these evaluations. The resulting measure is mathematically grounded and unbiased, leading to more reliable model assessments. Experiments show that it improves classification accuracy and helps models better separate different classes.
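To make the idea concrete, below is a minimal sketch of a chance-corrected class-alignment score in the spirit the abstract describes. It is not the paper's actual estimator: the function and parameter names (`class_alignment_score`, `n_permutations`) are illustrative assumptions, and the chance term is estimated here by label permutation, whereas the paper presumably derives its correction in closed form.

```python
import numpy as np

def class_alignment_score(Z, y, n_permutations=100, seed=0):
    """Chance-corrected alignment between representation similarity
    and class structure. A generic illustrative sketch, not the
    paper's estimator: names and the permutation-based chance term
    are assumptions made for demonstration.
    """
    Z = np.asarray(Z, dtype=float)
    y = np.asarray(y)
    rng = np.random.default_rng(seed)

    # Cosine similarity between all pairs of sample representations.
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Zn @ Zn.T

    def raw_alignment(labels):
        # Mean similarity of same-class pairs minus different-class pairs,
        # excluding each sample's trivial similarity to itself.
        same = labels[:, None] == labels[None, :]
        np.fill_diagonal(same, False)
        diff = ~same
        np.fill_diagonal(diff, False)
        return S[same].mean() - S[diff].mean()

    observed = raw_alignment(y)
    # Estimate the "random consistency" baseline: the alignment one
    # would observe with randomly assigned class labels.
    expected = np.mean([raw_alignment(rng.permutation(y))
                        for _ in range(n_permutations)])
    # Subtracting the chance term makes a random representation score ~0.
    return observed - expected

if __name__ == "__main__":
    # Toy usage: class-clustered representations score well above chance.
    rng = np.random.default_rng(1)
    y = np.repeat([0, 1, 2], 30)
    centers = rng.normal(size=(3, 16))
    Z = centers[y] + 0.5 * rng.normal(size=(90, 16))
    print(class_alignment_score(Z, y))
```

The design point the abstract emphasizes is the baseline subtraction: without it, two representations can look similarly "aligned" simply because chance agreement inflates the raw score, which is exactly the bias the paper sets out to remove.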