Poster
To Each Metric Its Decoding: Post-Hoc Optimal Decision Rules of Probabilistic Hierarchical Classifiers
Roman Plaud · Alexandre Perez-Lebel · Matthieu Labeau · Antoine Saillenfest · Thomas Bonald
East Exhibition Hall A-B #E-1808
Many real-world tasks require sorting items into categories that form a hierarchy: think of sorting photos first into “animals” or “vehicles”, then into subcategories like “dogs” or “cars”. In such setups, some mistakes matter more than others (e.g., calling a wolf a dog is certainly less serious than calling it a car), but existing methods for selecting categories from an AI system’s probability estimates usually rely on simple “if-then” rules that do not line up with how we actually judge performance.

We introduce a clear, metric-driven framework that finds the optimal way to choose categories from an AI’s probability estimates. We work out exact optimal predictions for different scenarios, from picking a single node in the hierarchy to selecting several at once, so that the system optimizes directly for the scores we care about, including specialized scores that balance different kinds of errors.

By using these optimal strategies, hierarchical classifiers perform noticeably better, especially in ambiguous cases. This advance can improve real-world applications, such as medical diagnosis, document organization, and wildlife monitoring, by reducing costly hierarchical misclassifications.
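To make the idea concrete, here is a minimal sketch (in Python) of what a post-hoc, metric-aware decision rule can look like: given a model’s probabilities over leaf categories, it scores every candidate node by the expected value of the chosen metric and returns the best one. The toy hierarchy, the metric definitions, and the brute-force search over nodes are illustrative assumptions made for this summary, not the exact, more efficient rules derived in the paper.

```python
# Sketch: a post-hoc, Bayes-optimal decision rule that picks the node
# maximizing the *expected* value of the target metric under the model's
# predicted leaf probabilities. Hierarchy and metrics are toy assumptions.

# Toy hierarchy as child -> parent pointers; None marks the root.
PARENT = {
    "root": None,
    "animal": "root", "vehicle": "root",
    "dog": "animal", "wolf": "animal", "cat": "animal", "fox": "animal",
    "car": "vehicle",
}

def ancestry(node):
    """Set containing the node and its ancestors, excluding the root."""
    path = set()
    while PARENT[node] is not None:
        path.add(node)
        node = PARENT[node]
    return path

def zero_one(pred, true_leaf):
    """Plain accuracy: credit only for predicting the exact leaf."""
    return 1.0 if pred == true_leaf else 0.0

def hier_f1(pred, true_leaf):
    """Hierarchical F1: overlap between the two root-paths, in the style of
    set-based hierarchical precision/recall."""
    a, b = ancestry(pred), ancestry(true_leaf)
    shared = len(a & b)
    if shared == 0:
        return 0.0
    precision, recall = shared / len(a), shared / len(b)
    return 2 * precision * recall / (precision + recall)

def optimal_decoding(leaf_probs, metric):
    """Return the single node maximizing the expected metric value."""
    candidates = [v for v in PARENT if v != "root"]
    expected = lambda v: sum(p * metric(v, leaf)
                             for leaf, p in leaf_probs.items())
    return max(candidates, key=expected)

# An ambiguous image: probability mass is spread over several animals.
probs = {"dog": 0.30, "wolf": 0.24, "cat": 0.24, "fox": 0.18, "car": 0.04}

print(optimal_decoding(probs, zero_one))  # -> 'dog'    (commit to the top leaf)
print(optimal_decoding(probs, hier_f1))   # -> 'animal' (hedge to a safe ancestor)
```

Note how the two metrics decode the same probabilities differently: plain accuracy commits to the most likely leaf (“dog”), while the hierarchical score prefers backing off to the ancestor “animal” once the mass is spread across its children. This metric-dependent behavior is exactly what the framework captures.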