

Poster

CUPS: Improving Human Pose-Shape Estimators with Conformalized Deep Uncertainty

Harry Zhang · Luca Carlone

West Exhibition Hall B2-B3 #W-313
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

We introduce CUPS, a novel method for learning sequence-to-sequence 3D human shapes and poses from RGB videos with uncertainty quantification. To improve on prior work, we develop a method to generate and score multiple hypotheses during training, effectively integrating uncertainty quantification into the learning process. This yields a deep uncertainty function that is trained end-to-end with the 3D pose estimator. After training, the learned deep uncertainty model is used as the conformity score to calibrate a conformal predictor, which assesses the quality of the output prediction. Since the data in human pose-shape learning is not fully exchangeable, we also present two practical bounds on the coverage gap in conformal prediction, providing theoretical backing for the uncertainty bound of our model. Our results indicate that, by taking advantage of deep uncertainty with conformal prediction, our method achieves state-of-the-art performance across various metrics and datasets while inheriting the probabilistic guarantees of conformal prediction. Interactive 3D visualization, code, and data will be available at https://sites.google.com/view/champpp.
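To make the abstract's recipe concrete, the sketch below shows how a learned nonconformity score can calibrate a standard split conformal predictor. This is not the authors' implementation: `uncertainty_net`, `calib_clips`, and `calib_poses` are hypothetical names, the pose-shape model itself is omitted, and the paper's coverage-gap corrections for non-exchangeable data are not modeled here.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected quantile of calibration nonconformity scores.

    `scores` are nonconformity values (here imagined as outputs of a learned
    deep uncertainty function) on a held-out calibration set; `alpha` is the
    target miscoverage level (e.g., 0.1 for nominal 90% coverage).
    """
    n = len(scores)
    # Split-conformal quantile level: ceil((n + 1) * (1 - alpha)) / n, capped at 1.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def calibrate(uncertainty_net, calib_clips, calib_poses, alpha=0.1):
    """Compute the conformal threshold from a calibration set.

    `uncertainty_net` stands in for a learned deep uncertainty model that maps
    a video clip and a pose-shape estimate to a scalar nonconformity score.
    """
    scores = np.array([uncertainty_net(x, y)
                       for x, y in zip(calib_clips, calib_poses)])
    return conformal_quantile(scores, alpha)

def accept(uncertainty_net, clip, predicted_pose, threshold):
    """Flag a test-time prediction as reliable if its score is below the threshold."""
    return uncertainty_net(clip, predicted_pose) <= threshold
```

Under exchangeability, accepting predictions this way inherits the usual split-conformal coverage guarantee; the paper's contribution, per the abstract, is to learn the score end-to-end and to bound how much that guarantee degrades when exchangeability only holds approximately.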

Lay Summary:

Improving human pose-shape estimation with uncertainty quantification.
