Poster
The Polynomial Stein Discrepancy for Assessing Moment Convergence
Narayan Srinivasan · Matthew Sutton · Christopher Drovandi · Leah South
East Exhibition Hall A-B #E-1307
Modern challenges in statistics and machine learning involve processing large amounts of data and complex mathematical models, which can be computationally intensive. To overcome this computational burden, approximate models are often used in practice. However, it is difficult to assess whether samples generated from approximate models are consistent with the complex model of interest (i.e. whether they are fit-for-purpose).

Traditional ways of checking or measuring the quality of simulated samples often miss important issues, especially when the approximate simulation process introduces bias or error. Some newer techniques try to address this, but they can be slow on large datasets, struggle with complex data, and often require substantial tuning to work properly. They can also miss key differences in basic features of the samples, such as the average or the spread.

To address these problems, we introduce a new, faster approach that directly checks whether the important summary features of the samples, such as their averages and variability, match what we expect. We show through examples that this method works well across a range of common statistical tasks.
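The idea of testing moment agreement through a Stein identity can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' Polynomial Stein Discrepancy: it uses a one-dimensional standard-normal target, monomial test functions, and the Langevin Stein operator, for which the expected value of the operator applied to any polynomial is zero under the target. Large sample averages of these quantities therefore flag moment mismatch.

```python
import numpy as np

def stein_poly_stats(x, score, degrees=(1, 2, 3)):
    """Sample averages of the Langevin Stein operator applied to
    monomial test functions f_k(x) = x**k, i.e.
        (A_p f_k)(x) = score(x) * k * x**(k-1) + k*(k-1) * x**(k-2).
    By Stein's identity these averages converge to zero when x is
    drawn from the target p, so large values signal moment mismatch.
    Hypothetical helper for illustration only.
    """
    stats = []
    for k in degrees:
        grad_term = score(x) * k * x ** (k - 1)
        lap_term = k * (k - 1) * x ** (k - 2) if k >= 2 else np.zeros_like(x)
        stats.append(np.mean(grad_term + lap_term))
    return np.array(stats)

rng = np.random.default_rng(0)
score = lambda x: -x  # score function of the standard normal target

# Samples from the target vs. samples with a biased mean.
good = stein_poly_stats(rng.normal(0.0, 1.0, 100_000), score)
biased = stein_poly_stats(rng.normal(0.5, 1.0, 100_000), score)
print(np.abs(good))    # all close to zero
print(np.abs(biased))  # clearly nonzero: the mean shift is detected
```

Taking the maximum absolute statistic over the polynomial degrees gives a single moment-sensitive discrepancy; the actual method in the paper is a principled version of this construction.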