

Poster

E-LDA: Toward Interpretable LDA Topic Models with Strong Guarantees in Logarithmic Parallel Time

Adam Breuer

East Exhibition Hall A-B #E-1504
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

In this paper, we provide the first practical algorithms with provable guarantees for the problem of inferring the topics assigned to each document in an LDA topic model. This is the primary inference problem for many applications of topic models in social science, data exploration, and causal inference settings. We obtain this result via a novel non-gradient-based, combinatorial approach to estimating topic models. This yields algorithms that converge to near-optimal posterior probability in logarithmic parallel computation time (adaptivity), exponentially faster than any known LDA algorithm. We also show that our approach can provide interpretability guarantees such that each learned topic is formally associated with a known keyword. Finally, we show that unlike alternatives, our approach can maintain the independence assumptions necessary to use the learned topic model for downstream causal inference methods that allow researchers to study topics as treatments. In terms of practical performance, our approach consistently returns solutions of higher semantic quality than solutions from state-of-the-art LDA algorithms, neural topic models, and LLM-based topic models across a diverse range of text datasets and evaluation parameters.
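For readers unfamiliar with the inference task the abstract refers to, the minimal sketch below illustrates what "inferring the topics assigned to each document" means in practice, using scikit-learn's standard variational LDA as a conventional baseline. This is not the paper's E-LDA algorithm; the toy corpus, topic count, and parameter choices are illustrative assumptions only.

```python
# Minimal sketch (NOT the paper's E-LDA algorithm): the standard per-document
# topic-inference problem, shown with scikit-learn's variational LDA baseline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus chosen purely for illustration.
docs = [
    "the court ruled on the new election law",
    "the team won the championship game last night",
    "voters went to the polls for the election",
]

# Bag-of-words representation of the corpus.
X = CountVectorizer().fit_transform(docs)

# Fit a 2-topic LDA model with batch variational inference (a conventional
# baseline, in contrast to the combinatorial approach described above).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# The inference problem: the posterior topic mixture assigned to each document.
doc_topic = lda.transform(X)
print(doc_topic)  # one row per document, one column per topic
```

The rows of `doc_topic` are the per-document topic assignments whose estimation the paper targets with provable guarantees.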

Lay Summary:

Topic models are among the most popular techniques in machine learning and social science, where they are widely used to help researchers summarize the key themes that characterize large datasets containing many text documents. However, existing topic modeling algorithms are known to produce unreliable results that can be difficult to interpret, and they are also too slow to use on very large datasets. This paper introduces a new algorithm that solves topic models with strong mathematical guarantees, ensuring that the topics it finds accurately represent the data. Unlike previous methods, this algorithm runs exponentially faster and provides clear, interpretable results, making it especially valuable in sensitive areas such as detecting harmful or abusive content online, where transparency is not just desirable but ethically and legally essential. Finally, we show that in experiments on real-world datasets, our algorithm learns topics that exhibit better semantic quality than alternatives, including recent LLM-based and neural-network-based algorithms.
