
Poster in Workshop: Programmatic Representations for Agent Learning

Large Language Models Can Think and Act Probabilistically

Kou Misaki · Takuya Akiba


Abstract:

This research demonstrates that a non-trivial prompting method incorporating programmatic representations can enable agents to reliably execute their own intended probabilistic behavior. This capability is crucial for applications requiring strategic unpredictability (i.e., remaining unpredictable to adversaries) and efficient exploration. Our proposed prompting method, called Random String Manipulation (RSM), leverages the capability of Large Language Models (LLMs) to generate complex strings and to arithmetically manipulate them in order to select an action from a given set according to a specified probability distribution. Experiments on tasks requiring probabilistic responses show that RSM consistently outperforms baseline prompts across all tested LLMs, and in some cases achieves performance comparable to pseudo-random number generators, demonstrating its effectiveness in producing robust and unbiased probabilistic outputs.
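To make the idea concrete, the following is a minimal sketch of the general mechanism the abstract describes: a model-generated "complex string" is reduced arithmetically to a number, and that number selects an action according to a target probability distribution. This is not the authors' actual prompt; the positional character-code sum, the modulus, and the example string are illustrative assumptions standing in for text an LLM might generate and manipulate in-context.

```python
# Illustrative sketch of string-to-action selection (assumed details, not the RSM prompt).
from itertools import accumulate

def string_to_unit_interval(s: str, modulus: int = 10_000) -> float:
    """Reduce a string to a value in [0, 1) via a simple positional sum (assumed reduction)."""
    total = sum((i + 1) * ord(c) for i, c in enumerate(s))
    return (total % modulus) / modulus

def select_action(s: str, actions: list[str], probs: list[float]) -> str:
    """Pick the action whose cumulative-probability bucket contains the reduced value."""
    u = string_to_unit_interval(s)
    for action, cum in zip(actions, accumulate(probs)):
        if u < cum:
            return action
    return actions[-1]  # guard against floating-point rounding at the upper edge

# Example: a hypothetical LLM-generated string drives a 70/20/10 choice.
generated = "qZ4vLm9Xr2KpT8sW"
print(select_action(generated, ["rock", "paper", "scissors"], [0.7, 0.2, 0.1]))
```

The point of the sketch is the division of labor: the string supplies high-entropy material, and a deterministic arithmetic rule maps it onto the desired distribution, so the sampled action is unbiased to the extent the reduced values are uniform.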
