

Poster in Affinity Workshop: New In ML

Exploring the Application of Model Context Protocol for Enhanced Reasoning in Large Language Models


Abstract:

Large Language Models (LLMs) have achieved remarkable success across various NLP tasks, yet they continue to face challenges in structured reasoning, multi-step problem solving, and tool coordination. To address these limitations, we explore the application of the Model Context Protocol (MCP), a lightweight, extensible communication interface designed to manage context across multi-turn interactions in tool-augmented environments. We integrate MCP into open-source LLM stacks and demonstrate its utility by applying an existing Sequential Thinking (ST) module, which supports step-wise thought decomposition and verification, and by introducing our novel Monte Carlo Tree Search (MCTS) module, which performs planning guided by MCP Thoughts. Our MCP-based system demonstrates improved modularity, interpretability, and scalability in reasoning workflows. Through empirical evaluation on benchmarks including GPQA-100, StrategyQA, and AIME, we show that leveraging MCP enhances performance compared to vanilla prompting. These results validate MCP as a practical mechanism for enhancing reasoning-driven LLM applications and lay the foundation for reproducible and agentic AI systems.
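To make the abstract's planning idea concrete, the sketch below is an illustration only, not the authors' implementation: it uses hypothetical Thought and MCTSNode types to show how step-wise, MCP-style Thoughts could serve as the node state of an MCTS planner, with selection by UCT, expansion over candidate next steps, and backpropagation of a verifier-style reward. All names, fields, and the reward source are assumptions; a real system would obtain candidate steps and rewards from the LLM over the protocol.

```python
import math
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical stand-in for an MCP "Thought": one step of a decomposed
# reasoning chain exchanged over the protocol.
@dataclass
class Thought:
    number: int          # position of this step in the chain
    content: str         # the step's text
    needs_more: bool     # whether another step should follow

# Hypothetical MCTS node whose state is the sequence of Thoughts so far.
@dataclass
class MCTSNode:
    thoughts: List[Thought]
    parent: Optional["MCTSNode"] = None
    children: List["MCTSNode"] = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

    def uct(self, c: float = 1.4) -> float:
        # Standard UCT: exploit average value, explore rarely visited nodes.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits
        )

def select(node: MCTSNode) -> MCTSNode:
    # Descend the tree by UCT until a leaf is reached.
    while node.children:
        node = max(node.children, key=lambda n: n.uct())
    return node

def expand(node: MCTSNode, candidate_steps: List[str]) -> None:
    # Each candidate continuation becomes a child with one more Thought appended.
    step_no = len(node.thoughts) + 1
    for text in candidate_steps:
        t = Thought(number=step_no, content=text, needs_more=True)
        node.children.append(MCTSNode(thoughts=node.thoughts + [t], parent=node))

def backpropagate(node: MCTSNode, reward: float) -> None:
    # Propagate a scalar reward (e.g. a verifier score) back to the root.
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent

if __name__ == "__main__":
    root = MCTSNode(thoughts=[])
    # Candidate steps would normally be proposed by the LLM via MCP tool calls.
    expand(root, ["Restate the question", "List known quantities"])
    leaf = select(root)
    backpropagate(leaf, reward=0.7)  # reward would come from an LLM/verifier call
    print(leaf.thoughts[-1].content, leaf.visits, leaf.value)
```

In this reading, the ST module supplies the step-wise Thought decomposition while the MCTS module decides which partial chain to extend next; how the two are wired over MCP in the paper's actual system is not specified here.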
