Poster
in
Workshop: ICML 2025 Workshop on Collaborative and Federated Agentic Workflows (CFAgentic @ ICML'25)
Advancing Agentic AI: Decentralized and Verifiable Collaboration for Next-Generation Foundation Model Development
Arpita Sarker · Alexander Jesser
Foundation models such as large language models have achieved remarkable performance by leveraging massive centralized datasets and compute. However, concerns around data privacy, governance, and trust motivate new agentic workflows where multiple parties (agents) collaboratively develop models without central custodians. We propose a decentralized framework for verifiable multi-agent model training that integrates federated learning, distributed ledger technologies, and knowledge distillation. In our approach, each participant maintains local data and models, contributing updates that are logged on a tamper-proof DAG ledger for transparency and accountability. A voting-based consensus mechanism enables multi-agent governance, ensuring only high-quality model updates are merged. To aggregate knowledge from diverse sources, we employ cross-silo knowledge distillation, including distilling large teacher models (e.g., LLaMA, BioGPT) into smaller models in a federated setting. Empirical evaluations on collaborative learning scenarios, including named entity recognition (F1 = 96.23%), medical code classification (F1 = 79.11%), and question-answering tasks, demonstrate that our decentralized training achieves performance comparable to centralized methods while preserving privacy and trust. This work advances agentic AI by enabling next-generation foundation model development through privacy-preserving, trustworthy collaboration.
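To make the two core mechanisms concrete, the sketch below illustrates (i) a standard temperature-scaled knowledge-distillation loss for training a small student against a large teacher, and (ii) a simple quorum vote that decides whether a proposed model update is merged and appended to a hash-linked log. This is a minimal illustrative sketch, not the authors' implementation: the function names, the hyperparameters (temperature, mixing weight, quorum threshold), and the hash-chained record used to stand in for the DAG ledger are all assumptions introduced here for clarity.

```python
import hashlib
import json
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Soft-label KL term (teacher -> student) blended with the usual cross-entropy.

    Hypothetical hyperparameters: temperature softens both distributions,
    alpha trades off distillation vs. supervised loss.
    """
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce


def vote_and_log(update_id: str, votes: list[bool], ledger: list[dict], quorum: float = 0.66) -> bool:
    """Accept an update only if the fraction of approving agents reaches the quorum,
    then append a hash-linked record (a stand-in for a DAG-ledger entry)."""
    accepted = (sum(votes) / max(len(votes), 1)) >= quorum
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    record = {"update_id": update_id, "accepted": accepted, "prev": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return accepted


if __name__ == "__main__":
    # Toy usage: distill on random logits, then vote on the resulting update.
    student_logits = torch.randn(8, 5)
    teacher_logits = torch.randn(8, 5)
    labels = torch.randint(0, 5, (8,))
    print("distillation loss:", distillation_loss(student_logits, teacher_logits, labels).item())

    ledger: list[dict] = []
    print("update merged:", vote_and_log("agent3-round7", [True, True, False, True], ledger))
```

In a federated deployment, only the student update and its vote record would be exchanged; local data and the large teacher models would remain at each participating silo.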