Poster
PDUDT: Provable Decentralized Unlearning under Dynamic Topologies
Jing Qiao · Yu Liu · Zengzhe Chen · Mingyi Li · Yuan Yuan · Xiao Zhang · Dongxiao Yu
West Exhibition Hall B2-B3 #W-802
We study how to “unlearn” a specific participant’s data from a fully decentralized learning system without the heavy cost of retraining or extra communication. In decentralized training, devices exchange model updates over changing network topologies, making it hard to pinpoint and remove one client’s influence once learning is complete. Our solution, PDUDT, lets every node erase a target client’s contribution by adjusting only its own local updates; no extra communication or replay of past training is needed. We prove that PDUDT’s outcome is statistically equivalent to that of perturbed retraining, giving strong guarantees that the undesired influence is truly removed. After unlearning, PDUDT converges quickly in subsequent training. In experiments, PDUDT matches the unlearning quality of naive retraining while cutting unlearning time by over 99%, making it a practical, scalable way to enforce data removal in real-world decentralized learning.
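The abstract gives no pseudocode, so the following is a minimal sketch of the general pattern it describes: each node logs the influence a client's updates had on its local model and, at unlearning time, cancels that influence and adds calibrated noise, with no communication. Every name here (Node, apply_update, unlearn_locally, sigma) is a hypothetical illustration, not PDUDT's actual algorithm.

```python
import numpy as np

class Node:
    """Hypothetical node in a decentralized training run (illustrative only;
    PDUDT's actual bookkeeping is not specified in this abstract)."""

    def __init__(self, dim, rng):
        self.model = np.zeros(dim)   # local model parameters
        self.rng = rng
        self.contrib = {}            # client_id -> accumulated influence vector

    def apply_update(self, client_id, update, mix_weight):
        # Mix a neighbor's update into the local model and log its influence.
        weighted = mix_weight * update
        self.model += weighted
        self.contrib[client_id] = self.contrib.get(client_id, 0.0) + weighted

    def unlearn_locally(self, client_id, sigma=0.01):
        # Subtract the target client's logged influence and add calibrated
        # Gaussian noise, emulating "perturbed retraining" from purely
        # local state: no messages, no replay of past rounds.
        self.model -= self.contrib.pop(client_id, 0.0)
        self.model += sigma * self.rng.standard_normal(self.model.shape)

rng = np.random.default_rng(0)
node = Node(dim=4, rng=rng)
node.apply_update("client_3", rng.standard_normal(4), mix_weight=0.5)
node.unlearn_locally("client_3")   # no communication with other nodes
```

The noise term in this sketch is what would make the unlearned model statistically comparable to a perturbed retrain rather than an exact one; how PDUDT calibrates it under dynamic topologies is the subject of the paper's proofs.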