Poster
System-Aware Unlearning Algorithms: Use Lesser, Forget Faster
Linda Lu · Ayush Sekhari · Karthik Sridharan
West Exhibition Hall B2-B3 #W-813
Machine learning models are often trained on private, sensitive data, which could be exposed during deployment. Due to these privacy concerns, some individuals may request that the influence of their data be removed from the model after deployment. Machine unlearning is the selective removal of the influence of specific training data from a trained model, done more efficiently than retraining the entire model from scratch. The current definition of machine unlearning provides privacy guarantees against a worst-case attacker (one who can recover not only the unlearned model but also the remaining data samples); however, such strong attackers are unrealistic, and this stringent definition has made the development of efficient unlearning algorithms challenging. In this work, we propose a new definition, system-aware unlearning, which provides unlearning guarantees against an attacker who can, at best, gain access only to the information stored in the learning system after unlearning. If the algorithm stores and uses less information, then less information is exposed to the attacker, making it easier to guarantee privacy against such an attacker. Thus, algorithms that rely on less of their training data can unlearn more efficiently. Using this intuition, we use sample compression algorithms to design more efficient unlearning algorithms for classification.
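The core intuition can be illustrated with a toy sample compression scheme (this sketch is our own illustration, not the paper's algorithm). A 1D threshold classifier on linearly separable data is fully determined by a two-point compression set: the largest negative example and the smallest positive example. Deleting any other sample leaves the stored model untouched, so unlearning it is free; only deleting a compression-set point forces recomputation. For simplicity this toy retains the full dataset to make refitting possible; a system-aware algorithm would store less.

```python
class ThresholdClassifier:
    """Toy 1D classifier whose model depends only on a 2-point compression set.

    Assumes linearly separable data: every -1 point lies left of every +1 point.
    """

    def __init__(self, data):
        # data: list of (x, label) pairs with label in {-1, +1}
        self.data = list(data)  # kept only so the refit path works in this toy
        self._fit()

    def _fit(self):
        negs = [x for x, y in self.data if y == -1]
        poss = [x for x, y in self.data if y == +1]
        # Compression set: the two boundary points that pin down the threshold.
        self.compression_set = {(max(negs), -1), (min(poss), +1)}
        self.threshold = (max(negs) + min(poss)) / 2

    def predict(self, x):
        return +1 if x > self.threshold else -1

    def unlearn(self, point):
        """Remove one training point; refit only if the model depended on it."""
        self.data.remove(point)
        if point in self.compression_set:
            self._fit()  # expensive path: the deleted point shaped the model
            return "refit"
        return "no-op"  # nothing about this point was stored in the model
```

For example, training on `[(0.0, -1), (1.0, -1), (1.5, -1), (2.5, +1), (3.0, +1)]` gives threshold 2.0 with compression set `{(1.5, -1), (2.5, +1)}`; unlearning `(0.0, -1)` is a no-op, while unlearning `(1.5, -1)` triggers a refit. The fewer training points the stored model depends on, the more deletion requests can be served at zero cost, which is the efficiency argument the abstract makes.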