### 2020

| Author(s) | Title | Venue |
|---|---|---|
| Tople et al. | Analyzing Information Leakage of Updates to Natural Language Models | CCS |
| Golatkar et al. | Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks | CVPR |
| Golatkar et al. | Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations | ECCV |
| Garg et al. | Formalizing Data Deletion in the Context of the Right to be Forgotten | EUROCRYPT |
| Guo et al. | Certified Data Removal from Machine Learning Models | ICML |
| Wu et al. | DeltaGrad: Rapid Retraining of Machine Learning Models | ICML |
| Nguyen et al. | Variational Bayesian Unlearning | NeurIPS |
| Liu et al. | Learn to Forget: User-Level Memorization Elimination in Federated Learning | ResearchGate |
| Felps et al. | Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale | arXiv |
| Sommer et al. | Towards Probabilistic Verification of Machine Unlearning | arXiv |

### 2019

| Author(s) | Title | Venue |
|---|---|---|
| Shintre et al. | Making Machine Learning Forget | APF |
| Du et al. | Lifelong Anomaly Detection Through Unlearning | CCS |
| Kim et al. | Learning Not to Learn: Training Deep Neural Networks With Biased Data | CVPR |
| Ginart et al. | Making AI Forget You: Data Deletion in Machine Learning | NeurIPS |
| Wang et al. | Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks | S&P |
| Schelter | “Amnesia” – Towards Machine Learning Models That Can Forget User Data Very Fast | AIDB Workshop |

### 2018

| Author(s) | Title | Venue |
|---|---|---|
| Cao et al. | Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning | ASIACCS |
| Chen et al. | A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine | Cluster Computing |
| Villaronga et al. | Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten | Computer Law & Security Review |
| Veale et al. | Algorithms that remember: model inversion attacks and data protection law | Philosophical Transactions of the Royal Society A |
| European Union | General Data Protection Regulation (GDPR) | |
| State of California | California Consumer Privacy Act (CCPA) | |

### 2017

| Author(s) | Title | Venue |
|---|---|---|
| Shokri et al. | Membership Inference Attacks Against Machine Learning Models | S&P |
| Kwak et al. | Let Machines Unlearn – Machine Unlearning and the Right to be Forgotten | SIGSEC |

### < 2017

| Author(s) | Title | Venue |
|---|---|---|
| Ganin et al. | Domain-Adversarial Training of Neural Networks | JMLR 2016 |
| Cao and Yang | Towards Making Systems Forget with Machine Unlearning | S&P 2015 |
| Tsai et al. | Incremental and decremental training for linear classification | KDD 2014 |
| Karasuyama and Takeuchi | Multiple Incremental Decremental Learning of Support Vector Machines | NeurIPS 2009 |
| Duan et al. | Decremental Learning Algorithms for Nonlinear Lagrangian and Least Squares Support Vector Machines | OSB 2007 |
| Romero et al. | Incremental and Decremental Learning for Linear Support Vector Machines | ICANN 2007 |
| Tveit et al. | Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients | DaWaK 2003 |
| Tveit and Hetland | Multicategory Incremental Proximal Support Vector Classifiers | KES 2003 |
| Cauwenberghs and Poggio | Incremental and Decremental Support Vector Machine Learning | NeurIPS 2001 |
| Canada | Personal Information Protection and Electronic Documents Act (PIPEDA) | 2000 |