The project is for LLM unlearning, trustworthy AI.
Updated Jul 19, 2025 · Python
The LLM Unlearning repository is an open-source project dedicated to unlearning in Large Language Models (LLMs). It addresses data-privacy and ethical-AI concerns by exploring and implementing unlearning techniques that let a model forget unwanted or sensitive data, helping models meet privacy requirements.
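The repository does not specify a single algorithm here, but a common baseline for LLM unlearning is gradient ascent on the forget set: instead of minimizing the next-token loss on the data to be removed, the loss is maximized so the model becomes less likely to reproduce that data. Below is a minimal, hypothetical sketch using Hugging Face `transformers`; the model name, forget texts, and hyperparameters are placeholders, not part of this repository.

```python
# Minimal sketch of gradient-ascent unlearning on a "forget set".
# Assumptions: any causal LM works here; the model name, forget_texts,
# learning rate, and step count are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

forget_texts = [
    "Example sensitive sentence the model should no longer reproduce.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for step in range(3):  # a few ascent steps; real runs need careful tuning
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        # Negate the usual language-modeling loss: gradient *ascent*
        # on the forget set pushes probability mass away from this text.
        loss = -outputs.loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice, pure gradient ascent can degrade general capability, so published methods typically add a retain-set term or a KL penalty toward the original model; the sketch omits those for brevity.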
"Machine Unlearning Fails to Remove Data Poisoning Attacks". Pawelczyk, M., Di, J., Lu, Y., Sekhari, A., Gautam, K., Seth, N.; ICLR 2025.