Hello! Thanks for your work on this repository; it's great and very valuable!

After downloading the code, I ran the tofu_unlearn.sh script on the Llama-3.2-1B-Instruct model. The results I got differ from those presented in the repro.md file.

Checking the parameters in the tofu_unlearn.sh script, I noticed a potential issue: per_device_train_batch_size is set to 4 in the script, while repro.md shows it as 8. Could this difference in batch size be the reason for the different results?

I encountered the same issue with the muse_unlearn.sh script.

I am running on two A100 80GB GPUs.
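For reference, here is a quick sketch of how I understand the effective global batch size to differ between the two settings on my setup. The gradient_accumulation_steps value below is a hypothetical placeholder, since I have not confirmed what the script uses:

```python
# Sketch: effective global batch size = per-device batch size x number of GPUs
# x gradient accumulation steps. Values for GPUs/accumulation reflect my setup
# and an assumption, not confirmed script defaults.
num_gpus = 2      # two A100 80GB GPUs, as in my run
grad_accum = 1    # assumption; the actual script value may differ

for per_device in (4, 8):  # 4 = script value, 8 = repro.md value
    effective = per_device * num_gpus * grad_accum
    print(f"per_device_train_batch_size={per_device} "
          f"-> effective batch size {effective}")
```

If the effective batch size really is halved, the number of optimizer steps per epoch doubles, which could plausibly shift the results even with the same learning rate.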
Thank you for your help.
PS: The following figures show the results I obtained and the parameters displayed in repro.md.