What😂? You mean the outputs of the model are gray-scale images instead of masks? If you didn't change any code of the model, you may need to check your GT data to see whether they are gray-scale images.
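A quick way to run this check is to scan the ground-truth masks for non-binary pixel values. A minimal sketch, assuming the masks live in a `data/masks` folder as PNGs (both the path and the extension are placeholders for your own layout):

```python
import glob

import numpy as np
from PIL import Image

# Path and extension are assumptions -- point this at your own GT mask folder.
mask_paths = glob.glob("data/masks/*.png")

for path in mask_paths[:20]:  # spot-check a sample of masks
    mask = np.array(Image.open(path).convert("L"))
    values = np.unique(mask)
    if not set(values.tolist()).issubset({0, 255}):
        # Gray-scale GT: intermediate values teach the model soft targets,
        # so its predictions will also come out gray-scale.
        print(f"{path}: non-binary values {values[:10]} ...")
```

If non-binary values show up, binarizing the masks before training (e.g. `(mask > 127) * 255`) should give the model hard black-and-white targets.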
So I prepared my dataset: the images have solid background colors, and I want to remove only that background while preserving the other design elements, including the abstract shapes and splashes. I used BiRefNet-general-epoch_244.pth (and also the model.safetensors file from Hugging Face) as the pretrained base model, with swin_large as the backbone. I set the batch size to 1 with a gradient accumulation step of 4 for a dataset of 2500 image-mask pairs; all images are 1024x1024. I set the fine-tuning to run for 200 epochs, which will take around 6 days to complete.

But the problem is that I am getting prediction results as gray-scale images instead of full black-and-white masks. I didn't alter the model architecture or the pre- and post-processing steps, so I don't know why I am getting this issue. Can anyone help me with this?
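In case it helps others debugging the same symptom: segmentation models like this typically output a soft probability map, so even with correct binary GT, a hard mask usually comes from thresholding the prediction. A minimal sketch, assuming the prediction was saved as a single-channel gray-scale PNG (the file names and the threshold of 127 are assumptions, not part of the repo's post-processing):

```python
import numpy as np
from PIL import Image

# File names are hypothetical -- substitute your own prediction path.
pred = np.array(Image.open("prediction.png").convert("L"))

# Hard-threshold the soft gray-scale map into a black-and-white mask.
binary = np.where(pred > 127, 255, 0).astype(np.uint8)
Image.fromarray(binary).save("prediction_binary.png")
```

If a fixed threshold loses detail near the splashes, an adaptive choice such as Otsu's method (e.g. `skimage.filters.threshold_otsu`) may fit the data better.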