# SuperSimpleNet

This is an implementation of SuperSimpleNet, based on the [official code](https://github.com/blaz-r/SuperSimpleNet).

The model was first presented at ICPR 2024: [SuperSimpleNet: Unifying Unsupervised and Supervised Learning for Fast and Reliable Surface Defect Detection](https://arxiv.org/abs/2408.03143).

An extension was later published in JIMS 2025: [No Label Left Behind: A Unified Surface Defect Detection Model for all Supervision Regimes](https://link.springer.com/article/10.1007/s10845-025-02680-8).

Currently, the only difference between the ICPR and JIMS versions of the code is the `adapt_cls_features` parameter, which controls whether the features used by the classification head are adapted or not. For ICPR this is set to True (i.e. the features for the classification head are adapted), and for the JIMS version it is False (which is also the default).
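
As a minimal sketch of switching between the two variants (only the `adapt_cls_features` flag comes from the text above; the import path and the `Supersimplenet` class name are assumptions, so check them against your installed Anomalib version):

```python
# NOTE: import path and class name are assumptions; verify them in your Anomalib version.
from anomalib.models import Supersimplenet

# JIMS 2025 behaviour (default): features for the classification head are NOT adapted.
model_jims = Supersimplenet(adapt_cls_features=False)

# ICPR 2024 behaviour: features for the classification head ARE adapted.
model_icpr = Supersimplenet(adapt_cls_features=True)
```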

This implementation supports both the unsupervised and the supervised setting, but Anomalib currently supports only unsupervised learning.

> It is recommended to train the model for 300 epochs with a batch size of 32 to achieve stable training with random anomaly generation. Training with lower values will still work, but might not yield optimal results (see the training sketch after these notes).
>
> For weakly, mixed and fully supervised training, refer to the [official code](https://github.com/blaz-r/SuperSimpleNet).
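
As a rough sketch of the recommended unsupervised training setup (300 epochs, batch size 32), assuming Anomalib's `Engine` and `MVTecAD` datamodule and the hypothetical `Supersimplenet` class name; verify all names and arguments against your installed Anomalib version:

```python
# NOTE: class names and arguments are assumptions; verify them in your Anomalib version.
from anomalib.data import MVTecAD
from anomalib.engine import Engine
from anomalib.models import Supersimplenet

# Recommended settings from the note above: batch size 32, 300 epochs.
datamodule = MVTecAD(category="bottle", train_batch_size=32)
model = Supersimplenet()  # adapt_cls_features defaults to False (JIMS variant)

# Engine forwards trainer arguments such as max_epochs to the underlying trainer.
engine = Engine(max_epochs=300)
engine.fit(model=model, datamodule=datamodule)
```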
## MVTec AD results

The following results were obtained using this Anomalib implementation, trained for 300 epochs with seed 0, default parameters, and a batch size of 32.