@@ -267,10 +267,12 @@ All model architecture families include variants with pretrained weights. There
 A full version of the list below with source links can be found in the [documentation](https://rwightman.github.io/pytorch-image-models/models/).

 * Aggregating Nested Transformers - https://arxiv.org/abs/2105.12723
+* BEiT - https://arxiv.org/abs/2106.08254
 * Big Transfer ResNetV2 (BiT) - https://arxiv.org/abs/1912.11370
 * Bottleneck Transformers - https://arxiv.org/abs/2101.11605
 * CaiT (Class-Attention in Image Transformers) - https://arxiv.org/abs/2103.17239
 * CoaT (Co-Scale Conv-Attentional Image Transformers) - https://arxiv.org/abs/2104.06399
+* ConvNeXt - https://arxiv.org/abs/2201.03545
 * ConViT (Soft Convolutional Inductive Biases Vision Transformers) - https://arxiv.org/abs/2103.10697
 * CspNet (Cross-Stage Partial Networks) - https://arxiv.org/abs/1911.11929
 * DeiT (Vision Transformer) - https://arxiv.org/abs/2012.12877
@@ -288,19 +290,23 @@ A full version of the list below with source links can be found in the [document
   * MNASNet B1, A1 (Squeeze-Excite), and Small - https://arxiv.org/abs/1807.11626
   * MobileNet-V2 - https://arxiv.org/abs/1801.04381
   * Single-Path NAS - https://arxiv.org/abs/1904.02877
+  * TinyNet - https://arxiv.org/abs/2010.14819
 * GhostNet - https://arxiv.org/abs/1911.11907
 * gMLP - https://arxiv.org/abs/2105.08050
 * GPU-Efficient Networks - https://arxiv.org/abs/2006.14090
 * Halo Nets - https://arxiv.org/abs/2103.12731
-* HardCoRe-NAS - https://arxiv.org/abs/2102.11646
 * HRNet - https://arxiv.org/abs/1908.07919
 * Inception-V3 - https://arxiv.org/abs/1512.00567
 * Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
 * Lambda Networks - https://arxiv.org/abs/2102.08602
 * LeViT (Vision Transformer in ConvNet's Clothing) - https://arxiv.org/abs/2104.01136
 * MLP-Mixer - https://arxiv.org/abs/2105.01601
 * MobileNet-V3 (MBConvNet w/ Efficient Head) - https://arxiv.org/abs/1905.02244
+  * FBNet-V3 - https://arxiv.org/abs/2006.02049
+  * HardCoRe-NAS - https://arxiv.org/abs/2102.11646
+  * LCNet - https://arxiv.org/abs/2109.15099
 * NASNet-A - https://arxiv.org/abs/1707.07012
+* NesT - https://arxiv.org/abs/2105.12723
 * NFNet-F - https://arxiv.org/abs/2102.06171
 * NF-RegNet / NF-ResNet - https://arxiv.org/abs/2101.08692
 * PNasNet - https://arxiv.org/abs/1712.00559
@@ -326,6 +332,7 @@ A full version of the list below with source links can be found in the [document
 * Transformer-iN-Transformer (TNT) - https://arxiv.org/abs/2103.00112
 * TResNet - https://arxiv.org/abs/2003.13630
 * Twins (Spatial Attention in Vision Transformers) - https://arxiv.org/pdf/2104.13840.pdf
+* Visformer - https://arxiv.org/abs/2104.12533
 * Vision Transformer - https://arxiv.org/abs/2010.11929
 * VovNet V2 and V1 - https://arxiv.org/abs/1911.06667
 * Xception - https://arxiv.org/abs/1610.02357
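
A minimal sketch of how one of the listed families can be instantiated with pretrained weights through timm's factory API. The variant names below are illustrative examples, and which variants ship weights depends on the installed timm version; `timm.list_models()` can be used to check what is actually available.

```python
import timm

# Discover variants of a newly added family (names depend on the timm version).
print(timm.list_models('beit*'))
print(timm.list_models('convnext*', pretrained=True))  # only names with pretrained weights

# Instantiate an example variant with its pretrained weights.
model = timm.create_model('convnext_tiny', pretrained=True)
model.eval()
```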