Commit 93ebcac

Update models and their descriptions for 2020.12.RC1. DeepLab, ICNet, MobileNet, RetinaNet, SEGNet, and Yolo-V3
1 parent 119c3c9 commit 93ebcac

File tree

12 files changed: +15321 −1 lines


caffe_models/deeplab/deeplabv3_mnv2_pascal_train_aug/notes.txt

Lines changed: 1 addition & 0 deletions

@@ -1,3 +1,4 @@
+
 Converted from Tensorflow model from: https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md
 
 Pruned graphs:
 1. deeplabv3_mnv2_pascal_train_aug.prototxt / deeplabv3_mnv2_pascal_train_aug_random_pruned.caffemodel

caffe_models/icnet/caffe_model/README.md

Lines changed: 7 additions & 1 deletion

@@ -16,4 +16,10 @@ icnet_cityscapes_trainval_90k_bnnomerge.caffemodel: [GoogleDrive](https://drive.
 (31M, md5: ba3cf6e24beb07068dacc901a9c7f28b; train on trainvalset for 90k, original)
 
-**Notes**: Model's name that contains phrase 'bnnomerge' is the original trained model, the related one without this phrase is obtained by merging the parameters in batch normlization layers into the closely front convolution layers. When testing the mIoU performance, please choose the related prototxt file. While when testing the inference speed, please choose prototxt without this phrase. That's because the ''Caffe time'' tool runs in training mode while bn layers work in different way as in testing mode (using stored history statistics during testing VS online calculating current batch's statistics during training).
+**Notes**:
+1. A model whose name contains the phrase 'bnnomerge' is the original trained model; the related model without this phrase is obtained by merging the batch normalization parameters into the preceding convolution layers.
+   a. When testing mIoU performance, please choose the corresponding prototxt file.
+   b. When testing inference speed, please choose the prototxt without this phrase. This is because the 'Caffe time' tool runs in training mode, where BN layers behave differently than in testing mode (testing uses stored running statistics, while training computes the current batch's statistics online).
+
+2. The icnet_cityscapes_merge_subgraph.prototxt is a variant of icnet_cityscapes.prototxt. It merges several AVE Pooling and Interp layers, together with the Eltwise Sum, into a single layer called ICNetSubgraph. This prototxt produces the same final results as the original while improving model efficiency, reducing bandwidth consumption, and resolving the mismatch between the host_fixed and unmerged_large implementations.
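The batch-normalization merge described in note 1 is a standard transformation. As a rough illustration only (this is not code from this repository, and the array shapes are assumptions), folding an inference-mode BN layer into the preceding convolution can be sketched like this:

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer's inference-time transform into the
    preceding convolution's weights and bias.

    W:     conv weights, assumed shape (out_channels, in_channels, kH, kW)
    b:     conv bias, shape (out_channels,)
    gamma, beta, mean, var: BN scale, shift, and running statistics,
                            each of shape (out_channels,)
    """
    scale = gamma / np.sqrt(var + eps)           # per-output-channel scale
    W_folded = W * scale[:, None, None, None]    # scale each output filter
    b_folded = (b - mean) * scale + beta         # fold the shift into the bias
    return W_folded, b_folded
```

After folding, `conv(x, W_folded) + b_folded` equals `BN(conv(x, W) + b)` in testing mode, which is why the merged model reproduces the 'bnnomerge' model's accuracy while running fewer layers.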

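The internals of the ICNetSubgraph layer in note 2 are not spelled out in this diff. The following is only a plausible sketch of what a pyramid of AVE Pooling + Interp branches combined by an Eltwise Sum computes; the bin sizes and the nearest-neighbor interpolation are assumptions, not taken from the prototxt:

```python
import numpy as np

def icnet_subgraph(x, bin_sizes=(1, 2, 3, 6)):
    """Sketch of a fused pooling subgraph: for each bin size, average-pool
    the feature map into a bins x bins grid, resize it back to the input
    size by nearest-neighbor interpolation, and sum all branches with the
    input elementwise.

    x: feature map of shape (channels, H, W); for simplicity H and W are
    assumed to be divisible by every bin size.
    """
    C, H, W = x.shape
    out = x.copy()
    for bins in bin_sizes:
        kh, kw = H // bins, W // bins
        # average pooling into a (C, bins, bins) grid
        pooled = x.reshape(C, bins, kh, bins, kw).mean(axis=(2, 4))
        # nearest-neighbor interpolation back to (C, H, W)
        upsampled = np.repeat(np.repeat(pooled, kh, axis=1), kw, axis=2)
        out += upsampled
    return out
```

Fusing these branches into one layer avoids writing each pooled/interpolated intermediate back to memory, which matches the bandwidth-reduction claim in the note.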