Can't reproduce the result of 84% accuracy #3

@hcygeorge

I referred to your code and Hao Mood's, and trained the BCNN model fine-tuning all layers.
The best test accuracy I could reach was ~73% with the pretrained VGG16 and ~61% without it.
Is the 84% accuracy you report easy to reach?

I used almost the same hyperparameter settings, except the batch size:
due to memory constraints, I could only set it to 12.
I suspect the small batch size hurts training, but I have no evidence.
The VGG16 I used has no BatchNorm layers, and the common claim is that
a small batch size only adds gradient noise, which can even help generalization.

Because a small batch size increases the variance of the gradient,
I also tried tuning the learning rate to compensate, but that didn't improve the result.

Could you give me some advice on how to reach the 84% accuracy,
or confirm that it is not possible to reach it with a batch size of 12?
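For what it's worth, one common workaround for the memory limit is gradient accumulation: run several micro-batches of 12, sum their gradients, and step once, which reproduces the gradient of a larger effective batch. Below is a minimal NumPy sketch (a toy linear model; all names are hypothetical and not from either repo) verifying that accumulated micro-batch gradients match the full-batch gradient:

```python
import numpy as np

# Toy linear model: loss = mean over the batch of (w . x_i - y_i)^2.
# Accumulating per-micro-batch gradient *sums* and dividing by the
# full batch size reproduces the full-batch mean gradient exactly.

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 5))   # effective batch of 48 samples
y = rng.normal(size=48)
w = rng.normal(size=5)

def grad_sum(Xb, yb, w):
    # Sum (not mean) of per-sample gradients of (w . x - y)^2.
    r = Xb @ w - yb
    return 2.0 * Xb.T @ r

# Full-batch gradient (mean over all 48 samples).
g_full = grad_sum(X, y, w) / len(X)

# Accumulate over micro-batches of 12 (the memory-limited size).
g_acc = np.zeros_like(w)
for i in range(0, len(X), 12):
    g_acc += grad_sum(X[i:i+12], y[i:i+12], w)
g_acc /= len(X)

assert np.allclose(g_full, g_acc)
```

In a PyTorch training loop the same idea is calling `loss.backward()` on each micro-batch (with the loss scaled by the number of accumulation steps) and calling `optimizer.step()` / `optimizer.zero_grad()` only once per effective batch. This would let the batch size of 12 behave like the paper's setting, though it does not reproduce any BatchNorm batch statistics (not an issue here, since this VGG16 has no BN layers).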
