
Commit b0de03e

MIP: step 7: move train, eval script to tools.
1 parent 7c7141a

4 files changed: 7 additions, 7 deletions

README.md

Lines changed: 7 additions & 7 deletions
@@ -39,13 +39,13 @@ We now support both flickr30k and COCO. See details in [data/README.md](data/REA
### Start training

```bash
-$ python train.py --id fc --caption_model newfc --input_json data/cocotalk.json --input_fc_dir data/cocotalk_fc --input_att_dir data/cocotalk_att --input_label_h5 data/cocotalk_label.h5 --batch_size 10 --learning_rate 5e-4 --learning_rate_decay_start 0 --scheduled_sampling_start 0 --checkpoint_path log_fc --save_checkpoint_every 6000 --val_images_use 5000 --max_epochs 30
+$ python tools/train.py --id fc --caption_model newfc --input_json data/cocotalk.json --input_fc_dir data/cocotalk_fc --input_att_dir data/cocotalk_att --input_label_h5 data/cocotalk_label.h5 --batch_size 10 --learning_rate 5e-4 --learning_rate_decay_start 0 --scheduled_sampling_start 0 --checkpoint_path log_fc --save_checkpoint_every 6000 --val_images_use 5000 --max_epochs 30
```

or

```bash
-$ python train.py --cfg configs/fc.yml --id fc
+$ python tools/train.py --cfg configs/fc.yml --id fc
```

The train script will dump checkpoints into the folder specified by `--checkpoint_path` (default = `log_$id/`). By default, only the best-performing checkpoint on validation and the latest checkpoint are saved, to save disk space. You can also set `--save_history_ckpt` to 1 to save every checkpoint.
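For example, combining the config entry point with that flag, a run that also keeps every intermediate checkpoint might look like the sketch below (it assumes `--save_history_ckpt` can simply be added alongside `--cfg`):

```bash
# Sketch: train with the provided yml config and keep every checkpoint,
# not just the best-on-validation and latest ones
# (assumes --save_history_ckpt combines with --cfg as expected)
$ python tools/train.py --cfg configs/fc.yml --id fc --save_history_ckpt 1
```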
@@ -78,12 +78,12 @@ $ bash scripts/copy_model.sh fc fc_rl
Then
```bash
-$ python train.py --id fc_rl --caption_model newfc --input_json data/cocotalk.json --input_fc_dir data/cocotalk_fc --input_att_dir data/cocotalk_att --input_label_h5 data/cocotalk_label.h5 --batch_size 10 --learning_rate 5e-5 --start_from log_fc_rl --checkpoint_path log_fc_rl --save_checkpoint_every 6000 --language_eval 1 --val_images_use 5000 --self_critical_after 30 --cached_tokens coco-train-idxs --max_epoch 50 --train_sample_n 5
+$ python tools/train.py --id fc_rl --caption_model newfc --input_json data/cocotalk.json --input_fc_dir data/cocotalk_fc --input_att_dir data/cocotalk_att --input_label_h5 data/cocotalk_label.h5 --batch_size 10 --learning_rate 5e-5 --start_from log_fc_rl --checkpoint_path log_fc_rl --save_checkpoint_every 6000 --language_eval 1 --val_images_use 5000 --self_critical_after 30 --cached_tokens coco-train-idxs --max_epoch 50 --train_sample_n 5
```

or
```bash
-$ python train.py --cfg configs/fc_rl.yml --id fc_rl
+$ python tools/train.py --cfg configs/fc_rl.yml --id fc_rl
```

@@ -100,7 +100,7 @@ Now place all your images of interest into a folder, e.g. `blah`, and run
the eval script:

```bash
-$ python eval.py --model model.pth --infos_path infos.pkl --image_folder blah --num_images 10
+$ python tools/eval.py --model model.pth --infos_path infos.pkl --image_folder blah --num_images 10
```

This tells the `eval` script to run up to 10 images from the given folder. If you have a big GPU you can speed up the evaluation by increasing `batch_size`. Use `--num_images -1` to process all images. The eval script will create a `vis.json` file inside the `vis` folder, which can then be visualized with the provided HTML interface:
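For instance, to caption every image in the folder with a larger batch, a command along these lines should work (a sketch: it assumes the batch size is exposed as `--batch_size`, and the value 32 is only illustrative). The last line is one common way to serve the resulting `vis` folder locally; the repository may provide its own command for this.

```bash
# Sketch: process all images in `blah`, with a bigger batch to speed up inference
# (assumes the option is spelled --batch_size; 32 is an arbitrary example)
$ python tools/eval.py --model model.pth --infos_path infos.pkl --image_folder blah --num_images -1 --batch_size 32

# One generic way to serve the HTML interface in `vis` on port 8000
$ cd vis && python -m http.server 8000
```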
@@ -115,7 +115,7 @@ Now visit `localhost:8000` in your browser and you should see your predicted cap
### Evaluate on Karpathy's test split

```bash
-$ python eval.py --dump_images 0 --num_images 5000 --model model.pth --infos_path infos.pkl --language_eval 1
+$ python tools/eval.py --dump_images 0 --num_images 5000 --model model.pth --infos_path infos.pkl --language_eval 1
```

The default split to evaluate is test. The default inference method is greedy decoding (`--sample_method greedy`); to sample from the posterior, set `--sample_method sample`.
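For example, to rerun the same test-split evaluation with sampling instead of greedy decoding, the command above can be repeated with that flag (a sketch reusing only options shown in this section):

```bash
# Sketch: same Karpathy test-split evaluation, but sample captions from the
# posterior rather than decoding greedily
$ python tools/eval.py --dump_images 0 --num_images 5000 --model model.pth --infos_path infos.pkl --language_eval 1 --sample_method sample
```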
@@ -125,7 +125,7 @@ The default split to evaluate is test. The default inference method is greedy de
### Evaluate on COCO test set

```bash
-$ python eval.py --input_json cocotest.json --input_fc_dir data/cocotest_bu_fc --input_att_dir data/cocotest_bu_att --input_label_h5 none --num_images -1 --model model.pth --infos_path infos.pkl --language_eval 0
+$ python tools/eval.py --input_json cocotest.json --input_fc_dir data/cocotest_bu_fc --input_att_dir data/cocotest_bu_att --input_label_h5 none --num_images -1 --model model.pth --infos_path infos.pkl --language_eval 0
```

You can download the preprocessed files `cocotest.json`, `cocotest_bu_att` and `cocotest_bu_fc` from [link](https://drive.google.com/open?id=1eCdz62FAVCGogOuNhy87Nmlo5_I0sH2J).
3 files renamed without changes.
