First 11 Epochs #1
anderdad started this conversation in Show and tell
[Running] python -u "c:\Users\xxxxxxxx\xxxxxxxxxx\xxxxxxxxx\build_model.py"
No checkpoint found, initialized base model ## No pretrained weights
Batch 0, Loss: 1.9156744480133057
Batch 10, Loss: 1.8626548051834106
Batch 20, Loss: 1.8565564155578613
Batch 30, Loss: 1.7862080335617065
Batch 40, Loss: 1.756113052368164
Batch 50, Loss: 1.6761970520019531
Batch 60, Loss: 1.9034875631332397
Batch 70, Loss: 1.9226258993148804
Batch 80, Loss: 1.688514232635498
Batch 90, Loss: 1.7509899139404297
Batch 100, Loss: 1.7324481010437012
Batch 110, Loss: 1.6220530271530151
Batch 120, Loss: 1.699600100517273
Batch 130, Loss: 1.7837945222854614
Batch 140, Loss: 1.5903843641281128
Batch 150, Loss: 1.763441562652588
Batch 160, Loss: 1.5288532972335815
Batch 170, Loss: 1.7576886415481567
Batch 180, Loss: 1.5279961824417114
Batch 190, Loss: 1.5802521705627441
Batch 200, Loss: 1.6948915719985962
Batch 210, Loss: 1.6916648149490356
Batch 220, Loss: 1.7030630111694336
Batch 230, Loss: 1.4639184474945068
Batch 240, Loss: 1.6928105354309082
Epoch 1/30, Loss: 1.7035062216161712
Checkpoint saved after training at epoch 1
Validation Loss: 1.6697299364136486, Accuracy: 35.601427115188585%
Checkpoint saved after validation at epoch 1
Batch 0, Loss: 1.6133414506912231
Batch 10, Loss: 1.5999548435211182
Batch 20, Loss: 1.6158747673034668
Batch 30, Loss: 1.4360835552215576
Batch 40, Loss: 1.8596746921539307
Batch 50, Loss: 1.3530008792877197
Batch 60, Loss: 1.7578260898590088
Batch 70, Loss: 1.607901692390442
Batch 80, Loss: 1.6088035106658936
Batch 90, Loss: 1.95505952835083
Batch 100, Loss: 1.6872538328170776
Batch 110, Loss: 1.548459529876709
Batch 120, Loss: 1.5038926601409912
Batch 130, Loss: 1.7046668529510498
Batch 140, Loss: 1.751694917678833
Batch 150, Loss: 1.4068156480789185
Batch 160, Loss: 1.5790209770202637
Batch 170, Loss: 1.432234525680542
Batch 180, Loss: 1.358766794204712
Batch 190, Loss: 1.3298461437225342
Batch 200, Loss: 1.8042343854904175
Batch 210, Loss: 1.4480602741241455
Batch 220, Loss: 1.634069800376892
Batch 230, Loss: 1.583912968635559
Batch 240, Loss: 1.455590009689331
Epoch 2/30, Loss: 1.5814120415749588
Checkpoint saved after training at epoch 2
Validation Loss: 1.5929197097212318, Accuracy: 41.25891946992864%
Checkpoint saved after validation at epoch 2
Batch 0, Loss: 1.5070292949676514
Batch 10, Loss: 1.5424340963363647
Batch 20, Loss: 1.453255295753479
Batch 30, Loss: 1.2382844686508179
Batch 40, Loss: 1.4153954982757568
Batch 50, Loss: 1.6224733591079712
Batch 60, Loss: 1.637902021408081
Batch 70, Loss: 1.156886339187622
Batch 80, Loss: 1.5556095838546753
Batch 90, Loss: 1.220657229423523
Batch 100, Loss: 1.3585582971572876
Batch 110, Loss: 1.5565907955169678
Batch 120, Loss: 1.3364578485488892
Batch 130, Loss: 1.3160583972930908
Batch 140, Loss: 1.2658745050430298
Batch 150, Loss: 1.4228127002716064
Batch 160, Loss: 1.4040803909301758
Batch 170, Loss: 1.4991264343261719
Batch 180, Loss: 1.2616233825683594
Batch 190, Loss: 1.361121416091919
Batch 200, Loss: 1.4586145877838135
Batch 210, Loss: 1.6240479946136475
Batch 220, Loss: 1.673764944076538
Batch 230, Loss: 1.2337945699691772
Batch 240, Loss: 1.4312593936920166
Epoch 3/30, Loss: 1.4768021911140379
Checkpoint saved after training at epoch 3
Validation Loss: 1.5412254391646967, Accuracy: 42.023445463812436%
Checkpoint saved after validation at epoch 3
Batch 0, Loss: 1.4433237314224243
Batch 10, Loss: 1.3442912101745605
Batch 20, Loss: 1.2225573062896729
Batch 30, Loss: 1.3247028589248657
Batch 40, Loss: 1.644689917564392
Batch 50, Loss: 1.2446593046188354
Batch 60, Loss: 1.5934934616088867
Batch 70, Loss: 1.318016767501831
Batch 80, Loss: 1.5058857202529907
Batch 90, Loss: 1.625394582748413
Batch 100, Loss: 1.331189513206482
Batch 110, Loss: 1.329154133796692
Batch 120, Loss: 1.1320215463638306
Batch 130, Loss: 1.2157723903656006
Batch 140, Loss: 1.4792433977127075
Batch 150, Loss: 1.433963656425476
Batch 160, Loss: 1.7915716171264648
Batch 170, Loss: 1.2077722549438477
Batch 180, Loss: 1.464971899986267
Batch 190, Loss: 1.1818430423736572
Batch 200, Loss: 1.2897958755493164
Batch 210, Loss: 1.3854953050613403
Batch 220, Loss: 0.9753021001815796
Batch 230, Loss: 1.6043521165847778
Batch 240, Loss: 0.9408539533615112
Epoch 4/30, Loss: 1.3418220567509411
Checkpoint saved after training at epoch 4
Validation Loss: 1.3619045101530183, Accuracy: 50.815494393476044%
Checkpoint saved after validation at epoch 4
Batch 0, Loss: 0.9928689002990723
Batch 10, Loss: 1.1908626556396484
Batch 20, Loss: 1.1277207136154175
Batch 30, Loss: 1.224690556526184
Batch 40, Loss: 1.3470462560653687
Batch 50, Loss: 1.0914629697799683
Batch 60, Loss: 0.9922074675559998
Batch 70, Loss: 1.259286642074585
Batch 80, Loss: 1.3383785486221313
Batch 90, Loss: 1.3343713283538818
Batch 100, Loss: 1.193608045578003
Batch 110, Loss: 0.9781741499900818
Batch 120, Loss: 1.6231099367141724
Batch 130, Loss: 1.0238127708435059
Batch 140, Loss: 0.9863274097442627
Batch 150, Loss: 1.4472482204437256
Batch 160, Loss: 0.9341237545013428
Batch 170, Loss: 0.9773645401000977
Batch 180, Loss: 1.156739592552185
Batch 190, Loss: 1.4230375289916992
Batch 200, Loss: 1.254294753074646
Batch 210, Loss: 0.886806845664978
Batch 220, Loss: 1.4000340700149536
Batch 230, Loss: 0.9686505198478699
Batch 240, Loss: 1.3307408094406128
Epoch 5/30, Loss: 1.196234073580765
Checkpoint saved after training at epoch 5
Validation Loss: 1.504090525755068, Accuracy: 45.69317023445464%
Checkpoint saved after validation at epoch 5
Batch 0, Loss: 0.9646356105804443
Batch 10, Loss: 1.069895625114441
Batch 20, Loss: 0.9800617694854736
Batch 30, Loss: 0.832708477973938
Batch 40, Loss: 0.8484237194061279
Batch 50, Loss: 1.05167818069458
Batch 60, Loss: 1.0394054651260376
Batch 70, Loss: 0.8484078049659729
Batch 80, Loss: 1.0619289875030518
Batch 90, Loss: 0.8489055037498474
Batch 100, Loss: 0.9609018564224243
Batch 110, Loss: 1.0775376558303833
Batch 120, Loss: 0.8700604438781738
Batch 130, Loss: 1.0073051452636719
Batch 140, Loss: 0.7354527711868286
Batch 150, Loss: 0.7949116826057434
Batch 160, Loss: 1.2638434171676636
Batch 170, Loss: 0.9351342916488647
Batch 180, Loss: 0.9091689586639404
Batch 190, Loss: 0.978523313999176
Batch 200, Loss: 0.7309710383415222
Batch 210, Loss: 1.3641188144683838
Batch 220, Loss: 0.8211721777915955
Batch 230, Loss: 0.9168440699577332
Batch 240, Loss: 0.9996311664581299
Epoch 6/30, Loss: 0.9892512610772761
Checkpoint saved after training at epoch 6
Validation Loss: 1.1413476016463302, Accuracy: 59.505606523955144%
Checkpoint saved after validation at epoch 6
Batch 0, Loss: 0.9633015394210815
Batch 10, Loss: 1.0169670581817627
Batch 20, Loss: 1.2054667472839355
Batch 30, Loss: 1.053060531616211
Batch 40, Loss: 0.8382384777069092
Batch 50, Loss: 0.6235209703445435
Batch 60, Loss: 0.9567490220069885
Batch 70, Loss: 0.9684983491897583
Batch 80, Loss: 0.939497709274292
Batch 90, Loss: 1.1582162380218506
Batch 100, Loss: 0.783268928527832
Batch 110, Loss: 0.8964506983757019
Batch 120, Loss: 0.7895589470863342
Batch 130, Loss: 1.0099689960479736
Batch 140, Loss: 1.000286340713501
Batch 150, Loss: 0.9230799078941345
Batch 160, Loss: 0.9614745378494263
Batch 170, Loss: 0.9803738594055176
Batch 180, Loss: 0.870471715927124
Batch 190, Loss: 0.8937320709228516
Batch 200, Loss: 1.0657416582107544
Batch 210, Loss: 0.8077185153961182
Batch 220, Loss: 0.9982946515083313
Batch 230, Loss: 0.7511869072914124
Batch 240, Loss: 0.8695698380470276
Epoch 7/30, Loss: 0.9257013034529802
Checkpoint saved after training at epoch 7
Validation Loss: 1.1340901056925456, Accuracy: 59.734964322120284%
Checkpoint saved after validation at epoch 7
Batch 0, Loss: 0.8942134976387024
Batch 10, Loss: 0.7508456707000732
Batch 20, Loss: 0.9622920155525208
Batch 30, Loss: 0.6686944961547852
Batch 40, Loss: 0.9433330297470093
Batch 50, Loss: 0.8622176647186279
Batch 60, Loss: 0.8230944871902466
Batch 70, Loss: 1.0735145807266235
Batch 80, Loss: 0.6659101843833923
Batch 90, Loss: 1.0533816814422607
Batch 100, Loss: 0.8526415824890137
Batch 110, Loss: 1.1522321701049805
Batch 120, Loss: 0.7088894248008728
Batch 130, Loss: 0.9097428917884827
Batch 140, Loss: 1.1404072046279907
Batch 150, Loss: 0.7508501410484314
Batch 160, Loss: 0.7884302735328674
Batch 170, Loss: 0.863334596157074
Batch 180, Loss: 1.0178215503692627
Batch 190, Loss: 0.8221424221992493
Batch 200, Loss: 0.8644483089447021
Batch 210, Loss: 1.0664187669754028
Batch 220, Loss: 0.8205272555351257
Batch 230, Loss: 0.8510411381721497
Batch 240, Loss: 1.0188981294631958
Epoch 8/30, Loss: 0.886499035406888
Checkpoint saved after training at epoch 8
Validation Loss: 1.1142425323889507, Accuracy: 59.811416921508666%
Checkpoint saved after validation at epoch 8
Batch 0, Loss: 0.7757568955421448
Batch 10, Loss: 0.9134659767150879
Batch 20, Loss: 0.7748557925224304
Batch 30, Loss: 0.9728901982307434
Batch 40, Loss: 0.7981626987457275
Batch 50, Loss: 0.8775278925895691
Batch 60, Loss: 0.7633670568466187
Batch 70, Loss: 0.9133449196815491
Batch 80, Loss: 0.7584460377693176
Batch 90, Loss: 0.8789072632789612
Batch 100, Loss: 0.9383776187896729
Batch 110, Loss: 0.8461180329322815
Batch 120, Loss: 1.0134092569351196
Batch 130, Loss: 0.8232463598251343
Batch 140, Loss: 0.9079266786575317
Batch 150, Loss: 0.9122776389122009
Batch 160, Loss: 0.8992060422897339
Batch 170, Loss: 0.8743405938148499
Batch 180, Loss: 0.744807243347168
Batch 190, Loss: 0.791610062122345
Batch 200, Loss: 0.8190324306488037
Batch 210, Loss: 1.0264064073562622
Batch 220, Loss: 0.7398329973220825
Batch 230, Loss: 0.9348060488700867
Batch 240, Loss: 0.8339088559150696
Epoch 9/30, Loss: 0.8562042856119513
Checkpoint saved after training at epoch 9
Validation Loss: 1.0956406474598055, Accuracy: 61.34046890927625%
Checkpoint saved after validation at epoch 9
Batch 0, Loss: 0.8304377794265747
Batch 10, Loss: 0.8835525512695312
Batch 20, Loss: 0.9710465669631958
Batch 30, Loss: 0.9593148827552795
Batch 40, Loss: 0.6672557592391968
Batch 50, Loss: 0.7892225980758667
Batch 60, Loss: 0.5049821138381958
Batch 70, Loss: 1.245028018951416
Batch 80, Loss: 0.8591259717941284
Batch 90, Loss: 1.0755175352096558
Batch 100, Loss: 0.8041462302207947
Batch 110, Loss: 0.888481616973877
Batch 120, Loss: 0.8654052019119263
Batch 130, Loss: 0.7939372658729553
Batch 140, Loss: 0.7892494201660156
Batch 150, Loss: 1.0469911098480225
Batch 160, Loss: 0.7524814009666443
Batch 170, Loss: 0.8445149660110474
Batch 180, Loss: 0.6852868795394897
Batch 190, Loss: 0.8074037432670593
Batch 200, Loss: 0.7345415353775024
Batch 210, Loss: 0.8010282516479492
Batch 220, Loss: 0.6815871596336365
Batch 230, Loss: 0.6268032193183899
Batch 240, Loss: 0.8586410880088806
Epoch 10/30, Loss: 0.823944639384262
Checkpoint saved after training at epoch 10
Validation Loss: 1.1015526647490215, Accuracy: 60.67787971457696%
Checkpoint saved after validation at epoch 10
Batch 0, Loss: 0.7843832969665527
Batch 10, Loss: 0.7088426351547241
Batch 20, Loss: 0.5975248217582703
Batch 30, Loss: 0.6702471375465393
Batch 40, Loss: 0.7955651879310608
Batch 50, Loss: 0.9211598634719849
Batch 60, Loss: 0.7646224498748779
Batch 70, Loss: 0.8884650468826294
Batch 80, Loss: 0.7156628966331482
Batch 90, Loss: 0.9466649293899536
Batch 100, Loss: 0.796012282371521
Batch 110, Loss: 0.7313655614852905
Batch 120, Loss: 0.6248000264167786
Batch 130, Loss: 0.6745001077651978
Batch 140, Loss: 1.0070054531097412
Batch 150, Loss: 0.8221409916877747
Batch 160, Loss: 0.7815259099006653
Batch 170, Loss: 0.6426569223403931
Batch 180, Loss: 0.9406752586364746
Batch 190, Loss: 0.8704614043235779
Batch 200, Loss: 0.9878426790237427
Batch 210, Loss: 1.0229030847549438
Batch 220, Loss: 0.7145501375198364
Batch 230, Loss: 0.8713546395301819
Batch 240, Loss: 0.8976824879646301
Epoch 11/30, Loss: 0.7896523332692743
Checkpoint saved after training at epoch 11
Validation Loss: 1.0878337832485758, Accuracy: 61.59531090723751%
Checkpoint saved after validation at epoch 11
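Over these 11 epochs the average training loss falls from about 1.70 to 0.79 and validation accuracy climbs from roughly 35.6% to 61.6%, with a brief regression at epoch 5. For context, here is a minimal sketch of the kind of PyTorch training loop that would produce output in this format. It is an illustration only: names such as `CHECKPOINT_PATH`, `train_loader`, and `criterion` are hypothetical stand-ins, and the real build_model.py may be organized quite differently.

```python
# Hypothetical sketch -- not the actual build_model.py. Shows a PyTorch loop
# that resumes from a checkpoint if one exists, logs the loss every 10 batches,
# prints the epoch-average loss, and saves a checkpoint after each epoch.
import os
import torch

CHECKPOINT_PATH = "checkpoint.pth"  # assumed path, for illustration only

def load_or_init(model, optimizer):
    # Resume from a saved checkpoint if present, otherwise start from scratch.
    if os.path.exists(CHECKPOINT_PATH):
        state = torch.load(CHECKPOINT_PATH)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        print(f"Resumed from checkpoint at epoch {state['epoch']}")
        return state["epoch"]
    print("No checkpoint found, initialized base model")  # no pretrained weights
    return 0

def save_checkpoint(model, optimizer, epoch, note):
    torch.save(
        {"model": model.state_dict(),
         "optimizer": optimizer.state_dict(),
         "epoch": epoch},
        CHECKPOINT_PATH,
    )
    print(f"Checkpoint saved after {note} at epoch {epoch}")

def train(model, train_loader, optimizer, criterion, epochs=30):
    start_epoch = load_or_init(model, optimizer)
    for epoch in range(start_epoch, epochs):
        model.train()
        running = 0.0
        for i, (inputs, targets) in enumerate(train_loader):
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
            running += loss.item()
            if i % 10 == 0:  # matches the Batch 0/10/20/... cadence above
                print(f"Batch {i}, Loss: {loss.item()}")
        print(f"Epoch {epoch + 1}/{epochs}, Loss: {running / len(train_loader)}")
        save_checkpoint(model, optimizer, epoch + 1, "training")
```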
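The per-epoch "Validation Loss ... Accuracy ..." lines could come from an evaluation pass like the one below. Again, this is a hedged sketch under the same assumptions: `val_loader` is a hypothetical validation DataLoader not shown in the log, and top-1 accuracy is assumed.

```python
@torch.no_grad()
def validate(model, val_loader, criterion):
    # Average per-batch loss and top-1 accuracy over the validation set.
    model.eval()
    total_loss, correct, total = 0.0, 0, 0
    for inputs, targets in val_loader:
        outputs = model(inputs)
        total_loss += criterion(outputs, targets).item()
        correct += (outputs.argmax(dim=1) == targets).sum().item()
        total += targets.size(0)
    val_loss = total_loss / len(val_loader)
    accuracy = 100.0 * correct / total
    print(f"Validation Loss: {val_loss}, Accuracy: {accuracy}%")
    return val_loss, accuracy
```

Saving a second checkpoint after validation, as the log shows, persists the latest metrics alongside the weights; keeping only the best-accuracy checkpoint is a common alternative.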