Since we are re-implementing papers using clean-room practices, we need a reliable way to verify that our trackers perform as expected. An evaluation framework that runs our trackers against common benchmarks would give us that. If we design it with a plug-in interface, other trackers and detectors could hook into the same framework, making it easy to compare different tracker/detector combinations. Building it would also give us a good idea of how to structure dataloaders for when we implement a training framework for ReID models, or even end-to-end deep learning trackers. A rough sketch of the plug-in boundary is below.
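As a starting point for discussion, here is a minimal sketch of what the plug-in interface might look like. Everything in it is hypothetical: the `Detector`/`Tracker` base classes, the `Detection`/`Track` types, and the `run_sequence` helper are placeholder names, not an existing API in this repo.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

import numpy as np


@dataclass
class Detection:
    """One detection in a single frame."""
    box: np.ndarray  # (4,) array: [x1, y1, x2, y2]
    score: float
    class_id: int = 0


@dataclass
class Track:
    """One active track at a given frame."""
    track_id: int
    box: np.ndarray  # (4,) array: [x1, y1, x2, y2]


class Detector(ABC):
    """Anything that produces per-frame detections can plug in here."""

    @abstractmethod
    def detect(self, frame: np.ndarray) -> list[Detection]:
        """Return detections for a single frame."""


class Tracker(ABC):
    """Anything that consumes detections and maintains track IDs."""

    @abstractmethod
    def update(self, detections: list[Detection]) -> list[Track]:
        """Ingest one frame's detections; return the active tracks."""


def run_sequence(detector: Detector, tracker: Tracker, frames) -> dict[int, list[Track]]:
    """Run a detector/tracker pair over a frame sequence.

    Returns per-frame track lists keyed by frame index, which a separate
    scoring step can compare against ground truth.
    """
    results: dict[int, list[Track]] = {}
    for frame_idx, frame in enumerate(frames):
        detections = detector.detect(frame)
        results[frame_idx] = tracker.update(detections)
    return results
```

With a boundary like this, the scoring step stays independent of any particular tracker: it only sees the per-frame results dict and the benchmark's ground truth, so swapping in a different tracker or detector is a one-line change. Standard MOT metrics (e.g. MOTA/IDF1 via a library like `py-motmetrics`) could then be computed on top.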