AssertionError #1

@automatewithme

Description

I am trying to evaluate my prediction file, and mAP_evaluation.py is giving the following error:
Starting mAP computation
/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py:312: RuntimeWarning: invalid value encountered in true_divide
recalls = tp / float(num_gts)
/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py:314: RuntimeWarning: invalid value encountered in greater_equal
assert np.all(0 <= recalls) & np.all(recalls <= 1)
/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py:314: RuntimeWarning: invalid value encountered in less_equal
assert np.all(0 <= recalls) & np.all(recalls <= 1)
Process Process-10:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "mAP_evaluation.py", line 44, in save_AP
AP = get_average_precisions(gt, predictions, class_names, iou_threshold)
File "/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py", line 369, in get_average_precisions
gt_by_class_name[class_name], pred_by_class_name[class_name], iou_threshold
File "/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py", line 314, in recall_precision
assert np.all(0 <= recalls) & np.all(recalls <= 1)
AssertionError
[The same RuntimeWarnings and AssertionError traceback are repeated for each of the remaining worker processes, Process-1 through Process-9.]
Traceback (most recent call last):
File "mAP_evaluation.py", line 126, in
fire.Fire(main)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 675, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "mAP_evaluation.py", line 115, in main
metric, overall_AP = get_metric_overall_AP(iou_th_range, output_dir, class_names)
File "mAP_evaluation.py", line 66, in get_metric_overall_AP
with open(str(summary_path), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'tmp/metric_summary_0.5.json'
I have not been able to figure out why this is happening. Can you help me out?
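
For what it's worth, the RuntimeWarning at line 312 (`recalls = tp / float(num_gts)`) makes me suspect that `num_gts` is 0 for at least one class, i.e. some class present in my prediction file has no ground-truth boxes, so `recalls` becomes NaN and the assertion at line 314 fails. The final FileNotFoundError then looks like a downstream effect: the crashed worker never writes `tmp/metric_summary_0.5.json`. Below is a quick check I put together to compare class coverage between the two JSON files (the paths are placeholders, and the list-of-dicts format with a "name" field per box is my assumption about what this script expects):

```python
import json
from collections import Counter

# NOTE: paths are placeholders for my own files; I am assuming each file is a
# JSON list of box dicts, each with a "name" field for the class label.
GT_PATH = "gt.json"
PRED_PATH = "predictions.json"

with open(GT_PATH) as f:
    gt_boxes = json.load(f)
with open(PRED_PATH) as f:
    pred_boxes = json.load(f)

gt_counts = Counter(box["name"] for box in gt_boxes)
pred_counts = Counter(box["name"] for box in pred_boxes)

# A class that is predicted but has zero ground-truth boxes gives num_gts == 0
# inside recall_precision, so recalls = tp / 0.0 becomes NaN and the
# `0 <= recalls <= 1` assertion fails.
missing_in_gt = set(pred_counts) - set(gt_counts)
print("Classes predicted but absent from ground truth:", missing_in_gt)
print("Ground-truth class counts:", dict(gt_counts))
```

If this turns up classes with no ground-truth boxes, I would expect that either adding the missing annotations or dropping those classes from `class_names` should get past the assertion, but I am not sure whether that is the intended fix.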
