Commit 7e72ad1: Merge pull request #1000 from roboflow/0.19.0-changelog

`supervision-0.19.0` release changelog

2 parents 5b6713f + 8d7f18c

File tree: 3 files changed, +84 -7 lines

docs/annotators.md

Lines changed: 0 additions & 6 deletions

````diff
@@ -293,12 +293,6 @@ status: new
     )
     ```

-<div class="result" markdown>
-
-![label-annotator-example](https://media.roboflow.com/supervision-annotator-examples/label-annotator-example-purple.png){ align=center width="800" }
-
-</div>
-
 === "Blur"

     ```python
````

docs/changelog.md

Lines changed: 83 additions & 1 deletion

New content added at the top of the file (`@@ -1,6 +1,88 @@`):
### 0.19.0 <small>March 15, 2024</small>

- Added [#818](https://github.com/roboflow/supervision/pull/818): [`sv.CSVSink`](/0.19.0/detection/tools/save_detections/#supervision.detection.tools.csv_sink.CSVSink) allowing for the straightforward saving of image, video, or stream inference results in a `.csv` file.

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```
- Added [#819](https://github.com/roboflow/supervision/pull/819): [`sv.JSONSink`](/0.19.0/detection/tools/save_detections/#supervision.detection.tools.json_sink.JSONSink) allowing for the straightforward saving of image, video, or stream inference results in a `.json` file.

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```
- Added [#847](https://github.com/roboflow/supervision/pull/847): [`sv.mask_iou_batch`](/0.19.0/detection/utils/#supervision.detection.utils.mask_iou_batch) allowing users to compute Intersection over Union (IoU) between two sets of masks.

- Added [#847](https://github.com/roboflow/supervision/pull/847): [`sv.mask_non_max_suppression`](/0.19.0/detection/utils/#supervision.detection.utils.mask_non_max_suppression) allowing users to perform Non-Maximum Suppression (NMS) on segmentation predictions.
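For intuition, the pairwise mask IoU that a utility like `sv.mask_iou_batch` computes can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the library's code:

```python
import numpy as np

def mask_iou_batch_sketch(masks_true: np.ndarray, masks_detection: np.ndarray) -> np.ndarray:
    # masks_true: (N, H, W) bool, masks_detection: (M, H, W) bool.
    # Flatten the spatial dimensions so intersection and union reduce to matrix ops.
    a = masks_true.reshape(masks_true.shape[0], -1).astype(np.float64)
    b = masks_detection.reshape(masks_detection.shape[0], -1).astype(np.float64)
    intersection = a @ b.T  # (N, M) count of overlapping pixels
    union = a.sum(axis=1)[:, None] + b.sum(axis=1)[None, :] - intersection
    return intersection / np.maximum(union, 1e-9)  # guard against empty masks

masks_a = np.zeros((1, 4, 4), dtype=bool)
masks_a[0, :2, :] = True   # top half of the image
masks_b = np.zeros((1, 4, 4), dtype=bool)
masks_b[0, 1:3, :] = True  # middle two rows

iou = mask_iou_batch_sketch(masks_a, masks_b)
# intersection = 4 px, union = 12 px, so IoU = 1/3
```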
- Added [#888](https://github.com/roboflow/supervision/pull/888): [`sv.CropAnnotator`](/0.19.0/annotators/#supervision.annotators.core.CropAnnotator) allowing users to annotate the scene with scaled-up crops of detections.

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```
- Changed [#827](https://github.com/roboflow/supervision/pull/827): [`sv.ByteTrack.reset`](/0.19.0/tracking/#supervision.tracking.ByteTrack.reset) allowing users to clear tracker state, enabling the processing of multiple video files in sequence.
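In the placeholder style of the examples above, sequential processing with a reset between files might look like this (a sketch assuming an Ultralytics model; paths are placeholders):

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
tracker = sv.ByteTrack()

for video_path in [<FIRST_VIDEO_PATH>, <SECOND_VIDEO_PATH>]:
    tracker.reset()  # clear tracker state before each new video
    for frame in sv.get_video_frames_generator(video_path):
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        detections = tracker.update_with_detections(detections)
```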
- Changed [#802](https://github.com/roboflow/supervision/pull/802): [`sv.LineZoneAnnotator`](/0.19.0/detection/tools/line_zone/#supervision.detection.line_zone.LineZone) allowing users to hide the in/out counts using the `display_in_count` and `display_out_count` properties.
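A sketch of hiding one of the counts (the line coordinates are arbitrary, and `frame` stands for the current video frame):

```python
import supervision as sv

line_zone = sv.LineZone(start=sv.Point(0, 500), end=sv.Point(1280, 500))
line_annotator = sv.LineZoneAnnotator(
    display_in_count=True,
    display_out_count=False,  # hide the out count
)
annotated_frame = line_annotator.annotate(frame.copy(), line_counter=line_zone)
```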
- Changed [#787](https://github.com/roboflow/supervision/pull/787): [`sv.ByteTrack`](/0.19.0/tracking/#supervision.tracking.ByteTrack) input arguments and docstrings updated to improve readability and ease of use.

!!! failure "Deprecated"

    The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `sv.ByteTrack` are deprecated and will be removed in `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
- Changed [#910](https://github.com/roboflow/supervision/pull/910): [`sv.PolygonZone`](/0.19.0/detection/tools/polygon_zone/#supervision.detection.tools.polygon_zone.PolygonZone) to now accept a list of specific box anchors that must be in the zone for a detection to be counted.

!!! failure "Deprecated"

    The `triggering_position` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.23.0`. Use `triggering_anchors` instead.
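Conceptually, an anchor-based zone test reduces to point-in-polygon checks on the chosen box anchors. A minimal ray-casting sketch of that idea (illustrative only, not supervision's implementation):

```python
def point_in_polygon(point, polygon):
    # Standard ray-casting test: count edge crossings of a ray going right from the point.
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal line through the point
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

zone = [(0, 0), (10, 0), (10, 10), (0, 10)]      # square zone
box = (4, 4, 8, 12)                              # x_min, y_min, x_max, y_max
bottom_center = ((box[0] + box[2]) / 2, box[3])  # anchor below the zone
center = ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

# With BOTTOM_CENTER as the only anchor this box would not be counted;
# with CENTER as the anchor it would.
```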
- Changed [#875](https://github.com/roboflow/supervision/pull/875): annotators adding support for Pillow images. All supervision annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input.
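Annotator calls are unchanged; only the input type differs. A sketch in the placeholder style above, using `BoundingBoxAnnotator` as an example annotator and assuming `detections` comes from a model result:

```python
import supervision as sv
from PIL import Image

image = Image.open(<SOURCE_IMAGE_PATH>)
# ... run a model and build `detections` as in the examples above ...

annotated_image = sv.BoundingBoxAnnotator().annotate(
    scene=image,
    detections=detections,
)
# The return type matches the input: a PIL Image here, a numpy array otherwise.
```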
- Fixed [#944](https://github.com/roboflow/supervision/pull/944): [`sv.DetectionsSmoother`](/0.19.0/detection/tools/smoother/#supervision.detection.tools.smoother.DetectionsSmoother) removing `tracking_id` from `sv.Detections`.
The same hunk also corrects the PR link in the 0.18.0 entry:

````diff
 ### 0.18.0 <small>January 25, 2024</small>

-- Added [#633](https://github.com/roboflow/supervision/pull/720): [`sv.PercentageBarAnnotator`](/0.18.0/annotators/#percentagebarannotator) allowing to annotate images and videos with percentage values representing confidence or other custom property.
+- Added [#720](https://github.com/roboflow/supervision/pull/720): [`sv.PercentageBarAnnotator`](/0.18.0/annotators/#percentagebarannotator) allowing to annotate images and videos with percentage values representing confidence or other custom property.

 ```python
 >>> import supervision as sv
````

supervision/annotators/core.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -1859,6 +1859,7 @@ def __init__(self, position: Position = Position.TOP_CENTER, scale_factor: int =
         self.position: Position = position
         self.scale_factor: int = scale_factor

+    @scene_to_annotator_img_type
     def annotate(
         self,
         scene: np.ndarray,
```
