8 changes: 7 additions & 1 deletion src/supervision/detection/tools/csv_sink.py
@@ -144,7 +144,13 @@ def parse_detection_data(
            row[key] = value[i] if hasattr(value, "__getitem__") else value

        if custom_data:
-            row.update(custom_data)
+            for key, value in custom_data.items():
+                if isinstance(value, np.ndarray) and value.ndim == 0:
+                    row[key] = value
+                elif isinstance(value, np.ndarray):
+                    row[key] = value[i]
+                else:
Comment on lines +147 to +152
Copilot AI Apr 8, 2026
Indexing custom_data numpy arrays with value[i] can raise IndexError when the provided array length doesn't match the number of detections (including a 1-element array intended to broadcast). Consider validating lengths and either broadcasting or raising a clear ValueError describing the expected shape to make failures easier to debug.

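One way to implement the validation this comment asks for, as a minimal sketch: the helper name `slice_custom_value` and the decision to broadcast length-1 arrays are assumptions for illustration, not part of the supervision API.

```python
import numpy as np


def slice_custom_value(value, i, n_detections):
    # Hypothetical helper sketching the suggested validation; not the
    # library's actual implementation.
    if isinstance(value, np.ndarray) and value.ndim == 0:
        return value  # 0-d scalar array applies to every row
    if isinstance(value, np.ndarray):
        if len(value) == n_detections:
            return value[i]  # one element per detection
        if len(value) == 1:
            return value[0]  # length-1 array broadcasts as a constant
        raise ValueError(
            f"custom_data array of length {len(value)} does not match "
            f"{n_detections} detections; expected length {n_detections}, "
            "1, or a 0-d array"
        )
    return value  # non-array values are written unchanged
```

This turns the silent `IndexError` into an explicit, debuggable failure while still allowing scalar-style constants.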
+                    row[key] = value
Copilot AI Apr 8, 2026
custom_data slicing currently only handles np.ndarray. If a caller passes a per-detection Python sequence (e.g., list/tuple) it will still be written as the full sequence on every row. Consider mirroring the detections.data logic here (slice values that are indexable and match detection length) or explicitly documenting that only numpy arrays are supported for per-row custom values.

Suggested change
-                    row[key] = value
+                    row[key] = value[i] if hasattr(value, "__getitem__") else value

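A standalone sketch of the gap this comment describes — the loop mirrors the new branch logic above, and the sample data is invented: because only `np.ndarray` values are sliced, a per-detection Python list passes through unchanged and is written whole on every row.

```python
import numpy as np

custom_data = {"label": ["car", "truck"]}  # one label per detection, as a list
rows = []
for i in range(2):  # pretend there are two detections
    row = {}
    for key, value in custom_data.items():
        if isinstance(value, np.ndarray):
            row[key] = value[i]
        else:
            row[key] = value  # full list repeated on each row, not sliced
    rows.append(row)
print(rows)
```

Mirroring the `hasattr(value, "__getitem__")` slicing used for `detections.data` would fix this, at the cost of also slicing strings and dicts, so an explicit length check may be safer.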
        parsed_rows.append(row)
    return parsed_rows

8 changes: 7 additions & 1 deletion src/supervision/detection/tools/json_sink.py
@@ -118,7 +118,13 @@
)

        if custom_data:
-            row.update(custom_data)
+            for key, value in custom_data.items():
+                if isinstance(value, np.ndarray) and value.ndim == 0:
+                    row[key] = str(value)
+                elif isinstance(value, np.ndarray):
+                    row[key] = str(value[i])
+                else:
Comment on lines +121 to +126
Copilot AI Apr 8, 2026

Indexing custom_data numpy arrays with value[i] will raise IndexError if the array length doesn't match the number of detections (including the common case of a 1-element array intended as a constant). It would be safer to validate lengths and either broadcast length-1 arrays or raise a clear ValueError explaining the expected shape.

+                    row[key] = value
Comment on lines 120 to +127
Copilot AI Apr 8, 2026

custom_data numpy arrays are serialized using str(...), which turns numeric values into JSON strings (while other built-in fields like confidence are numbers). Consider converting numpy values to native Python scalars (e.g., via .item() for 0-d arrays and elements) so JSON output preserves numeric types and remains consistently typed.

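A sketch of the `.item()` conversion this comment suggests — the helper name `to_json_scalar` is invented for illustration: converting numpy values to native Python scalars lets `json.dumps` emit them as numbers, consistent with built-in fields like `confidence`.

```python
import json

import numpy as np


def to_json_scalar(value, i):
    # Hypothetical helper; not the library's actual implementation.
    if isinstance(value, np.ndarray) and value.ndim == 0:
        return value.item()  # 0-d array -> native Python scalar
    if isinstance(value, np.ndarray):
        return value[i].item()  # i-th element -> native Python scalar
    return value  # leave non-array values untouched


row = {"confidence": 0.9, "area": to_json_scalar(np.array([400.0, 300.0]), 0)}
print(json.dumps(row))  # both fields serialize as JSON numbers
```

With `str(...)` instead, `"area"` would be emitted as the string `"400.0"` while `"confidence"` stays numeric, giving inconsistently typed output.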
        parsed_rows.append(row)
Comment on lines 120 to 128
Copilot AI Apr 8, 2026

The PR changes JSONSink behavior but there’s no unit test covering custom_data passed as a numpy array (similar to the new CSVSink test). Adding a test that asserts per-row slicing and JSON-serializable output would prevent regressions and confirm the fix end-to-end.

Copilot generated this review using guidance from repository custom instructions.
    return parsed_rows

31 changes: 31 additions & 0 deletions tests/detection/test_csv.py
@@ -2,6 +2,7 @@
import os
from typing import Any

+import numpy as np
import pytest

import supervision as sv
@@ -193,6 +194,36 @@
                ],
            ],
        ),  # Complex Data
+        (
+            _create_detections(
+                xyxy=[[10, 20, 30, 40], [50, 60, 70, 80]],
+                confidence=[0.9, 0.8],
+                class_id=[0, 1],
+            ),
+            {"area": np.array([400.0, 400.0])},
+            _create_detections(
+                xyxy=[[15, 25, 35, 45]],
+                confidence=[0.7],
+                class_id=[2],
+            ),
+            {"area": np.array([400.0])},
+            "test_detections_array_custom_data.csv",
+            [
+                [
+                    "x_min",
+                    "y_min",
+                    "x_max",
+                    "y_max",
+                    "class_id",
+                    "confidence",
+                    "tracker_id",
+                    "area",
+                ],
+                ["10.0", "20.0", "30.0", "40.0", "0", "0.9", "", "400.0"],
+                ["50.0", "60.0", "70.0", "80.0", "1", "0.8", "", "400.0"],
+                ["15.0", "25.0", "35.0", "45.0", "2", "0.7", "", "400.0"],
+            ],
+        ),  # numpy array in custom_data sliced per detection row
    ],
)
def test_csv_sink(