We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🤝
Pip install the supervision package in a Python>=3.8 environment.
pip install supervision
Read more about conda, mamba, and installing from source in our guide.
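For reference, the conda-forge, mamba, and from-source installs typically look like the commands below; these are illustrative, so check the linked guide for the authoritative instructions.
conda install -c conda-forge supervision
mamba install -c conda-forge supervision
pip install git+https://github.com/roboflow/supervision.git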
Create a .env file to store sensitive configuration:
ROBOFLOW_API_KEY=your_api_key_here
LOG_LEVEL=INFO
Then load them in your code:
import os
from dotenv import load_dotenv

load_dotenv()  # load .env before other imports read the environment
api_key = os.getenv("ROBOFLOW_API_KEY")
log_level = os.getenv("LOG_LEVEL", "INFO")
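The loaded key can then be passed to any client that expects one; for example, reusing the get_model call from the Inference example further below (a sketch, not the only way to wire it up):
import os
from inference import get_model

# pass the key read from .env instead of hard-coding it
model = get_model(model_id="yolov8s-640", api_key=os.getenv("ROBOFLOW_API_KEY"))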
Supervision was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created connectors for the most popular libraries like Ultralytics, Transformers, or MMDetection.
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(...)
model = YOLO("yolov8s.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)
len(detections)
# 5
👉 more model connectors
- inference
Running with Inference requires a Roboflow API KEY.
import cv2
import supervision as sv
from inference import get_model
image = cv2.imread(...)
model = get_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
len(detections)
# 5
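A connector also exists for Transformers, as mentioned above. A minimal sketch using sv.Detections.from_transformers might look like the following; the DETR checkpoint and post-processing step are illustrative, and the id2label argument assumes a recent supervision version.
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

image = Image.open(...)
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert raw outputs to boxes, scores, and labels in image coordinates
width, height = image.size
result = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([[height, width]]))[0]

detections = sv.Detections.from_transformers(
    transformers_results=result, id2label=model.config.id2label)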
Supervision offers a wide range of highly customizable annotators, allowing you to compose the perfect visualization for your use case.
import cv2
import supervision as sv
image = cv2.imread(...)
detections = sv.Detections(...)
box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(
scene=image.copy(),
detections=detections)
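Annotators can be layered; for example, labels can be drawn on top of the boxes. This is a sketch that assumes the detections came from a connector that populates class_name and confidence:
label_annotator = sv.LabelAnnotator()
labels = [
    f"{class_name} {confidence:.2f}"
    for class_name, confidence
    in zip(detections["class_name"], detections.confidence)
]
annotated_frame = label_annotator.annotate(
    scene=annotated_frame,
    detections=detections,
    labels=labels)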
(demo video: supervision-0.16.0-annotators.mp4)
Supervision provides a set of utils that allow you to load, split, merge, and save datasets in one of the supported formats.
import supervision as sv
from roboflow import Roboflow
project = Roboflow().workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")
ds = sv.DetectionDataset.from_coco(
images_directory_path=f"{dataset.location}/train",
annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)
path, image, annotation = ds[0]
# loads image on demand

for path, image, annotation in ds:
    # loads image on demand
    ...
👉 more dataset utils
- load
dataset = sv.DetectionDataset.from_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
)
dataset = sv.DetectionDataset.from_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)
dataset = sv.DetectionDataset.from_coco(
    images_directory_path=...,
    annotations_path=...
)
- split
train_dataset, test_dataset = dataset.split(split_ratio=0.7)
test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
len(train_dataset), len(test_dataset), len(valid_dataset)
# (700, 150, 150)
- merge
ds_1 = sv.DetectionDataset(...)
len(ds_1)
# 100
ds_1.classes
# ['dog', 'person']

ds_2 = sv.DetectionDataset(...)
len(ds_2)
# 200
ds_2.classes
# ['cat']

ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
len(ds_merged)
# 300
ds_merged.classes
# ['cat', 'dog', 'person']
- save
dataset.as_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
)
dataset.as_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)
dataset.as_coco(
    images_directory_path=...,
    annotations_path=...
)
- convert
sv.DetectionDataset.from_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
).as_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)
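Putting these utilities together, a typical flow loads a dataset, splits it, and writes each part back out in another format. The sketch below uses only the calls shown above; the directory paths are placeholder names, not from the original:
import supervision as sv

# placeholder paths for a COCO-format dataset on disk
ds = sv.DetectionDataset.from_coco(
    images_directory_path="dataset/train",
    annotations_path="dataset/train/_annotations.coco.json",
)

train_ds, test_ds = ds.split(split_ratio=0.8)

# export each split in YOLO format
train_ds.as_yolo(
    images_directory_path="export/train/images",
    annotations_directory_path="export/train/labels",
    data_yaml_path="export/data.yaml",
)
test_ds.as_yolo(
    images_directory_path="export/test/images",
    annotations_directory_path="export/test/labels",
)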
(demo videos: football-players-tracking-25.mp4, traffic_analysis_result.mov, vehicles-step-7-new.mp4)
Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.
We love your input! Please see our contributing guide to get started. Thank you 🙏 to all our contributors!