
[Example] Yolo12 Detection sample with OpenVINO/XNNPACK backend #10156

Open · wants to merge 12 commits into `main`

198 changes: 198 additions & 0 deletions .ci/scripts/test_yolo12.sh
@@ -0,0 +1,198 @@
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

set -ex
# shellcheck source=/dev/null
source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

while [[ $# -gt 0 ]]; do
case "$1" in
-model)
MODEL_NAME="$2" # stories110M
shift 2
;;
-mode)
MODE="$2" # portable or xnnpack+custom or xnnpack+custom+qe
shift 2
;;
-pt2e_quantize)
PT2E_QUANTIZE="$2"
shift 2
;;
-upload)
UPLOAD_DIR="$2"
shift 2
;;
-video_path)
VIDEO_PATH="$2" # portable or xnnpack+custom or xnnpack+custom+qe
shift 2
;;
*)
echo "Unknown option: $1"
usage
;;
esac
done

# Default MODE to openvino if not set
MODE=${MODE:-"openvino"}

# Default UPLOAD_DIR to empty string if not set
UPLOAD_DIR="${UPLOAD_DIR:-}"

# Default PT2E_QUANTIZE to empty string if not set
PT2E_QUANTIZE="${PT2E_QUANTIZE:-}"

# Default CMake Build Type to release mode
CMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE:-Release}

if [[ $# -lt 5 ]]; then # Assuming 5 mandatory args
echo "Expecting at least 5 positional arguments"
echo "Usage: [...]"
fi
if [[ -z "${MODEL_NAME:-}" ]]; then
echo "Missing model name, exiting..."
exit 1
fi


if [[ -z "${MODE:-}" ]]; then
echo "Missing mode, choose openvino or xnnpack, exiting..."
exit 1
fi

if [[ -z "${PYTHON_EXECUTABLE:-}" ]]; then
PYTHON_EXECUTABLE=python3
fi

TARGET_LIBS=""

if [[ "${MODE}" =~ .*openvino.* ]]; then
OPENVINO=ON
TARGET_LIBS="$TARGET_LIBS openvino_backend "

git clone https://github.com/daniil-lyakhov/openvino.git

cd openvino && git checkout dl/executorch/yolo12
git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON
make -j$(nproc)

cd ..
cmake --install build --prefix dist

source dist/setupvars.sh
cd ../backends/openvino
pip install -r requirements.txt
cd ../../
else
OPENVINO=OFF
fi

if [[ "${MODE}" =~ .*xnnpack.* ]]; then
XNNPACK=ON
TARGET_LIBS="$TARGET_LIBS xnnpack_backend "
else
XNNPACK=OFF
fi

which "${PYTHON_EXECUTABLE}"


DIR="examples/models/yolo12"
$PYTHON_EXECUTABLE -m pip install -r ${DIR}/requirements.txt

cmake_install_executorch_libraries() {
rm -rf cmake-out
build_dir=cmake-out
mkdir $build_dir


retry cmake -DCMAKE_INSTALL_PREFIX="${build_dir}" \
-DCMAKE_BUILD_TYPE="${CMAKE_BUILD_TYPE}" \
-DEXECUTORCH_BUILD_OPENVINO="$OPENVINO" \
-DEXECUTORCH_BUILD_XNNPACK="$XNNPACK" \
-DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
-DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
-DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
-DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
-B"${build_dir}"

# Build the project
cmake --build ${build_dir} --target install --config ${CMAKE_BUILD_TYPE} -j$(nproc)

export CMAKE_ARGS="
-DEXECUTORCH_BUILD_OPENVINO="$OPENVINO" \
-DEXECUTORCH_BUILD_XNNPACK="$XNNPACK" \
-DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
-DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
-DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
-DEXECUTORCH_ENABLE_LOGGING=ON \
-DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
-DEXECUTORCH_BUILD_PYBIND=ON"

echo $TARGET_LIBS
export CMAKE_BUILD_ARGS="--target $TARGET_LIBS"
pip install . --no-build-isolation
}

cmake_build_demo() {
echo "Building yolo12 runner"
retry cmake \
-DCMAKE_BUILD_TYPE="$CMAKE_BUILD_TYPE" \
-DUSE_OPENVINO_BACKEND="$OPENVINO" \
-DUSE_XNNPACK_BACKEND="$XNNPACK" \
-Bcmake-out/${DIR} \
${DIR}
cmake --build cmake-out/${DIR} -j9 --config "$CMAKE_BUILD_TYPE"

}

cleanup_files() {
rm "${EXPORTED_MODEL_NAME}"
}

prepare_artifacts_upload() {
if [ -n "${UPLOAD_DIR}" ]; then
echo "Preparing for uploading generated artifacs"
zip -j model.zip "${EXPORTED_MODEL_NAME}"
mkdir -p "${UPLOAD_DIR}"
mv model.zip "${UPLOAD_DIR}"
mv result.txt "${UPLOAD_DIR}"

fi
}


# Export model.
EXPORTED_MODEL_NAME="${MODEL_NAME}_fp32_${MODE}.pte"
echo "Exporting ${EXPORTED_MODEL_NAME}"
EXPORT_ARGS="--model_name=${MODEL_NAME} --backend=${MODE}"

# Build and install the ExecuTorch libraries and Python bindings
cmake_install_executorch_libraries

$PYTHON_EXECUTABLE -m examples.models.yolo12.export_and_validate ${EXPORT_ARGS}


RUNTIME_ARGS="--model_path=${EXPORTED_MODEL_NAME} --input_path=${VIDEO_PATH}"
# Build the yolo12 runner.
cmake_build_demo
# Run yolo12 runner
NOW=$(date +"%H:%M:%S")
echo "Starting to run yolo12 runner at ${NOW}"
# shellcheck source=/dev/null
cmake-out/examples/models/yolo12/Yolo12DetectionDemo ${RUNTIME_ARGS} > result.txt
NOW=$(date +"%H:%M:%S")
echo "Finished at ${NOW}"

RESULT=$(cat result.txt)

prepare_artifacts_upload
cleanup_files
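
For reference, a minimal local invocation of this CI script could look like the sketch below; the model name, mode, and video path are illustrative placeholders, and the optional `-pt2e_quantize` and `-upload` flags are omitted.

```bash
# Illustrative invocation only; substitute your own model name, mode, and video file.
bash .ci/scripts/test_yolo12.sh -model yolo12s -mode openvino -video_path ./input_video.mp4
```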
4 changes: 2 additions & 2 deletions backends/openvino/README.md
@@ -43,8 +43,8 @@ executorch
Before you begin, ensure you have OpenVINO installed and configured on your system:

```bash
-git clone https://github.com/openvinotoolkit/openvino.git
-cd openvino && git checkout releases/2025/1
+git clone https://github.com/daniil-lyakhov/openvino.git
+cd openvino && git checkout dl/executorch/yolo12
git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
84 changes: 84 additions & 0 deletions examples/models/yolo12/CMakeLists.txt
@@ -0,0 +1,84 @@
cmake_minimum_required(VERSION 3.5)

project(Yolo12DetectionDemo VERSION 0.1)

option(USE_OPENVINO_BACKEND "Build the tutorial with the OPENVINO backend" ON)
option(USE_XNNPACK_BACKEND "Build the tutorial with the XNNPACK backend" OFF)

set(CMAKE_INCLUDE_CURRENT_DIR ON)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

# OpenCV
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
# !OpenCV

if(NOT PYTHON_EXECUTABLE)
set(PYTHON_EXECUTABLE python3)
endif()

set(EXECUTORCH_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/../../..)
set(TORCH_ROOT ${EXECUTORCH_ROOT}/third-party/pytorch)

include(${EXECUTORCH_ROOT}/tools/cmake/Utils.cmake)

# Let files say "include <executorch/path/to/header.h>".
set(_common_include_directories ${EXECUTORCH_ROOT}/..)

# Find the `executorch` libraries (same approach as for gflags)
find_package(executorch CONFIG REQUIRED PATHS ${EXECUTORCH_ROOT}/cmake-out)
target_link_options_shared_lib(executorch)

set(link_libraries gflags)
list(APPEND link_libraries portable_ops_lib portable_kernels)
target_link_options_shared_lib(portable_ops_lib)


if(USE_XNNPACK_BACKEND)
set(xnnpack_backend_libs xnnpack_backend XNNPACK microkernels-prod)
list(APPEND link_libraries ${xnnpack_backend_libs})
target_link_options_shared_lib(xnnpack_backend)
endif()

if(USE_OPENVINO_BACKEND)
add_subdirectory(${EXECUTORCH_ROOT}/backends/openvino openvino_backend)

target_include_directories(
openvino_backend
INTERFACE ${CMAKE_CURRENT_BINARY_DIR}/../../include
${CMAKE_CURRENT_BINARY_DIR}/../../include/executorch/runtime/core/portable_type/c10
${CMAKE_CURRENT_BINARY_DIR}/../../lib
)
list(APPEND link_libraries openvino_backend)
target_link_options_shared_lib(openvino_backend)
endif()

list(APPEND link_libraries extension_threadpool pthreadpool)
list(APPEND _common_include_directories
${XNNPACK_ROOT}/third-party/pthreadpool/include
)

set(PROJECT_SOURCES
main.cpp
inference.h
${EXECUTORCH_ROOT}/extension/data_loader/file_data_loader.cpp
${EXECUTORCH_ROOT}/extension/evalue_util/print_evalue.cpp
${EXECUTORCH_ROOT}/extension/runner_util/inputs.cpp
${EXECUTORCH_ROOT}/extension/runner_util/inputs_portable.cpp
)

add_executable(Yolo12DetectionDemo ${PROJECT_SOURCES})
target_link_libraries(Yolo12DetectionDemo PUBLIC
${link_libraries}
${OpenCV_LIBS}
executorch_core
extension_module
extension_tensor
)

find_package(Threads REQUIRED)
target_link_libraries(Yolo12DetectionDemo PRIVATE Threads::Threads)
target_include_directories(Yolo12DetectionDemo PUBLIC ${_common_include_directories})
103 changes: 103 additions & 0 deletions examples/models/yolo12/README.md
@@ -0,0 +1,103 @@
# YOLO12 Detection C++ Inference with ExecuTorch

<p align="center">
<br>
<img src="./yolo12s_demo.gif">
<br>
</p>

This example demonstrates how to run inference with [Ultralytics YOLO12 family](https://docs.ultralytics.com/models/yolo12/) detection models in C++ using the following ExecuTorch backends:
- [OpenVINO](../../../backends/openvino/README.md)
- [XNNPACK](../../../backends/xnnpack/README.md)


# Instructions

### Step 1: Install ExecuTorch

To install ExecuTorch, follow this [guide](https://pytorch.org/executorch/stable/getting-started-setup.html).
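
As a minimal sketch (assuming a recent ExecuTorch release is published on PyPI), installing the prebuilt wheel looks like the snippet below; building the OpenVINO or XNNPACK backends in the following steps still requires the from-source setup described in the guide.

```bash
# Minimal sketch: install a prebuilt ExecuTorch wheel (assumes a recent PyPI release).
# Backend builds in the next steps follow the from-source instructions instead.
pip install executorch
```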

### Step 2: Install the backend of your choice

- [OpenVINO backend installation guide](../../../backends/openvino/README.md#build-instructions)
- [XNNPACK backend installation guide](https://pytorch.org/executorch/stable/tutorial-xnnpack-delegate-lowering.html#running-the-xnnpack-model-with-cmake)
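
For reference, the CI script added in this PR (`.ci/scripts/test_yolo12.sh`) builds the ExecuTorch libraries with the backend flags sketched below; this is only an illustration of that configuration, not a substitute for the backend guides above.

```bash
# Sketch of the library build used by this example's CI script; run from the ExecuTorch repo root
# and enable only the backend(s) you need.
cmake -DCMAKE_INSTALL_PREFIX=cmake-out \
      -DCMAKE_BUILD_TYPE=Release \
      -DEXECUTORCH_BUILD_OPENVINO=ON \
      -DEXECUTORCH_BUILD_XNNPACK=OFF \
      -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
      -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
      -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
      -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
      -Bcmake-out .
cmake --build cmake-out --target install --config Release -j"$(nproc)"
```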

### Step 3: Install the demo requirements


Python demo requirements:
```bash
python -m pip install -r examples/models/yolo12/requirements.txt
```

The C++ demo also requires the OpenCV library:
https://opencv.org/get-started/
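
On Ubuntu or Debian, one possible way to get the OpenCV development packages is via apt, as sketched below (an assumption about your platform); the link above covers other operating systems and install methods.

```bash
# One possible OpenCV setup on Ubuntu/Debian; see the link above for other platforms.
sudo apt-get install libopencv-dev
```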


### Step 4: Export the YOLO12 model to ExecuTorch


OpenVINO:
```bash
python export_and_validate.py --model_name yolo12s --input_dims=[1920,1080] --backend openvino --device CPU
```

XNNPACK:
```bash
python export_and_validate.py --model_name yolo12s --input_dims=[1920,1080] --backend xnnpack
```

> **_NOTE:_** Quantization is coming soon!

The exported model can be validated using the `--validate` flag:

```bash
python export_and_validate.py --model_name yolo12s --backend ... --validate dataset_name.yaml
```

A list of available datasets and instructions on how to use a custom dataset can be found [here](https://docs.ultralytics.com/datasets/detect/).
Validation only supports the default `--input_dims`; please do not specify this parameter when using the `--validate` flag.
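
For instance, a validation run against the small `coco128` detection dataset from the Ultralytics catalog (chosen here purely for illustration) could look like:

```bash
# Illustrative validation run; coco128.yaml is one of the standard Ultralytics detection dataset configs.
python export_and_validate.py --model_name yolo12s --backend openvino --validate coco128.yaml
```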


For a full description of the parameters, run the following command:
```bash
python export_and_validate.py --help
```

### Step 5: Build the demo project

OpenVINO:

```bash
cd examples/models/yolo12
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DUSE_OPENVINO_BACKEND=ON ..
make -j$(nproc)
```

XNNPACK:

```bash
cd examples/models/yolo12
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DUSE_XNNPACK_BACKEND=ON ..
make -j$(nproc)
```

### Step 6: Run the demo

```bash
./build/Yolo12DetectionDemo -model_path /path/to/exported/model -input_path /path/to/video/file -output_path /path/to/output/annotated/video
```

For a full description of the parameters, run the following command:
```
./build/Yolo12DetectionDemo --help
```


# Credits:

Ultralytics examples: https://github.com/ultralytics/ultralytics/tree/main/examples

Sample video: https://www.pexels.com/@shanu-1040189/