@@ -17,7 +17,7 @@ Learn how to build and run ONNX models with built-in pre and post processing for
* TOC placeholder
{: toc }
- ## Object detection with Yolov8
+ ## Object detection with YOLOv8
You can find the full source code for the [Android](https://github.com/microsoft/ app in the ONNX Runtime inference examples repository.
@@ -44,8 +44,8 @@ python yolo_e2e.py [--test_image <image to test on>]
```
After the script has run, you will see one PyTorch model and two ONNX models:
- * `yolov8n.pt`: The original Yolov8 PyTorch model
- * `yolov8n.onnx`: The exported Yolov8 ONNX model
+ * `yolov8n.pt`: The original YOLOv8 PyTorch model
+ * `yolov8n.onnx`: The exported YOLOv8 ONNX model
* `yolov8n.with_pre_post_processing.onnx`: The ONNX model with pre and post processing included in the model
* `<test image>.out.jpg`: Your test image with bounding boxes supplied.
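
The model with pre and post processing included relies on custom operators from the onnxruntime-extensions package for the image decoding and drawing steps, so that library has to be registered with the session before the model can be loaded. Below is a minimal Python sketch of running it on raw image bytes; it assumes `onnxruntime` and `onnxruntime-extensions` are installed, and the image file name is illustrative only, so verify the actual input/output names with `session.get_inputs()` / `session.get_outputs()` against the model you generated. (The Android app does the equivalent custom-op registration through the ONNX Runtime Extensions Android package.)

```python
import numpy as np
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

# The pre/post processing steps (image decode, resize, NMS, drawing boxes)
# are implemented as custom operators, so register the onnxruntime-extensions
# library with the session options before loading the model.
so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())

session = ort.InferenceSession(
    "yolov8n.with_pre_post_processing.onnx", so,
    providers=["CPUExecutionProvider"],
)

# The combined model consumes the raw image file bytes directly; no manual
# resizing or normalization is needed. The file name here is just an example.
with open("person-with-bicycle.jpg", "rb") as f:
    image_bytes = np.frombuffer(f.read(), dtype=np.uint8)

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: image_bytes})
print([o.name for o in session.get_outputs()])
```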
@@ -95,7 +95,7 @@ You see the main inference code in [ObjectDetector.kt](https://github.com/micros
![Image of person with bicycle](../../../images/person-with-bicycle-and-bounding-boxes.png)
- ## Pose estimation with Yolov8
+ ## Pose estimation with YOLOv8
### Build the pose estimation model
@@ -120,8 +120,8 @@ python yolov8_pose_e2e.py
```
After the script has run, you will see one PyTorch model and two ONNX models:
- * `yolov8n-pose.pt`: The original Yolov8 PyTorch model
- * `yolov8n-pose.onnx`: The exported Yolov8 ONNX model
+ * `yolov8n-pose.pt`: The original YOLOv8 PyTorch model
+ * `yolov8n-pose.onnx`: The exported YOLOv8 ONNX model
* `yolov8n-pose.with_pre_post_processing.onnx`: The ONNX model with pre and post processing included in the model
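
The pose model with pre and post processing is loaded the same way. Before wiring it into an app, a quick sanity check along the lines of the sketch below (same assumptions as the earlier example: `onnxruntime` and `onnxruntime-extensions` installed) confirms which inputs the combined model expects and which outputs the post processing steps add:

```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

# Register the onnxruntime-extensions custom operators used by the
# pre/post processing steps, then load the combined pose model.
so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())
session = ort.InferenceSession(
    "yolov8n-pose.with_pre_post_processing.onnx", so,
    providers=["CPUExecutionProvider"],
)

# Print the model's inputs and outputs so the app code knows exactly what
# to feed in and what to parse from the results.
for i in session.get_inputs():
    print("input:", i.name, i.type, i.shape)
for o in session.get_outputs():
    print("output:", o.name, o.type, o.shape)
```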