**demos/crossroad_camera_demo/README.md** (+1 −1)

```diff
@@ -89,7 +89,7 @@ If Person Attributes Recognition or Person Reidentification Retail are enabled,
 * **Person Attributes Recognition time** - Inference time of Person Attributes Recognition averaged by the number of detected persons.
 * **Person Reidentification time** - Inference time of Person Reidentification averaged by the number of detected persons.
 
-> **NOTE**: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) this demo has been tested on the following Model Downloader available topologies:
+> **NOTE**: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) this demo has been tested on the following Model Downloader available topologies:
```
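Both metrics in the hunk above are plain per-person averages: the summed inference time of one network divided by the number of persons detected in the frame. A minimal C++ sketch of that bookkeeping, with invented names standing in for the demo's real variables:

```cpp
// Hypothetical sketch, not the demo's actual code: derive a metric such as
// "Person Attributes Recognition time" as total latency / detected persons.
#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;

    double attributesLatencyMs = 0.0; // summed over all person crops in a frame
    const int personCount = 3;        // assume three persons were detected

    for (int i = 0; i < personCount; ++i) {
        auto start = clock::now();
        // runAttributesInference(personCrop);  // per-person inference goes here
        auto end = clock::now();
        attributesLatencyMs +=
            std::chrono::duration<double, std::milli>(end - start).count();
    }

    if (personCount > 0)
        std::cout << "Person Attributes Recognition time: "
                  << attributesLatencyMs / personCount << " ms\n";
}
```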
**demos/gaze_estimation_demo/README.md** (+4 −4)

```diff
@@ -7,7 +7,7 @@ The demo also relies on the following auxiliary networks:
 * `face-detection-retail-0004` or `face-detection-adas-0001` detection networks for finding faces
 * `head-pose-estimation-adas-0001`, which estimates head pose in Tait-Bryan angles, serving as an input for gaze estimation model
 * `facial-landmarks-35-adas-0002`, which estimates coordinates of facial landmarks for detected faces. The keypoints at the corners of eyes are used to locate eyes regions required for the gaze estimation model
-* `open-closed-eye-0001`, which estimates eyes state of detected faces.
+* `open-closed-eye-0001`, which estimates eyes state of detected faces.
 
 For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).
 
```
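The list in this hunk implies a fixed data flow: face detection yields face crops, head pose and facial landmarks are estimated per crop, and both results feed the gaze network. A structural sketch of that order, with every type and function name invented for illustration:

```cpp
// Pipeline order only -- these are not the demo's real classes or API.
#include <vector>

struct FaceBox {};
struct HeadPose { float yaw = 0, pitch = 0, roll = 0; }; // Tait-Bryan angles
struct Landmarks {};                                     // 35 facial keypoints
struct GazeVector { float x = 0, y = 0, z = 0; };

// Hypothetical per-network inference helpers.
std::vector<FaceBox> detectFaces() { return {FaceBox{}}; }
HeadPose estimateHeadPose(const FaceBox&) { return {}; }
Landmarks estimateLandmarks(const FaceBox&) { return {}; }
GazeVector estimateGaze(const FaceBox&, const HeadPose&, const Landmarks&) { return {}; }

int main() {
    for (const FaceBox& face : detectFaces()) {
        HeadPose pose = estimateHeadPose(face); // serves as gaze-model input
        Landmarks lm = estimateLandmarks(face); // eye corners locate eye regions
        GazeVector gaze = estimateGaze(face, pose, lm);
        (void)gaze;                             // rendering omitted
    }
}
```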
```diff
@@ -72,7 +72,7 @@ For example, to do inference on a CPU, run the following command:
 
 ## Demo Output
 
-The demo uses OpenCV to display the resulting frame with marked gaze vectors, text reports of **FPS** (frames per second performance) for the demo, and, optionally, marked facial landmarks, head pose angles, and face bounding boxes.
+The demo uses OpenCV to display the resulting frame with marked gaze vectors, text reports of **FPS** (frames per second performance) for the demo, and, optionally, marked facial landmarks, head pose angles, and face bounding boxes.
 By default, it shows only gaze estimation results. To see inference results of auxiliary networks, use run-time control keys.
 
 ### Run-Time Control Keys
```
```diff
@@ -82,14 +82,14 @@ The following keys are supported:
 * G - to toggle displaying gaze vector
 * B - to toggle displaying face detector bounding boxes
 * O - to toggle displaying head pose information
-* L - to toggle displaying facial landmarks
+* L - to toggle displaying facial landmarks
 * E - to toggle displaying eyes state
 * A - to switch on displaying all inference results
 * N - to switch off displaying all inference results
 * F - to flip frames horizontally
 * Esc - to quit the demo
 
-> **NOTE**: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) this demo has been tested on the following Model Downloader available topologies:
+> **NOTE**: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) this demo has been tested on the following Model Downloader available topologies:
```
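The key toggles listed in this hunk map naturally onto an OpenCV event loop. A minimal sketch of that common pattern, not the demo's actual source (flag names are invented, and the literal reading of `N` as "switch everything off" is an interpretation):

```cpp
#include <cctype>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

int main() {
    // Illustrative toggle flags only.
    bool showGaze = true, showBoxes = false, showPose = false,
         showLandmarks = false, showEyeState = false, flipFrames = false;

    cv::Mat frame(480, 640, CV_8UC3, cv::Scalar::all(0)); // stand-in frame

    for (;;) {
        cv::Mat shown = frame.clone();
        if (flipFrames) cv::flip(shown, shown, 1); // 1 = flip horizontally
        // ... drawing of gaze vectors, boxes, etc. would honor the flags here
        cv::imshow("gaze_estimation_demo", shown);

        int key = cv::waitKey(30);
        if (key == 27) break; // Esc - quit the demo
        switch (std::toupper(key)) {
            case 'G': showGaze = !showGaze; break;
            case 'B': showBoxes = !showBoxes; break;
            case 'O': showPose = !showPose; break;
            case 'L': showLandmarks = !showLandmarks; break;
            case 'E': showEyeState = !showEyeState; break;
            case 'A': // switch on all inference results
                showGaze = showBoxes = showPose = showLandmarks = showEyeState = true;
                break;
            case 'N': // switch off all inference results (literal reading)
                showGaze = showBoxes = showPose = showLandmarks = showEyeState = false;
                break;
            case 'F': flipFrames = !flipFrames; break;
        }
    }
}
```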
**demos/human_pose_estimation_demo/README.md** (+1 −1)

```diff
@@ -56,7 +56,7 @@ For example, to do inference on a CPU, run the following command:
 ## Demo Output
 
 The demo uses OpenCV to display the resulting frame with estimated poses and text report of **FPS** - frames per second performance for the human pose estimation demo.
-> **NOTE**: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) this demo has been tested on the following Model Downloader available topologies:
+> **NOTE**: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) this demo has been tested on the following Model Downloader available topologies:
 > * `human-pose-estimation-0001`
 > Other models may produce unexpected results on these devices.
```
**demos/interactive_face_detection_demo/README.md** (+1 −1)

```diff
@@ -101,7 +101,7 @@ For example, to do inference on a GPU with the OpenVINO™ toolkit pre-train
 The demo uses OpenCV to display the resulting frame with detections (rendered as bounding boxes and labels, if provided).
 The demo reports total image throughput which includes frame decoding time, inference time, time to render bounding boxes and labels, and time to display the results.
 
-> **NOTE**: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) this demo has been tested on the following Model Downloader available topologies:
+> **NOTE**: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) this demo has been tested on the following Model Downloader available topologies:
```
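Note the distinction the context lines draw: throughput here is measured over the whole per-frame path (decode, inference, render, display), not inference alone. A rough sketch of that measurement, with the stage functions left as placeholder comments:

```cpp
// Sketch only: time the full loop, then report frames / elapsed seconds.
#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    int frames = 0;
    const auto t0 = clock::now();

    while (frames < 300) {  // stand-in for "while input frames remain"
        // decodeFrame(); infer(); renderBoxesAndLabels(); displayResult();
        ++frames;
    }

    const double seconds =
        std::chrono::duration<double>(clock::now() - t0).count();
    std::cout << "Total image throughput: " << frames / seconds << " FPS\n";
}
```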
**demos/multi_channel/object_detection_demo_yolov3/README.md** (+3 −3)

```diff
@@ -3,7 +3,7 @@
 This demo provides an inference pipeline for multi-channel yolo v3. The demo uses Yolo v3 Object Detection network. You can follow [this](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html) page convert the YOLO V3 and tiny YOLO V3 into IR model and execute this demo with converted IR model.
 
 > **NOTES**:
-> If you don't use [this](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html) page to convert the model, it may not work.
+> If you don't use [this](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html) page to convert the model, it may not work.
 
 Other demo objectives are:
```
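For orientation, the conversion described on the linked page runs the Model Optimizer over a frozen TensorFlow graph with a YOLO-specific transformations config. It looked roughly like the following; treat the file names and config path as assumptions and take the exact command from the linked page:

```sh
# Approximate shape of the documented conversion (verify against the linked page).
python3 mo_tf.py \
    --input_model frozen_darknet_yolov3_model.pb \
    --transformations_config <MO_DIR>/extensions/front/tf/yolo_v3.json \
    --batch 1
```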
````diff
@@ -49,7 +49,7 @@ Options:
     -u          Optional. List of monitors to show initially.
 ```
 
-To run the demo, you can use public pre-train model and follow [this](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html) page for instruction of how to convert it to IR model.
+To run the demo, you can use public pre-train model and follow [this](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html) page for instruction of how to convert it to IR model.
 
 > **NOTE**: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html).
````
```diff
@@ -67,7 +67,7 @@ Video files will be processed repeatedly.
 To achieve 100% utilization of one Myriad X, the thumb rule is to run 4 infer requests on each Myriad X. Option `-nireq 32` can be added to above command to use 100% of HDDL-R card. The 32 here is 8 (Myriad X on HDDL-R card) x 4 (infer requests), such as following command:
```
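The hunk's context line stops at "such as following command:"; the arithmetic it describes is simply 8 Myriad X chips × 4 infer requests per chip = 32. A hypothetical invocation of that shape, with the executable name, model, and input paths as placeholders:

```sh
# Illustrative only -- substitute the real executable name and paths.
./multi_channel_object_detection_demo_yolov3 \
    -m yolo_v3.xml -i input0.mp4 input1.mp4 -d HDDL -nireq 32
```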