This repository was archived by the owner on Sep 30, 2024. It is now read-only.
Merge pull request #8 from intel-iot-devkit/upload_svet_2020.2.0
upload code for SVET Linux 2020.2.0 with the following features:
1) Support using VPP instead of SFC for scaling and color format conversion in decoding sessions.
2) Support H265 RTSP streams as input.
3) Support the new inference options "-infer::interval" and "-infer::max_detect".
4) Support disabling composition by adding the new sink session type "-fake_sink".
5) Upgrade the MediaSDK to version 2020.1.1 and OpenVINO to version 2020.3.
6) Support pure multiple-decoding performance testing.
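
The new options above can be illustrated with a hypothetical par file. Everything here other than the option names taken from the feature list ("-infer::interval", "-infer::max_detect", "-fake_sink") is an assumption; see the user guide for the real session syntax.

```
# Hypothetical par file: two H265 RTSP decode sessions with inference,
# composition disabled via the new fake sink (syntax is illustrative only)
-i::h265 rtsp://192.168.0.10/stream1 -infer::interval 6 -infer::max_detect 4
-i::h265 rtsp://192.168.0.11/stream2 -infer::interval 6 -infer::max_detect 4
-fake_sink
```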
README.md (+7 −7)
```diff
@@ -14,7 +14,7 @@ Sample par files can be found in par_files directory. Verified on i7-8559U. Perfo
 The sample application depends on [Intel® Media SDK](https://github.com/Intel-Media-SDK/), [Intel® OpenVINO™](https://software.intel.com/en-us/openvino-toolkit) and [FFmpeg](https://www.ffmpeg.org/)
 
 # FAQ
-See doc/FAQ.md
+See [FAQ](./doc/FAQ.md)
 
 # Table of contents
```
```diff
@@ -33,27 +33,27 @@ The sample application is licensed under MIT license. See [LICENSE](./LICENSE) f
 See [CONTRIBUTING](./doc/CONTRIBUTING.md) for details. Thank you!
 
 # Documentation
-See [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.1.0.pdf)
+See [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.2.0.pdf)
 …
-* Intel® platforms supported by the MediaSDK 19.4.0 and OpenVINO 2019 R3.
+* Intel® platforms supported by the MediaSDK 20.1.1 and OpenVINO 2020.3.
 * For Media SDK, the major platform dependency comes from the back-end media driver. https://github.com/intel/media-driver
 * For OpenVINO™, see details from here: https://software.intel.com/en-us/openvino-toolkit/documentation/system-requirements
 
 # How to build
 
 Run build_and_install.sh to install dependent software packages and build the sample application video_e2e_sample.
 
-Please refer to "Installation Guide" in [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.1.0.pdf) for details.
+Please refer to "Installation Guide" in [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.2.0.pdf) for details.
 
 ## Build steps
```
````diff
@@ -68,7 +68,7 @@ cd cva_sample
 ```
 
 This script will install the dependent software packages by running the command "apt install", so it will ask for the sudo password. Then it will download libva, libva-utils, media-driver and MediaSDK source code and install these libraries. It might take 10 to 20 minutes depending on the network bandwidth.
 
-After the script finishes, the sample application video_e2e_sample can be found under ./bin. Please refer to "Run sample application" in [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.1.0.pdf) for details.
+After the script finishes, the sample application video_e2e_sample can be found under ./bin. Please refer to "Run sample application" in [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.2.0.pdf) for details.
````
doc/CONTRIBUTING.md (+4 −4)
```diff
@@ -1,8 +1,8 @@
-We welcome community contributions to SVET sample application. Thank you for your time!
+We welcome community contributions to the concurrent video analytic sample application. Thank you for your time!
 
 Please note that review and merge might take some time at this point.
 
-SVET sample application is licensed under MIT license. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
+The sample application is licensed under MIT license. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
 
 Steps:
 - In the commit message, explain what bug is fixed or what new feature is added in detail.
```
doc/FAQ.md (+18 −5)
```diff
@@ -4,15 +4,21 @@
 See chapter 2.4 in doc/svet_sample_application_user_guide_2020.1.0.pdf
 Running the SVET sample application with option "-?" can show the usage of options.
 
-## Why does the system need to be switched to text mode before running the sample application
-The sample application uses libDRM to render the video directly to display, so it needs to act as master of the DRM display, which isn't allowed when X server is running.
+## Why does the system need to be switched to text console mode before running the sample application
+The sample application uses libDRM to render the video directly to the display, so it needs to act as master of the DRM display, which isn't allowed when an X client is running. If there is any VNC session, please close it as well, because a VNC session also starts an X client.
 If the par file doesn't include a display session, there is no need to switch to text mode.
 
 ## Why it needs "su -p" to switch to root user before running the sample application
 To become DRM master, it needs root privileges. With option "-p", it will preserve environment variables, like LIBVA_DRIVERS_PATH, LIBVA_DRIVER_NAME and LD_LIBRARY_PATH. Without "-p", these environment variables will be reset and the sample application will run into problems.
 
+## Is it possible to use X11 instead of DRM display?
+If the user doesn't want to switch to text console mode or switch to root for using DRM display, "-rdrm-DisplayPort" can be replaced with "-rx11" in the par file. However, X11 rendering isn't as efficient as DRM rendering. According to our 16-channel face detection 1080p test on CFL, the time cost of each frame increased by around 6 ms. See the example [par file](./par_file/inference/n16_face_detection_1080p_x11.par) using X11 as the rendering method.
+
+## Is there any limitation on the order of decoding, encoding and display sessions in the par file
+Yes. The decoding sessions must be described first. If there is a display session, it must be the last line in the par file.
+
 ## The loading time of 16-channel face detection demo is too long
-Please enable cl_cache by running the commands "export cl_cache_dir=/tmp/cl_cache" and "mkdir -p /tmp/cl_cache". Then after the first run of the 16-channel face detection demo, the compiled OpenCL kernels are cached, and model loading in subsequent runs will only take about 10 seconds.
+Please make sure cl_cache is enabled by running "echo $cl_cache_dir". If this environment variable isn't set, enable cl_cache by running the commands "export cl_cache_dir=/tmp/cl_cache" and "mkdir -p /tmp/cl_cache". Then after the first run of the 16-channel face detection demo, the compiled OpenCL kernels are cached, and model loading in subsequent runs will only take about 10 seconds.
 
 More details about cl_cache can be found at https://github.com/intel/compute-runtime/blob/master/opencl/doc/FAQ.md
 
 ## Can sources number for "-vpp_comp_only" or "-vpp_comp" be different from number of decoding sessions?
```
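
The cl_cache setup from the FAQ entry above can be scripted as a short sketch. The directory /tmp/cl_cache is the one suggested in the FAQ; any writable path should work:

```shell
# Create the OpenCL kernel cache directory and point the compute runtime at it.
mkdir -p /tmp/cl_cache
export cl_cache_dir=/tmp/cl_cache

# Confirm the variable is set before launching the demo.
echo "cl_cache_dir=$cl_cache_dir"
```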
```diff
@@ -21,15 +27,22 @@ No. The sources number for "-vpp_comp_only" or "-vpp_comp" must be equal to the
 ## How to limit the fps of whole pipeline to 30?
 Add "-fps 30" to every decoding session.
 
+## "-fps 30" doesn't work with "-fake_sink"
+The fake sink session doesn't support "-fps 30". Please add "-fps 30" to every decoding session instead.
+
 ## How to limit the frame number of input to 1000?
-Add "-n 1000" to every decoding session. However this option won't work if both "-vpp_comp_only" and "-vpp_comp" are set.
+Add "-n 1000" to every decoding session, but please do not add "-n" to encode, display and fake sink sessions; these sink sessions stop automatically when the source session stops. Note that this option won't work if both "-vpp_comp_only" and "-vpp_comp" are set.
 
 ## Where can I find tutorials for inference engine?
 Please refer to https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html
 
+## Why HDDL card usage ratio is low for face detection inference?
+It can be caused by the decoded frames not being fed to the inference engine efficiently. The default inference interval of face detection is 6. Try setting the inference interval to a lower value when using HDDL as the inference target device. For example, with 3 HDDL L2 cards, adding "-infer::interval 1" to the 16-channel face detection par file can increase the HDDL usage ratio to 100%.
+
 ## Where can I find information for the models?
 Please refer to https://github.com/opencv/open_model_zoo/tree/master/models/intel. The names of models used in sample application are
 …
-## Can I use other OpenVINO version rather than 2019 R3?
+## Can I use other OpenVINO version rather than 2020.3?
 Yes, but you have to modify some code due to interfaces changing. And also you need to download the IR files and copy them to ./model manually. Please refer to script/download_and_copy_models.sh for how to download the IR files.
```
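
As a combined sketch of the FAQ entries above, a hypothetical two-channel par file limiting the pipeline to 30 fps and 1000 frames while lowering the inference interval might look like this. Only "-fps", "-n", "-infer::interval", "-vpp_comp_only" and "-rdrm-DisplayPort" come from the text; the input file names and the rest of the session syntax are illustrative assumptions:

```
# Hypothetical par file fragment (illustrative syntax; see the user guide)
-i::h264 input0.h264 -fps 30 -n 1000 -infer::interval 1
-i::h264 input1.h264 -fps 30 -n 1000 -infer::interval 1
-vpp_comp_only 2 -rdrm-DisplayPort
```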