This document introduces an example of deploying a segmentation model on a Linux server (NVIDIA GPU or x86 CPU) using the Paddle Inference C++ API. The main steps include:

* Prepare the environment
* Prepare models and pictures
* Compile and execute
PaddlePaddle provides multiple prediction engines for deploying models in different scenarios (as shown in the figure below). For details, please refer to the [document](https://paddleinference.paddlepaddle.org.cn/product_introduction/summary.html).
## 2. Prepare the environment

### Prepare Paddle Inference C++ prediction library
You can download the Paddle Inference C++ prediction library from this [link](https://www.paddlepaddle.org.cn/inference/v2.3/user_guides/download_lib.html).

Pay attention to selecting the exact version according to the machine's CUDA version, cuDNN version, whether MKL-DNN or OpenBLAS is used, whether TensorRT is used, and other details. It is recommended to choose a prediction library with version >= 2.0.1.
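As a quick aid in choosing between the GPU and CPU library variants, the local toolchain can be probed roughly as follows. This is a hypothetical helper, not part of the PaddleSeg scripts, and `nvcc` visibility is only a heuristic:

```shell
# Rough heuristic (not part of PaddleSeg): check whether a CUDA toolkit is
# visible, to help choose between GPU and CPU (MKL-DNN/OpenBLAS) libraries.
if command -v nvcc >/dev/null 2>&1; then
  variant="gpu"
  nvcc --version | grep -i release   # prints the CUDA toolkit version
else
  variant="cpu"
  echo "nvcc not found; a CPU (MKL-DNN or OpenBLAS) library may be appropriate"
fi
echo "suggested library variant: ${variant}"
```

Also verify the installed cuDNN version before downloading, since the prediction library must match it exactly.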
Download the `paddle_inference.tgz` compressed file, decompress it, and save the resulting `paddle_inference` directory to `PaddleSeg/deploy/cpp/`.

If you need to compile the Paddle Inference C++ prediction library from source, you can refer to the [document](https://www.paddlepaddle.org.cn/inference/v2.3/user_guides/source_compile.html), which is not repeated here.
### Prepare OpenCV
This example uses OpenCV to read images, so OpenCV needs to be installed.

Run the following command to download, compile, and install OpenCV.

````
sh install_opencv.sh
````
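For reference, an OpenCV source build generally follows the steps below. This is only a sketch: the version number and install prefix are illustrative assumptions, not taken from `install_opencv.sh`, and the function is defined but not invoked here:

```shell
# Sketch only: the version (3.4.7) and install prefix are illustrative
# assumptions; the actual install_opencv.sh may differ.
build_opencv() {
  wget -q https://github.com/opencv/opencv/archive/3.4.7.tar.gz -O opencv.tar.gz
  tar -xzf opencv.tar.gz
  cd opencv-3.4.7 && mkdir -p build && cd build
  cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX="$HOME/opencv3"
  make -j"$(nproc)"
  make install
}
```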
### Install Yaml, Gflags and Glog
This example uses Yaml, Gflags and Glog.

Run the following commands to download, compile, and install these libraries.

````
sh install_yaml.sh
sh install_gflags.sh
sh install_glog.sh
````
## 3. Prepare models and pictures
Execute the following command in the `PaddleSeg/deploy/cpp/` directory to download the [test model](https://paddleseg.bj.bcebos.com/dygraph/demo/pp_liteseg_infer_model.tar.gz) for testing. If you need to test other models, please refer to [documentation](../../model_export.md) to export the prediction model.
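The download-and-unpack step can be sketched as a small shell function. The URL is the test-model link above; the function is only defined here, not run:

```shell
# Download and unpack the test model into the current directory.
# The URL is the test-model link given in the text above.
fetch_test_model() {
  wget -q https://paddleseg.bj.bcebos.com/dygraph/demo/pp_liteseg_infer_model.tar.gz
  tar -xzf pp_liteseg_infer_model.tar.gz
}
```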
Please check that `PaddleSeg/deploy/cpp/` contains the prediction library, model, and image, as follows.

````
PaddleSeg/deploy/cpp
|-- paddle_inference        # prediction library
|-- pp_liteseg_infer_model  # model
|-- cityscapes_demo.png     # image
````
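Before compiling, this layout can be double-checked with a small helper like the one below (a hypothetical convenience function, not part of the repository):

```shell
# Hypothetical helper: verify that the expected entries from the listing above
# exist under a given base directory; prints anything that is missing.
check_layout() {
  base="$1"
  status=0
  for p in paddle_inference pp_liteseg_infer_model cityscapes_demo.png; do
    if [ ! -e "${base}/${p}" ]; then
      echo "missing: ${base}/${p}"
      status=1
    fi
  done
  return "$status"
}
```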
Execute `sh run_seg_cpu.sh`; it will compile the code and then run prediction on an x86 CPU.
Execute `sh run_seg_gpu.sh`; it will compile the code and then run prediction on an Nvidia GPU.
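If the compiled executable fails with shared-library errors at runtime, the prediction library's `lib` directories typically need to be on `LD_LIBRARY_PATH`. A sketch assuming the layout above; the exact subdirectories may differ by library version:

```shell
# Assumption: run from PaddleSeg/deploy/cpp with paddle_inference unpacked here.
# Subdirectory names follow the usual paddle_inference layout and may differ.
export LD_LIBRARY_PATH="$(pwd)/paddle_inference/paddle/lib:$(pwd)/paddle_inference/third_party/install/mkldnn/lib:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```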
The segmentation result is saved as `out_img.jpg` in the current directory, as shown below. Note that histogram equalization has been applied to this image for easier visualization.
### 1. Compilation and deployment tutorials for different environments

* [Compilation and deployment on Linux](cpp_inference_linux.md)
* [Compilation and deployment on Windows](cpp_inference_windows.md)

### 2. Illustration

`PaddleSeg/deploy/cpp` provides users with a cross-platform C++ deployment scheme. After exporting a trained PaddleSeg model, users can quickly run predictions based on this project, or integrate the code into their own applications.
The main design objectives include the following two points:

* Cross-platform: supports compilation, secondary development, integration, and deployment on Windows and Linux
* Extensibility: supports users in developing custom data preprocessing and other logic for new models

The main directories and files are described as follows:
```
deploy/cpp
|
├── cmake            # Dependent external project cmake files (currently only yaml-cpp)
│
├── src ── test_seg.cc   # Sample code file
│
├── CMakeList.txt    # CMake compilation entry file
│
└── *.sh             # Scripts to install related packages or run samples under Linux
```