# PaddleSeg High-Performance All-Scenario Model Deployment with FastDeploy

## Contents
- [Introduction to FastDeploy](#FastDeploy介绍)
- [Semantic Segmentation Model Deployment](#语义分割模型部署)
- [Matting Model Deployment](#Matting模型部署)
- [FAQ](#常见问题)

## 1. Introduction to FastDeploy
<div id="FastDeploy介绍"></div>

**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)** is an **all-scenario**, **easy-to-use and flexible**, **highly efficient** AI inference deployment toolkit supporting **cloud, edge, and device** deployment. With FastDeploy, PaddleSeg models can be deployed quickly on 10+ hardware platforms, including x86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, Kunlunxin, Ascend, Rockchip, Amlogic, and Sophgo, with support for multiple inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNX Runtime, RKNPU2, and SOPHGO.

<div align="center">

<img src="https://user-images.githubusercontent.com/31974251/219546373-c02f24b7-2222-4ad4-9b43-42b8122b898f.png" >

</div>

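As a sketch of how the backend and device flexibility above looks in code: the usual pattern is to pick a device and inference backend through `RuntimeOption`, then load the exported model. A minimal example (assuming the `fastdeploy-gpu-python` package is installed; the model paths are placeholders for files produced by PaddleSeg's export step):

```python
import fastdeploy as fd

# Configure the runtime: choose a device and an inference backend.
option = fd.RuntimeOption()
option.use_gpu(0)            # or option.use_cpu()
option.use_trt_backend()     # or use_paddle_infer_backend(), use_openvino_backend(), ...

# Load an exported PaddleSeg model (placeholder paths).
model = fd.vision.segmentation.PaddleSegModel(
    "model/model.pdmodel",
    "model/model.pdiparams",
    "model/deploy.yaml",
    runtime_option=option,
)
```

Switching hardware or backend only changes the `RuntimeOption` calls; the model loading and prediction code stays the same.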
## 2. Semantic Segmentation Model Deployment
<div id="语义分割模型部署"></div>

### 2.1 Supported Hardware

|Hardware|Supported|Guide|Python|C++|
|:---:|:---:|:---:|:---:|:---:|
|x86 CPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|NVIDIA GPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|Phytium CPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|ARM CPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|Intel GPU (integrated)|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|Intel GPU (discrete)|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|Kunlunxin|✅|[Link](semantic_segmentation/kunlun)|✅|✅|
|Ascend|✅|[Link](semantic_segmentation/ascend)|✅|✅|
|Rockchip|✅|[Link](semantic_segmentation/rockchip)|✅|✅|
|Amlogic|✅|[Link](semantic_segmentation/amlogic)|--|✅|
|Sophgo|✅|[Link](semantic_segmentation/sophgo)|✅|✅|

### 2.2 Detailed Documentation
- x86 CPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- NVIDIA GPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- Phytium CPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- ARM CPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- Intel GPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- Kunlunxin XPU
  - [Model preparation](semantic_segmentation/kunlun)
  - [Python deployment example](semantic_segmentation/kunlun/python/)
  - [C++ deployment example](semantic_segmentation/kunlun/cpp/)
- Ascend
  - [Model preparation](semantic_segmentation/ascend)
  - [Python deployment example](semantic_segmentation/ascend/python/)
  - [C++ deployment example](semantic_segmentation/ascend/cpp/)
- Rockchip
  - [Model preparation](semantic_segmentation/rockchip/)
  - [Python deployment example](semantic_segmentation/rockchip/rknpu2/)
  - [C++ deployment example](semantic_segmentation/rockchip/rknpu2/)
- Amlogic
  - [Model preparation](semantic_segmentation/amlogic/a311d/)
  - [C++ deployment example](semantic_segmentation/amlogic/a311d/cpp/)
- Sophgo
  - [Model preparation](semantic_segmentation/sophgo/)
  - [Python deployment example](semantic_segmentation/sophgo/python/)
  - [C++ deployment example](semantic_segmentation/sophgo/cpp/)

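The per-hardware guides above all follow the same end-to-end flow: load the exported model, run prediction on an image, and visualize the result. A minimal sketch (assuming `fastdeploy-python` and `opencv-python` are installed; model and image paths are placeholders):

```python
import cv2
import fastdeploy as fd

# Load an exported PaddleSeg inference model; the three files below are
# placeholder names for the output of PaddleSeg's export step.
model = fd.vision.segmentation.PaddleSegModel(
    "pp_liteseg/model.pdmodel",
    "pp_liteseg/model.pdiparams",
    "pp_liteseg/deploy.yaml",
)

im = cv2.imread("demo.jpg")
result = model.predict(im)  # SegmentationResult with per-pixel labels
# Blend the predicted mask over the input image for inspection.
vis = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis.jpg", vis)
```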
### 2.3 More Deployment Options

- [Android ARM CPU deployment](semantic_segmentation/android)
- [Serving deployment](semantic_segmentation/serving)
- [Web deployment](semantic_segmentation/web)
- [Automated model compression tool](semantic_segmentation/quantize)

## 3. Matting Model Deployment
<div id="Matting模型部署"></div>

### 3.1 Supported Hardware

|Hardware|Supported|Guide|Python|C++|
|:---:|:---:|:---:|:---:|:---:|
|x86 CPU|✅|[Link](matting/cpu-gpu)|✅|✅|
|NVIDIA GPU|✅|[Link](matting/cpu-gpu)|✅|✅|
|Phytium CPU|✅|[Link](matting/cpu-gpu)|✅|✅|
|ARM CPU|✅|[Link](matting/cpu-gpu)|✅|✅|
|Intel GPU (integrated)|✅|[Link](matting/cpu-gpu)|✅|✅|
|Intel GPU (discrete)|✅|[Link](matting/cpu-gpu)|✅|✅|
|Kunlunxin|✅|[Link](matting/kunlun)|✅|✅|
|Ascend|✅|[Link](matting/ascend)|✅|✅|

### 3.2 Detailed Documentation
- x86 CPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- NVIDIA GPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- Phytium CPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- ARM CPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- Intel GPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- Kunlunxin XPU
  - [Model preparation](matting/kunlun)
  - [Python deployment example](matting/kunlun/README.md)
  - [C++ deployment example](matting/kunlun/README.md)
- Ascend
  - [Model preparation](matting/ascend)
  - [Python deployment example](matting/ascend/README.md)
  - [C++ deployment example](matting/ascend/README.md)

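Matting deployment mirrors the segmentation flow, with a matting model class and an alpha-matte result instead of per-pixel labels. A hedged sketch (assuming `fastdeploy-python` and `opencv-python` are installed; paths and the visualization helper name are placeholders based on the FastDeploy vision API):

```python
import cv2
import fastdeploy as fd

# Load an exported PP-Matting model (placeholder paths; deploy.yaml comes
# from the model export step).
model = fd.vision.matting.PPMatting(
    "ppmatting/model.pdmodel",
    "ppmatting/model.pdiparams",
    "ppmatting/deploy.yaml",
)

im = cv2.imread("person.jpg")
result = model.predict(im)  # MattingResult holding a per-pixel alpha matte
# Render the alpha matte over the input for inspection.
vis = fd.vision.vis_matting(im, result)
cv2.imwrite("alpha_vis.jpg", vis)
```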
## 4. FAQ
<div id="常见问题"></div>

If you run into problems, check the FAQ collection, search existing FastDeploy issues, or open a new [issue](https://github.com/PaddlePaddle/FastDeploy/issues) with FastDeploy:

- [FAQ collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
- [FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)