
Commit 02870dc

LLM: Refine README of AutoTP-FastAPI example (#10960)
1 parent 2ebec03 commit 02870dc

File tree

6 files changed: +136 additions, -0 deletions


docs/readthedocs/source/_templates/sidebar_quicklinks.html

Lines changed: 4 additions & 0 deletions
@@ -61,6 +61,10 @@
                <li>
                    <a href="doc/LLM/Quickstart/axolotl_quickstart.html">Finetune LLM with Axolotl on Intel GPU</a>
                </li>
+                <li>
+                    <a href="doc/LLM/Quickstart/deepspeed_autotp_fastapi_quickstart.html">Run IPEX-LLM serving on Multiple Intel GPUs
+                    using DeepSpeed AutoTP and FastAPI</a>
+                </li>
            </ul>
        </li>
        <li>

docs/readthedocs/source/_toc.yml

Lines changed: 1 addition & 0 deletions
@@ -33,6 +33,7 @@ subtrees:
      - file: doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart
      - file: doc/LLM/Quickstart/fastchat_quickstart
      - file: doc/LLM/Quickstart/axolotl_quickstart
+      - file: doc/LLM/Quickstart/deepspeed_autotp_fastapi_quickstart
  - file: doc/LLM/Overview/KeyFeatures/index
    title: "Key Features"
    subtrees:

docs/readthedocs/source/doc/LLM/Quickstart/deepspeed_autotp_fastapi_quickstart.md

Lines changed: 102 additions & 0 deletions (new file)

# Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastAPI

This example demonstrates how to run IPEX-LLM serving on multiple [Intel GPUs](../README.md) by leveraging DeepSpeed AutoTP.

## Requirements

To run this example with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information. For this particular example, you will need at least two GPUs on your machine.

## Example

### 1. Install

```bash
conda create -n llm python=3.11
conda activate llm
# the below command will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# configure oneAPI environment variables
source /opt/intel/oneapi/setvars.sh
pip install git+https://github.com/microsoft/DeepSpeed.git@ed8aed5
pip install git+https://github.com/intel/intel-extension-for-deepspeed.git@0eb734b
pip install mpi4py fastapi uvicorn
conda install -c conda-forge -y gperftools=2.10 # to enable tcmalloc
```

> **Important**: IPEX 2.1.10+xpu requires Intel® oneAPI Base Toolkit version 2024.0. Please make sure you have installed the correct version.
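
Before moving on, you can optionally verify that PyTorch can see your Intel GPUs. The short check below is not part of this example's scripts; it is just a suggested sanity check, assuming the environment installed above and the XPU APIs exposed by `intel_extension_for_pytorch`:

```python
# Optional sanity check (illustrative, not part of this example): confirm
# that the XPU backend is available and that at least two GPUs are visible.
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device

print(torch.xpu.is_available())    # should print True
print(torch.xpu.device_count())    # should print 2 or more for this example
for i in range(torch.xpu.device_count()):
    print(torch.xpu.get_device_name(i))
```
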
### 2. Run tensor parallel inference on multiple GPUs

When we run the model in a distributed manner across two GPUs, the memory consumption of each GPU is only half of what it was originally, and the GPUs can work simultaneously during inference computation.
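
Under the hood, DeepSpeed AutoTP shards each transformer layer's attention and MLP weights across the participating GPUs. The sketch below is only an illustrative outline of that idea, not the example's actual serving code (which `run_llama2_7b_chat_hf_arc_2_card.sh` launches); names such as `mp_size` and `low_bit` are assumptions and may differ across DeepSpeed and IPEX-LLM versions:

```python
# Illustrative sketch of tensor-parallel loading with DeepSpeed AutoTP.
# Assumes the launcher starts one process per GPU and sets LOCAL_RANK/WORLD_SIZE.
import os
import torch
import deepspeed
from transformers import AutoModelForCausalLM
from ipex_llm import optimize_model

local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "2"))

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # YOUR_REPO_ID_OR_MODEL_PATH
    torch_dtype=torch.float16,
)

# AutoTP splits the weight matrices across world_size ranks, so each GPU
# ends up holding roughly 1/world_size of the parameters.
model = deepspeed.init_inference(model, mp_size=world_size,
                                 replace_with_kernel_inject=False)

# Apply IPEX-LLM low-bit optimization to this rank's shard, then move it
# to this rank's Intel GPU.
model = optimize_model(model.module.to("cpu"), low_bit="sym_int4")
model = model.to(f"xpu:{local_rank}")
```
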
We provide example usage for the `Llama-2-7b-chat-hf` model running on Arc A770.

Run Llama-2-7b-chat-hf on two Intel Arc A770:

```bash

# Before running this script, adjust YOUR_REPO_ID_OR_MODEL_PATH in the last line
# To change the server port, set the port parameter in the last line

# To avoid GPU OOM, adjust the --max-num-seqs and --max-num-batched-tokens parameters in the script below
bash run_llama2_7b_chat_hf_arc_2_card.sh
```

If the serving starts successfully, you should see output like this:

```bash
[0] INFO: Started server process [120071]
[0] INFO: Waiting for application startup.
[0] INFO: Application startup complete.
[0] INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```

> **Note**: You can change `NUM_GPUS` to the number of GPUs you have on your machine, and you can also specify other low-bit optimizations through `--low-bit`.
### 3. Sample Input and Output

We can use `curl` to test the serving API:

```bash
# Set http_proxy and https_proxy to null to ensure that requests are not forwarded by a proxy.
export http_proxy=
export https_proxy=

curl -X 'POST' \
  'http://127.0.0.1:8000/generate/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "What is AI?",
  "n_predict": 32
}'
```

And you should get output like this:

```json
{
  "generated_text": "What is AI? Artificial intelligence (AI) refers to the development of computer systems able to perform tasks that would normally require human intelligence, such as visual perception, speech",
  "generate_time": "0.45149803161621094s"
}
```
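
You can issue the same request from Python as well. The snippet below simply mirrors the `curl` command above with the `requests` library (the endpoint and JSON fields come from this example; everything else is illustrative):

```python
# Python equivalent of the curl call above.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/generate/",
    json={"prompt": "What is AI?", "n_predict": 32},
    timeout=300,  # the first request can be slow due to first-token latency
)
resp.raise_for_status()
result = resp.json()
print(result["generated_text"])
print("generate_time:", result["generate_time"])
```
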
**Important**: The first-token latency is much larger than the rest-token latency; you can use [our benchmark tool](https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/dev/benchmark/README.md) to obtain more details about first- and rest-token latency.
### 4. Benchmark with wrk

We use [wrk](https://github.com/wg/wrk) to test end-to-end throughput.

You can install it with:
```bash
sudo apt install wrk
```

Please change the test URL accordingly.

```bash
# Set -t/-c (threads/connections) to the concurrency level you want to test for full throughput.
wrk -t1 -c1 -d5m -s ./wrk_script_1024.lua http://127.0.0.1:8000/generate/ --timeout 1m
```
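
If wrk is not available, a rough throughput check can also be scripted in Python. The sketch below is only an illustration, assuming the server from step 2 is listening on the default port; wrk remains the recommended tool for rigorous numbers:

```python
# Rough concurrent-throughput check against the /generate/ endpoint.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://127.0.0.1:8000/generate/"
PAYLOAD = {"prompt": "What is AI?", "n_predict": 128}
CONCURRENCY = 4    # tune like wrk's -c to find full throughput
NUM_REQUESTS = 32

def one_request(_):
    r = requests.post(URL, json=PAYLOAD, timeout=600)
    r.raise_for_status()
    return r.json()

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(one_request, range(NUM_REQUESTS)))
elapsed = time.time() - start
# Approximate: assumes each request generates n_predict tokens.
print(f"{NUM_REQUESTS} requests in {elapsed:.1f}s, "
      f"~{NUM_REQUESTS * PAYLOAD['n_predict'] / elapsed:.1f} tokens/s")
```
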

docs/readthedocs/source/doc/LLM/Quickstart/index.rst

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ This section includes efficient guide to show you how to:
* `Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM <./llama3_llamacpp_ollama_quickstart.html>`_
* `Run IPEX-LLM Serving with FastChat <./fastchat_quickstart.html>`_
* `Finetune LLM with Axolotl on Intel GPU <./axolotl_quickstart.html>`_
+* `Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastAPI <./deepspeed_autotp_fastapi_quickstart.html>`_

.. |bigdl_llm_migration_guide| replace:: ``bigdl-llm`` Migration Guide
.. _bigdl_llm_migration_guide: bigdl_llm_migration.html

python/llm/example/GPU/Deepspeed-AutoTP-FastAPI/README.md

Lines changed: 18 additions & 0 deletions
@@ -38,6 +38,8 @@ Run Llama-2-7b-chat-hf on two Intel Arc A770:

# Before running this script, adjust YOUR_REPO_ID_OR_MODEL_PATH in the last line
# To change the server port, set the port parameter in the last line
+
+# To avoid GPU OOM, adjust the --max-num-seqs and --max-num-batched-tokens parameters in the script below
bash run_llama2_7b_chat_hf_arc_2_card.sh
```

@@ -82,3 +84,19 @@ And you should get output like this:
```

**Important**: The first-token latency is much larger than the rest-token latency; you can use [our benchmark tool](https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/dev/benchmark/README.md) to obtain more details about first- and rest-token latency.
+
+### 4. Benchmark with wrk
+
+We use [wrk](https://github.com/wg/wrk) to test end-to-end throughput.
+
+You can install it with:
+```bash
+sudo apt install wrk
+```
+
+Please change the test URL accordingly.
+
+```bash
+# Set -t/-c (threads/connections) to the concurrency level you want to test for full throughput.
+wrk -t1 -c1 -d5m -s ./wrk_script_1024.lua http://127.0.0.1:8000/generate/ --timeout 1m
+```

python/llm/example/GPU/Deepspeed-AutoTP-FastAPI/wrk_script_1024.lua

Lines changed: 10 additions & 0 deletions (new file)

wrk.method = "POST"
wrk.headers["accept"] = "application/json"
wrk.headers["Content-Type"] = "application/json"
wrk.body = '{"prompt": "Once upon a time, there existed a little girl who liked to have adventures. She wanted to go to places and meet new people, and have fun. However, her parents were always telling her to stay close to home, to be careful, and to avoid any danger. But the little girl was stubborn, and she wanted to see what was on the other side of the mountain. So she sneaked out of the house one night, leaving a note for her parents, and set off on her journey. As she climbed the mountain, the little girl felt a sense of excitement and wonder. She had never been this far away from home before, and she couldnt wait to see what she would find on the other side. She climbed higher and higher, her lungs burning from the thin air, until she finally reached the top of the mountain. And there, she found a beautiful meadow filled with wildflowers and a sparkling stream. The little girl danced and played in the meadow, feeling free and alive. She knew she had to return home eventually, but for now, she was content to enjoy her adventure. As the sun began to set, the little girl reluctantly made her way back down the mountain, but she knew that she would never forget her adventure and the joy of discovering something new and exciting. And whenever she felt scared or unsure, she would remember the thrill of climbing the mountain and the beauty of the meadow on the other side, and she would know that she could face any challenge that came her way, with courage and determination. She carried the memories of her journey in her heart, a constant reminder of the strength she possessed. The little girl returned home to her worried parents, who had discovered her note and anxiously awaited her arrival. They scolded her for disobeying their instructions and venturing into the unknown. But as they looked into her sparkling eyes and saw the glow on her face, their anger softened. They realized that their little girl had grown, that she had experienced something extraordinary. The little girl shared her tales of the mountain and the meadow with her parents, painting vivid pictures with her words. She spoke of the breathtaking view from the mountaintop, where the world seemed to stretch endlessly before her. She described the delicate petals of the wildflowers, vibrant hues that danced in the gentle breeze. And she recounted the soothing melody of the sparkling stream, its waters reflecting the golden rays of the setting sun. Her parents listened intently, captivated by her story. They realized that their daughter had discovered a part of herself on that journey—a spirit of curiosity and a thirst for exploration. They saw that she had learned valuable lessons about independence, resilience, and the beauty that lies beyond ones comfort zone. From that day forward, the little girls parents encouraged her to pursue her dreams and embrace new experiences. They understood that while there were risks in the world, there were also rewards waiting to be discovered. They supported her as she continued to embark on adventures, always reminding her to stay safe but never stifling her spirit. As the years passed, the little girl grew into a remarkable woman, fearlessly exploring the world and making a difference wherever she went. The lessons she had learned on that fateful journey stayed with her, guiding her through challenges and inspiring her to live life to the fullest. And so, the once timid little girl became a symbol of courage and resilience, a reminder to all who knew her that the greatest joys in life often lie just beyond the mountains we fear to climb. Her story spread far and wide, inspiring others to embrace their own journeys and discover the wonders that awaited them. In the end, the little girls adventure became a timeless tale, passed down through generations, reminding us all that sometimes, the greatest rewards come to those who dare to step into the unknown and follow their hearts. With each passing day, the little girls story continued to inspire countless individuals, igniting a spark within their souls and encouraging them to embark on their own extraordinary adventures. The tale of her bravery and determination resonated deeply with people from all walks of life, reminding them of the limitless possibilities that awaited them beyond the boundaries of their comfort zones. People marveled at the little girls unwavering spirit and her unwavering belief in the power of dreams. They saw themselves reflected in her journey, finding solace in the knowledge that they too could overcome their fears and pursue their passions. The little girl\'s story became a beacon of hope, a testament to the human spirit", "n_predict":128}'

logfile = io.open("wrk.log", "w");

response = function(status, header, body)
    logfile:write("status:" .. status .. "\n" .. body .. "\n-------------------------------------------------\n");
end
