Commit a2a35fd (parent 2f78afc)

Update portable zip link (#13098)

* update portable zip link
* update CN
* address comments
* update latest updates
* revert

10 files changed: +20 −20 lines changed
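
The change itself is mechanical: each modified line swaps the old `intel/ipex-llm` release URL for the new `ipex-llm/ipex-llm` org (the Ollama quickstarts additionally move to the `v2.3.0-nightly` tag). A hedged sketch of how such a bulk rewrite could be scripted, demonstrated on a throwaway file rather than the real repo; note that this blanket pattern would also rewrite the `v2.2.0-nightly` links, which the commit deliberately left untouched:

```shell
# Demo of the org rename on a temporary file; against the real repo you
# would target README*.md and docs/mddocs/ instead. GNU sed assumed
# (BSD/macOS sed needs `sed -i ''`).
tmp=$(mktemp)
echo 'see https://github.com/intel/ipex-llm/releases/tag/v2.2.0' > "$tmp"
sed -i 's#github.com/intel/ipex-llm/releases/tag/#github.com/ipex-llm/ipex-llm/releases/tag/#g' "$tmp"
cat "$tmp"   # prints: see https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0
rm -f "$tmp"
```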

README.md (+1 −1)

@@ -9,7 +9,7 @@
 > - ***70+ models** have been optimized/verified on `ipex-llm` (e.g., Llama, Phi, Mistral, Mixtral, DeepSeek, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V and more), with state-of-art **LLM optimizations**, **XPU acceleration** and **low-bit (FP8/FP6/FP4/INT4) support**; see the complete list [here](#verified-models).*
 
 ## Latest Update 🔥
-- [2025/04] We released `ipex-llm 2.2.0`, which includes [Ollama Portable Zip and llama.cpp Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0).
+- [2025/04] We released `ipex-llm 2.2.0`, which includes [Ollama Portable Zip and llama.cpp Portable Zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0).
 - [2025/03] We added support for **Gemma3** model in the latest [llama.cpp Portable Zip](https://github.com/intel/ipex-llm/issues/12963#issuecomment-2724032898).
 - [2025/03] We can now run **DeepSeek-R1-671B-Q4_K_M** with 1 or 2 Arc A770 on Xeon using the latest [llama.cpp Portable Zip](docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.md#flashmoe-for-deepseek-v3r1).
 - [2025/02] We added support of [llama.cpp Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) for Intel **GPU** (both [Windows](docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.md#windows-quickstart) and [Linux](docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.md#linux-quickstart)) and **NPU** ([Windows](docs/mddocs/Quickstart/llama_cpp_npu_portable_zip_quickstart.md) only).

README.zh-CN.md (+1 −1)

@@ -9,7 +9,7 @@
 > - ***70+** 模型已经在 `ipex-llm` 上得到优化和验证(如 Llama, Phi, Mistral, Mixtral, DeepSeek, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V 等), 以获得先进的 **大模型算法优化**, **XPU 加速** 以及 **低比特(FP8/FP6/FP4/INT4)支持**;更多模型信息请参阅[这里](#模型验证)*
 
 ## 最新更新 🔥
-- [2025/04] 发布 `ipex-llm 2.2.0`, 其中包括 [Ollama Portable Zip 和 llama.cpp Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0)。
+- [2025/04] 发布 `ipex-llm 2.2.0`, 其中包括 [Ollama Portable Zip 和 llama.cpp Portable Zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0)。
 - [2025/03] 通过最新 [llama.cpp Portable Zip](https://github.com/intel/ipex-llm/issues/12963#issuecomment-2724032898) 可运行 **Gemma3** 模型。
 - [2025/03] 使用最新 [llama.cpp Portable Zip](docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.zh-CN.md#flashmoe-运行-deepseek-v3r1), 可以在 Xeon 上通过1到2张 Arc A770 GPU 运行 **DeepSeek-R1-671B-Q4_K_M**。
 - [2025/02] 新增 [llama.cpp Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) 在 Intel **GPU** (包括 [Windows](docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.zh-CN.md#windows-用户指南) 和 [Linux](docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.zh-CN.md#linux-用户指南)) 和 **NPU** (仅 [Windows](docs/mddocs/Quickstart/llama_cpp_npu_portable_zip_quickstart.zh-CN.md)) 上直接**免安装运行 llama.cpp**。

docs/mddocs/Quickstart/llama_cpp_npu_portable_zip_quickstart.md (+2 −2)

@@ -3,7 +3,7 @@
 <b>< English</b> | <a href='./llama_cpp_npu_portable_zip_quickstart.zh-CN.md'>中文</a> >
 </p>
 
-IPEX-LLM provides llama.cpp support for running GGUF models on Intel NPU. This guide demonstrates how to use [llama.cpp NPU portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0) to directly run on Intel NPU (without the need of manual installations).
+IPEX-LLM provides llama.cpp support for running GGUF models on Intel NPU. This guide demonstrates how to use [llama.cpp NPU portable zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0) to directly run on Intel NPU (without the need of manual installations).
 
 > [!IMPORTANT]
 >

@@ -29,7 +29,7 @@ Check your NPU driver version, and update it if needed:
 
 ## Step 1: Download and Unzip
 
-Download IPEX-LLM llama.cpp NPU portable zip for Windows users from the [link](https://github.com/intel/ipex-llm/releases/tag/v2.2.0).
+Download IPEX-LLM llama.cpp NPU portable zip for Windows users from the [link](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0).
 
 Then, extract the zip file to a folder.
 

docs/mddocs/Quickstart/llama_cpp_npu_portable_zip_quickstart.zh-CN.md (+2 −2)

@@ -3,7 +3,7 @@
 < <a href='./llama_cpp_npu_portable_zip_quickstart.md'>English</a> | <b>中文</b> >
 </p>
 
-IPEX-LLM 提供了 llama.cpp 的相关支持以在 Intel NPU 上运行 GGUF 模型。本指南演示如何使用 [llama.cpp NPU portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0) 在 Intel NPU 上直接免安装运行。
+IPEX-LLM 提供了 llama.cpp 的相关支持以在 Intel NPU 上运行 GGUF 模型。本指南演示如何使用 [llama.cpp NPU portable zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0) 在 Intel NPU 上直接免安装运行。
 
 > [!IMPORTANT]
 >

@@ -29,7 +29,7 @@ IPEX-LLM 提供了 llama.cpp 的相关支持以在 Intel NPU 上运行 GGUF 模
 
 ## 步骤 1:下载和解压
 
-从此[链接](https://github.com/intel/ipex-llm/releases/tag/v2.2.0)下载 IPEX-LLM llama.cpp NPU portable zip。
+从此[链接](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0)下载 IPEX-LLM llama.cpp NPU portable zip。
 
 然后,将 zip 文件解压到一个文件夹中。
 

docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.md (+3 −3)

@@ -6,7 +6,7 @@
 >[!Important]
 > You can now run **DeepSeek-R1-671B-Q4_K_M** with 1 or 2 Arc A770 on Xeon using the latest *llama.cpp Portable Zip*; see the [guide](#flashmoe-for-deepseek-v3r1) below.
 
-This guide demonstrates how to use [llama.cpp portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0) to directly run llama.cpp on Intel GPU with `ipex-llm` (without the need of manual installations).
+This guide demonstrates how to use [llama.cpp portable zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0) to directly run llama.cpp on Intel GPU with `ipex-llm` (without the need of manual installations).
 
 > [!NOTE]
 > llama.cpp portable zip has been verified on:

@@ -42,7 +42,7 @@ We recommend updating your GPU driver to the [latest](https://www.intel.com/cont
 
 ### Step 1: Download and Unzip
 
-Download IPEX-LLM llama.cpp portable zip for Windows users from the [link](https://github.com/intel/ipex-llm/releases/tag/v2.2.0).
+Download IPEX-LLM llama.cpp portable zip for Windows users from the [link](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0).
 
 Then, extract the zip file to a folder.
 

@@ -126,7 +126,7 @@ Check your GPU driver version, and update it if needed; we recommend following [
 
 ### Step 1: Download and Extract
 
-Download IPEX-LLM llama.cpp portable tgz for Linux users from the [link](https://github.com/intel/ipex-llm/releases/tag/v2.2.0).
+Download IPEX-LLM llama.cpp portable tgz for Linux users from the [link](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0).
 
 Then, extract the tgz file to a folder.
 

docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.zh-CN.md (+3 −3)

@@ -3,7 +3,7 @@
 < <a href='./llamacpp_portable_zip_gpu_quickstart.md'>English</a> | <b>中文</b> >
 </p>
 
-本指南演示如何使用 [llama.cpp portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0) 通过 `ipex-llm` 在 Intel GPU 上直接免安装运行。
+本指南演示如何使用 [llama.cpp portable zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0) 通过 `ipex-llm` 在 Intel GPU 上直接免安装运行。
 
 > [!Important]
 > 使用最新版 *llama.cpp Portable Zip* 可以在 Xeon 上通过1到2张 Arc A770 GPU 运行 **DeepSeek-R1-671B-Q4_K_M**;详见如下[指南](#flashmoe-运行-deepseek-v3r1)

@@ -42,7 +42,7 @@
 
 ### 步骤 1:下载与解压
 
-对于 Windows 用户,请从此[链接](https://github.com/intel/ipex-llm/releases/tag/v2.2.0)下载 IPEX-LLM llama.cpp portable zip。
+对于 Windows 用户,请从此[链接](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0)下载 IPEX-LLM llama.cpp portable zip。
 
 然后,将 zip 文件解压到一个文件夹中。
 

@@ -128,7 +128,7 @@ llama_perf_context_print: total time = xxxxx.xx ms / 1385 tokens
 
 ### 步骤 1:下载与解压
 
-对于 Linux 用户,从此[链接](https://github.com/intel/ipex-llm/releases/tag/v2.2.0)下载 IPEX-LLM llama.cpp portable tgz。
+对于 Linux 用户,从此[链接](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0)下载 IPEX-LLM llama.cpp portable tgz。
 
 然后,将 tgz 文件解压到一个文件夹中。
 

docs/mddocs/Quickstart/ollama_portable_zip_quickstart.md (+3 −3)

@@ -3,7 +3,7 @@
 <b>< English</b> | <a href='./ollama_portable_zip_quickstart.zh-CN.md'>中文</a> >
 </p>
 
-This guide demonstrates how to use [Ollama portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0) to directly run Ollama on Intel GPU with `ipex-llm` (without the need of manual installations).
+This guide demonstrates how to use [Ollama portable zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly) to directly run Ollama on Intel GPU with `ipex-llm` (without the need of manual installations).
 
 > [!NOTE]
 > Ollama portable zip has been verified on:

@@ -43,7 +43,7 @@ We recommend updating your GPU driver to the [latest](https://www.intel.com/cont
 
 ### Step 1: Download and Unzip
 
-Download IPEX-LLM Ollama portable zip for Windows users from the [link](https://github.com/intel/ipex-llm/releases/tag/v2.2.0).
+Download IPEX-LLM Ollama portable zip for Windows users from the [link](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly).
 
 Then, extract the zip file to a folder.
 

@@ -74,7 +74,7 @@ Check your GPU driver version, and update it if needed; we recommend following [
 
 ### Step 1: Download and Extract
 
-Download IPEX-LLM Ollama portable tgz for Ubuntu users from the [link](https://github.com/intel/ipex-llm/releases/tag/v2.2.0).
+Download IPEX-LLM Ollama portable tgz for Ubuntu users from the [link](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly).
 
 Then open a terminal, extract the tgz file to a folder.
 

docs/mddocs/Quickstart/ollama_portable_zip_quickstart.zh-CN.md (+3 −3)

@@ -3,7 +3,7 @@
 < <a href='./ollama_portable_zip_quickstart.md'>English</a> | <b>中文</b> >
 </p>
 
-本指南演示如何使用 [Ollama portable zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0) 通过 `ipex-llm` 在 Intel GPU 上直接免安装运行 Ollama。
+本指南演示如何使用 [Ollama portable zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly) 通过 `ipex-llm` 在 Intel GPU 上直接免安装运行 Ollama。
 
 > [!NOTE]
 > Ollama portable zip 在如下设备上进行了验证:

@@ -43,7 +43,7 @@
 
 ### 步骤 1:下载和解压
 
-从此[链接](https://github.com/intel/ipex-llm/releases/tag/v2.2.0)下载 IPEX-LLM Ollama portable zip。
+从此[链接](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly)下载 IPEX-LLM Ollama portable zip。
 
 然后,将 zip 文件解压到一个文件夹中。
 

@@ -76,7 +76,7 @@
 
 ### 步骤 1:下载和解压
 
-从此[链接](https://github.com/intel/ipex-llm/releases/tag/v2.2.0)下载 IPEX-LLM Ollama portable tgz。
+从此[链接](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly)下载 IPEX-LLM Ollama portable tgz。
 
 然后,开启一个终端,输入如下命令将 tgz 文件解压到一个文件夹中。
 ```bash

docs/mddocs/README.md (+1 −1)

@@ -6,7 +6,7 @@
 **`IPEX-LLM`** is an LLM acceleration library for Intel [GPU](Quickstart/install_windows_gpu.md) *(e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max)*, [NPU](Quickstart/npu_quickstart.md) and CPU [^1].
 
 ## Latest Update 🔥
-- [2025/04] We released `ipex-llm 2.2.0`, which includes [Ollama Portable Zip and llama.cpp Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0).
+- [2025/04] We released `ipex-llm 2.2.0`, which includes [Ollama Portable Zip and llama.cpp Portable Zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0).
 - [2025/03] We can now run **DeepSeek-R1-671B-Q4_K_M** with 1 or 2 Arc A770 on Xeon using the latest [llama.cpp Portable Zip](Quickstart/llamacpp_portable_zip_gpu_quickstart.md#flashmoe-for-deepseek-v3r1).
 - [2025/02] We added support of [llama.cpp Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) for Intel **GPU** (both [Windows](Quickstart/llamacpp_portable_zip_gpu_quickstart.md#windows-quickstart) and [Linux](Quickstart/llamacpp_portable_zip_gpu_quickstart.md#linux-quickstart)) and **NPU** ([Windows](Quickstart/llama_cpp_npu_portable_zip_quickstart.md) only).
 - [2025/02] We added support of [Ollama Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) to directly run Ollama on Intel **GPU** for both [Windows](Quickstart/ollama_portable_zip_quickstart.md#windows-quickstart) and [Linux](Quickstart/ollama_portable_zip_quickstart.md#linux-quickstart) (***without the need of manual installations***).

docs/mddocs/README.zh-CN.md (+1 −1)

@@ -6,7 +6,7 @@
 **`ipex-llm`** 是一个将大语言模型高效地运行于 Intel [GPU](docs/mddocs/Quickstart/install_windows_gpu.md) *(如搭载集成显卡的个人电脑,Arc 独立显卡、Flex 及 Max 数据中心 GPU 等)*、[NPU](docs/mddocs/Quickstart/npu_quickstart.md) 和 CPU 上的大模型 XPU 加速库[^1]。
 
 ## 最新更新 🔥
-- [2025/04] 发布 `ipex-llm 2.2.0`, 其中包括 [Ollama Portable Zip 和 llama.cpp Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0)。
+- [2025/04] 发布 `ipex-llm 2.2.0`, 其中包括 [Ollama Portable Zip 和 llama.cpp Portable Zip](https://github.com/ipex-llm/ipex-llm/releases/tag/v2.2.0)。
 - [2025/03] 使用最新 [llama.cpp Portable Zip](Quickstart/llamacpp_portable_zip_gpu_quickstart.zh-CN.md#flashmoe-运行-deepseek-v3r1), 可以在 Xeon 上通过1到2张 Arc A770 GPU 运行 **DeepSeek-R1-671B-Q4_K_M**。
 - [2025/02] 新增 [llama.cpp Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) 在 Intel **GPU** (包括 [Windows](Quickstart/llamacpp_portable_zip_gpu_quickstart.zh-CN.md#windows-用户指南) 和 [Linux](Quickstart/llamacpp_portable_zip_gpu_quickstart.zh-CN.md#linux-用户指南)) 和 **NPU** (仅 [Windows](Quickstart/llama_cpp_npu_portable_zip_quickstart.zh-CN.md)) 上直接**免安装运行 llama.cpp**。
 - [2025/02] 新增 [Ollama Portable Zip](https://github.com/intel/ipex-llm/releases/tag/v2.2.0-nightly) 在 Intel **GPU** 上直接**免安装运行 Ollama** (包括 [Windows](Quickstart/ollama_portable_zip_quickstart.zh-CN.md#windows用户指南) 和 [Linux](Quickstart/ollama_portable_zip_quickstart.zh-CN.md#linux用户指南))。
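
Several context lines in the diffs above still reference the old `intel/ipex-llm` org (notably the `v2.2.0-nightly` and issue links), so further link updates of this kind may follow. As a hedged sketch (a hypothetical helper script, not part of the ipex-llm repo), the remaining markdown lines pointing at the old org could be enumerated like this:

```python
import pathlib
import re

# Hypothetical helper, not part of the ipex-llm repo: list every *.md line
# that still links to the old intel/ipex-llm GitHub org.
OLD_ORG = re.compile(r"github\.com/intel/ipex-llm")

def stale_links(root: str = "."):
    """Yield (path, line_no, line) for each markdown line on the old org."""
    for path in sorted(pathlib.Path(root).rglob("*.md")):
        for no, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
            if OLD_ORG.search(line):
                yield str(path), no, line.strip()

if __name__ == "__main__":
    for path, no, line in stale_links():
        print(f"{path}:{no}: {line}")
```

Run from the repo root after this commit, it would be expected to still report the `v2.2.0-nightly` entries that were left on the old org.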
