# Run RAGFlow with IPEX-LLM on Intel GPU

[RAGFlow](https://github.com/infiniflow/ragflow) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. By integrating it with [`ipex-llm`](https://github.com/intel-analytics/ipex-llm), users can now easily leverage local LLMs running on Intel GPU (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max).

*See the demo of RAGFlow running Qwen2-7B on an Intel Arc A770 below.*

<video src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-record.mp4" width="100%" controls></video>

## Quickstart

### 0. Prerequisites

- CPU >= 4 cores
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1 (you can verify these versions as shown below)
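
To confirm the Docker prerequisites, you can check the installed versions with the standard Docker CLI:

```bash
$ docker --version
$ docker compose version
```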

### 1. Install and Start `Ollama` Service on Intel GPU

Follow the steps in [Run Ollama with IPEX-LLM on Intel GPU Guide](./ollama_quickstart.md) to install and run Ollama on Intel GPU. Ensure that `ollama serve` is running correctly and can be accessed through a local URL (e.g., `http://127.0.0.1:11434`) or a remote URL (e.g., `http://your_ip:11434`).

```eval_rst
.. important::

   If `RAGFlow` is not deployed on the same machine where Ollama is running (i.e., `RAGFlow` needs to connect to a remote Ollama service), you must configure the Ollama service to accept connections from any IP address. To achieve this, set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.

.. tip::

   If your local LLM is running on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), it is recommended to additionally set the following environment variable for optimal performance before executing `ollama serve`:

   .. code-block:: bash

      export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```
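
For example, to serve Ollama to remote clients and verify it is reachable (a minimal sketch; `your_ip` is a placeholder for the address of the machine running Ollama):

```bash
# On the machine running Ollama: accept connections from any IP address
export OLLAMA_HOST=0.0.0.0
./ollama serve

# From the RAGFlow machine: Ollama answers "Ollama is running" if reachable
curl http://your_ip:11434
```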

### 2. Pull Model

Now we need to pull a model for RAG using Ollama. Here we take the [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) model as an example. Open a new terminal window and run the following command to pull [`qwen2:latest`](https://ollama.com/library/qwen2):

```eval_rst
.. tabs::
   .. tab:: Linux

      .. code-block:: bash

         export no_proxy=localhost,127.0.0.1
         ./ollama pull qwen2:latest

   .. tab:: Windows

      Please run the following command in Miniforge or Anaconda Prompt.

      .. code-block:: cmd

         set no_proxy=localhost,127.0.0.1
         ollama pull qwen2:latest

.. seealso::

   Besides Qwen2, there are other LLM models you might want to explore, such as Llama3, Phi3, Mistral, etc. You can find all available models in the `Ollama model library <https://ollama.com/library>`_. Simply search for the model, pull it in a similar manner, and give it a try.
```
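
To verify that the model was pulled successfully, you can list the models known to Ollama (same terminal and environment as above; on Windows, omit the `./` prefix):

```bash
./ollama list
```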

### 3. Start `RAGFlow` Service

```eval_rst
.. note::

   The steps in section 3 have been verified on Linux systems only.
```

#### 3.1 Download `RAGFlow`

You can either clone the repository or download the source zip from [GitHub](https://github.com/infiniflow/ragflow/archive/refs/heads/main.zip):

```bash
$ git clone https://github.com/infiniflow/ragflow.git
```

#### 3.2 Environment Settings

Ensure `vm.max_map_count` is set to at least 262144. To check the current value of `vm.max_map_count`, use:

```bash
$ sysctl vm.max_map_count
```

##### Changing `vm.max_map_count`

To set the value temporarily, use:

```bash
$ sudo sysctl -w vm.max_map_count=262144
```

To make the change permanent and ensure it persists after a reboot, add or update the following line in `/etc/sysctl.conf`:

```bash
vm.max_map_count=262144
```
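
To apply the `/etc/sysctl.conf` change immediately without rebooting, reload the settings and verify (standard `sysctl` usage):

```bash
$ sudo sysctl -p
$ sysctl vm.max_map_count   # should now print: vm.max_map_count = 262144
```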

#### 3.3 Start the `RAGFlow` server using Docker

Pull the pre-built Docker images and start up the server:

```eval_rst
.. note::

   Running the following commands automatically downloads the *dev* version of the RAGFlow Docker image. To download and run a specific release instead, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.7.0`, before running the following commands.
```
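
For instance, pinning the release mentioned in the note would make the relevant line in **docker/.env** look like this:

```bash
# docker/.env -- pin a specific RAGFlow release instead of the dev image
RAGFLOW_VERSION=v0.7.0
```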

```bash
$ export no_proxy=localhost,127.0.0.1
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
```

```eval_rst
.. note::

   The core image is about 9 GB in size and may take a while to load.
```
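
Once `docker compose up -d` returns, you can confirm the containers are running with the standard Docker CLI (container names may vary slightly across RAGFlow versions):

```bash
$ docker ps --format "table {{.Names}}\t{{.Status}}"
```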

Check the server status after having the server up and running:

```bash
$ docker logs -f ragflow-server
```

Upon successful deployment, you will see logs in the terminal similar to the following:

```bash
    ____                 ______ __
   / __ \ ____ _ ____ _ / ____// /____  _      __
  / /_/ // __ `// __ `// /_   / // __ \| | /| / /
 / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
              /____/

* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```

You can now open a browser and access the RAGFlow web portal. With the default settings, simply enter `http://IP_OF_YOUR_MACHINE` (without the port number), as the default HTTP serving port `80` can be omitted. If RAGFlow is deployed on the same machine as your browser, you can also access the web portal at `http://127.0.0.1` or `http://localhost`.
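
If you prefer a quick command-line check that the portal is up before opening a browser (a plain HTTP probe; adjust the host if RAGFlow runs on another machine):

```bash
$ curl -I http://127.0.0.1
```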

### 4. Using `RAGFlow`

```eval_rst
.. note::

   For detailed information about how to use RAGFlow, visit the README of the `RAGFlow official repository <https://github.com/infiniflow/ragflow>`_.
```

#### Log-in

If this is your first time using RAGFlow, you will need to register. After registering, log in with your new account to access the portal.

<div style="display: flex; gap: 5px;">
  <a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-login.png" target="_blank" style="flex: 1;">
    <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-login.png" style="width: 100%;" />
  </a>
  <a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-login2.png" target="_blank" style="flex: 1;">
    <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-login2.png" style="width: 100%;" />
  </a>
</div>

#### Configure `Ollama` service URL

Access the Ollama settings through **Settings -> Model Providers** in the menu. Fill out the **Base URL**, and then click the **OK** button at the bottom.

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-add-ollama.png" target="_blank">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-add-ollama.png" width="100%" />
</a>

If the connection is successful, you will see the model listed under **Show more models**, as illustrated below.

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-add-ollama2.png" target="_blank">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-add-ollama2.png" width="100%" />
</a>

```eval_rst
.. note::

   If you want to use an Ollama server hosted at a different URL, simply update the **Ollama Base URL** to the new URL and press the **OK** button again to re-confirm the connection to Ollama.
```
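
Keep in mind that RAGFlow itself runs inside Docker, so `localhost` inside its containers does not point at the host machine. As an illustration, typical base URLs look like the following (the `host.docker.internal` form is a general Docker convention and an assumption here, not something verified in this guide):

```bash
http://your_ip:11434               # Ollama on a remote machine (requires OLLAMA_HOST=0.0.0.0 there)
http://host.docker.internal:11434  # Ollama on the Docker host, reached from inside RAGFlow's containers
```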

#### Create Knowledge Base

Open the **Knowledge Base** page by clicking **Knowledge Base** in the top bar, then click the **+Create knowledge base** button on the right. You will be prompted to input a name for the knowledge base.

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase.png" target="_blank">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase.png" width="100%" />
</a>

#### Edit Knowledge Base

After entering a name, you will be directed to edit the knowledge base. Click on **Dataset** on the left, then click **+ Add file -> Local files**. Upload your file in the pop-up window and click **OK**.

<div style="display: flex; gap: 5px;">
  <a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase2.png" target="_blank" style="flex: 1;">
    <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase2.png" style="width: 100%;" />
  </a>
  <a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase3.png" target="_blank" style="flex: 1;">
    <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase3.png" style="width: 100%;" />
  </a>
</div>

After the upload succeeds, a new record appears in the dataset, with the _**Parsing Status**_ column showing `UNSTARTED`. Click the green start button in the _**Action**_ column to begin file parsing. Once parsing is finished, the _**Parsing Status**_ column changes to `SUCCESS`.

<div style="display: flex; gap: 5px;">
  <a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase4.png" target="_blank" style="flex: 1;">
    <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase4.png" style="width: 100%;" />
  </a>
  <a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase5.png" target="_blank" style="flex: 1;">
    <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase5.png" style="width: 100%;" />
  </a>
</div>

Next, go to **Configuration** on the left menu and click **Save** at the bottom to save the changes.

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase6.png" target="_blank">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-knowledgebase6.png" width="100%" />
</a>

#### Chat with the Model

Start new conversations by clicking **Chat** in the top navbar.

On the left side, create a conversation by clicking **Create an Assistant**. Under **Assistant Setting**, give it a name and select your knowledge bases.

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat.png" target="_blank">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat.png" width="100%" />
</a>

Next, go to **Model Setting**, choose the model you added via Ollama, and disable the **Max Tokens** toggle. Finally, click **OK** to start.

```eval_rst
.. tip::

   Enabling the **Max Tokens** toggle may result in very short answers.
```

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat2.png" target="_blank">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat2.png" width="100%" />
</a>

<br/>

Input your questions into the message textbox at the bottom (shown as **Message Resume Assistant** for the assistant in this example), and click the button on the right to get responses.

<a href="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat3.png" target="_blank">
  <img src="https://llm-assets.readthedocs.io/en/latest/_images/ragflow-chat3.png" width="100%" />
</a>

#### Exit

To exit, press **Ctrl+C** in the terminal where the RAGFlow server logs are being followed, then close your browser tab.
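
Note that the services were started in detached mode, so **Ctrl+C** only stops following the logs. To stop the RAGFlow containers themselves, you can use standard Docker Compose commands from the same directory:

```bash
$ cd ragflow/docker
$ docker compose stop
```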