Commit 5b5b759

Add documentation on how to generate and use Visual Studio Project for ggml-sycl

1 parent fabee87

1 file changed: +29 −0 lines changed
docs/backend/SYCL.md

@@ -468,6 +468,12 @@ b. Enable oneAPI running environment:
 "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
 ```
 
+- if you are using PowerShell, enable the runtime environment with the following:
+
+```
+cmd.exe "/K" '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'
+```
+
 c. Verify installation
 
 In the oneAPI command line, run the following to print the available SYCL devices:
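
As a point of reference for the verification step the hunk ends on: the oneAPI Base Toolkit ships a `sycl-ls` utility that prints the detected SYCL devices. A minimal check from the PowerShell session activated above might look like this (illustrative sketch, not part of the diff):

```
# Run inside the oneAPI-enabled shell started above.
# Lists the SYCL platforms/devices the runtime can see (CPU, GPU, accelerators).
sycl-ls
```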
@@ -541,6 +547,29 @@ You can use Visual Studio to open llama.cpp folder as a CMake project. Choose th
 
 - In case of a minimal experimental setup, the user can build the inference executable only through `cmake --build build --config Release -j --target llama-cli`.
 
+4. Visual Studio Project
+
+You can use Visual Studio projects to build and work on llama.cpp on Windows. You need to convert the CMake project into a `.sln` file.
+
+If you want to use the Intel C++ Compiler for the entire llama.cpp project:
+```
+cmake -B build -G "Visual Studio 17 2022" -T "Intel C++ Compiler 2025" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
+```
+
+If you want to use the Intel C++ Compiler only for ggml-sycl:
+```
+cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release -DSYCL_INCLUDE_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include" -DSYCL_LIBRARY_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\lib"
+```
+
+In both cases, after the Visual Studio solution is created, open it, right-click on `ggml-sycl`, and open its properties. In the left column, open the `C/C++` submenu and select `DPC++`. In the options window on the right, set `Enable SYCL offload` to `yes` and apply the changes.
+
+Properties -> C/C++ -> DPC++ -> Enable SYCL offload (yes)
+
+Now you can build llama.cpp with the SYCL backend as a Visual Studio project.
+
+*Notes:*
+- you can avoid specifying `SYCL_INCLUDE_DIR` and `SYCL_LIBRARY_DIR` by setting the two environment variables `SYCL_INCLUDE_DIR_HINT` and `SYCL_LIBRARY_DIR_HINT` instead.
+
 ### III. Run the inference
 
 #### Retrieve and prepare model
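
If you prefer to build the generated solution from a command line rather than inside the IDE, a hedged sketch follows. It assumes CMake placed the solution in `build\` and named it after the top-level project (e.g. `llama.cpp.sln`), which may differ on your checkout:

```
REM Illustrative sketch: build the Release configuration of the generated
REM solution with MSBuild, from a shell where setvars.bat has been run.
REM The solution file name is an assumption derived from the project() name.
msbuild build\llama.cpp.sln /p:Configuration=Release /m
```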
