In the oneAPI command line, run the following to print the available SYCL devices:
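
Assuming the oneAPI environment has been initialized (so the tool is on `PATH`), the standard oneAPI utility for listing SYCL devices is `sycl-ls`:

```
sycl-ls
```

It prints one line per detected device, grouped by backend.
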
You can use Visual Studio to open the llama.cpp folder as a CMake project.
- For a minimal experimental setup, you can build just the inference executable with `cmake --build build --config Release -j --target llama-cli` (see the sketch after this list).
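
As a rough sketch of that minimal flow, reusing the SYCL flags from the configurations below (assuming the oneAPI environment is initialized; additional compiler-selection options, such as a generator or toolset as shown below, may be required on your setup):

```
cmake -B build -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j --target llama-cli
```
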
4. Visual Studio Project

You can use Visual Studio projects to build and work on llama.cpp on Windows. You need to convert the CMake project into a `.sln` file first.

If you want to use the Intel C++ Compiler for the entire llama.cpp project:
```
cmake -B build -G "Visual Studio 17 2022" -T "Intel C++ Compiler 2025" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
```

If you want to use the Intel C++ Compiler only for `ggml-sycl`:
```
cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release -DSYCL_INCLUDE_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include" -DSYCL_LIBRARY_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\lib"
```
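
The difference between the two invocations: the `-T "Intel C++ Compiler 2025"` toolset applies to every project in the generated solution, whereas the second form keeps the default MSVC toolset and uses `SYCL_INCLUDE_DIR`/`SYCL_LIBRARY_DIR` to point the SYCL-specific code at the oneAPI headers and libraries.
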
In both cases, after the Visual Studio solution is created, open it, right-click on `ggml-sycl`, and open its properties. In the left column, open the `C/C++` submenu and select `DPC++`. In the options window on the right, set `Enable SYCL offload` to `yes` and apply the changes.
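
Once the properties are saved, you can also rebuild the solution from the command line; for Visual Studio generators, `cmake --build` drives MSBuild under the hood:

```
cmake --build build --config Release
```

Note that re-running the CMake configure step regenerates the project files and may discard properties set manually in the IDE.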