Commit 94148ba

sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (#12625)
1 parent 9ac4d61 commit 94148ba

2 files changed (+91, −5 lines)


docs/backend/SYCL.md

+82 −5
````diff
@@ -475,6 +475,12 @@ b. Enable oneAPI running environment:
 "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
 ```
 
+- If you are using PowerShell, enable the runtime environment with the following:
+
+```
+cmd.exe "/K" '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'
+```
+
 c. Verify installation
 
 In the oneAPI command line, run the following to print the available SYCL devices:
````
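The device-listing command itself falls just outside this hunk. As a hedged pointer (not part of this diff), oneAPI ships the `sycl-ls` utility, which enumerates the SYCL platforms and devices visible to the runtime:

```Powershell
# Hedged pointer: run inside the activated oneAPI environment;
# sycl-ls prints the SYCL backends and devices the runtime can see.
sycl-ls
```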
````diff
@@ -505,13 +511,13 @@ You could download the release package for Windows directly, which including bin
 
 Choose one of following methods to build from source code.
 
-1. Script
+#### 1. Script
 
 ```sh
 .\examples\sycl\win-build-sycl.bat
 ```
 
-2. CMake
+#### 2. CMake
 
 On the oneAPI command line window, step into the llama.cpp main directory and run the following:
 
````
````diff
@@ -540,13 +546,84 @@ cmake --preset x64-windows-sycl-debug
 cmake --build build-x64-windows-sycl-debug -j --target llama-cli
 ```
 
-3. Visual Studio
+#### 3. Visual Studio
+
+You have two options for building llama.cpp with Visual Studio:
+- As a CMake project, using the CMake presets.
+- By generating a Visual Studio solution for the project.
+
+**Note:**
+
+All of the following commands are executed in PowerShell.
+
+##### - Open as a CMake Project
+
+You can use Visual Studio to open the `llama.cpp` folder directly as a CMake project. Before compiling, select one of the SYCL CMake presets:
 
-You can use Visual Studio to open llama.cpp folder as a CMake project. Choose the sycl CMake presets (`x64-windows-sycl-release` or `x64-windows-sycl-debug`) before you compile the project.
+- `x64-windows-sycl-release`
+
+- `x64-windows-sycl-debug`
 
````
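For reference, the full preset flow from PowerShell would look like the following hedged sketch; it assumes the release preset writes its build tree to `build-x64-windows-sycl-release`, mirroring the `build-x64-windows-sycl-debug` directory visible in the hunk header above:

```Powershell
# Hedged sketch: configure and build llama-cli with the release preset.
# Assumes the oneAPI environment is active in this session and that the
# release preset's build directory mirrors the debug one shown above.
cmake --preset x64-windows-sycl-release
cmake --build build-x64-windows-sycl-release -j --target llama-cli
```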

````diff
 *Notes:*
+- For a minimal experimental setup, you can build only the inference executable using:
+
+```Powershell
+cmake --build build --config Release -j --target llama-cli
+```
+
````
````diff
+##### - Generating a Visual Studio Solution
+
+You can use a Visual Studio solution to build and work on llama.cpp on Windows. To do so, you need to convert the CMake project into a `.sln` file.
+
+If you want to use the Intel C++ Compiler for the entire `llama.cpp` project, run the following command:
+
+```Powershell
+cmake -B build -G "Visual Studio 17 2022" -T "Intel C++ Compiler 2025" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
+```
+
+If you prefer to use the Intel C++ Compiler only for `ggml-sycl`, ensure that `ggml` and its backend libraries are built as shared libraries (i.e. `-DBUILD_SHARED_LIBS=ON`, which is the default behaviour):
+
+```Powershell
+cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release `
+    -DSYCL_INCLUDE_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include" `
+    -DSYCL_LIBRARY_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\lib"
+```
+
+If successful, the build files are written to *path/to/llama.cpp/build*.
+Open the project file **build/llama.cpp.sln** with Visual Studio.
````
````diff
+
+Once the Visual Studio solution is created, follow these steps:
+
+1. Open the solution in Visual Studio.
+
+2. Right-click on `ggml-sycl` and select **Properties**.
+
+3. In the left column, expand **C/C++** and select **DPC++**.
+
+4. In the right panel, find **Enable SYCL Offload** and set it to `Yes`.
+
+5. Apply the changes and save.
+
+*Navigation Path:*
+
+```
+Properties -> C/C++ -> DPC++ -> Enable SYCL Offload (Yes)
+```
+
+Now you can build `llama.cpp` with the SYCL backend as a Visual Studio project.
+To do this from the menu: `Build -> Build Solution`.
+Once the build completes, the final results will be in **build/Release/bin**.
+
````
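As a hedged alternative to the `Build -> Build Solution` menu item, the generated solution can also be built from the command line; `msbuild` and the switches below are standard MSBuild usage rather than part of this diff:

```Powershell
# Hedged sketch: build the generated solution without opening the IDE.
# Requires msbuild on PATH (e.g. a Visual Studio Developer PowerShell).
msbuild .\build\llama.cpp.sln /t:Build /p:Configuration=Release /p:Platform=x64
```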
````diff
+*Additional Note*
+
+- You can avoid specifying `SYCL_INCLUDE_DIR` and `SYCL_LIBRARY_DIR` in the CMake command by setting the environment variables:
+
+  - `SYCL_INCLUDE_DIR_HINT`
+
+  - `SYCL_LIBRARY_DIR_HINT`
 
````
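A hedged sketch of setting these hints in a PowerShell session before configuring; the paths assume the default oneAPI install location quoted earlier in this section:

```Powershell
# Hedged sketch: with the hint variables set for this session, the
# -DSYCL_INCLUDE_DIR / -DSYCL_LIBRARY_DIR options can be omitted.
$env:SYCL_INCLUDE_DIR_HINT = "C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include"
$env:SYCL_LIBRARY_DIR_HINT = "C:\Program Files (x86)\Intel\oneAPI\compiler\latest\lib"
cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
```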

````diff
-- In case of a minimal experimental setup, the user can build the inference executable only through `cmake --build build --config Release -j --target llama-cli`.
+- The above instructions have been tested with Visual Studio 2022 (v17) Community edition and oneAPI 2025.0. We expect them to also work with future versions if the instructions are adapted accordingly.
 
 ### III. Run the inference
 
````

ggml/src/ggml-sycl/CMakeLists.txt

+9
````diff
@@ -27,6 +27,15 @@ file(GLOB GGML_HEADERS_SYCL "*.hpp")
 file(GLOB GGML_SOURCES_SYCL "*.cpp")
 target_sources(ggml-sycl PRIVATE ${GGML_HEADERS_SYCL} ${GGML_SOURCES_SYCL})
 
+if (WIN32)
+    # To generate a Visual Studio solution, using Intel C++ Compiler for ggml-sycl is mandatory
+    if( ${CMAKE_GENERATOR} MATCHES "Visual Studio" AND NOT (${CMAKE_GENERATOR_TOOLSET} MATCHES "Intel C"))
+        set_target_properties(ggml-sycl PROPERTIES VS_PLATFORM_TOOLSET "Intel C++ Compiler 2025")
+        set(CMAKE_CXX_COMPILER "icx")
+        set(CMAKE_CXX_COMPILER_ID "IntelLLVM")
+    endif()
+endif()
+
 find_package(IntelSYCL)
 if (IntelSYCL_FOUND)
     # Use oneAPI CMake when possible
````
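The practical effect is that a plain MSVC generator invocation is now accepted, with only the `ggml-sycl` target retargeted to the Intel toolset; a hedged sketch using the same flags as the docs above, minus the `-T` toolset option:

```Powershell
# Hedged sketch: configure with the stock Visual Studio generator; the
# WIN32 branch above switches only the ggml-sycl target to the
# "Intel C++ Compiler 2025" toolset.
cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
```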
