This library provides a way to preprocess WGSL shader code with features like file inclusion (native only), macro definitions, and conditional compilation. It is inspired by the C/C++ preprocessor but tailored for WGSL. It is written in C++ and can be used in native applications as well as in web applications via WebAssembly.
https://reeselevine.github.io/pre-wgsl/
- Support for:
  - `#include` - Include other shader files (currently native only)
  - `#ifdef`/`#ifndef` - Conditional compilation
  - `#if`/`#elif`/`#else` - Expression-based conditions
    - Expressions can use boolean logic and integer arithmetic, as well as a special `defined(MACRO_NAME)` operator
  - `#define` - Define macros with or without values
    - Supports `\` line continuation for multi-line directives
  - `#undef` - Undefine macros
- Macro expansion in code
  - Macro expansion is recursive (e.g., `#define Z (X / Y)` expands using `X` and `Y`).
  - Note: WGSL uses syntax like `4u` for type suffixes, but this preprocessor will not correctly expand a macro like `MACRO_NAMEu`.
- Pass macros globally or per-shader process call
Include `include/pre-wgsl.hpp` as a header-only library in your C++ project. Just copy the file, or see `examples/cli` for CMake integration.
```cpp
#include "pre_wgsl.hpp"

Preprocessor preprocessor;
std::vector<std::string> macros = {"MY_MACRO=42"};
std::string shaderCode = R"(
@compute @workgroup_size(1)
fn main() {
    let value = {{MY_MACRO}}u;
}
)";
std::string processed = preprocessor.preprocess(shaderCode, macros);
```

You can also expand only `#include` directives (no macro or conditional processing):

```cpp
std::string expanded = preprocessor.preprocess_includes(shaderCode);
```

For a full demo see `examples/cli`.
Via NPM:
```shell
npm install pre-wgsl
```

```ts
import { createPreprocessor } from 'pre-wgsl';

const preprocessor = await createPreprocessor({
  macros: ['MY_MACRO=42']
});

const source = `
@compute @workgroup_size(1)
fn main() {
    let value = {{MY_MACRO}}u;
}
`;

const processed = preprocessor.preprocess(source);
```

You can also expand only `#include` directives (no macro or conditional processing):

```ts
const expanded = preprocessor.preprocess_includes(source);
```

For a full demo see `examples/web`.
WGSL made the deliberate decision to not include a preprocessor in the core language specification (at least for now): gpuweb/gpuweb#568. Without getting into the pros and cons of this decision, this has led to the development of several open-source third-party preprocessors. Likely, many projects implement custom preprocessing solutions within their codebases as well.
To the best of my knowledge, here are the existing open-source WGSL preprocessors:
- wgsl-preprocessor: A JavaScript-based preprocessor with basic conditional preprocessing; it uses template literals for macro expansion.
- wgsl-template: Another JavaScript-based preprocessor, which supports a different syntax for macros and can also generate C++ code to embed processed WGSL shaders.
- wgsl-plus: Yet another JavaScript-based preprocessor with very similar syntax, plus a built-in obfuscator/minifier/prettifier.
- WESL: An extended version of WGSL with support for imports and conditional translation via extensions to WGSL's "@" attributes. It also supports packaging shader libraries for reuse. Implementations exist in Rust and JavaScript; as far as I can tell, the two are independent, meaning they must be kept in sync manually.
- wgsl_preprocessor: A Rust-based preprocessor with support for includes and macros.
- wgsl-macro: Another Rust-based preprocessor.
- naga_oil: A Rust-based preprocessor built specifically for the Bevy game engine.
Oof, that's a lot.

However, none of these fit my use case: defining many variants of WGSL shaders for my work on llama.cpp, a C++ project. And I did not want to depend on a JavaScript or Rust toolchain to preprocess my shaders.

The other benefit of a C++ preprocessor is that the same code can be compiled to WebAssembly and used in web applications too. Of course, this could be done with Rust, but since llama.cpp and Dawn are C++ projects, I wanted a C++ solution. I think the competition between Dawn and wgpu, and between llama.cpp and candle/burn, is good; may the best framework win :).
- CMake 3.17+
For building the WebAssembly module:
- Emscripten SDK
- Node.js 16+
```shell
# Build the native library
mkdir build && cd build
cmake ..

# Run tests
tests/pre_wgsl_tests

# Or with ctest
ctest
```

```shell
# Build the WASM module and TypeScript
npm run build
```

Include another WGSL file:
```
#include "common.wgsl"
#include "utils.wgsl"
```

Define a macro:

```
#define PI 3.14159
#define WORKGROUP_SIZE 256
#define ENABLED
```

Multi-line macros using `\` line continuation:
```
#define MY_CODE \
    let a : i32 = 32; \
    let b : i32 = 42;
```

Undefine a macro:

```
#define DEBUG
#undef DEBUG
```

Conditional compilation based on whether a macro is defined:
```
#define DEBUG

#ifdef DEBUG
// This code is included
#endif

#ifndef RELEASE
// This code is included
#endif
```

Expression-based conditions:
```
#define VERSION 2

#if VERSION == 1
// Version 1 code
#elif VERSION == 2
// Version 2 code
#else
// Other versions
#endif
```

Supported operators: `==`, `!=`, `<`, `>`, `<=`, `>=`, `&&`, `||`, `+`, `-`, `*`, `/`, `%`, `!`, `<<`, `>>`
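As an illustration, several of these operators can be combined in one condition. This sketch assumes a C-style expression grammar with parentheses (which the operator list suggests); the macro name and values are made up:

```
#define WORKGROUP_SIZE 256

#if (WORKGROUP_SIZE % 64 == 0) && ((WORKGROUP_SIZE >> 2) >= 32)
// Included: 256 is a multiple of 64, and 256 >> 2 is 64, which is >= 32
#endif
```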
Check if a macro is defined in expressions:

```
#if defined(FEATURE_A) && defined(FEATURE_B)
// Both features enabled
#endif
```

MIT
Contributions are welcome! Feel free to open an issue/PR.