Dependency between Int8 and VulkanMemoryModel capabilities #226
-
Follow-up from #225: I have been able to use 8-bit storage from GLSL without this dependency, so is it necessary? GLSL code:

```glsl
#version 460
#extension GL_EXT_shader_8bit_storage : require

layout(local_size_x = 64, local_size_y = 1, local_size_z = 1) in;

layout(set = 0, binding = 0) buffer Data {
    uint8_t data[];
} buf;

void main() {
    uint idx = gl_GlobalInvocationID.x;
    buf.data[idx] *= uint8_t(2);
}
```
The equivalent Rust shader:

```rust
#![no_std]

use glam::UVec3;
use spirv_std::spirv;

#[spirv(compute(threads(64, 1, 1)))]
pub fn compute_shader(
    #[spirv(global_invocation_id)] global_invocation_id: UVec3,
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] buf: &mut [u8],
) {
    let idx = (global_invocation_id.x + global_invocation_id.y + global_invocation_id.z) as usize;
    buf[idx] *= 2;
}
```

Both shaders were tested using vulkano-rs/vulkano, which offers safe bindings on top of Vulkan. Creating the shader module from the Rust shader fails with:

```
Error: a validation error occurred

Caused by:
    create_info.code: uses the SPIR-V capability `VulkanMemoryModel` -- requires one of: device feature `vulkan_memory_model` (Vulkan VUIDs: VUID-VkShaderModuleCreateInfo-pCode-08742)
```

Enabling it, I get:

```
Error: a validation error occurred

Caused by:
    create_info.enabled_features: contains `vulkan_memory_model`, but this feature is not supported by the physical device
```
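The second validation error means the feature was requested at device creation on a GPU that does not support it. A minimal sketch of how this might be handled with vulkano, requesting `vulkan_memory_model` only when the physical device reports support (struct and field names assume a recent vulkano release such as ~0.34, and `physical_device` / `queue_family_index` are assumed to come from earlier setup; untested):

```rust
// Sketch: enable `vulkan_memory_model` only if the physical device
// supports it, falling back to no extra features otherwise
// (e.g. when running through a translation layer such as MoltenVK).
use vulkano::device::{Device, DeviceCreateInfo, Features, QueueCreateInfo};

let wanted = Features {
    vulkan_memory_model: true,
    ..Features::empty()
};

let enabled_features = if physical_device.supported_features().contains(&wanted) {
    wanted
} else {
    Features::empty()
};

let (device, queues) = Device::new(
    physical_device,
    DeviceCreateInfo {
        queue_create_infos: vec![QueueCreateInfo {
            queue_family_index,
            ..Default::default()
        }],
        enabled_features,
        ..Default::default()
    },
)?;
```

Note that falling back to `Features::empty()` only avoids the second error; a shader module whose SPIR-V declares `VulkanMemoryModel` will still be rejected, so on such devices the shader itself has to be compiled for a different memory model.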
-
Hmm, this appears to be set depending on the target, so when you specify a Vulkan target it (naturally) targets the Vulkan memory model. There is a way to use the GLSL memory model, but you have to specify an OpenGL target. Here are the OpenGL targets:

rust-gpu/docs/src/platform-support.md Line 48 in 698f10a

Are you using a device that supports Vulkan natively, or are you going through the MoltenVK translation layer? I'm again afraid I haven't touched this part of the project, but happy to help figure out what is going on.
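For reference, the target is normally passed to `SpirvBuilder` in the shader crate's `build.rs`. A sketch of selecting an OpenGL target so the compiler uses the GLSL450 memory model instead of Vulkan's (the crate path `"shaders"` is a placeholder, and the target name is taken from the platform-support list linked above; untested):

```rust
// build.rs sketch: compile the shader crate for an OpenGL target,
// which uses the GLSL450 memory model rather than VulkanMemoryModel.
use spirv_builder::SpirvBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    SpirvBuilder::new("shaders", "spirv-unknown-opengl4.5").build()?;
    Ok(())
}
```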
I see. The translation layer does not implement certain things. It does seem weird to provide a target and then not implement a memory model for it.
Anyway, for anyone else looking to use these features: on macOS I set:

For an Nvidia GPU, I use: