[webgpu] fix the wrong dispatch size in flash_attention #24020
+1 −1
Annotations
2 warnings (the same annotation, reported twice by Build and Test)
Build and Test: csharp/test/Microsoft.ML.OnnxRuntime.Tests.NetCoreApp/InferenceTest.netcore.cs#L81
Theory method 'CanRunInferenceOnAModelDotnetTensors' on test class 'InferenceTest' does not use parameter 'enableParallelExecution'. Use the parameter, or remove the parameter and associated data. (https://xunit.net/xunit.analyzers/rules/xUnit1026)
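For context, xUnit1026 fires when a [Theory] method declares a parameter that its body never reads, so every [InlineData] value for it is dead data. A minimal sketch of the pattern and its fix, assuming a boolean flag like the flagged enableParallelExecution (the class and RunModel helper here are hypothetical, not the actual ONNX Runtime test):

```csharp
using Xunit;

public class InferenceTestSketch
{
    // Triggers xUnit1026: 'enableParallelExecution' is declared
    // (and fed by InlineData) but never used in the body.
    [Theory]
    [InlineData(true)]
    [InlineData(false)]
    public void RunInference_Warns(bool enableParallelExecution)
    {
        RunModel(); // parameter silently ignored
    }

    // Fixed: thread the parameter into the code under test,
    // so each InlineData row actually exercises a different path.
    [Theory]
    [InlineData(true)]
    [InlineData(false)]
    public void RunInference_Fixed(bool enableParallelExecution)
    {
        RunModel(enableParallelExecution);
    }

    // Hypothetical helper standing in for session setup; a real test
    // would configure execution mode from enableParallelExecution.
    private static void RunModel(bool enableParallelExecution = false)
    {
        Assert.True(true);
    }
}
```

The alternative fix the message suggests is to delete the parameter together with its [InlineData] rows when the parallel/sequential distinction is not actually being tested.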