
vulkan: implement several ops relevant for ggml_opt #11769

Open · wants to merge 10 commits into base: master
Conversation

remyoudompheng (Contributor)

This PR implements several GGML opcodes which are possibly relevant for #10544 (SUM, ARGMAX, SUB, COUNT_EQUAL, OPT_STEP_ADAMW, REPEAT_BACK).
After these patches, it is possible to run test-opt using the Vulkan backend (with a few failures, possibly caused by rounding issues).

Several issues were identified in test-backend-ops:

  • SUB was not tested at all
  • REPEAT_BACK has a few cases not supported by the CPU backend (crash with -b CPU)

Several issues were identified in Vulkan CHECK_RESULTS mode:

  • RWKV_WKV6 was crashing
  • various buffers were not freed

@github-actions bot added labels on Feb 9, 2025: testing (Everything test related), Vulkan (Issues specific to the Vulkan backend), ggml (changes relating to the ggml tensor library for machine learning)
        count += uint(data_a[idx] == data_b[idx]);
    }

    atomicAdd(data_d[0], D_TYPE(count));
Collaborator
This shader crashes my Intel A770. I assume it's this atomicAdd. Maybe there is a way to avoid it?

Contributor Author
I'm not sure how to perform reduction with multiple workgroups without adding an extra buffer.
Maybe doing a single atomic per warp helps with your crash?
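The "single atomic per warp" idea can be sketched with GLSL subgroup arithmetic. This is a sketch only: the `reduce_and_add` wrapper is hypothetical, `data_d`/`D_TYPE` follow the shader above, and subgroup-arithmetic support on the target device is an assumption.

```glsl
// Reduce within a subgroup first, then issue one atomic per subgroup
// instead of one per invocation. Subgroup arithmetic is a Vulkan 1.1
// core feature, exposed in GLSL via this extension.
#extension GL_KHR_shader_subgroup_arithmetic : require

void reduce_and_add(uint count) {
    // Sum the per-invocation counts across the whole subgroup.
    uint subgroup_total = subgroupAdd(count);
    // Exactly one active invocation per subgroup performs the atomic.
    if (subgroupElect()) {
        atomicAdd(data_d[0], D_TYPE(subgroup_total));
    }
}
```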

Does it also crash with this variant: a1633e4?

Collaborator
It's surprising this crashes, because int32 atomics in compute shaders are required in Vulkan 1.0. Does it crash during compile or while executing? Maybe the compiler would handle uint better?

Collaborator
> It's surprising this crashes, because int32 atomics in compute shaders are required in Vulkan 1.0. Does it crash during compile or while executing? Maybe the compiler would handle uint better?

It crashes with a vk::DeviceLostError on execution.

> I'm not sure how to perform reduction with multiple workgroups without adding an extra buffer. Maybe doing a single atomic per warp helps with your crash?

> Does it also crash with this variant: a1633e4?

Sadly, yes. But this isn't your problem; it's probably just another driver bug in Mesa ANV.

Collaborator

@jeffbolznv left a comment
The tests all pass on my system.

}

static void ggml_vk_count_equal(ggml_backend_vk_context * ctx, vk_context& subctx, const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst, bool dryrun = false) {
    ggml_backend_tensor_memset(dst, 0, 0, ggml_nbytes(dst));
Collaborator
This looks like it'll record to a separate command buffer and run out of order. Is this intended?

Collaborator
Good catch, this is definitely not right. If a memset is needed, you have to implement an async variant of ggml_vk_buffer_memset and use that. ggml_backend_tensor_memset just comes back around to ggml_backend_vk_buffer_memset_tensor, which calls the synchronous ggml_vk_buffer_memset.
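On the device side, an async fill could plausibly be recorded into the in-flight command stream with `vkCmdFillBuffer`. The sketch below is not the actual ggml helper: the function name and the idea of passing the recording command buffer directly are assumptions; only `vkCmdFillBuffer` itself is the real Vulkan API.

```cpp
#include <vulkan/vulkan.h>

// Hypothetical async memset: records a fill into the command buffer that is
// being built for this graph, instead of submitting a synchronous transfer.
// vkCmdFillBuffer writes `data` repeated over `size` bytes; per the Vulkan
// spec, `offset` and `size` must be multiples of 4 (or size may be
// VK_WHOLE_SIZE).
static void ggml_vk_buffer_memset_async(VkCommandBuffer cmd, VkBuffer buffer,
                                        VkDeviceSize offset, VkDeviceSize size,
                                        uint32_t data) {
    vkCmdFillBuffer(cmd, buffer, offset, size, data);
}
```

Recording the fill this way keeps it ordered with the rest of the graph's commands, which is the out-of-order problem the reviewer points out.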

for (uint i2 = i12; i2 < p.ne02; i2 += p.ne12) {
    for (uint i1 = i11; i1 < p.ne01; i1 += p.ne11) {
        for (uint i0 = i10; i0 < p.ne00; i0 += p.ne10) {
            acc += data_a[i3*p.nb03 + i2*p.nb02 + i1*p.nb01 + i0*p.nb00];
Collaborator
Is get_aoffset() needed here? (I don't know)

Collaborator
Probably.

@0cc4m (Collaborator)

0cc4m commented Feb 15, 2025

The Intel crash can be ignored. Once you resolve the memset, this can be merged.

@remyoudompheng (Contributor, Author)

Thanks for the review.
Let me know if 3d506e5 is the proper way to proceed.
