[KleidiAI] Always attempt activation packing #13232
base: gh/mcr229/49/head
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13232
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure: As of commit 1c33efb with merge base a84b3c9, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Sounds good. Perf numbers? IIRC you said this didn't result in a perf uplift? Stamping if I am remembering it wrong.
Collecting them now. I realized I was running with debug mode on, so the perf numbers weren't representative.
Currently, we only leverage KleidiAI kernels for dynamically quantized activations with 4-bit blockwise weights on linear layers. This has been very successful for our LLM prefill performance.
However, KleidiAI has also been integrated into other XNNPACK kernels, specifically those for 4-bit channelwise and 8-bit channelwise weights. We should attempt to use its kernels for these linear schemes as well. This should benefit some of our example models, such as:
And, in general, other models that can use 8-bit channelwise quantization. (We don't support 4-bit channelwise quantization at the moment.)
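As a rough illustration of the 8-bit channelwise path, the sketch below shows how a linear model might be dynamically quantized with per-channel int8 weights and lowered through the XNNPACK delegate, which is where the QD8-QC8W GEMMs (and, with this change, KleidiAI's packed variants) get selected. This is a minimal sketch and not code from this PR; the quantizer and export helper module paths are assumptions and vary across ExecuTorch/PyTorch versions.

```python
# Minimal sketch (not from this PR): dynamic activation quantization with
# 8-bit per-channel weights, lowered through the XNNPACK delegate.
# Module paths and export helpers below are assumptions; they differ between releases.
import torch
from torch.export import export, export_for_training  # export_for_training name is version-dependent
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
from executorch.exir import to_edge_transform_and_lower

model = torch.nn.Linear(768, 3072).eval()          # hypothetical ViT-sized linear layer
example_inputs = (torch.randn(1, 196, 768),)

# QD8 activations + QC8W weights: dynamic, symmetric, per-channel int8.
quantizer = XNNPACKQuantizer()
quantizer.set_global(
    get_symmetric_quantization_config(is_per_channel=True, is_dynamic=True)
)

graph = export_for_training(model, example_inputs).module()
prepared = prepare_pt2e(graph, quantizer)
prepared(*example_inputs)                          # calibration pass
quantized = convert_pt2e(prepared)

executorch_program = to_edge_transform_and_lower(
    export(quantized, example_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()
```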
Performance
Android S24 (6 Threads) (10 Runs)
On the Android S24, we see a nice perf uplift using KleidiAI's activation packing and QD8-QC8W GEMM kernels. Specifically, on the ViT model we see ~8% improvement (58.61ms --> 53.69ms). You can see the difference in GEMM performance by looking at the operator profiling below.
Consider event 834. This is a Fully Connected Layer:
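For reference, the per-event numbers come from operator-level profiling; below is a minimal sketch of pulling such timings with the ExecuTorch devtools Inspector (file names are hypothetical).

```python
# Minimal sketch (assumed file names): load an ETDump and print per-event
# latencies, which is where the Fully Connected timings (e.g. event 834)
# can be compared between the KleidiAI and non-KleidiAI runs.
from executorch.devtools import Inspector

inspector = Inspector(
    etdump_path="vit_kleidiai.etdp",   # hypothetical profiling dump from the device run
    etrecord="vit.etrecord",           # hypothetical ETRecord for symbolication
)
inspector.print_data_tabular()         # table of events with average latency over runs
```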
Profiles:
MacBook (6 Threads) (10 Runs)
On the MacBook, we see a different story. With KleidiAI, we see a dip in perf (49.32ms --> 56.53ms), which is around a ~14% drop.
Let's take a look at the Fully Connected layers again, specifically event 834:
This is a 24% dip in GEMM performance!
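As a quick sanity check on the end-to-end percentages quoted above (the GEMM-level numbers live in the attached profiles), the reported deltas can be reproduced from the average latencies:

```python
# Reproduce the reported end-to-end deltas from the average latencies above.
def pct_change(before_ms: float, after_ms: float) -> float:
    return (after_ms - before_ms) / before_ms * 100.0

print(pct_change(58.61, 53.69))  # Android S24 ViT: ~ -8.4% (uplift)
print(pct_change(49.32, 56.53))  # MacBook:         ~ +14.6% (regression)
```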
Profiles