Getting started - Trying to chat with Llama-3.2-3B on an M1 MacBook Air (CPU without ANY GPUs) #10631
Unanswered
jasonkaplan79 asked this question in Q&A
Replies: 1 comment
- This helped me achieve the same: https://zahiralam.com/blog/installing-llama-3-2-on-mac-m1-m2-and-m3-your-gateway-to-ai-power/
For context, I already searched for related "getting started" discussions and was unable to find a CLEAR set of instructions on how to get started.
I have spent several hours trying to figure this out on my own, and am finally posting to ask for help.
I have a basic MacBook Air (M1, 2020) with Apple M1 and 8GB of RAM. It has a CPU without ANY GPUs. I DO NOT want to run this using Docker.
Can llama.cpp support my hardware?
I want to run and be able to chat with: https://huggingface.co/meta-llama/Llama-3.2-3B
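For what it's worth, a chat session on CPU-only hardware might look like the sketch below, assuming llama-cli is already built or installed. Note that meta-llama/Llama-3.2-3B is a base model; the -Instruct variant is the one tuned for chat. The GGUF repo and file names here are assumptions (community conversions change), so substitute whatever GGUF you actually have:

```shell
# Hedged sketch: download an assumed community GGUF conversion, then chat.
# Repo/file names are assumptions, not verified; any 3B Q4 GGUF (~2 GB) should fit in 8 GB RAM.
huggingface-cli download bartowski/Llama-3.2-3B-Instruct-GGUF \
  Llama-3.2-3B-Instruct-Q4_K_M.gguf --local-dir .

# -cnv starts interactive conversation mode
llama-cli -m Llama-3.2-3B-Instruct-Q4_K_M.gguf -cnv
```

The quantized (Q4) file matters on an 8 GB machine: the full-precision 3B weights would not leave room for anything else.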
Here's what I tried:
Build locally on Mac CPU:
The build part worked, but I was unable to proceed. Since I couldn't find a clear set of getting-started instructions, I then jumped to trying this: Simple tutorial for beginners #1166
BUT IT DID NOT WORK (at least, I couldn't figure out what to do next)
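In case it helps others reading this, a minimal CPU-only build on macOS might look like the following sketch (GGML_METAL=OFF disables the Metal GPU backend; leaving it on is also fine since CPU fallback exists, but this matches the "no GPU" intent):

```shell
# Hedged sketch: CPU-only build of llama.cpp on macOS
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure with the Metal backend disabled, then build in Release mode
cmake -B build -DGGML_METAL=OFF
cmake --build build --config Release -j

# Binaries land in build/bin, e.g.:
./build/bin/llama-cli --version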
Install from Mac Command line: https://github.com/ggerganov/llama.cpp/blob/master/docs/install.md
The installation worked, but again, I was unable to proceed.
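If the install was done via Homebrew (per the linked install.md), the binaries should already be on PATH, so the next step is just pointing llama-cli at a GGUF model file, e.g.:

```shell
# Homebrew puts llama-cli and llama-server on PATH
brew install llama.cpp

# Sanity check the install
llama-cli --version

# Then run against a downloaded GGUF model file (path is a placeholder)
llama-cli -m ./some-model.gguf -cnv
```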
What do I do next?
MY GOAL:
I want to do 2 things:
Is this possible?