
Commit e750735: first commit
1 parent 9b4a15b

File tree: 4 files changed, +94 -241 lines

Makefile (+4)

@@ -195,6 +195,10 @@ main: main.cpp ggml.o utils.o
 	$(CXX) $(CXXFLAGS) main.cpp ggml.o utils.o -o main $(LDFLAGS)
 	./main -h
 
+chat: chat.cpp ggml.o utils.o
+	$(CXX) $(CXXFLAGS) chat.cpp ggml.o utils.o -o chat $(LDFLAGS)
+
+
 quantize: quantize.cpp ggml.o utils.o
 	$(CXX) $(CXXFLAGS) quantize.cpp ggml.o utils.o -o quantize $(LDFLAGS)

README.md (+21, -198)

@@ -1,223 +1,46 @@
-# llama.cpp
-
-[![Actions Status](https://github.com/ggerganov/llama.cpp/workflows/CI/badge.svg)](https://github.com/ggerganov/llama.cpp/actions)
-[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
-
-Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
-
-**Hot topics:**
-
-- Cache input prompts for faster initialization: https://github.com/ggerganov/llama.cpp/issues/64
-- Create a `llama.cpp` logo: https://github.com/ggerganov/llama.cpp/issues/105
-
-## Description
-
-The main goal is to run the model using 4-bit quantization on a MacBook
-
-- Plain C/C++ implementation without dependencies
-- Apple silicon first-class citizen - optimized via ARM NEON
-- AVX2 support for x86 architectures
-- Mixed F16 / F32 precision
-- 4-bit quantization support
-- Runs on the CPU
-
-This was [hacked in an evening](https://github.com/ggerganov/llama.cpp/issues/33#issuecomment-1465108022) - I have no idea if it works correctly.
-Please do not make conclusions about the models based on the results from this implementation.
-For all I know, it can be completely wrong. This project is for educational purposes.
-New features will probably be added mostly through community contributions.
-
-Supported platforms:
-
-- [X] Mac OS
-- [X] Linux
-- [X] Windows (via CMake)
-
----
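The removed description above credits ARM NEON (and AVX2 on x86) for the performance. As a hedged illustration of what such a SIMD inner loop can look like (a sketch only; `dot_neon` is an invented name, and ggml's real kernels operate on quantized blocks rather than plain floats):

```cpp
// Hypothetical sketch of a NEON-accelerated dot product: 4 float lanes per step.
// Illustrative only; not ggml's actual kernel code.
#include <arm_neon.h> // AArch64 NEON intrinsics
#include <cstddef>

float dot_neon(const float *a, const float *b, size_t n) {
    float32x4_t acc = vdupq_n_f32(0.0f);
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        // Fused multiply-add over 4 lanes at a time.
        acc = vfmaq_f32(acc, vld1q_f32(a + i), vld1q_f32(b + i));
    }
    float sum = vaddvq_f32(acc);            // horizontal add of the 4 lanes
    for (; i < n; ++i) sum += a[i] * b[i];  // scalar tail
    return sum;
}
```

The same pattern maps to AVX2 on x86 via the `_mm256_*` intrinsics; the win comes from processing several lanes per instruction in the matrix-vector products that dominate inference.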
-
-Here is a typical run using LLaMA-7B:
-
-```java
-make -j && ./main -m ./models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -t 8 -n 512
-I llama.cpp build info:
-I UNAME_S: Darwin
-I UNAME_P: arm
-I UNAME_M: arm64
-I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -DGGML_USE_ACCELERATE
-I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
-I LDFLAGS: -framework Accelerate
-I CC: Apple clang version 14.0.0 (clang-1400.0.29.202)
-I CXX: Apple clang version 14.0.0 (clang-1400.0.29.202)
-
-make: Nothing to be done for `default'.
-main: seed = 1678486056
-llama_model_load: loading model from './models/7B/ggml-model-q4_0.bin' - please wait ...
-llama_model_load: n_vocab = 32000
-llama_model_load: n_ctx = 512
-llama_model_load: n_embd = 4096
-llama_model_load: n_mult = 256
-llama_model_load: n_head = 32
-llama_model_load: n_layer = 32
-llama_model_load: n_rot = 128
-llama_model_load: f16 = 2
-llama_model_load: n_ff = 11008
-llama_model_load: ggml ctx size = 4529.34 MB
-llama_model_load: memory_size = 512.00 MB, n_mem = 16384
-llama_model_load: .................................... done
-llama_model_load: model size = 4017.27 MB / num tensors = 291
-
-main: prompt: 'Building a website can be done in 10 simple steps:'
-main: number of tokens in prompt = 15
-    1 -> ''
- 8893 -> 'Build'
-  292 -> 'ing'
-  263 -> ' a'
- 4700 -> ' website'
-  508 -> ' can'
-  367 -> ' be'
- 2309 -> ' done'
-  297 -> ' in'
-29871 -> ' '
-29896 -> '1'
-29900 -> '0'
- 2560 -> ' simple'
- 6576 -> ' steps'
-29901 -> ':'
-
-sampling parameters: temp = 0.800000, top_k = 40, top_p = 0.950000
-
-
-Building a website can be done in 10 simple steps:
-1) Select a domain name and web hosting plan
-2) Complete a sitemap
-3) List your products
-4) Write product descriptions
-5) Create a user account
-6) Build the template
-7) Start building the website
-8) Advertise the website
-9) Provide email support
-10) Submit the website to search engines
-A website is a collection of web pages that are formatted with HTML. HTML is the code that defines what the website looks like and how it behaves.
-The HTML code is formatted into a template or a format. Once this is done, it is displayed on the user's browser.
-The web pages are stored in a web server. The web server is also called a host. When the website is accessed, it is retrieved from the server and displayed on the user's computer.
-A website is known as a website when it is hosted. This means that it is displayed on a host. The host is usually a web server.
-A website can be displayed on different browsers. The browsers are basically the software that renders the website on the user's screen.
-A website can also be viewed on different devices such as desktops, tablets and smartphones.
-Hence, to have a website displayed on a browser, the website must be hosted.
-A domain name is an address of a website. It is the name of the website.
-The website is known as a website when it is hosted. This means that it is displayed on a host. The host is usually a web server.
-A website can be displayed on different browsers. The browsers are basically the software that renders the website on the user’s screen.
-A website can also be viewed on different devices such as desktops, tablets and smartphones. Hence, to have a website displayed on a browser, the website must be hosted.
-A domain name is an address of a website. It is the name of the website.
-A website is an address of a website. It is a collection of web pages that are formatted with HTML. HTML is the code that defines what the website looks like and how it behaves.
-The HTML code is formatted into a template or a format. Once this is done, it is displayed on the user’s browser.
-A website is known as a website when it is hosted
-
-main: mem per token = 14434244 bytes
-main: load time = 1332.48 ms
-main: sample time = 1081.40 ms
-main: predict time = 31378.77 ms / 61.41 ms per token
-main: total time = 34036.74 ms
-```
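The run log above reports sampling parameters (temp = 0.800000, top_k = 40, top_p = 0.950000). As a rough, hypothetical illustration of how temperature, top-k, and top-p sampling of this kind is commonly implemented (a sketch only; `sample_token` is an invented name, not the project's actual code):

```cpp
// Sketch of temperature + top-k + top-p (nucleus) sampling over raw logits.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

int sample_token(std::vector<float> logits, float temp, int top_k, float top_p,
                 std::mt19937 &rng) {
    const int n = (int) logits.size();

    // Temperature: divide logits before softmax; temp < 1 sharpens, > 1 flattens.
    for (float &l : logits) l /= temp;

    // Sort candidate indices by logit, descending.
    std::vector<int> idx(n);
    for (int i = 0; i < n; ++i) idx[i] = i;
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return logits[a] > logits[b]; });

    // Top-k: keep only the k most likely tokens.
    const int keep = std::min(top_k, n);

    // Softmax over the kept candidates.
    std::vector<float> probs(keep);
    float maxl = logits[idx[0]], sum = 0.0f;
    for (int i = 0; i < keep; ++i) {
        probs[i] = std::exp(logits[idx[i]] - maxl);
        sum += probs[i];
    }
    for (float &p : probs) p /= sum;

    // Top-p: truncate once cumulative probability reaches top_p.
    float cum = 0.0f;
    int last = keep;
    for (int i = 0; i < keep; ++i) {
        cum += probs[i];
        if (cum >= top_p) { last = i + 1; break; }
    }
    probs.resize(last);

    // Draw one token from the truncated distribution (renormalized internally).
    std::discrete_distribution<int> dist(probs.begin(), probs.end());
    return idx[dist(rng)];
}
```

Lower temperatures concentrate probability on the top logits, while top_k and top_p each cap how much of the distribution's tail can be sampled.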
-
-And here is another demo of running both LLaMA-7B and [whisper.cpp](https://github.com/ggerganov/whisper.cpp) on a single M1 Pro MacBook:
-
-https://user-images.githubusercontent.com/1991296/224442907-7693d4be-acaa-4e01-8b4f-add84093ffff.mp4
+# Alpaca.cpp
 
-## Usage
+Run a fast ChatGPT-like model locally on your device. The screencast below is not sped up and is running on an M2 MacBook Air with 4GB of weights.
 
-Here are the steps for the LLaMA-7B model:
 
-```bash
-# build this repo
-git clone https://github.com/ggerganov/llama.cpp
-cd llama.cpp
-make
+[![asciicast](screencast.gif)](https://asciinema.org/a/dfJ8QXZ4u978Ona59LPEldtKK)
 
-# obtain the original LLaMA model weights and place them in ./models
-ls ./models
-65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
 
-# install Python dependencies
-python3 -m pip install torch numpy sentencepiece
+This combines the [LLaMA foundation model](https://github.com/facebookresearch/llama) with an [open reproduction](https://github.com/tloen/alpaca-lora) of [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), a fine-tuning of the base model to obey instructions (akin to the [RLHF](https://huggingface.co/blog/rlhf) used to train ChatGPT).
 
-# convert the 7B model to ggml FP16 format
-python3 convert-pth-to-ggml.py models/7B/ 1
+## Get started
 
-# quantize the model to 4-bits
-./quantize.sh 7B
-
-# run the inference
-./main -m ./models/7B/ggml-model-q4_0.bin -t 8 -n 128
 ```
+git clone https://github.com/antimatter15/alpaca.cpp
+cd alpaca.cpp
 
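The removed usage steps above convert the weights to ggml FP16 format and then quantize them to 4 bits (the `q4_0` model files). As a hedged sketch of what block-wise 4-bit quantization of this general shape looks like (illustrative only; the real format lives in ggml, and `BlockQ4`, `quantize_block`, and `dequantize_block` are invented names):

```cpp
// Sketch of block-wise 4-bit quantization in the spirit of ggml's q4_0 format.
#include <algorithm>
#include <cmath>
#include <cstdint>

// 32 floats become 16 bytes of packed 4-bit values plus one float scale.
struct BlockQ4 {
    float scale;        // per-block scale factor
    uint8_t quants[16]; // 32 values, two 4-bit nibbles per byte
};

BlockQ4 quantize_block(const float *x) {
    BlockQ4 out{};
    // Find the value with the largest magnitude in the block.
    float amax = 0.0f;
    for (int i = 0; i < 32; ++i) amax = std::max(amax, std::fabs(x[i]));

    // Map [-amax, amax] onto the signed 4-bit range [-7, 7].
    out.scale = amax / 7.0f;
    const float inv = out.scale != 0.0f ? 1.0f / out.scale : 0.0f;

    for (int i = 0; i < 32; i += 2) {
        // Round, clamp, and bias into [0, 15] so each value fits in a nibble.
        int q0 = std::min(15, std::max(0, (int) std::lround(x[i]     * inv) + 8));
        int q1 = std::min(15, std::max(0, (int) std::lround(x[i + 1] * inv) + 8));
        out.quants[i / 2] = (uint8_t) (q0 | (q1 << 4));
    }
    return out;
}

// Dequantize back to floats: value = (nibble - 8) * scale.
void dequantize_block(const BlockQ4 &b, float *y) {
    for (int i = 0; i < 16; ++i) {
        y[2 * i]     = ((b.quants[i] & 0x0F) - 8) * b.scale;
        y[2 * i + 1] = ((b.quants[i] >> 4)   - 8) * b.scale;
    }
}
```

At roughly 4 bits per weight plus one 32-bit scale per 32-weight block (about 0.625 bytes per weight), a ~7B-parameter model lands close to the ~4 GB model size reported in the demo log above.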
-When running the larger models, make sure you have enough disk space to store all the intermediate files.
-
-TODO: add model disk/mem requirements
-
-### Interactive mode
-
-If you want a more ChatGPT-like experience, you can run in interactive mode by passing `-i` as a parameter.
-In this mode, you can always interrupt generation by pressing Ctrl+C and enter one or more lines of text, which will be converted into tokens and appended to the current context. You can also specify a *reverse prompt* with the parameter `-r "reverse prompt string"`. This will result in user input being prompted whenever the exact tokens of the reverse prompt string are encountered in the generation. A typical use is a prompt that makes LLaMA emulate a chat between multiple users, say Alice and Bob, and passing `-r "Alice:"`.
-
-Here is an example few-shot interaction, invoked with the command
+make chat
+./chat
 ```
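The removed interactive-mode notes describe a *reverse prompt*: generation pauses and control returns to the user whenever the generated context ends with the reverse prompt's tokens. A minimal sketch of that check, assuming a token-id vector for the context (illustrative; `llama_token` and `ends_with_reverse_prompt` are stand-in names, not the project's API):

```cpp
// Sketch of reverse-prompt detection: after each generated token, test whether
// the context now ends with the reverse prompt's token sequence.
#include <algorithm>
#include <cstddef>
#include <vector>

using llama_token = int; // stand-in for the real token type

bool ends_with_reverse_prompt(const std::vector<llama_token> &context,
                              const std::vector<llama_token> &reverse_prompt) {
    if (reverse_prompt.empty() || context.size() < reverse_prompt.size()) {
        return false;
    }
    // Compare the tail of the context against the reverse prompt tokens.
    return std::equal(reverse_prompt.begin(), reverse_prompt.end(),
                      context.end() - (std::ptrdiff_t) reverse_prompt.size());
}
```

In the generation loop, a true result would stop sampling and read a line of user input, as in the `-r "User:"` example below.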
-./main -m ./models/13B/ggml-model-q4_0.bin -t 8 -n 256 --repeat_penalty 1.0 --color -i -r "User:" \
--p \
-"Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
 
-User: Hello, Bob.
-Bob: Hello. How may I help you today?
-User: Please tell me the largest city in Europe.
-Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
-User:"
+You can download the weights for `ggml-alpaca-7b-q4.bin` with BitTorrent: `magnet:?xt=urn:btih:5aaceaec63b03e51a98f04fd5c42320b2a033010&dn=ggml-alpaca-7b-q4.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce`
 
-```
-Note the use of `--color` to distinguish between user input and generated text.
 
-![image](https://user-images.githubusercontent.com/1991296/224575029-2af3c7dc-5a65-4f64-a6bb-517a532aea38.png)
+Alternatively you can download them with IPFS.
 
-### Android
-
-You can easily run `llama.cpp` on an Android device with [termux](https://play.google.com/store/apps/details?id=com.termux).
-First, obtain the [Android NDK](https://developer.android.com/ndk) and then build with CMake:
 ```
-$ mkdir build-android
-$ cd build-android
-$ export NDK=<your_ndk_directory>
-$ cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-23 -DCMAKE_C_FLAGS=-march=armv8.4a+dotprod ..
-$ make
+# any of these will work
+wget -O ggml-alpaca-7b-q4.bin -c https://gateway.estuary.tech/gw/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
+wget -O ggml-alpaca-7b-q4.bin -c https://ipfs.io/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
+wget -O ggml-alpaca-7b-q4.bin -c https://cloudflare-ipfs.com/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
 ```
-Install [termux](https://play.google.com/store/apps/details?id=com.termux) on your device and run `termux-setup-storage` to get access to your SD card.
-Finally, copy the `llama` binary and the model files to your device storage. Here is a demo of an interactive session running on a Pixel 5 phone:
-
-https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b050-55b0b3b9274c.mp4
 
+Save the `ggml-alpaca-7b-q4.bin` file in the same directory as your `./chat` executable.
 
-## Limitations
+The weights are based on the published fine-tunes from `alpaca-lora`, converted back into a PyTorch checkpoint with a [modified script](https://github.com/tloen/alpaca-lora/pull/19) and then quantized with llama.cpp in the usual way.
 
-- We don't know yet how much the quantization affects the quality of the generated text
-- Probably the token sampling can be improved
-- The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
-  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simply don't
-  know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and the
-  performance will be the same, since no BLAS calls are invoked by the current implementation
+## Credit
 
-### Contributing
+This combines [Facebook's LLaMA](https://github.com/facebookresearch/llama), [Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [alpaca-lora](https://github.com/tloen/alpaca-lora) (which uses [Jason Phang's implementation of LLaMA](https://github.com/huggingface/transformers/pull/21955) on top of Hugging Face Transformers), and a modified version of [llama.cpp](https://github.com/ggerganov/llama.cpp) by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's [Interactive Mode](https://github.com/ggerganov/llama.cpp/pull/61) for llama.cpp. Inspired by [Simon Willison's](https://til.simonwillison.net/llms/llama-7b-m2) getting started guide for LLaMA.
 
-- Contributors can open PRs
-- Collaborators can push to branches in the `llama.cpp` repo
-- Collaborators will be invited based on contributions
 
-### Coding guidelines
+## Disclaimer
 
-- Avoid adding third-party dependencies, extra files, extra headers, etc.
-- Always consider cross-compatibility with other operating systems and architectures
-- Avoid fancy looking modern STL constructs, use basic `for` loops, avoid templates, keep it simple
-- There are no strict rules for the code style, but try to follow the patterns in the code (indentation, spaces, etc.). Vertical alignment makes things more readable and easier to batch edit
-- Clean-up any trailing whitespaces, use 4 spaces indentation, brackets on same line, `void * ptr`, `int & a`
-- See [good first issues](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) for tasks suitable for first contributions
+Note that the model weights are only to be used for research purposes, as they are derivative of LLaMA, and use the published instruction data from the Stanford Alpaca project, which was generated by an OpenAI model whose terms disallow using its outputs to train competing models.
 
-### Misc
 
-- Practice your C++ typing skills: https://typing-battles.ggerganov.com
