@@ -120,6 +120,7 @@ Jupyter AI supports the following model providers:
 | Bedrock | `bedrock` | N/A | `boto3` |
 | Bedrock (chat) | `bedrock-chat` | N/A | `boto3` |
 | Cohere | `cohere` | `COHERE_API_KEY` | `cohere` |
+| GPT4All | `gpt4all` | N/A | `gpt4all` |
 | Hugging Face Hub | `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | `huggingface_hub`, `ipywidgets`, `pillow` |
 | OpenAI | `openai` | `OPENAI_API_KEY` | `openai` |
 | OpenAI (chat) | `openai-chat` | `OPENAI_API_KEY` | `openai` |
@@ -352,13 +353,25 @@ response. In this example, the endpoint returns an object with the schema
 ### GPT4All usage (early-stage)
 
 Currently, we offer experimental support for GPT4All. To get started, first
-decide which models you will use. We currently offer three models from GPT4All:
+decide which models you will use. We currently offer the following models from GPT4All:
 
 | Model name | Model size | Model bin URL |
-|------------------------------|------------|------------------------------------------------------------|
-| `ggml-gpt4all-l13b-snoozy` | 7.6 GB | `http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin` |
-| `ggml-gpt4all-j-v1.2-jazzy` | 3.8 GB | `https://gpt4all.io/models/ggml-gpt4all-j-v1.2-jazzy.bin` |
-| `ggml-gpt4all-j-v1.3-groovy` | 3.8 GB | `https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin` |
+|---------------------------------|------------|------------------------------------------------------------|
+| `ggml-gpt4all-l13b-snoozy` | 7.6 GB | `http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin` |
+| `ggml-gpt4all-j-v1.2-jazzy` | 3.8 GB | `https://gpt4all.io/models/ggml-gpt4all-j-v1.2-jazzy.bin` |
+| `ggml-gpt4all-j-v1.3-groovy` | 3.8 GB | `https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin` |
+| `mistral-7b-openorca.Q4_0` | 3.8 GB | `https://gpt4all.io/models/gguf/mistral-7b-openorca.Q4_0.gguf` |
+| `mistral-7b-instruct-v0.1.Q4_0` | 3.8 GB | `https://gpt4all.io/models/gguf/mistral-7b-instruct-v0.1.Q4_0.gguf` |
+| `gpt4all-falcon-q4_0` | 3.9 GB | `https://gpt4all.io/models/gguf/gpt4all-falcon-q4_0.gguf` |
+| `wizardlm-13b-v1.2.Q4_0` | 6.9 GB | `https://gpt4all.io/models/gguf/wizardlm-13b-v1.2.Q4_0.gguf` |
+| `nous-hermes-llama2-13b.Q4_0` | 6.9 GB | `https://gpt4all.io/models/gguf/nous-hermes-llama2-13b.Q4_0.gguf` |
+| `gpt4all-13b-snoozy-q4_0` | 6.9 GB | `https://gpt4all.io/models/gguf/gpt4all-13b-snoozy-q4_0.gguf` |
+| `mpt-7b-chat-merges-q4_0` | 3.5 GB | `https://gpt4all.io/models/gguf/mpt-7b-chat-merges-q4_0.gguf` |
+| `orca-mini-3b-gguf2-q4_0` | 1.8 GB | `https://gpt4all.io/models/gguf/orca-mini-3b-gguf2-q4_0.gguf` |
+| `starcoder-q4_0` | 8.4 GB | `https://gpt4all.io/models/gguf/starcoder-q4_0.gguf` |
+| `rift-coder-v0-7b-q4_0` | 3.6 GB | `https://gpt4all.io/models/gguf/rift-coder-v0-7b-q4_0.gguf` |
+| `all-MiniLM-L6-v2-f16` | 44 MB | `https://gpt4all.io/models/gguf/all-MiniLM-L6-v2-f16.gguf` |
+| `em_german_mistral_v01.Q4_0` | 3.8 GB | `https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF/resolve/main/em_german_mistral_v01.Q4_0.gguf` |
 
 Note that each model comes with its own license, and that users are themselves
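The GPT4All model files in the table above must be downloaded to a local directory before the provider can load them. A minimal sketch of a download helper follows; note that the destination directory `~/.cache/gpt4all` and the helper names (`model_filename`, `download_model`) are assumptions for illustration, not something specified by this document.

```python
# Sketch: fetch a GPT4All model file from the table above into a local
# models directory. The ~/.cache/gpt4all destination is an assumption
# about where the GPT4All backend looks for model files; adjust as needed.
from pathlib import Path
from urllib.request import urlretrieve


def model_filename(url: str) -> str:
    """Return the file-name component of a model URL."""
    return url.rsplit("/", 1)[-1]


def download_model(url: str, dest_dir: str = "~/.cache/gpt4all") -> Path:
    """Download the model at `url` into `dest_dir`, skipping if present."""
    dest = Path(dest_dir).expanduser()
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / model_filename(url)
    if not target.exists():
        urlretrieve(url, target)  # simple blocking download; files are large
    return target
```

For example, `download_model("https://gpt4all.io/models/gguf/orca-mini-3b-gguf2-q4_0.gguf")` would fetch the smallest chat model in the table (1.8 GB), skipping the transfer if the file is already present.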