## Local Installation

Cortex has a **Local Installer** with all of the required dependencies, so that once you've downloaded it, no internet connection is required during the installation process.

- [Windows](https://app.cortexcpp.com/download/latest/windows-amd64-local)
- [Mac (Universal)](https://app.cortexcpp.com/download/latest/mac-universal-local)
- [Linux](https://app.cortexcpp.com/download/latest/linux-amd64-local)
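
Once the installer finishes, a quick sanity check confirms the CLI is available. This sketch assumes the installer places a `cortex` binary on your `PATH`; the `--version` flag is an assumption too, so check `cortex --help` if it differs:

```sh
# Confirm the cortex binary is reachable (PATH assumption)
which cortex

# Print the installed version; flag name is an assumption,
# `cortex --help` lists the exact options if this differs
cortex --version
```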

## Start a Cortex Server
This command allows users to download a model from these Model Hubs:

- [Cortex Built-in Models](https://cortex.so/models)
- [Hugging Face](https://huggingface.co) (GGUF): `cortex pull <author/ModelRepo>`

It displays the available quantizations, recommends a default, and downloads the desired quantization.
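
Concretely, the two pull forms look like this; the Hugging Face repository name below is only an illustration of the `<author/ModelRepo>` pattern:

```sh
# Pull a built-in model by its short name
cortex pull llama3.3

# Pull a specific GGUF repository from Hugging Face
# (author/ModelRepo form; the repo name shown is illustrative)
cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
```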

<Tabs>
<TabItem value="MacOs/Linux" label="MacOs/Linux">
The following two options will show you all of the available models under those names. Cortex will first search
on its own hub for models like `llama3.3`, and then on Hugging Face for hyper-specific ones like `bartowski/Meta-Llama-3.1-8B-Instruct-GGUF`.
```sh
cortex pull llama3.3
## Run a Model

This command downloads the default `gguf` model (if not available in your file system) from the
[Cortex Hub](https://huggingface.co/cortexso), starts the model, and lets you chat with it.

<Tabs>
<TabItem value="MacOs/Linux" label="MacOs/Linux">
## Stop a Model

This command stops the running model.
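
Before stopping, you can check which model is currently loaded; this guide's status command reports the running model and hardware status, and the `ps` subcommand name is an assumption here:

```sh
# Show the running model and hardware system status (subcommand name assumed)
cortex ps
```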

<Tabs>
<TabItem value="MacOs/Linux" label="MacOs/Linux">
```sh
## Stop a Cortex Server

This command stops the Cortex.cpp API server at `localhost:39281`, or whichever other port you used to start Cortex.
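
If you are unsure whether the server is still up before stopping it, you can probe it over HTTP. The `/healthz` route is an assumption in this sketch; substitute whichever endpoint your build exposes:

```sh
# Probe the API server on the default port; endpoint path is an assumption
curl http://localhost:39281/healthz
```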

<Tabs>
<TabItem value="MacOs/Linux" label="MacOs/Linux">
```sh