This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit d28291e: Update README.md

1 parent 52dc578

File tree: 1 file changed

README.md (9 additions, 7 deletions)
````diff
@@ -60,13 +60,6 @@ Download a llama model to try running the llama C++ integration. You can find a
 
 Double-click on Nitro to run it. After downloading your model, make sure it's saved to a specific path. Then, make an API call to load your model into Nitro.
 
-***OPTIONAL***: You can run Nitro on a different port like 5000 instead of 3928 by running it manually in terminal
-```zsh
-./nitro 1 127.0.0.1 5000 ([thread_num] [host] [port])
-```
-- thread_num : the number of thread that nitro webserver needs to have
-- host : host value normally 127.0.0.1 or 0.0.0.0
-- port : the port that nitro got deployed onto
 
 ```zsh
 curl -X POST 'http://localhost:3928/inferences/llamacpp/loadmodel' \
````
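The `loadmodel` call is truncated at the hunk boundary above. For reference, a complete request typically looks like the sketch below; the model path and the `ctx_len`/`ngl` values are placeholders I am assuming, not taken from this commit.

```zsh
# Minimal sketch of a full loadmodel request (values are assumptions).
# llama_model_path should point at the model file downloaded earlier.
curl -X POST 'http://localhost:3928/inferences/llamacpp/loadmodel' \
  -H 'Content-Type: application/json' \
  -d '{
    "llama_model_path": "/path/to/your_model.gguf",
    "ctx_len": 2048,
    "ngl": 100
  }'
```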
````diff
@@ -98,6 +91,15 @@ Table of parameters
 | `system_prompt` | String | The prompt to use for system rules. |
 | `pre_prompt` | String | The prompt to use for internal configuration. |
 
+
+***OPTIONAL***: You can run Nitro on a different port like 5000 instead of 3928 by running it manually in terminal
+```zsh
+./nitro 1 127.0.0.1 5000 ([thread_num] [host] [port])
+```
+- thread_num : the number of thread that nitro webserver needs to have
+- host : host value normally 127.0.0.1 or 0.0.0.0
+- port : the port that nitro got deployed onto
+
 **Step 4: Perform Inference on Nitro for the First Time**
 
 ```zsh
````
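The relocated optional section documents the `./nitro [thread_num] [host] [port]` invocation. As a concrete illustration (the 4-thread count, `0.0.0.0` bind address, and port 5000 are arbitrary choices for this example, not from this commit), starting the server on a custom port and pointing the API at it would look like:

```zsh
# Illustrative only: start Nitro with 4 worker threads, listening on
# all interfaces at port 5000 instead of the default 3928.
./nitro 4 0.0.0.0 5000

# Later API calls must then target the same port, e.g.:
curl -X POST 'http://localhost:5000/inferences/llamacpp/loadmodel' \
  -H 'Content-Type: application/json' \
  -d '{ "llama_model_path": "/path/to/your_model.gguf" }'
```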

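The diff view ends inside the Step 4 code fence, so the inference call itself is not visible here. As an assumption about what follows (the `chat_completion` endpoint path and the JSON message shape are my guess at Nitro's llama.cpp API, not confirmed by this commit), the first inference request likely resembles:

```zsh
# Assumed shape of a first inference call; the endpoint path and JSON
# fields are an educated guess, not taken from this commit.
curl -X POST 'http://localhost:3928/inferences/llamacpp/chat_completion' \
  -H 'Content-Type: application/json' \
  -d '{
    "messages": [
      { "role": "user", "content": "Hello, who are you?" }
    ]
  }'
```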