README.md
Use Claude 3 with Vision to see how it stacks up to GPT-4-Vision at operating a computer:

```
operate -m claude-3
```
#### Try a Model Hosted Through Ollama `-m llama3.2-vision`

If you wish to experiment with the Self-Operating Computer Framework using a local vision model such as LLaVA on your own machine, you can do so with Ollama!

*Note: Ollama currently only supports macOS and Linux. Windows support is now in preview.*
First, install Ollama on your machine from https://ollama.ai/download.
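After installing, you can optionally sanity-check that the `ollama` binary landed on your PATH (a quick check, not part of the official install steps):

```shell
# Optional: confirm the ollama CLI is available before pulling models.
if command -v ollama >/dev/null 2>&1; then
  ollama --version    # prints the installed Ollama version
else
  echo "ollama not found on PATH; install it from https://ollama.ai/download"
fi
```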
Once Ollama is installed, pull the vision model:

```
ollama pull llama3.2-vision
```

This downloads the model to your machine, which takes approximately 5 GB of storage.
When Ollama has finished pulling llama3.2-vision, start the server:

```
ollama serve
```
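With the server running, you can optionally verify it is reachable before launching `operate`. Ollama listens on port 11434 by default and exposes a `/api/tags` endpoint that lists locally pulled models — a minimal sketch, assuming the default port:

```shell
# Optional: check that the Ollama server is reachable (default port 11434)
# and list which models have been pulled locally.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama server is up; locally pulled models:"
  curl -fsS http://localhost:11434/api/tags
else
  echo "Ollama server is not reachable on localhost:11434"
fi
```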
That's it! Now start `operate` and select the model:

```
operate -m llama3.2-vision
```
**Important:** Error rates when using self-hosted models are very high. This is intended as a base to build on as local multimodal models improve over time.

Learn more about Ollama at its [GitHub Repository](https://www.github.com/ollama/ollama)