Commit f84ab48

Adds demo script with local models
1 parent 54bc035 commit f84ab48

File tree

4 files changed: +22 −23 lines

- Makefile
- README.md
- mistral.py → demo.py
- qwen.py → ollama.py

Makefile

+3 −3

```diff
@@ -2,10 +2,10 @@ run:
 	diambra -r ~/.diambra/roms run -l python3 script.py
 
 demo:
-	diambra -r ~/.diambra/roms run -l python3 mistral.py && python3 result.py
+	diambra -r ~/.diambra/roms run -l python3 demo.py && python3 result.py
 
-qwen:
-	diambra -r ~/.diambra/roms run -l python3 qwen.py
+local:
+	diambra -r ~/.diambra/roms run -l python3 ollama.py
 
 install:
 	pip3 install -r requirements.txt
```
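With this change, `make demo` runs the renamed `demo.py` (followed by `result.py`), and the new `make local` target starts a fight between local models via `ollama.py`, replacing the old `qwen` target.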

README.md

+15 −18

````diff
@@ -72,55 +72,52 @@ We send the LLM a text description of the screen. The LLM decides on the next
 
 - Follow instructions in https://docs.diambra.ai/#installation
 - Download the ROM and put it in `~/.diambra/roms`
-- Install with `pip3 install -r requirements`
+- (Optional) Create and activate a [new python venv](https://docs.python.org/3/library/venv.html)
+- Install dependencies with `make install` or `pip install -r requirements.txt`
 - Create a `.env` file and fill it with content like in the `.env.example` file
 - Run with `make run`
 
 ## Test mode
 
 To disable the LLM calls, set `DISABLE_LLM` to `True` in the `.env` file.
-It will choose the action randomly.
+It will choose the actions randomly.
 
 ## Logging
 
 Change the logging level in the `script.py` file.
 
 ## Local model
 
-You can run the arena with local models.
+You can run the arena with local models using [Ollama](https://ollama.com/).
 
 1. Make sure you have Ollama installed and running, with a model downloaded (for example, run `ollama serve` to start the server and `ollama pull mistral` to fetch a model)
 
-2. Make sure you pulled the latest version from the `main` branch:
+2. Run `make local` to start the fight.
 
-```
-git checkout main
-git pull
-```
-
-4. In `script.py`, replace the main function with the following one.
+By default, it runs Mistral against Mistral. To use other models, change the `model` parameter in `ollama.py`.
 
 ```python
+from eval.game import Game, Player1, Player2
+
 def main():
-    # Environment Settings
     game = Game(
         render=True,
+        save_game=True,
         player_1=Player1(
-            nickname="Daddy",
-            model="ollama:mistral",
+            nickname="Baby",
+            model="ollama:mistral",  # change this
         ),
         player_2=Player2(
-            nickname="Baby",
-            model="ollama:mistral",
+            nickname="Daddy",
+            model="ollama:mistral",  # change this
         ),
     )
-    return game.run()
+    game.run()
+    return 0
 ```
 
 The convention we use is `model_provider:model_name`. If you want to use a local model other than Mistral, you can use `ollama:some_other_model`.
 
-5. Run the simulation: `make`
-
 ## How to make my own LLM model play? Can I improve the prompts?
 
 The LLM is called in the `Robot.call_llm()` method of the `agent/robot.py` file.
````
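Since Ollama model names can themselves contain colons (version tags), here is a small, hypothetical illustration of how a `model_provider:model_name` id splits at the first colon. The helper `split_model_id` is not code from this repo, just a sketch of the convention:

```python
# Hypothetical helper, not part of this repo: illustrates the
# `model_provider:model_name` convention described above.
def split_model_id(model_id: str) -> tuple[str, str]:
    # Split at the first colon only: Ollama model names may
    # themselves contain colons (e.g. version tags).
    provider, _, model_name = model_id.partition(":")
    return provider, model_name

assert split_model_id("ollama:mistral") == ("ollama", "mistral")
assert split_model_id("ollama:qwen:14b-chat-v1.5-fp16") == (
    "ollama",
    "qwen:14b-chat-v1.5-fp16",
)
```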

mistral.py → demo.py

File renamed without changes.

qwen.py → ollama.py

+4 −2

```diff
@@ -18,11 +18,13 @@ def main():
         save_game=True,
         player_1=Player1(
             nickname="Baby",
-            model="qwen:14b-chat-v1.5-fp16",
+            # model="ollama:qwen:14b-chat-v1.5-fp16",
+            model="ollama:mistral",
         ),
         player_2=Player2(
             nickname="Daddy",
-            model="qwen:14b-chat-v1.5-fp16",
+            # model="ollama:qwen:14b-chat-v1.5-fp16",
+            model="ollama:mistral",
         ),
     )
 
```
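For reference, after this commit `ollama.py` presumably reads roughly as follows. This is a sketch assembled from the hunk above and the README snippet; the import, the `render=True` flag, the tail of `main()`, and the entry-point guard are not shown in this diff and are assumptions:

```python
# Sketch of ollama.py after this commit. Lines outside the diff hunk
# (import, render flag, end of main, entry point) are assumptions.
from eval.game import Game, Player1, Player2

def main():
    game = Game(
        render=True,  # assumed from the README snippet; not in the hunk above
        save_game=True,
        player_1=Player1(
            nickname="Baby",
            # model="ollama:qwen:14b-chat-v1.5-fp16",
            model="ollama:mistral",
        ),
        player_2=Player2(
            nickname="Daddy",
            # model="ollama:qwen:14b-chat-v1.5-fp16",
            model="ollama:mistral",
        ),
    )
    game.run()
    return 0

if __name__ == "__main__":
    main()
```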
2830
