@@ -72,55 +72,52 @@ We send to the LLM a text description of the screen. The LLM decide on the next
- Follow the instructions in https://docs.diambra.ai/#installation
- Download the ROM and put it in `~/.diambra/roms`
- - Install with `pip3 install -r requirements`
+ - (Optional) Create and activate a [new Python venv](https://docs.python.org/3/library/venv.html)
+ - Install dependencies with `make install` or `pip install -r requirements.txt`
- Create a `.env` file and fill it with content like in the `.env.example` file (a sanity-check sketch follows this list)
- Run with `make run`

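For a quick sanity check that the `.env` file is actually being read, a minimal sketch like the one below works. It assumes the `python-dotenv` package is available (an assumption, not something this README guarantees); `DISABLE_LLM` is one of the variables mentioned in the Test mode section below.

```python
# Minimal sketch: confirm that values from .env are visible to Python.
# Assumes the python-dotenv package is installed; DISABLE_LLM is one of
# the variables mentioned later in this README.
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current directory
print("DISABLE_LLM =", os.getenv("DISABLE_LLM"))
```
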
## Test mode

To disable the LLM calls, set `DISABLE_LLM` to `True` in the `.env` file.
- It will choose the action randomly.
+ It will choose the actions randomly.

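Conceptually, test mode just swaps the model call for a random pick. A rough sketch of the idea is below; the action names are placeholders, not the game's real move list.

```python
import os
import random

# Rough sketch of what test mode means: when DISABLE_LLM is set, pick an
# action at random instead of querying the model. The action names are
# placeholders, not the game's real move list.
DISABLE_LLM = os.getenv("DISABLE_LLM", "False") == "True"
PLACEHOLDER_ACTIONS = ["move_left", "move_right", "jump", "attack"]


def choose_action() -> str:
    if DISABLE_LLM:
        return random.choice(PLACEHOLDER_ACTIONS)
    raise NotImplementedError("LLM-based selection goes here")
```
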
## Logging

Change the logging level in the `script.py` file.

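If `script.py` relies on Python's standard `logging` module (an assumption; check the file for the actual setup), adjusting the level looks like this:

```python
import logging

# Assumes script.py configures the standard logging module; swap DEBUG
# for INFO, WARNING, etc. to make the output more or less verbose.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger(__name__).debug("debug output is now visible")
```
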
## Local model

- You can run the arena with local models.
+ You can run the arena with local models using [Ollama](https://ollama.com/).

1. Make sure you have ollama installed and running, with a model downloaded (for example, run `ollama serve` in one terminal and `ollama pull mistral` in another)

- 2. Make sure you pulled the latest version from the `main` branch:
+ 2. Run `make local` to start the fight.

- ```
- git checkout main
- git pull
- ```
-
- 4. In `script.py`, replace the main function with the following one.
+ By default, it runs mistral against mistral. To use other models, change the `model` parameter in `ollama.py`.

```python
+ from eval.game import Game, Player1, Player2
+
def main():
-     # Environment Settings
    game = Game(
        render=True,
+       save_game=True,
        player_1=Player1(
-           nickname="Daddy",
-           model="ollama:mistral",
+           nickname="Baby",
+           model="ollama:mistral",  # change this
        ),
        player_2=Player2(
-           nickname="Baby",
-           model="ollama:mistral",
+           nickname="Daddy",
+           model="ollama:mistral",  # change this
        ),
    )
-   return game.run()
+   game.run()
+   return 0
```

The convention we use is `model_provider:model_name`. If you want to use a local model other than Mistral, you can do `ollama:some_other_model` (see the sketch below).

- 5. Run the simulation: `make`
-

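For example, a minimal sketch of two different local models fighting each other, reusing the `Game` setup shown above (this assumes both models have been pulled with `ollama pull`; `llama2` is only an illustration):

```python
from eval.game import Game, Player1, Player2

# Sketch only: pit two different local models against each other.
# Assumes `ollama pull mistral` and `ollama pull llama2` have been run.
game = Game(
    render=True,
    player_1=Player1(nickname="Baby", model="ollama:mistral"),
    player_2=Player2(nickname="Daddy", model="ollama:llama2"),
)
game.run()
```
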
## How to make my own LLM model play? Can I improve the prompts?

The LLM is called in the `Robot.call_llm()` method of the `agent/robot.py` file.