Commit e55caea: Update README

1 parent 749d8fd
1 file changed (+4, -5 lines)

README.md

Lines changed: 4 additions & 5 deletions
````diff
@@ -102,7 +102,6 @@ Now, after declaring what is trainable and what isn't, and use `node` and `bundl
 can use the optimizer to optimize the computation graph.
 
 ```python
-import autogen
 from opto.optimizers import OptoPrime
 
 
````
````diff
@@ -120,8 +119,7 @@ test_input = [1, 2, 3, 4]
 
 epoch = 2
 
-optimizer = OptoPrime(strange_sort_list.parameters(),
-                      config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))
+optimizer = OptoPrime(strange_sort_list.parameters())
 
 for i in range(epoch):
     print(f"Training Epoch {i}")
````
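For context on what the README's `strange_sort_list` example is being optimized toward: assuming it targets the usual "strange sort" task (alternate between the smallest and largest remaining elements), a plain-Python reference implementation, independent of Trace or any LLM, would look roughly like:

```python
def strange_sort_list(lst):
    """Return elements alternating min, max, next min, next max, ..."""
    remaining = sorted(lst)
    result = []
    take_min = True
    while remaining:
        # Pop from the front for a minimum, from the back for a maximum.
        result.append(remaining.pop(0) if take_min else remaining.pop())
        take_min = not take_min
    return result

print(strange_sort_list([1, 2, 3, 4]))  # -> [1, 4, 2, 3]
```

This matches the `test_input = [1, 2, 3, 4]` shown in the hunk above; the function name is reused for illustration only, not taken from the committed code.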
````diff
@@ -275,8 +273,9 @@ with TraceGraph coming soon).
 
 ## LLM API Setup
 
-Currently we rely on AutoGen for LLM caching and API-Key management.
-AutoGen relies on `OAI_CONFIG_LIST`, which is a file you put in your working directory. It has the format of:
+Currently we rely on LiteLLM or AutoGen for LLM caching and API-Key management.
+By default, LiteLLM is used. Please the documentation there to set the right environment variables for keys and end-point urls.
+On the other hand, AutoGen relies on `OAI_CONFIG_LIST`, which is a file you put in your working directory. It has the format of:
 
 ```json lines
 [
````
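The diff is cut off just after the opening `[` of the example, so the committed format is not fully visible here. Based on AutoGen's documented `OAI_CONFIG_LIST` convention, the file is typically a JSON list of model configurations along these lines (the model name and key below are placeholders, not values from this commit):

```json
[
    {
        "model": "gpt-4",
        "api_key": "<your-api-key>"
    }
]
```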