updated docs
vedpatwardhan committed Sep 5, 2024
1 parent d71071b commit 7611480
Showing 4 changed files with 29 additions and 29 deletions.
36 changes: 18 additions & 18 deletions python/chat/clients.mdx
@@ -23,7 +23,7 @@ def __init__(endpoint: Optional[str] = None,
*,
model: Optional[str] = None,
provider: Optional[str] = None,
system_prompt: Optional[str] = None,
system_message: Optional[str] = None,
messages: Optional[Iterable[ChatCompletionMessageParam]] = None,
frequency_penalty: Optional[float] = None,
logit_bias: Optional[Dict[str, int]] = None,
@@ -64,10 +64,10 @@ Initialize the Unify client.

- `provider` - Name of the provider. Should only be set if endpoint is not set.

- `system_prompt` - An optional string containing the system prompt.
- `system_message` - An optional string containing the system message.

- `messages` - A list of messages comprising the conversation so far.
If provided, user_prompt must be None.
If provided, user_message must be None.

- `api_key` - API key for accessing the Unify API.
If None, it attempts to retrieve the API key from the environment variable
@@ -129,22 +129,22 @@ Get the provider name.

The provider name.

<a id="chat.clients.Client.system_prompt"></a>
<a id="chat.clients.Client.system_message"></a>

---

### system\_prompt
### system\_message

```python
@property
def system_prompt() -> Optional[str]
def system_message() -> Optional[str]
```

Get the default system prompt, if set.
Get the default system message, if set.

**Returns**:

The default system prompt.
The default system message.

<a id="chat.clients.Client.messages"></a>

@@ -619,21 +619,21 @@ Set the provider name.

- `value` - The provider name.

<a id="chat.clients.Client.set_system_prompt"></a>
<a id="chat.clients.Client.set_system_message"></a>

---

### set\_system\_prompt
### set\_system\_message

```python
def set_system_prompt(value: str) -> None
def set_system_message(value: str) -> None
```

Set the default system prompt.
Set the default system message.

**Arguments**:

- `value` - The default system prompt.
- `value` - The default system message.
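
Taken together, the `system_message` property and the `set_system_message` setter give the client a mutable default. A minimal sketch of how such a getter/setter pair fits together (the `ClientSketch` class is an illustration, not unify's actual implementation):

```python
from typing import Optional

class ClientSketch:
    """Illustrative stand-in for the documented Client interface."""

    def __init__(self, system_message: Optional[str] = None) -> None:
        self._system_message = system_message

    @property
    def system_message(self) -> Optional[str]:
        # Return the default system message, or None if unset.
        return self._system_message

    def set_system_message(self, value: str) -> None:
        # Overwrite the default used by subsequent generate() calls.
        self._system_message = value

client = ClientSketch()
client.set_system_message("You are a helpful assistant.")
print(client.system_message)  # -> You are a helpful assistant.
```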

<a id="chat.clients.Client.set_messages"></a>

@@ -1042,8 +1042,8 @@ Set the default extra body.
### generate

```python
def generate(user_prompt: Optional[str] = None,
system_prompt: Optional[str] = None,
def generate(user_message: Optional[str] = None,
system_message: Optional[str] = None,
messages: Optional[Iterable[ChatCompletionMessageParam]] = None,
*,
frequency_penalty: Optional[float] = None,
@@ -1076,13 +1076,13 @@ Generate content using the Unify API.

**Arguments**:

- `user_prompt` - A string containing the user prompt.
- `user_message` - A string containing the user message.
If provided, messages must be None.

- `system_prompt` - An optional string containing the system prompt.
- `system_message` - An optional string containing the system message.

- `messages` - A list of messages comprising the conversation so far.
If provided, user_prompt must be None.
If provided, user_message must be None.

- `frequency_penalty` - Number between -2.0 and 2.0. Positive values penalize new
tokens based on their existing frequency in the text so far, decreasing the
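The mutual exclusion between `user_message` and `messages` described above can be sketched as a small validation helper. This is a hypothetical illustration of the documented contract, not the library's actual code:

```python
from typing import Dict, Iterable, List, Optional

def build_messages(
    user_message: Optional[str] = None,
    system_message: Optional[str] = None,
    messages: Optional[Iterable[Dict[str, str]]] = None,
) -> List[Dict[str, str]]:
    # Exactly one of user_message / messages must be given, per the docs.
    if (user_message is None) == (messages is None):
        raise ValueError("provide exactly one of user_message or messages")
    out: List[Dict[str, str]] = []
    if system_message is not None:
        # The system message, when set, leads the conversation.
        out.append({"role": "system", "content": system_message})
    if user_message is not None:
        out.append({"role": "user", "content": user_message})
    else:
        out.extend(messages)
    return out
```

Passing both arguments, or neither, fails fast rather than silently preferring one.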
10 changes: 5 additions & 5 deletions python/chat/multi_llm.mdx
@@ -39,8 +39,8 @@ Get the remaining credits left on your account.

```python
@abstractmethod
def generate(user_prompt: Optional[str] = None,
system_prompt: Optional[str] = None,
def generate(user_message: Optional[str] = None,
system_message: Optional[str] = None,
messages: Optional[Iterable[ChatCompletionMessageParam]] = None,
*,
frequency_penalty: Optional[float] = None,
@@ -70,13 +70,13 @@ Generate content using the Unify API.

**Arguments**:

- `user_prompt` - A string containing the user prompt.
- `user_message` - A string containing the user message.
If provided, messages must be None.

- `system_prompt` - An optional string containing the system prompt.
- `system_message` - An optional string containing the system message.

- `messages` - A list of messages comprising the conversation so far. If
provided, user_prompt must be None.
provided, user_message must be None.

- `frequency_penalty` - Number between -2.0 and 2.0. Positive values penalize new
tokens based on their existing frequency in the text so far, decreasing the
2 changes: 1 addition & 1 deletion python/dataset.mdx
@@ -29,7 +29,7 @@ Initialize a local dataset of LLM queries.
**Arguments**:

- `data` - The data for populating the dataset. This can either be a string
specifying an upstream dataset, a list of user prompts, a list of full
specifying an upstream dataset, a list of user messages, a list of full
queries, or a list of dicts of queries alongside any extra fields.

- `name` - The name of the dataset.
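The several accepted shapes of `data` above could be normalized as follows. This is a speculative sketch of the documented contract (the dict keys and the `normalize_data` helper are assumptions, not unify's actual code):

```python
from typing import Any, Dict, List, Union

def normalize_data(data: Union[str, List[Any]]) -> List[Dict[str, Any]]:
    """Normalize the accepted `data` shapes into a list of query dicts."""
    if isinstance(data, str):
        # A string names an upstream dataset; resolving it needs the API,
        # so this sketch only records the reference.
        return [{"dataset": data}]
    out: List[Dict[str, Any]] = []
    for item in data:
        if isinstance(item, str):
            out.append({"user_message": item})  # bare user message
        elif isinstance(item, dict):
            out.append(item)  # query dict with any extra fields
        else:
            out.append({"query": item})  # e.g. a full query object
    return out
```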
10 changes: 5 additions & 5 deletions python/evaluator.mdx
@@ -20,7 +20,7 @@ class Evaluator(abc.ABC)
@abstractmethod
def evaluate(agent: Union[str, Client, Agent],
dataset: Union[str, Dataset],
default_query: Prompt = None)
default_prompt: Prompt = None)
```

Evaluate the agent on the given dataset, based on this evaluator.
@@ -33,9 +33,9 @@ Evaluate the agent on the given dataset, based on this evaluator.
- `dataset` - Name of the uploaded dataset or handle to the local Dataset
instance to evaluate

- `default_query` - The default query for evaluation, which each unique query in
the dataset will inherit from, overwriting the extra fields. This query can
therefore include temperature, system prompt, tools etc. which are not
present in each query in the dataset.
- `default_prompt` - The default prompt for evaluation, which each unique query in
the dataset will inherit from, overwriting the extra fields. This prompt can
therefore include temperature, system message, tools etc. which are not
present in each prompt in the dataset.
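
One plausible reading of the inheritance described above is a simple dict merge, where the defaults fill in fields a dataset prompt does not set and the prompt's own fields take precedence. A sketch under that assumption (the helper name and field names are hypothetical):

```python
from typing import Any, Dict

def apply_default_prompt(prompt: Dict[str, Any],
                         default_prompt: Dict[str, Any]) -> Dict[str, Any]:
    # Defaults fill in missing fields; fields set on the prompt itself win.
    return {**default_prompt, **prompt}

item = {"user_message": "Summarize this text.", "temperature": 0.9}
defaults = {"temperature": 0.3, "system_message": "Be concise."}
merged = apply_default_prompt(item, defaults)
# merged keeps temperature=0.9 from the item and gains the system message.
```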

<a id="utils"></a>
