diff --git a/mint.json b/mint.json
index 14dc73a91..2e623767f 100644
--- a/mint.json
+++ b/mint.json
@@ -91,13 +91,7 @@
},
{
"group": "",
- "pages": [
- "python/chat",
- "python/clients",
- "python/exceptions",
- "python/multi_llm",
- "python/utils"
- ]
+ "pages": []
},
{
"group": "API Reference",
diff --git a/python/chat.mdx b/python/chat.mdx
deleted file mode 100644
index 89c157b34..000000000
--- a/python/chat.mdx
+++ /dev/null
@@ -1,209 +0,0 @@
----
-title: 'chat'
----
-
-
-
-## ChatBot
-
-```python
-class ChatBot()
-```
-
-The ChatBot class represents an LLM chat agent.
-
-
-
----
-
-### \_\_init\_\_
-
-```python
-def __init__(endpoint: Optional[str] = None,
- model: Optional[str] = None,
- provider: Optional[str] = None,
- api_key: Optional[str] = None) -> None
-```
-
-Initializes the ChatBot object.
-
-**Arguments**:
-
-- `endpoint` - Endpoint name in OpenAI API format: `<model>@<provider>`.
-  Defaults to None.
-
-- `model` - Name of the model. If None, endpoint must be provided.
-
-- `provider` - Name of the provider. If None, endpoint must be provided.
-
-- `api_key` - API key for accessing the Unify API. If None, it attempts to retrieve the API key from the
- environment variable UNIFY_KEY. Defaults to None.
-
-
-**Raises**:
-
-- `UnifyError` - If the API key is missing.
-
-
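-A minimal construction sketch (the endpoint string and the top-level
-`unify` import path are illustrative assumptions):
-
-```python
-from unify import ChatBot
-
-# Either pass a combined endpoint string...
-chatbot = ChatBot(endpoint="llama-3-8b-chat@together-ai")
-
-# ...or pass the model and provider separately.
-chatbot = ChatBot(model="llama-3-8b-chat", provider="together-ai")
-```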
-
----
-
-### client
-
-```python
-@property
-def client() -> Unify
-```
-
-Get the client object.
-
-**Returns**:
-
- The client.
-
-
-
----
-
-### set\_client
-
-```python
-def set_client(value: Unify) -> None
-```
-
-Set the client.
-
-**Arguments**:
-
-- `value` - The unify client.
-
-
-
----
-
-### model
-
-```python
-@property
-def model() -> str
-```
-
-Get the model name.
-
-**Returns**:
-
- The model name.
-
-
-
----
-
-### set\_model
-
-```python
-def set_model(value: str) -> None
-```
-
-Set the model name.
-
-**Arguments**:
-
-- `value` - The model name.
-
-
-
----
-
-### provider
-
-```python
-@property
-def provider() -> Optional[str]
-```
-
-Get the provider name.
-
-**Returns**:
-
- The provider name.
-
-
-
----
-
-### set\_provider
-
-```python
-def set_provider(value: str) -> None
-```
-
-Set the provider name.
-
-**Arguments**:
-
-- `value` - The provider name.
-
-
-
----
-
-### endpoint
-
-```python
-@property
-def endpoint() -> str
-```
-
-Get the endpoint name.
-
-**Returns**:
-
- The endpoint name.
-
-
-
----
-
-### set\_endpoint
-
-```python
-def set_endpoint(value: str) -> None
-```
-
-Set the endpoint name.
-
-**Arguments**:
-
-- `value` - The endpoint name.
-
-
-
----
-
-### clear\_chat\_history
-
-```python
-def clear_chat_history() -> None
-```
-
-Clears the chat history.
-
-
-
----
-
-### run
-
-```python
-def run(show_credits: bool = False, show_provider: bool = False) -> None
-```
-
-Starts the chat interaction loop.
-
-**Arguments**:
-
-- `show_credits` - Whether to show credit consumption. Defaults to False.
-- `show_provider` - Whether to show the provider used. Defaults to False.
-
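-A short usage sketch (the endpoint string is an illustrative assumption):
-
-```python
-from unify import ChatBot
-
-chatbot = ChatBot(endpoint="llama-3-8b-chat@together-ai")
-# Starts an interactive chat loop in the terminal, printing credit
-# consumption and the provider used after each response.
-chatbot.run(show_credits=True, show_provider=True)
-```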
-
diff --git a/python/clients.mdx b/python/clients.mdx
deleted file mode 100644
index 3abee0b60..000000000
--- a/python/clients.mdx
+++ /dev/null
@@ -1,337 +0,0 @@
----
-title: 'clients'
----
-
-
-
-## Client
-
-```python
-class Client(ABC)
-```
-
-Abstract base class for interacting with the Unify chat completions endpoint.
-
-
-
----
-
-### generate
-
-```python
-@abstractmethod
-def generate(user_prompt: Optional[str] = None,
- system_prompt: Optional[str] = None,
- messages: Optional[Iterable[ChatCompletionMessageParam]] = None,
- *,
- max_tokens: Optional[int] = 1024,
- stop: Union[Optional[str], List[str]] = None,
- stream: Optional[bool] = False,
- temperature: Optional[float] = 1.0,
- frequency_penalty: Optional[float] = None,
- logit_bias: Optional[Dict[str, int]] = None,
- logprobs: Optional[bool] = None,
- top_logprobs: Optional[int] = None,
- n: Optional[int] = None,
- presence_penalty: Optional[float] = None,
- response_format: Optional[ResponseFormat] = None,
- seed: Optional[int] = None,
- stream_options: Optional[ChatCompletionStreamOptionsParam] = None,
- top_p: Optional[float] = None,
- tools: Optional[Iterable[ChatCompletionToolParam]] = None,
- tool_choice: Optional[ChatCompletionToolChoiceOptionParam] = None,
- parallel_tool_calls: Optional[bool] = None,
- use_custom_keys: bool = False,
- tags: Optional[List[str]] = None,
- message_content_only: bool = True,
- extra_headers: Optional[Headers] = None,
- extra_query: Optional[Query] = None,
- **kwargs)
-```
-
-Generate content using the Unify API.
-
-**Arguments**:
-
-- `user_prompt` - A string containing the user prompt.
- If provided, messages must be None.
-
-- `system_prompt` - An optional string containing the system prompt.
-
-- `messages` - A list of messages comprising the conversation so far. If provided, user_prompt must be None.
-
-- `max_tokens` - The maximum number of tokens that can be generated in the chat completion.
- The total length of input tokens and generated tokens is limited by the model's context length.
- Defaults to the provider's default max_tokens when the value is None.
-
-- `stop` - Up to 4 sequences where the API will stop generating further tokens.
-
-- `stream` - If True, generates content as a stream. If False, generates content as a single response.
- Defaults to False.
-
-- `temperature` - What sampling temperature to use, between 0 and 2.
- Higher values like 0.8 will make the output more random,
- while lower values like 0.2 will make it more focused and deterministic.
- It is generally recommended to alter this or top_p, but not both.
-  Defaults to the provider's default temperature when the value is None.
-
-- `frequency_penalty` - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing
- frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
-
-- `logit_bias` - Modify the likelihood of specified tokens appearing in the completion.
- Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias
- value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to
- sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase
- likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the
- relevant token.
-
-- `logprobs` - Whether to return log probabilities of the output tokens or not. If true, returns the log
- probabilities of each output token returned in the content of message.
-
-- `top_logprobs` - An integer between 0 and 20 specifying the number of most likely tokens to return at each
- token position, each with an associated log probability. logprobs must be set to true if this parameter
- is used.
-
-- `n` - How many chat completion choices to generate for each input message. Note that you will be charged based
- on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
-
-- `presence_penalty` - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they
- appear in the text so far, increasing the model's likelihood to talk about new topics.
-
-- `response_format` - An object specifying the format that the model must output.
- Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the
- model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
- Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is
- valid JSON.
-
-- `seed` - If specified, a best effort attempt is made to sample deterministically, such that
- repeated requests with the same seed and parameters should return the same result. Determinism is not
- guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the
- backend.
-
-- `stream_options` - Options for streaming response. Only set this when you set stream: true.
-
-- `top_p` - An alternative to sampling with temperature, called nucleus sampling, where the
- model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens
- comprising the top 10% probability mass are considered. Generally recommended to alter this or temperature,
- but not both.
-
-- `tools` - A list of tools the model may call. Currently, only
- functions are supported as a tool. Use this to provide a list of functions the model may generate JSON
- inputs for. A max of 128 functions are supported.
-
-- `tool_choice` - Controls which (if any) tool is called by the
- model. none means the model will not call any tool and instead generates a message. auto means the model can
- pick between generating a message or calling one or more tools. required means the model must call one or
- more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}`
- forces the model to call that tool.
- none is the default when no tools are present. auto is the default if tools are present.
-
-- `parallel_tool_calls` - Whether to enable parallel function calling during tool use.
-
-- `use_custom_keys` - Whether to use custom API keys or our unified API keys with the backend provider.
-
-- `tags` - Arbitrary number of tags to classify this API query as needed. Helpful for
- generally grouping queries across tasks and users, for logging purposes.
-
-- `message_content_only` - If True, only return the message content
- chat_completion.choices[0].message.content.strip(" ") from the OpenAI return.
- Otherwise, the full response chat_completion is returned.
- Defaults to True.
-
-- `extra_headers` - Additional "passthrough" headers for the request which are provider-specific, and are not
- part of the OpenAI standard. They are handled by the provider-specific API.
-
-- `extra_query` - Additional "passthrough" query parameters for the request which are provider-specific, and are
- not part of the OpenAI standard. They are handled by the provider-specific API.
-
-- `kwargs` - Additional "passthrough" JSON properties for the body of the request, which are provider-specific,
- and are not part of the OpenAI standard. They will be handled by the provider-specific API.
-
-
-**Returns**:
-
- If stream is True, returns a generator yielding chunks of content.
- If stream is False, returns a single string response.
-
-
-**Raises**:
-
-- `UnifyError` - If an error occurs during content generation.
-
-
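-A usage sketch via the concrete `Unify` subclass documented below (the
-endpoint string is an illustrative assumption):
-
-```python
-from unify import Unify
-
-client = Unify(endpoint="llama-3-8b-chat@together-ai")
-
-# stream=False (the default) returns a single string response.
-print(client.generate(user_prompt="Hello!"))
-
-# stream=True returns a generator yielding chunks of content.
-for chunk in client.generate(user_prompt="Hello!", stream=True):
-    print(chunk, end="")
-```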
-
----
-
-### \_\_init\_\_
-
-```python
-def __init__(endpoint: Optional[str] = None,
- model: Optional[str] = None,
- provider: Optional[str] = None,
- api_key: Optional[str] = None) -> None
-```
-
-Initialize the Unify client.
-
-**Arguments**:
-
-- `endpoint` - Endpoint name in OpenAI API format: `<model>@<provider>`.
-  Defaults to None.
-
-- `model` - Name of the model.
-
-- `provider` - Name of the provider.
-
-- `api_key` - API key for accessing the Unify API.
- If None, it attempts to retrieve the API key from the
- environment variable UNIFY_KEY.
- Defaults to None.
-
-
-**Raises**:
-
-- `UnifyError` - If the API key is missing.
-
-
-
----
-
-### model
-
-```python
-@property
-def model() -> str
-```
-
-Get the model name.
-
-**Returns**:
-
- The model name.
-
-
-
----
-
-### set\_model
-
-```python
-def set_model(value: str) -> None
-```
-
-Set the model name.
-
-**Arguments**:
-
-- `value` - The model name.
-
-
-
----
-
-### provider
-
-```python
-@property
-def provider() -> Optional[str]
-```
-
-Get the provider name.
-
-**Returns**:
-
- The provider name.
-
-
-
----
-
-### set\_provider
-
-```python
-def set_provider(value: str) -> None
-```
-
-Set the provider name.
-
-**Arguments**:
-
-- `value` - The provider name.
-
-
-
----
-
-### endpoint
-
-```python
-@property
-def endpoint() -> str
-```
-
-Get the endpoint name.
-
-**Returns**:
-
- The endpoint name.
-
-
-
----
-
-### set\_endpoint
-
-```python
-def set_endpoint(value: str) -> None
-```
-
-Set the endpoint name.
-
-**Arguments**:
-
-- `value` - The endpoint name.
-
-
-
----
-
-### get\_credit\_balance
-
-```python
-def get_credit_balance() -> Union[float, None]
-```
-
-Get the credits remaining on your account.
-
-**Returns**:
-
- The remaining credits on the account if successful, otherwise None.
-
-**Raises**:
-
-- `BadRequestError` - If there was an HTTP error.
-- `ValueError` - If there was an error parsing the JSON response.
-
-
-
-## Unify
-
-```python
-class Unify(Client)
-```
-
-Class for interacting with the Unify chat completions endpoint in a synchronous manner.
-
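-A minimal synchronous sketch (the endpoint string is an illustrative
-assumption; UNIFY_KEY is assumed to be set in the environment):
-
-```python
-from unify import Unify
-
-client = Unify(endpoint="llama-3-8b-chat@together-ai")
-print(client.generate(user_prompt="Tell me a joke."))
-print(client.get_credit_balance())
-```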
-
-
-## AsyncUnify
-
-```python
-class AsyncUnify(Client)
-```
-
-Class for interacting with the Unify chat completions endpoint in an asynchronous manner.
-
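-A minimal asynchronous sketch (this assumes `generate` is awaitable on
-`AsyncUnify`; the endpoint string is illustrative):
-
-```python
-import asyncio
-
-from unify import AsyncUnify
-
-async def main():
-    client = AsyncUnify(endpoint="llama-3-8b-chat@together-ai")
-    response = await client.generate(user_prompt="Hello!")
-    print(response)
-
-asyncio.run(main())
-```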
-
diff --git a/python/exceptions.mdx b/python/exceptions.mdx
deleted file mode 100644
index 4643f8ca0..000000000
--- a/python/exceptions.mdx
+++ /dev/null
@@ -1,95 +0,0 @@
----
-title: 'exceptions'
----
-
-
-
-## UnifyError
-
-```python
-class UnifyError(Exception)
-```
-
-Base class for all custom exceptions in the Unify application.
-
-
-
-## BadRequestError
-
-```python
-class BadRequestError(UnifyError)
-```
-
-Exception raised for HTTP 400 Bad Request errors.
-
-
-
-## AuthenticationError
-
-```python
-class AuthenticationError(UnifyError)
-```
-
-Exception raised for HTTP 401 Unauthorized errors.
-
-
-
-## PermissionDeniedError
-
-```python
-class PermissionDeniedError(UnifyError)
-```
-
-Exception raised for HTTP 403 Forbidden errors.
-
-
-
-## NotFoundError
-
-```python
-class NotFoundError(UnifyError)
-```
-
-Exception raised for HTTP 404 Not Found errors.
-
-
-
-## ConflictError
-
-```python
-class ConflictError(UnifyError)
-```
-
-Exception raised for HTTP 409 Conflict errors.
-
-
-
-## UnprocessableEntityError
-
-```python
-class UnprocessableEntityError(UnifyError)
-```
-
-Exception raised for HTTP 422 Unprocessable Entity errors.
-
-
-
-## RateLimitError
-
-```python
-class RateLimitError(UnifyError)
-```
-
-Exception raised for HTTP 429 Too Many Requests errors.
-
-
-
-## InternalServerError
-
-```python
-class InternalServerError(UnifyError)
-```
-
-Exception raised for HTTP 500 Internal Server Error errors.
-
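-A sketch of catching these errors (the `unify.exceptions` module path is an
-assumption based on this page's title):
-
-```python
-from unify import Unify
-from unify.exceptions import AuthenticationError, RateLimitError, UnifyError
-
-try:
-    client = Unify(endpoint="llama-3-8b-chat@together-ai")
-    print(client.generate(user_prompt="Hello!"))
-except AuthenticationError:
-    print("Check that UNIFY_KEY is set to a valid API key.")
-except RateLimitError:
-    print("Too many requests; back off and retry.")
-except UnifyError as err:
-    # UnifyError is the base class, so this catches any remaining
-    # Unify-specific failure.
-    print(f"Request failed: {err}")
-```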
-
diff --git a/python/multi_llm.mdx b/python/multi_llm.mdx
deleted file mode 100644
index 105a8d23d..000000000
--- a/python/multi_llm.mdx
+++ /dev/null
@@ -1,172 +0,0 @@
----
-title: 'multi_llm'
----
-
-
-
-## MultiLLMClient
-
-```python
-class MultiLLMClient(ABC)
-```
-
-Abstract base class for interacting with multiple LLM endpoints simultaneously.
-
----
-
-### get\_credit\_balance
-
-```python
-def get_credit_balance() -> Union[float, None]
-```
-
-Get the credits remaining on your account.
-
-**Returns**:
-
- The remaining credits on the account if successful, otherwise None.
-
-**Raises**:
-
-- `BadRequestError` - If there was an HTTP error.
-- `ValueError` - If there was an error parsing the JSON response.
-
-
-
----
-
-### generate
-
-```python
-@abstractmethod
-def generate(user_prompt: Optional[str] = None,
- system_prompt: Optional[str] = None,
- messages: Optional[Iterable[ChatCompletionMessageParam]] = None,
- *,
- max_tokens: Optional[int] = 1024,
- stop: Union[Optional[str], List[str]] = None,
- temperature: Optional[float] = 1.0,
- frequency_penalty: Optional[float] = None,
- logit_bias: Optional[Dict[str, int]] = None,
- logprobs: Optional[bool] = None,
- top_logprobs: Optional[int] = None,
- n: Optional[int] = None,
- presence_penalty: Optional[float] = None,
- response_format: Optional[ResponseFormat] = None,
- seed: Optional[int] = None,
- top_p: Optional[float] = None,
- tools: Optional[Iterable[ChatCompletionToolParam]] = None,
- tool_choice: Optional[ChatCompletionToolChoiceOptionParam] = None,
- parallel_tool_calls: Optional[bool] = None,
- use_custom_keys: bool = False,
- tags: Optional[List[str]] = None,
- message_content_only: bool = True,
- extra_headers: Optional[Headers] = None,
- extra_query: Optional[Query] = None,
- **kwargs) -> Union[Generator[str, None, None], str]
-```
-
-Generate content using the Unify API.
-
-**Arguments**:
-
-- `user_prompt` - A string containing the user prompt.
- If provided, messages must be None.
-
-- `system_prompt` - An optional string containing the system prompt.
-
-- `messages` - A list of messages comprising the conversation so far. If provided, user_prompt must be None.
-
-- `max_tokens` - The maximum number of tokens that can be generated in the chat completion.
- The total length of input tokens and generated tokens is limited by the model's context length.
- Defaults to the provider's default max_tokens when the value is None.
-
-- `stop` - Up to 4 sequences where the API will stop generating further tokens.
-
-- `temperature` - What sampling temperature to use, between 0 and 2.
- Higher values like 0.8 will make the output more random,
- while lower values like 0.2 will make it more focused and deterministic.
- It is generally recommended to alter this or top_p, but not both.
-  Defaults to the provider's default temperature when the value is None.
-
-- `frequency_penalty` - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing
- frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
-
-- `logit_bias` - Modify the likelihood of specified tokens appearing in the completion.
- Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias
- value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to
- sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase
- likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the
- relevant token.
-
-- `logprobs` - Whether to return log probabilities of the output tokens or not. If true, returns the log
- probabilities of each output token returned in the content of message.
-
-- `top_logprobs` - An integer between 0 and 20 specifying the number of most likely tokens to return at each
- token position, each with an associated log probability. logprobs must be set to true if this parameter
- is used.
-
-- `n` - How many chat completion choices to generate for each input message. Note that you will be charged based
- on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
-
-- `presence_penalty` - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they
- appear in the text so far, increasing the model's likelihood to talk about new topics.
-
-- `response_format` - An object specifying the format that the model must output.
- Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the
- model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
- Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is
- valid JSON.
-
-- `seed` - If specified, a best effort attempt is made to sample deterministically, such that
- repeated requests with the same seed and parameters should return the same result. Determinism is not
- guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the
- backend.
-
-- `top_p` - An alternative to sampling with temperature, called nucleus sampling, where the
- model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens
- comprising the top 10% probability mass are considered. Generally recommended to alter this or temperature,
- but not both.
-
-- `tools` - A list of tools the model may call. Currently, only
- functions are supported as a tool. Use this to provide a list of functions the model may generate JSON
- inputs for. A max of 128 functions are supported.
-
-- `tool_choice` - Controls which (if any) tool is called by the
- model. none means the model will not call any tool and instead generates a message. auto means the model can
- pick between generating a message or calling one or more tools. required means the model must call one or
- more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}`
- forces the model to call that tool.
- none is the default when no tools are present. auto is the default if tools are present.
-
-- `parallel_tool_calls` - Whether to enable parallel function calling during tool use.
-
-- `use_custom_keys` - Whether to use custom API keys or our unified API keys with the backend provider.
-
-- `tags` - Arbitrary number of tags to classify this API query as needed. Helpful for
- generally grouping queries across tasks and users, for logging purposes.
-
-- `message_content_only` - If True, only return the message content
- chat_completion.choices[0].message.content.strip(" ") from the OpenAI return.
- Otherwise, the full response chat_completion is returned.
- Defaults to True.
-
-- `extra_headers` - Additional "passthrough" headers for the request which are provider-specific, and are not
- part of the OpenAI standard. They are handled by the provider-specific API.
-
-- `extra_query` - Additional "passthrough" query parameters for the request which are provider-specific, and are
- not part of the OpenAI standard. They are handled by the provider-specific API.
-
-- `kwargs` - Additional "passthrough" JSON properties for the body of the request, which are provider-specific,
- and are not part of the OpenAI standard. They will be handled by the provider-specific API.
-
-
-**Returns**:
-
- A dictionary of responses from each of the LLM clients.
-
-
-**Raises**:
-
-- `UnifyError` - If an error occurs during content generation.
-
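-A usage sketch (the concrete subclass name `MultiLLM` and its `endpoints`
-argument are hypothetical, since this page only documents the abstract base):
-
-```python
-from unify import MultiLLM  # hypothetical concrete subclass
-
-client = MultiLLM(endpoints=["llama-3-8b-chat@together-ai", "gpt-4o@openai"])
-
-# The result maps each endpoint to its response.
-responses = client.generate(user_prompt="Hello!")
-for endpoint, response in responses.items():
-    print(endpoint, "->", response)
-```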
diff --git a/python/utils.mdx b/python/utils.mdx
deleted file mode 100644
index f51220a15..000000000
--- a/python/utils.mdx
+++ /dev/null
@@ -1,342 +0,0 @@
----
-title: 'utils'
----
-
-
-
----
-
-### list\_models
-
-```python
-def list_models(provider: Optional[str] = None,
- api_key: Optional[str] = None) -> List[str]
-```
-
-Get a list of available models, either in total or for a specific provider.
-
-**Arguments**:
-
-- `provider` - If specified, returns the list of models supported by this provider.
-- `api_key` - If specified, the Unify API key to be used. Defaults
-  to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
- A list of available model names if successful, otherwise an empty list.
-
-**Raises**:
-
-- `BadRequestError` - If there was an HTTP error.
-- `ValueError` - If there was an error parsing the JSON response.
-
-
-
----
-
-### list\_providers
-
-```python
-def list_providers(model: Optional[str] = None,
- api_key: Optional[str] = None) -> List[str]
-```
-
-Get a list of available providers, either in total or for a specific model.
-
-**Arguments**:
-
-- `model` - If specified, returns the list of providers supporting this model.
-- `api_key` - If specified, the Unify API key to be used. Defaults
-  to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
- A list of provider names associated with the model if successful, otherwise an empty list.
-
-**Raises**:
-
-- `BadRequestError` - If there was an HTTP error.
-- `ValueError` - If there was an error parsing the JSON response.
-
-
-
----
-
-### list\_endpoints
-
-```python
-def list_endpoints(model: Optional[str] = None,
- provider: Optional[str] = None,
- api_key: Optional[str] = None) -> List[str]
-```
-
-Get a list of available endpoints, either in total or for a specific model or provider.
-
-**Arguments**:
-
-- `model` - If specified, returns the list of endpoints that serve this model.
-- `provider` - If specified, returns the list of endpoints offered by this provider.
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
- A list of endpoint names if successful, otherwise an empty list.
-
-**Raises**:
-
-- `BadRequestError` - If there was an HTTP error.
-- `ValueError` - If there was an error parsing the JSON response.
-
-
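-A quick sketch of the three listing helpers (assuming they are importable
-from the top-level `unify` package; model and provider names are illustrative):
-
-```python
-import unify
-
-print(unify.list_models(provider="openai"))
-print(unify.list_providers(model="llama-3-8b-chat"))
-print(unify.list_endpoints(model="llama-3-8b-chat"))
-```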
-
----
-
-### upload\_dataset\_from\_file
-
-```python
-def upload_dataset_from_file(name: str,
- path: str,
- api_key: Optional[str] = None) -> str
-```
-
-Uploads a local file as a dataset to the platform.
-
-**Arguments**:
-
-- `name` - Name given to the uploaded dataset.
-- `path` - Path to the file to be uploaded.
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
-  Info message with the response from the HTTP endpoint.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
-
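-A usage sketch (the file name, and the assumption that it holds one
-JSON object with a `prompt` key per line, are illustrative):
-
-```python
-import unify
-
-print(unify.upload_dataset_from_file(name="my_dataset", path="prompts.jsonl"))
-```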
-
----
-
-### upload\_dataset\_from\_dictionary
-
-```python
-def upload_dataset_from_dictionary(name: str,
- content: List[Dict[str, str]],
- api_key: Optional[str] = None) -> str
-```
-
-Uploads a list of dictionaries as a dataset to the platform.
-Each dictionary in the list must contain a `prompt` key.
-
-**Arguments**:
-
-- `name` - Name given to the uploaded dataset.
-- `content` - List of dictionaries to be uploaded as the dataset. Each dictionary must contain a `prompt` key.
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
-  Info message with the response from the HTTP endpoint.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
-
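-A usage sketch (the dataset name and contents are illustrative):
-
-```python
-import unify
-
-dataset = [
-    {"prompt": "What is the capital of Spain?"},
-    {"prompt": "Name the longest river in Europe."},
-]
-print(unify.upload_dataset_from_dictionary(name="geo_questions", content=dataset))
-```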
-
----
-
-### delete\_dataset
-
-```python
-def delete_dataset(name: str, api_key: Optional[str] = None) -> str
-```
-
-Deletes a dataset from the platform.
-
-**Arguments**:
-
-- `name` - Name given to the uploaded dataset.
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
-- `str` - Info message with the response from the HTTP endpoint.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
-
-
----
-
-### download\_dataset
-
-```python
-def download_dataset(name: str,
- path: Optional[str] = None,
- api_key: Optional[str] = None) -> Optional[str]
-```
-
-Downloads a dataset from the platform.
-
-**Arguments**:
-
-- `name` - Name of the dataset to download.
-- `path` - If specified, path to save the dataset.
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
-  If path is not specified, returns the dataset content; otherwise saves the dataset to path and returns None.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
-
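-A sketch of both modes (the dataset name and path are illustrative):
-
-```python
-import unify
-
-# Without a path, the dataset content is returned directly.
-content = unify.download_dataset(name="my_dataset")
-
-# With a path, the dataset is written to disk and None is returned.
-unify.download_dataset(name="my_dataset", path="my_dataset.jsonl")
-```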
-
----
-
-### list\_datasets
-
-```python
-def list_datasets(api_key: Optional[str] = None) -> List[str]
-```
-
-Fetches a list of all uploaded datasets.
-
-**Arguments**:
-
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
- List with the names of the uploaded datasets.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
-
-
----
-
-### evaluate
-
-```python
-def evaluate(dataset: str,
- endpoints: List[str],
- api_key: Optional[str] = None) -> str
-```
-
-Evaluates a list of endpoints on a given dataset.
-
-**Arguments**:
-
-- `dataset` - Name of the uploaded dataset to evaluate on.
-- `endpoints` - List of endpoints to evaluate.
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
-  Info message with the response from the HTTP endpoint.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
-
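-A usage sketch (the dataset and endpoint names are illustrative):
-
-```python
-import unify
-
-print(unify.evaluate(
-    dataset="my_dataset",
-    endpoints=["llama-3-8b-chat@together-ai", "gpt-4o@openai"],
-))
-```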
-
----
-
-### delete\_evaluation
-
-```python
-def delete_evaluation(name: str,
- endpoint: str,
- api_key: Optional[str] = None) -> str
-```
-
-Deletes an evaluation from the platform.
-
-**Arguments**:
-
-- `name` - Name of the dataset in the evaluation.
-- `endpoint` - Name of the endpoint whose evaluation will be removed.
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
-  Info message with the response from the HTTP endpoint.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
-
-
----
-
-### list\_evaluations
-
-```python
-def list_evaluations(dataset: Optional[str] = None,
- api_key: Optional[str] = None) -> List[str]
-```
-
-Fetches a list of all evaluations.
-
-**Arguments**:
-
-- `dataset` - Name of the dataset to fetch evaluations for. If not specified, all evaluations will be returned.
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
-  List with the names of the evaluations.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
-
-
----
-
-### get\_credits
-
-```python
-def get_credits(api_key: Optional[str] = None) -> float
-```
-
-Returns the credits remaining in the user account, in USD.
-
-**Arguments**:
-
-- `api_key` - If specified, the Unify API key to be used. Defaults to the value in the `UNIFY_KEY` environment variable.
-
-
-**Returns**:
-
- The credits remaining in USD.
-
-**Raises**:
-
-- `ValueError` - If there was an HTTP error.
-
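-A one-line usage sketch:
-
-```python
-import unify
-
-print(f"${unify.get_credits()} remaining")
-```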
-