Commit

chore: update README
luochen1990 committed Aug 7, 2024
1 parent 6623b13 commit bf23d73
Showing 2 changed files with 55 additions and 8 deletions.
25 changes: 17 additions & 8 deletions README.md
@@ -6,15 +6,23 @@ AI Powered
Motivation
---

Currently, AI (primarily referring to Large Language Models) has reached a stage of considerable practicality, yet many traditional software applications have not yet benefited from it. This project aims to provide convenient tools for integrating AI capabilities into various software.
Currently, AI has reached a stage of considerable practicality, yet many traditional software applications have not yet benefited from it. This project aims to provide convenient tools for integrating AI capabilities (primarily Large Language Models) into all kinds of software.

With these tools, you don't need to redesign your entire application to leverage AI capabilities, nor do you need to add any user-visible functional modules. Whenever you find a function in your project that could be handled better by AI, you can simply replace its implementation with AI.

Even so, you don't need to learn anything about the OpenAI SDK to do this, and it takes only a few minutes!

Usage
---

Install with `pip install ai_powered` or `poetry add ai_powered`.

Then provide your API key via environment variables ([more details](/doc/en/Configuration.md)):

```shell
export OPENAI_API_KEY=---YOUR-REAL-API-KEY---
```

The library provides the following tools:

### `@ai_powered` Decorator
@@ -27,6 +35,7 @@

```python
from ai_powered import ai_powered

@ai_powered
def get_python_expression(expr: str) -> str:
""" Convert the user-input mathematical expression into a valid Python expression """
...
```
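
A hypothetical call might look like this (the exact output depends on the model, so treat it as a sketch rather than guaranteed behaviour):

```python
# Hypothetical usage: the decorated function is called like any normal function,
# but its result is produced by the LLM, so the exact string may vary.
expr = get_python_expression("the square root of two plus three")
print(expr)  # e.g. "math.sqrt(2) + 3"
```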

You can also use more complex data structures for parameters and return values, but make sure they have complete type annotations.
@@ -41,6 +50,7 @@ class UserInfo:

```python
@ai_powered
def extract_user_info(raw_text: str) -> UserInfo:
    ''' Extract user information from this self-introduction '''
    ...
```
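
A hedged usage sketch (the `UserInfo` fields are not shown in the snippet above, so the printed result is only illustrative):

```python
# Hypothetical usage: the LLM extracts a UserInfo instance from free-form text.
info = extract_user_info("Hi, I'm Alice, a 30-year-old engineer from Berlin.")
print(info)  # a UserInfo dataclass instance; its fields depend on how UserInfo is defined
```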

More examples can be found [here](/test/examples/ai_powered_decorator/)
@@ -91,14 +101,13 @@ More examples can be found [here](/test/examples/chat_bot/)
Current Limitations and Future Plans
----------------------------

- Currently, only a Python implementation is provided, but in fact, this model can be replicated in any other language that supports runtime type annotations.
- At present, OpenAI's function calling capability cannot recognize references in the schema, so problems may arise when types become complex enough to generate references. This can be alleviated with deref, but when encountering recursive types, ref has to be kept, so this issue may ultimately require LLM providers to add such data in their training datasets to truly resolve the problem.
- The data generated by the current LLM does not strictly adhere to the provided JSON Schema 100% of the time. The error rate may decrease as LLM providers continue to train, but in the end, we may need to introduce a retry mechanism to get it to approach 100% correctness (a retry mechanism is currently being planned).
- Currently, only a Python implementation is provided, but the approach could be replicated in any other language that supports runtime type annotations.
- The data generated by current LLMs does not strictly adhere to the provided JSON Schema 100% of the time. The error rate may decrease as LLM providers continue training, but in the end we may need a retry mechanism to approach 100% correctness (a retry mechanism is currently being planned; see the sketch after this list). Things have since changed with [OpenAI's structured outputs](https://openai.com/index/introducing-structured-outputs-in-the-api/).
- At present, recursive structures are not supported: some LLMs' function calling cannot recognize references in the schema, so we dereference all references in the schema, but dereferencing does not work with recursive types. Ultimately, LLM providers may need to include such data in their training sets to truly resolve this.
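
As an illustration only (not the library's actual implementation), a retry loop of this kind could validate the model output against the JSON Schema and ask again on failure; `call_llm` below is a hypothetical callable standing in for the real request:

```python
import json
from typing import Any, Callable

from jsonschema import ValidationError, validate  # third-party: pip install jsonschema

def ask_until_valid(call_llm: Callable[[str], str], prompt: str, schema: dict[str, Any], max_retries: int = 3) -> Any:
    """Hypothetical retry loop: re-ask the model until its JSON output matches the schema."""
    feedback = ""
    for _ in range(max_retries):
        raw = call_llm(prompt + feedback)
        try:
            data = json.loads(raw)
            validate(instance=data, schema=schema)
            return data
        except (json.JSONDecodeError, ValidationError) as err:
            # Feed the error back so the next attempt can correct itself.
            feedback = f"\n\nYour previous answer was invalid ({err}). Please answer again."
    raise ValueError(f"no schema-conforming answer after {max_retries} attempts")
```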

Regarding Code Contribution
--------------------------

1. The project will always be open source, but I haven't decided which open-source license to use yet. If you don't mind this, you can directly PR; otherwise, we might need to discuss the license issue first.
2. Currently, all code is under Pyright strict mode type checking, and any type errors will be blocked by GitHub Actions. We do not recommend using `Any` or `#type: ignore` unless absolutely necessary.
3. Test coverage will be continuously monitored. It is recommended to always provide tests for your code, and even prepare the tests before coding.
4. Regarding the development environment, it is recommended to install `nix` and `direnv` so that you automatically get a usable development environment. Of course, `poetry shell` is also a good choice (if you are already using poetry).
1. Currently, all code is type-checked under Pyright strict mode, and any type errors will be blocked by GitHub Actions. We do not recommend using `Any` or `# type: ignore` unless absolutely necessary.
2. Test coverage will be continuously monitored. It is recommended to always provide tests for your code, and even to write the tests before coding. Tip: you can mark tests for WIP features with `@pytest.mark.xfail` (see the sketch after this list).
3. Regarding the development environment, it is recommended to install `nix` and `direnv` so that you automatically get a usable environment. Of course, `poetry shell` is also a good choice (if you are already using poetry).
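
A minimal sketch of the `xfail` tip from point 2; the `summarize` function and the test below are hypothetical:

```python
import pytest

from ai_powered import ai_powered

@ai_powered
def summarize(text: str) -> str:
    """ Summarize the given text in one sentence (hypothetical WIP feature) """
    ...

@pytest.mark.xfail(reason="WIP: summary length is not reliably bounded yet")
def test_summarize_is_short():
    assert len(summarize("A long paragraph about the history of computing ...").split()) < 30
```
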
38 changes: 38 additions & 0 deletions doc/en/Configuration.md
@@ -0,0 +1,38 @@
Configuration
=============

### Provide your API key via environment variables:

```shell
export OPENAI_API_KEY=YOUR-REAL-API-KEY
```

### Choose a model name:

```shell
export OPENAI_MODEL_NAME=gpt-4o
```

### Specify an API base URL (for non-OpenAI models):

```shell
export OPENAI_BASE_URL=https://api.deepseek.com
```

### Declare the features the model supports to enable Compatibility Mode:

```shell
export OPENAI_MODEL_FEATURES=tools
```

This tells `ai-powered` to use the `tools` (function calling) feature to get a structured response.

or just:

```shell
export OPENAI_MODEL_FEATURES=
```

This tells `ai-powered` to use normal chat mode to get a structured response.

All feature definitions can be found [here](/src/ai_powered/llm/definitions.py).
