Looking to supercharge your React applications with AI capabilities? Meet OpenAssistant - your new favorite tool for seamlessly integrating AI power into existing React apps without the hassle.

Unlike general-purpose chatbot libraries, OpenAssistant takes a different approach. It's specifically engineered to be the bridge between Large Language Models (LLMs) and your application's functionality. Think of it as your application's AI co-pilot that can not only chat with users but also execute complex tasks by leveraging your app's features and external AI plugins.
Check out the following examples of OpenAssistant in action:

| kepler.gl AI Assistant (kepler.gl) | GeoDa.AI AI Assistant (geoda.ai) |
|----|----|
|[<img width="215" alt="Screenshot 2024-12-08 at 9 12 22 PM" src="https://github.com/user-attachments/assets/edc11aee-8945-434b-bec9-cc202fee547c">](https://kepler.gl)|[<img width="240" alt="Screenshot 2024-12-08 at 9 13 43 PM" src="https://github.com/user-attachments/assets/de418af5-7663-48fb-9410-74b4750bc944">](https://geoda.ai)|

## 🌟 Features
```bash
# Install the core package
npm install @openassistant/core @openassistant/ui
```
## 🚀 Quick Start
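Here is a minimal sketch of rendering the assistant in a React app, assuming the `AiAssistant` component is exported from `@openassistant/ui`. The props shown (model provider, model name, API key, welcome message) are illustrative assumptions; check the configuration tutorial linked below for the options your version actually supports.

```tsx
import { AiAssistant } from '@openassistant/ui';

// Minimal sketch: the prop names below (modelProvider, model, apiKey,
// welcomeMessage) are assumptions for illustration only; see the
// configuration tutorial linked after this example for the real options.
export default function App() {
  return (
    <AiAssistant
      modelProvider="openai"
      model="gpt-4o"
      apiKey="your-api-key"
      welcomeMessage="Hi, how can I help you today?"
    />
  );
}
```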
See the [tutorial](https://openassistant-doc.vercel.app/docs/tutorial-basics/add-config-ui) for more details.

To use the `Screenshot to Ask` feature, wrap your app with `ScreenshotWrapper` and pass `startScreenCapture` and `screenCapturedBase64` to the `AiAssistant` component, e.g. via Redux state; a sketch follows below. See an example in kepler.gl: [app.tsx](https://github.com/keplergl/kepler.gl/blob/master/examples/demo-app/src/app.tsx) and [assistant-component.tsx](https://github.com/keplergl/kepler.gl/blob/master/src/ai-assistant/src/components/ai-assistant-component.tsx), and the [tutorial](https://openassistant-doc.vercel.app/docs/tutorial-basics/screencapture) for more details.
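Here is a rough sketch of that wiring, using local React state in place of Redux for brevity. The `ScreenshotWrapper` export and the exact prop names are assumptions based on the description above, so follow the kepler.gl files and the tutorial for the precise interface.

```tsx
import { useState } from 'react';
import { AiAssistant, ScreenshotWrapper } from '@openassistant/ui';

// Sketch only: the exports and prop names here are assumptions based on the
// README; kepler.gl keeps this state in its Redux store instead of useState.
export function AppWithScreenshot() {
  const [startScreenCapture, setStartScreenCapture] = useState(false);
  const [screenCapturedBase64, setScreenCapturedBase64] = useState('');

  return (
    <ScreenshotWrapper
      startScreenCapture={startScreenCapture}
      setStartScreenCapture={setStartScreenCapture}
      setScreenCaptured={setScreenCapturedBase64}
    >
      {/* ...the rest of your application UI... */}
      <AiAssistant
        startScreenCapture={startScreenCapture}
        setStartScreenCapture={setStartScreenCapture}
        screenCapturedBase64={screenCapturedBase64}
      />
    </ScreenshotWrapper>
  );
}
```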
For projects using Tailwind CSS, you can add the following to your tailwind.config.js file:
```js
module.exports = {
  // ...
};
```
## 🎯 How to use

OpenAssistant lets users interact with the data and your application in a natural and creative way.
### 📸 Take a Screenshot to Ask
This feature enables users to capture a screenshot anywhere within the kepler.gl application and ask questions about the screenshot.

For example:
- users can take a screenshot of the map (or part of the map) and ask questions about it, e.g. _`how many counties are in this screenshot`_,
- or take a screenshot of the configuration panel and ask questions about how to use it, e.g. _`How can I adjust the parameters in this panel`_,
- users can even take a screenshot of the plots in the chat panel and ask questions about them, e.g. _`Can you give me a summary of the plot?`_.


#### How to use this feature?
1. Click the "Screenshot to Ask" button in the chat interface.
2. A semi-transparent overlay will appear.
3. Click and drag to select the area you want to capture.
4. Release to complete the capture.
5. The screenshot will be displayed in the chat interface.
6. You can click the x button in the top right corner of the screenshot to delete it.
### 🗣️ Talk to Ask
This feature enables users to "talk" to the AI assistant. After clicking the "Talk to Ask" button, users can start talking using the microphone. When they click the same button again, the AI assistant stops listening and sends the transcript to the input box.

When using the voice-to-text feature for the first time, users will be prompted to grant microphone access. The browser will display a permission dialog that looks like this:



After granting access, users can start talking to the AI assistant.
### 📚 Function Calling Support
#### 🤖 Why use LLM function tools?
Function calling enables the AI Assistant to perform specialized tasks that LLMs cannot handle directly, such as complex calculations, data analysis, visualization generation, and integration with external services. This allows the assistant to execute specific operations within your application while maintaining natural language interaction with users.
#### 🔒 Is my data secure?
Yes, the data you use in your application stays within the browser and will **never** be sent to the LLM. Using function tools, we can engineer the AI assistant to use only the metadata for function calling, e.g. the name of the dataset, the name of the layer, the names of the variables, etc. Here is a process diagram to show how the AI assistant works:

OpenAssistant provides great type support to help you create function tools. You can create a function tool by following the tutorial [here](https://openassistant-doc.vercel.app/docs/tutorial-basics/function-call).
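To make that metadata-only flow concrete, here is a conceptual sketch of a function tool: the LLM supplies only names, the callback resolves the actual values inside the browser, and only a computed summary is returned. The helper names and return shape are illustrative assumptions, not the OpenAssistant interface; the tutorial linked above documents the actual typings.

```ts
// Conceptual sketch only: names and shapes below are illustrative, not the
// OpenAssistant API. The LLM supplies metadata (dataset/variable names); the
// callback resolves the actual values locally in the browser.
type SummarizeArgs = { datasetName: string; variableName: string };

// Hypothetical app-side lookup; in a real app this reads from your own store.
function getValues(datasetName: string, variableName: string): number[] {
  const store: Record<string, Record<string, number[]>> = {
    sales: { revenue: [120, 80, 200, 150] },
  };
  return store[datasetName]?.[variableName] ?? [];
}

// The function tool the assistant can call. Only the summary goes back to the
// LLM; the raw values never leave the browser.
function summarizeVariable({ datasetName, variableName }: SummarizeArgs) {
  const values = getValues(datasetName, variableName);
  const mean = values.reduce((sum, v) => sum + v, 0) / Math.max(values.length, 1);
  return { datasetName, variableName, count: values.length, mean };
}
```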
OpenAssistant also provides plugins for function tools, which you can use in your application with just a few lines of code (see the sketch after the list below). For example,
- the [DuckDB plugin](https://openassistant-doc.vercel.app/docs/tutorial-basics/add-function-tool) allows the AI assistant to query your data using DuckDB. See a tutorial [here](https://openassistant-doc.vercel.app/docs/tutorial-extras/duckdb-plugin).
- the [ECharts plugin](https://openassistant-doc.vercel.app/docs/tutorial-basics/add-function-tool) allows the AI assistant to visualize data using ECharts. See a tutorial [here](https://openassistant-doc.vercel.app/docs/tutorial-extras/echarts-plugin).
- the [Kepler.gl plugin](https://openassistant-doc.vercel.app/docs/tutorial-basics/add-function-tool) allows the AI assistant to create beautiful maps. See a tutorial [here](https://openassistant-doc.vercel.app/docs/tutorial-extras/keplergl-plugin).
- the [GeoDa plugin](https://openassistant-doc.vercel.app/docs/tutorial-basics/add-function-tool) allows the AI assistant to apply spatial data analysis using GeoDa. See a tutorial [here](https://openassistant-doc.vercel.app/docs/tutorial-extras/geoda-plugin).
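As a rough illustration of the "few lines of code" claim, the sketch below registers a plugin-provided function tool with the assistant. The package name `@openassistant/duckdb`, the export `queryDuckDBFunctionDefinition`, and the `functions` prop are all assumptions made for this example; the plugin tutorials linked above document the actual exports and wiring.

```tsx
import { AiAssistant } from '@openassistant/ui';
// Hypothetical plugin import: the real package and export names are in the
// DuckDB plugin tutorial linked above.
import { queryDuckDBFunctionDefinition } from '@openassistant/duckdb';

export function AssistantWithPlugins() {
  return (
    <AiAssistant
      // 'functions' is an assumed prop name for registering function tools.
      functions={[queryDuckDBFunctionDefinition]}
    />
  );
}
```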