**ExecuTorch** is a novel framework created by Meta that enables running AI models on devices such as mobile phones or microcontrollers. React Native ExecuTorch bridges the gap between React Native and native platform capabilities, allowing developers to run AI models locally on mobile devices with state-of-the-art performance, without requiring deep knowledge of native code or machine learning internals.
**`modelSource`** - A string that specifies the location of the model binary. For more information, take a look at the [loading models](../fundamentals/loading-models.md) section.
**`tokenizerSource`** - URL to the JSON file which contains the tokenizer.
**`tokenizerConfigSource`** - URL to the JSON file which contains the tokenizer config.
**`preventLoad?`** - Boolean that prevents automatic model loading (and, on first use, downloading the model data) after running the hook.
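A minimal sketch of wiring these parameters together. The hook name `useLLM` matches the library's exports, but the URLs below are placeholders for illustration, not real hosted files:

```tsx
import { useLLM } from 'react-native-executorch';

function ChatScreen() {
  // The URLs are placeholders; point them at your hosted model files.
  const llm = useLLM({
    modelSource: 'https://example.com/model.pte',
    tokenizerSource: 'https://example.com/tokenizer.json',
    tokenizerConfigSource: 'https://example.com/tokenizer_config.json',
    preventLoad: false, // start loading (and downloading on first use) right away
  });
  // ... render UI using the fields documented below
}
```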
| Field | Type | Description |
| ----- | ---- | ----------- |
|`messageHistory`|`Message[]`| History containing all messages in the conversation. This field is updated after the model responds to `sendMessage`. |
|`response`|`string`| State of the generated response. This field is updated with each token generated by the model. |
|`isReady`|`boolean`| Indicates whether the model is ready. |
|`isGenerating`|`boolean`| Indicates whether the model is currently generating a response. |
|`downloadProgress`|`number`| Represents the download progress as a value between 0 and 1, indicating how much of the model file has been retrieved. |
|`error`| <code>string | null</code> | Contains the error message if the model failed to load. |
|`configure`|`({ chatConfig?: Partial<ChatConfig>, toolsConfig?: ToolsConfig }) => void`| Configures chat and tool calling. See more details in [configuring the model](#configuring-the-model). |
|`sendMessage`|`(message: string, tools?: LLMTool[]) => Promise<void>`| Method to add a user message to the conversation. After the model responds, `messageHistory` will be updated with both the user message and the model response. |
|`deleteMessage`|`(index: number) => void`| Deletes all messages starting with the message at the `index` position. After deletion, `messageHistory` will be updated. |
|`generate`|`(messages: Message[], tools?: LLMTool[]) => Promise<void>`| Runs the model to complete the chat passed in the `messages` argument. It doesn't manage conversation context. |
|`forward`|`(input: string) => Promise<void>`| Runs model inference with a raw input string. You need to provide the entire conversation and prompt (in the correct format and with special tokens!) in the input string. It doesn't manage conversation context. It is intended for users who need access to the model itself without any wrapper. If you want a simple chat with the model, consider using `sendMessage`. |
|`interrupt`|`() => void`| Function to interrupt the current inference. |
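For example, these fields can drive a simple UI. A sketch continuing the hook call above; the component structure is illustrative:

```tsx
import { Button, Text, View } from 'react-native';

// Inside the component that called the hook:
if (!llm.isReady) {
  // downloadProgress goes from 0 to 1 while the model file is retrieved
  return <Text>Downloading model: {Math.round(llm.downloadProgress * 100)}%</Text>;
}
return (
  <View>
    <Text>{llm.response}</Text>
    {llm.isGenerating && <Button title="Stop" onPress={llm.interrupt} />}
  </View>
);
```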
## Configuring the model
To configure the model (i.e. change the system prompt, load an initial conversation history, or manage tool calling) you can use the `configure` method. It accepts an object with the following fields:
**`chatConfig`** - Object that configures chat management.
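As a sketch of a `configure` call; the specific `ChatConfig` fields shown here, `systemPrompt` and `initialMessageHistory`, are assumptions used only for illustration:

```tsx
llm.configure({
  chatConfig: {
    // Field names below are illustrative assumptions about ChatConfig:
    systemPrompt: 'You are a helpful, concise assistant.',
    initialMessageHistory: [{ role: 'user', content: 'Hi!' }],
  },
});
```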
## Sending a message
In order to send a message to the model, one can use the following code:
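The snippet below is a sketch rather than verbatim documentation; it assumes `llm` is the object returned by the hook, and the prompt string is illustrative:

```tsx
// Appends the user message to the conversation and starts generation.
// While tokens stream in, `response` updates; once the model finishes,
// `messageHistory` contains both the user message and the model's reply.
await llm.sendMessage('What is the capital of France?');
```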