Added `generate` function and some minor reformats
### Type of change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update (improves or adds clarity to existing documentation)
### Tested on
- [x] iOS
- [ ] Android
### Related issues
#226
### Checklist
- [x] I have performed a self-review of my code
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings
---------
Co-authored-by: Norbert Klockiewicz <[email protected]>
```diff
@@ -85,12 +86,12 @@ type ResourceSource = string | number;
 
 type MessageRole = 'user' | 'assistant' | 'system';
 
-interface MessageType {
+interface Message {
   role: MessageRole;
   content: string;
 }
 interface ChatConfig {
-  initialMessageHistory: MessageType[];
+  initialMessageHistory: Message[];
   contextWindowLength: number;
   systemPrompt: string;
 }
```
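For quick reference, here is a minimal sketch of how the renamed types compose. The config values below are invented for illustration and are not part of this diff:

```typescript
// Types as introduced in the diff above.
type MessageRole = 'user' | 'assistant' | 'system';

interface Message {
  role: MessageRole;
  content: string;
}

interface ChatConfig {
  initialMessageHistory: Message[];
  contextWindowLength: number;
  systemPrompt: string;
}

// Hypothetical config object; the values are made up for illustration.
const config: ChatConfig = {
  systemPrompt: 'Be a helpful translator.',
  initialMessageHistory: [
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', content: 'Hi! What would you like translated?' },
  ],
  contextWindowLength: 6,
};
```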
```diff
@@ -136,7 +137,7 @@ Given computational constraints, our architecture is designed to support only on
 
 - **`systemPrompt`** - Often used to tell the model what is its purpose, for example - "Be a helpful translator".
 
-- **`initialMessageHistory`** - An array of `MessageType` objects that represent the conversation history. This can be used to provide initial context to the model.
+- **`initialMessageHistory`** - An array of `Message` objects that represent the conversation history. This can be used to provide initial context to the model.
 
 - **`contextWindowLength`** - The number of messages from the current conversation that the model will use to generate a response. The higher the number, the more context the model will have. Keep in mind that using larger context windows will result in longer inference time and higher memory usage.
 
```
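The `contextWindowLength` trade-off described in the docs above can be pictured as slicing the history before each inference. This is a hedged sketch of the idea only, not the library's actual implementation:

```typescript
interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

// Sketch: keep only the most recent `contextWindowLength` messages.
// Older messages fall out of the prompt, which is why larger windows
// cost more inference time and memory.
function windowedHistory(
  history: Message[],
  contextWindowLength: number
): Message[] {
  return contextWindowLength > 0 ? history.slice(-contextWindowLength) : [];
}
```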
```diff
@@ -150,18 +151,19 @@ Given computational constraints, our architecture is designed to support only on
-| `messageHistory` | `MessageType[]` | State of the generated response. This field is updated with each token generated by the model |
-| `response` | `string` | State of the generated response. This field is updated with each token generated by the model |
-| `isReady` | `boolean` | Indicates whether the model is ready |
-| `isGenerating` | `boolean` | Indicates whether the model is currently generating a response |
-| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval. |
-| `error` | <code>string | null</code> | Contains the error message if the model failed to load |
-| `sendMessage` | `(message: string, tools?: LLMTool[]) => Promise<void>` | Method to add user message to conversation. After model responds, `messageHistory` will be updated with both user message and model response. |
-| `deleteMessage` | `(index: number) => void` | Deletes all messages starting with message on `index` position. |
-| `runInference` | `(input: string) => Promise<void>` | Runs model inference with raw input string. You need to provide entire conversation and prompt (in correct format and with special tokens!) in input string to this method. It doesn't manage conversation context. It is intended for users that need access to the model itself without any wrapper. If you want simple chat with model consider using `sendMessage` |
-| `interrupt` | `() => void` | Function to interrupt the current inference |
+| `messageHistory` | `Message[]` | State of the generated response. This field is updated with each token generated by the model |
+| `response` | `string` | State of the generated response. This field is updated with each token generated by the model |
+| `isReady` | `boolean` | Indicates whether the model is ready |
+| `isGenerating` | `boolean` | Indicates whether the model is currently generating a response |
+| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval. |
+| `error` | <code>string | null</code> | Contains the error message if the model failed to load |
+| `sendMessage` | `(message: string, tools?: LLMTool[]) => Promise<void>` | Method to add user message to conversation. After model responds, `messageHistory` will be updated with both user message and model response. |
+| `deleteMessage` | `(index: number) => void` | Deletes all messages starting with message on `index` position. After deletion `messageHistory` will be updated. |
+| `generate` | `(messages: Message[], tools?: LLMTool[]) => Promise<void>` | Runs model to complete chat passed in `messages` argument. It doesn't manage conversation context. |
+| `forward` | `(input: string) => Promise<void>` | Runs model inference with raw input string. You need to provide entire conversation and prompt (in correct format and with special tokens!) in input string to this method. It doesn't manage conversation context. It is intended for users that need access to the model itself without any wrapper. If you want simple chat with model consider using `sendMessage` |
+| `interrupt` | `() => void` | Function to interrupt the current inference |
```
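To make the difference between the new `generate` and the managed `sendMessage` concrete, here is a hedged usage sketch. The `LLMController` shape is assumed from the table above (only the fields used are included), and how you obtain such an object is outside this diff:

```typescript
type MessageRole = 'user' | 'assistant' | 'system';

interface Message {
  role: MessageRole;
  content: string;
}

// Assumed subset of the fields listed in the table above.
interface LLMController {
  response: string;
  sendMessage: (message: string) => Promise<void>;
  generate: (messages: Message[]) => Promise<void>;
  interrupt: () => void;
}

// `generate` completes exactly the chat you pass in and, per the table,
// does not update the managed `messageHistory` the way `sendMessage` does.
async function translateOnce(llm: LLMController): Promise<string> {
  await llm.generate([
    { role: 'system', content: 'Be a helpful translator.' },
    { role: 'user', content: 'Translate "good morning" into Polish.' },
  ]);
  // Assumes `response` holds the completed text once `generate` resolves.
  return llm.response;
}
```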