All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog and this project adheres to Semantic Versioning.
- fix(chatwithtools): Expect a list of tool messages
- fix(main): Don't drop background task when app is disabled
- fix: Only run background thread once
- fix(summarize): Improve prompt
- fix(chat): Expect JSON-stringified messages `{role, content}` in history
- fix: Fix a failing import
- enh(summarize): Try to make it use bulleted lists for longer texts
- enh: Implement chatwithtools task type
- feat: Use chunking if the text doesn't fit the context
- fix(summarize): Use a better algorithm for chunked summaries
- fix(summarize): Always summarize at least once
- fix(ci): app_api is pre-installed from NC 31 (#37)
- update docker image version
- update context size for llama 3.1
- fix the filename of the llama 3.1 model in the config
- catch JSONDecodeError when the server is in maintenance mode
- better app_enabled handling
- compare uppercase COMPUTE_DEVICE value (#27)
- Catch network exceptions and keep the loop going
- Migrate default config to llama compatible config
- Use COMPUTE_DEVICE to determine gpu offloading
- Use TaskProcessingProvider class for registration
- Better handling of app enabled state
- Download models on /init
- Disable ContextWrite chain as it does not work with Llama 3 / 3.1
- Requires Nextcloud 30 and AppAPI v3
- feat: Update prompts and add new task types
- feat: Add task processing API