admin_manual/ai/ai_as_a_service.rst (+9, -1)
@@ -11,8 +11,16 @@ Installation

 In order to use these providers you will need to install the respective app from the app store:

-* ``integration_openai`` (With this application, you can also connect to a self-hosted LocalAI instance or to any service that implements an API similar to OpenAI, for example Plusserver or MistralAI.)
+* ``integration_openai``
 * ``integration_replicate``

 You can then add your API token and rate limits in the administration settings and set the providers live in the "Artificial intelligence" section of the admin settings.
+
+OpenAI integration
+------------------
+
+With this application, you can also connect to a self-hosted LocalAI or Ollama instance, or to any service that implements an API similar enough to the OpenAI API, for example Plusserver or MistralAI.
+
+Do note, however, that we test the Assistant tasks this app implements only with OpenAI models and only against the OpenAI API; we thus cannot guarantee that other models and APIs will work.
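The endpoint for a self-hosted instance can be changed in the app's admin settings; as a sketch, the same could also be scripted via occ, assuming the app stores its base URL under a ``url`` app-config key (the key name and the local port are illustrative, not confirmed by this diff):

.. code-block::

    # Point integration_openai at a local OpenAI-compatible endpoint (config key assumed)
    occ config:app:set integration_openai url --value="http://localhost:8080/v1"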
admin_manual/ai/app_assistant.rst (+18, -12)
@@ -63,6 +63,20 @@ In order to make use of text processing features in the assistant, you will need

 * :ref:`llm2<ai-app-llm2>` - Runs open source AI language models locally on your own server hardware (Customer support available upon request)
 * *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)

+These apps currently implement the following Assistant Tasks:
+
+* *Generate text* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+* *Summarize* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+* *Generate headline* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+* *Extract topics* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+
+Additionally, *integration_openai* also implements the following Assistant Tasks:
+
+* *Context write* (Tested with OpenAI GPT-3.5)
+* *Reformulate text* (Tested with OpenAI GPT-3.5)
+
+These tasks may work with other models, but we can give no guarantees.
+
 Text-To-Image
 ~~~~~~~~~~~~~
@@ -79,6 +93,7 @@ In order to make use of our special Context Chat feature, offering in-context in

 * :ref:`context_chat + context_chat_backend<ai-app-context_chat>` - (Customer support available upon request)

+You will also need a text processing provider as specified above (i.e. llm2 or integration_openai).

 Configuration
 -------------
@@ -161,16 +176,7 @@ This field is appended to the block of chat messages, i.e. attached after the me

 The number of latest messages to consider for generating the next message. This does not include the user instructions, which are always considered in addition to this. This value should be adjusted in case you are hitting the token limit in your conversations too often.
 The AI text generation provider should ideally handle the max token limit case.

-Improve AI processing throughput
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Most AI tasks will be run as part of the background job system in Nextcloud which only runs jobs every 5 minutes by default.
-To pick up scheduled jobs faster you can set up background job workers that process AI tasks as soon as they are scheduled:
-
-run the following occ commands a daemon (you can also spawn multiple, for parallel processing):
admin_manual/ai/app_context_chat.rst (+20, -6)
@@ -47,18 +47,32 @@ Installation

 0. Make sure the :ref:`Nextcloud Assistant app<ai-app-assistant>` is installed
 1. :ref:`Install AppAPI and set up a Deploy Daemon<ai-app_api>`
-2. Install the *context_chat_backend* ExApp via the "External Apps" admin page in Nextcloud
+2. Install the *context_chat_backend* ExApp via the "External Apps" admin page in Nextcloud, or by executing
+
+   .. code-block::
+
+       occ app_api:app:register context_chat_backend
+
 3. Install the *context_chat* app via the "Apps" page in Nextcloud, or by executing

    .. code-block::

        occ app:enable context_chat

-4. Optionally, run two instances of this occ command for faster processing of requests:
+4. Install a text generation backend like *llm2* (via the "External Apps" page in Nextcloud) or *integration_openai* (via the "Apps" page in Nextcloud), or by executing
+   […]
+5. Optional but recommended: set up background workers for faster pickup of tasks. See :ref:`the relevant section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for more information.

 **Note**: Both apps need to be installed, and both the major version and minor version of the two apps must match for the functionality to work (i.e. "v1.3.4" and "v1.3.1", but not "v1.3.4" and "v2.1.6", and not "v1.3.4" and "v1.4.5"). Keep this in mind when updating.

@@ -69,15 +83,15 @@ Context chat will automatically load user data into the Vector DB using backgrou

     set -e; while true; do sudo -u www-data occ background-job:worker -v -t 60 "OCA\ContextChat\BackgroundJobs\IndexerJob"; done

 This will ensure that the necessary background jobs are run as often as possible: ``StorageCrawlJob`` will crawl Nextcloud storages and put files that it finds into a queue, and ``IndexerJob`` will iterate over the queue and load the file content into the Vector DB.

-Make sure to restart these daemons regularly. For example once a day.
+See :ref:`the task speedup section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for better ways to run these jobs.
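If you also want to drive the crawler from its own worker, the same pattern presumably applies; this is a sketch that assumes the job class sits in the same namespace as ``IndexerJob`` (the exact class name is not confirmed by this diff):

.. code-block::

    # Assumed job class name, mirroring the IndexerJob worker above
    set -e; while true; do sudo -u www-data occ background-job:worker -v -t 60 "OCA\ContextChat\BackgroundJobs\StorageCrawlJob"; done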
admin_manual/ai/app_llm2.rst (+8, -3)
@@ -6,10 +6,13 @@ App: Local large language model (llm2)

 The *llm2* app is one of the apps that provide text processing functionality using Large language models in Nextcloud and act as a text processing backend for the :ref:`Nextcloud Assistant app<ai-app-assistant>`, the *mail* app and :ref:`other apps making use of the core Text Processing API<tp-consumer-apps>`. The *llm2* app specifically runs only open source models and does so entirely on-premises. Nextcloud can provide customer support upon request; please talk to your account manager for the possibilities.

-This app uses `ctransformers <https://github.com/marella/ctransformers>`_ under the hood and is thus compatible with any model in *gguf* format. Output quality will differ depending on which model you use, we recommend the following models:
+This app uses `llama.cpp <https://github.com/abetlen/llama-cpp-python>`_ under the hood and is thus compatible with any model in *gguf* format.

-* `Llama3 8b Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF>`_ (reasonable quality; fast; good acclaim; multilingual output may not be optimal)
-* `Llama3 70B Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3-70B-Instruct-GGUF>`_ (good quality; good acclaim; good multilingual output)
+However, we only test with Llama 3.1. Output quality will differ depending on which model you use, and downstream tasks like summarization or Context Chat may not work with other models.
+We thus recommend the following models:
+
+* `Llama 3.1 8B Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3.1-8B-Instruct-GGUF>`_ (reasonable quality; fast; good acclaim; comes shipped with the app)
+* `Llama 3.1 70B Instruct <https://huggingface.co/bartowski/Meta-Llama-3.1-70B-Instruct-GGUF>`_ (good quality; good acclaim)

 Multilinguality
 ---------------

@@ -27,6 +30,8 @@ Llama 3.1 `supports the following languages: <https://huggingface.co/meta-llama/

 * Hindi
 * Thai

+Note that other languages may work as well, but only the above languages are guaranteed to work.
admin_manual/ai/overview.rst

+Most AI tasks will be run as part of the background job system in Nextcloud, which only runs jobs every 5 minutes by default.
+To pick up scheduled jobs faster you can set up background job workers that process AI tasks as soon as they are scheduled.
+If the PHP code or the Nextcloud settings values are changed while a worker is running, those changes won't be effective inside the runner. For that reason, the worker needs to be restarted regularly. This is done with a timeout of N seconds, which means any changes to the settings or the code will be picked up after at most N seconds. This timeout does not, in any way, affect the processing or the timeout of the AI tasks.
+
+Screen or tmux session
+^^^^^^^^^^^^^^^^^^^^^^
+
+Run the following occ command inside a screen or a tmux session, preferably 4 or more times for parallel processing of multiple requests by different users or the same user (and as a requirement for some apps like context_chat).
+It is best to run one command per screen session or per tmux window/pane to keep the logs visible and the worker easily restartable.
+
+.. code-block::
+
+    set -e; while true; do sudo -u www-data occ background-job:worker -v -t 60 "OC\TaskProcessing\SynchronousBackgroundJob"; done
+
+You may want to adjust the number of workers and the timeout (in seconds) to your needs.
+The logs of the worker can be checked by attaching to the screen or tmux session.
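For example, one way to start and later inspect a worker in a named screen session (the session name is arbitrary):

.. code-block::

    # Start a detached screen session running one worker
    screen -dmS ai-worker-1 bash -c 'set -e; while true; do sudo -u www-data occ background-job:worker -v -t 60 "OC\TaskProcessing\SynchronousBackgroundJob"; done'
    # Attach later to inspect the logs; detach again with Ctrl-A D
    screen -r ai-worker-1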
+Systemd service
+^^^^^^^^^^^^^^^
+
+1. Create a systemd service file in ``/etc/systemd/system/[email protected]`` with the following content:
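The unit file itself is cut off in this diff; the following is only a sketch of what such a template unit could contain (the PHP binary, installation path, and web-server user are assumptions to adapt):

.. code-block::

    [Unit]
    Description=Nextcloud AI task processing worker %i
    After=network.target

    [Service]
    # Path to occ and the executing user are assumptions; adapt to your installation
    ExecStart=/usr/bin/php /var/www/nextcloud/occ background-job:worker -t 60 "OC\TaskProcessing\SynchronousBackgroundJob"
    User=www-data
    Restart=always

    [Install]
    WantedBy=multi-user.target

A template unit like this could then be started several times, e.g. ``systemctl enable --now nextcloud-ai-worker@{1..4}.service`` for four parallel workers.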
admin_manual/configuration_server/occ_command.rst

 ``dav:fix-missing-caldav-changes [user]`` tries to restore calendar sync changes when data in the calendarchanges table has been lost. If the user ID is omitted, the command runs for all users. This can take a while.

 ``dav:move-calendar [name] [sourceuid] [destinationuid]`` allows the admin […]
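As a usage sketch (the user ID ``alice`` is made up for illustration):

.. code-block::

    # Restore lost CalDAV sync data for one user; omit the user ID to run for all users
    occ dav:fix-missing-caldav-changes alice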
admin_manual/windmill_workflows/index.rst (+8, -1)
@@ -9,7 +9,7 @@ Installation

 * Install Windmill

-  * Either as a standalone install or via the Windmill External App in Nextcloud (see :ref:`External Apps<ai-app_api>`)
+  * Either as a standalone install or via the External App "Flow" in Nextcloud (see :ref:`External Apps<ai-app_api>`)

 * Enable the ``webhook_listeners`` app that comes with Nextcloud
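Since ``webhook_listeners`` ships with Nextcloud, enabling it from the command line is a one-liner (the "Apps" admin page works just as well):

.. code-block::

    occ app:enable webhook_listeners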
@@ -41,6 +41,13 @@ The magic listener script

 The first script (after the "Input" block) in any workflow you build that should listen to a Nextcloud webhook must be ``CORE:LISTEN_TO_EVENT``. It must be an empty script with two parameters that you should fill statically: ``events``, which is a list of event IDs to listen to, and ``filters``, a filter condition that allows more fine-grained filtering of which events should be used. The filter condition as well as the available events with their payloads are documented in :ref:`the webhook_listeners documentation<webhook_listeners>`.
 […]
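For illustration only, the two static parameters might be filled like this; ``OCP\Files\Events\Node\NodeCreatedEvent`` is a real core event class, but the exact filter syntax is defined by webhook_listeners, so treat this as an assumption to check against that documentation:

.. code-block::

    # Hypothetical parameter values for CORE:LISTEN_TO_EVENT
    events:  ["OCP\\Files\\Events\\Node\\NodeCreatedEvent"]
    filters: {"event.node.path": {"$regex": "\\.pdf$"}}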