Commit 1c822e3: Merge branch 'master' into Documentation
2 parents: 4ff3776 + 1cc78ea

File tree: 20 files changed, +227 −66 lines

.github/workflows/generate_catalog_templates.yml (+1 −1)

@@ -35,7 +35,7 @@ jobs:
        github-token: "${{ secrets.GITHUB_TOKEN }}"
        pull-request-number: ${{ steps.cpr.outputs.pull-request-number }}

-     - uses: pascalgn/[email protected].3
+     - uses: pascalgn/[email protected].4
        if: steps.cpr.outputs.pull-request-operation == 'created'
        env:
          GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

.github/workflows/sphinxbuild.yml (+3 −3)

@@ -23,7 +23,7 @@ jobs:
        shell: bash
        run: tar czf /tmp/documentation.tar.gz -C user_manual/_build/html .
      - name: Upload static documentation
-       uses: actions/[email protected].0
+       uses: actions/[email protected].3
        with:
          name: User manual.zip
          path: "/tmp/documentation.tar.gz"
@@ -55,7 +55,7 @@ jobs:
        shell: bash
        run: tar czf /tmp/documentation.tar.gz -C developer_manual/_build/html/com .
      - name: Upload static documentation
-       uses: actions/[email protected].0
+       uses: actions/[email protected].3
        with:
          name: Developer manual.zip
          path: "/tmp/documentation.tar.gz"
@@ -75,7 +75,7 @@ jobs:
        shell: bash
        run: tar czf /tmp/documentation.tar.gz -C admin_manual/_build/html/com .
      - name: Upload static documentation
-       uses: actions/[email protected].0
+       uses: actions/[email protected].3
        with:
          name: Administration manual.zip
          path: "/tmp/documentation.tar.gz"
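Each job above packs the built HTML with ``tar czf /tmp/documentation.tar.gz -C <build dir> .`` before uploading it. As a rough sketch of what that produces (the directory and file here are made up for illustration), the same relative-path archive layout can be built with Python's standard ``tarfile`` module:

```python
import os
import tarfile
import tempfile

# Stand-in for a Sphinx output directory like user_manual/_build/html.
build_dir = tempfile.mkdtemp()
with open(os.path.join(build_dir, "index.html"), "w") as f:
    f.write("<html></html>")

# Equivalent of: tar czf /tmp/documentation.tar.gz -C <build_dir> .
# arcname="." stores members with relative paths, like tar's -C flag.
archive = os.path.join(tempfile.mkdtemp(), "documentation.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(build_dir, arcname=".")

with tarfile.open(archive, "r:gz") as tar:
    names = tar.getnames()
print(sorted(names))  # ['.', './index.html']
```

Storing members relative to ``.`` is what lets the workflow later extract the manual directly into a web root.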

.github/workflows/transifex.yml (+1 −1)

@@ -17,7 +17,7 @@ jobs:
      name: Auto-merge
      needs: approve
      steps:
-       - uses: pascalgn/[email protected].3
+       - uses: pascalgn/[email protected].4
          if: github.actor == 'transifex-integration[bot]'
          env:
            GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

admin_manual/ai/ai_as_a_service.rst (+9 −1)

@@ -11,8 +11,16 @@ Installation

  In order to use these providers you will need to install the respective app from the app store:

- * ``integration_openai`` (With this application, you can also connect to a self-hosted LocalAI instance or to any service that implements an API similar to OpenAI, for example Plusserver or MistralAI.)
+ * ``integration_openai``

  * ``integration_replicate``

  You can then add your API token and rate limits in the administration settings and set the providers live in the "Artificial intelligence" section of the admin settings.
+
+ OpenAI integration
+ ------------------
+
+ With this application, you can also connect to a self-hosted LocalAI or Ollama instance or to any service that implements an API similar enough to the OpenAI API, for example Plusserver or MistralAI.
+
+ Do note, however, that we test the Assistant tasks that this app implements only with OpenAI models and only against the OpenAI API; we thus cannot guarantee that other models and APIs will work.

admin_manual/ai/app_assistant.rst (+18 −12)

@@ -63,6 +63,20 @@ In order to make use of text processing features in the assistant, you will need
  * :ref:`llm2<ai-app-llm2>` - Runs open source AI language models locally on your own server hardware (Customer support available upon request)
  * *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)

+ These apps currently implement the following Assistant Tasks:
+
+ * *Generate text* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+ * *Summarize* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+ * *Generate headline* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+ * *Extract topics* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+
+ Additionally, *integration_openai* also implements the following Assistant Tasks:
+
+ * *Context write* (Tested with OpenAI GPT-3.5)
+ * *Reformulate text* (Tested with OpenAI GPT-3.5)
+
+ These tasks may work with other models, but we can give no guarantees.
+
  Text-To-Image
  ~~~~~~~~~~~~~

@@ -79,6 +93,7 @@ In order to make use of our special Context Chat feature, offering in-context in

  * :ref:`context_chat + context_chat_backend<ai-app-context_chat>` - (Customer support available upon request)

+ You will also need a text processing provider as specified above (i.e. llm2 or integration_openai).

  Configuration
  -------------

@@ -161,16 +176,7 @@ This field is appended to the block of chat messages, i.e. attached after the me
  The number of latest messages to consider for generating the next message. This does not include the user instructions, which are always considered in addition to this. This value should be adjusted in case you are hitting the token limit in your conversations too often.
  The AI text generation provider should ideally handle the max token limit case.

- Improve AI processing throughput
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Most AI tasks will be run as part of the background job system in Nextcloud which only runs jobs every 5 minutes by default.
- To pick up scheduled jobs faster you can set up background job workers that process AI tasks as soon as they are scheduled:
-
- Run the following occ command as a daemon (you can also spawn multiple, for parallel processing):
-
- .. code-block::
-
-     occ background-job:worker 'OC\TaskProcessing\SynchronousBackgroundJob'
+ Improve AI task pickup speed
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Make sure to restart these daemons regularly, for example once a day, to make sure the daemon runs the latest code.
+ See :ref:`the relevant section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for more information.

admin_manual/ai/app_context_chat.rst (+20 −6)

@@ -47,18 +47,32 @@ Installation

  0. Make sure the :ref:`Nextcloud Assistant app<ai-app-assistant>` is installed
  1. :ref:`Install AppAPI and setup a Deploy Daemon<ai-app_api>`
- 2. Install the *context_chat_backend* ExApp via the "External Apps" admin page in Nextcloud
+ 2. Install the *context_chat_backend* ExApp via the "External Apps" admin page in Nextcloud, or by executing
+
+    .. code-block::
+
+        occ app_api:app:register context_chat_backend
+
  3. Install the *context_chat* app via the "Apps" page in Nextcloud, or by executing

     .. code-block::

         occ app:enable context_chat

- 4. Optionally, run two instances of this occ command for faster processing of requests:
+ 4. Install a text generation backend like *llm2* (via the "External Apps" page in Nextcloud) or *integration_openai* (via the "Apps" page in Nextcloud), or by executing
+
+    .. code-block::
+
+        occ app_api:app:register llm2
+
+    or

     .. code-block::

-        occ background-job:worker 'OC\TaskProcessing\SynchronousBackgroundJob'
+        occ app:enable integration_openai
+
+ 5. Optional but recommended: set up background workers for faster pickup of tasks. See :ref:`the relevant section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for more information.

  **Note**: Both apps need to be installed, and both the major and minor versions of the two apps must match for the functionality to work (e.g. "v1.3.4" and "v1.3.1", but not "v1.3.4" and "v2.1.6", and not "v1.3.4" and "v1.4.5"). Keep this in mind when updating.
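The version-matching rule in the note above (major and minor must match, patch may differ) can be sketched as a small check. This helper is ours for illustration, not part of any Nextcloud app:

```python
def versions_compatible(a: str, b: str) -> bool:
    """Both apps work together only if major AND minor versions match.

    Accepts versions like "v1.3.4" or "1.3.4"; the patch level may differ.
    """
    major_a, minor_a = a.lstrip("v").split(".")[:2]
    major_b, minor_b = b.lstrip("v").split(".")[:2]
    return (major_a, minor_a) == (major_b, minor_b)

print(versions_compatible("v1.3.4", "v1.3.1"))  # True: only patch differs
print(versions_compatible("v1.3.4", "v2.1.6"))  # False: major differs
print(versions_compatible("v1.3.4", "v1.4.5"))  # False: minor differs
```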
@@ -69,15 +83,15 @@ Context chat will automatically load user data into the Vector DB using background jobs

  .. code-block::

-     occ background-job:worker 'OCA\ContextChat\BackgroundJobs\StorageCrawlJob'
+     set -e; while true; do sudo -u www-data occ background-job:worker -v -t 60 "OCA\ContextChat\BackgroundJobs\StorageCrawlJob"; done

  .. code-block::

-     occ background-job:worker 'OCA\ContextChat\BackgroundJobs\IndexerJob'
+     set -e; while true; do sudo -u www-data occ background-job:worker -v -t 60 "OCA\ContextChat\BackgroundJobs\IndexerJob"; done

  This will ensure that the necessary background jobs are run as often as possible: ``StorageCrawlJob`` will crawl Nextcloud storages and put files that it finds into a queue, and ``IndexerJob`` will iterate over the queue and load the file content into the Vector DB.

- Make sure to restart these daemons regularly. For example once a day.
+ See :ref:`the task speedup section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for better ways to run these jobs.

  Scaling
  -------

admin_manual/ai/app_llm2.rst (+8 −3)

@@ -6,10 +6,13 @@ App: Local large language model (llm2)

  The *llm2* app is one of the apps that provide text processing functionality using large language models in Nextcloud and acts as a text processing backend for the :ref:`Nextcloud Assistant app<ai-app-assistant>`, the *mail* app and :ref:`other apps making use of the core Text Processing API<tp-consumer-apps>`. The *llm2* app specifically runs only open source models and does so entirely on-premises. Nextcloud can provide customer support upon request; please talk to your account manager about the possibilities.

- This app uses `ctransformers <https://github.com/marella/ctransformers>`_ under the hood and is thus compatible with any model in *gguf* format. Output quality will differ depending on which model you use, we recommend the following models:
+ This app uses `llama.cpp <https://github.com/abetlen/llama-cpp-python>`_ under the hood and is thus compatible with any model in *gguf* format.

- * `Llama3 8b Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF>`_ (reasonable quality; fast; good acclaim; multilingual output may not be optimal)
- * `Llama3 70B Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3-70B-Instruct-GGUF>`_ (good quality; good acclaim; good multilingual output)
+ However, we only test with Llama 3.1. Output quality will differ depending on which model you use, and downstream tasks like summarization or Context Chat may not work with other models.
+ We thus recommend the following models:
+
+ * `Llama 3.1 8B Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3.1-8B-Instruct-GGUF>`_ (reasonable quality; fast; good acclaim; comes shipped with the app)
+ * `Llama 3.1 70B Instruct <https://huggingface.co/bartowski/Meta-Llama-3.1-70B-Instruct-GGUF>`_ (good quality; good acclaim)

  Multilinguality
  ---------------

@@ -27,6 +30,8 @@ Llama 3.1 `supports the following languages: <https://huggingface.co/meta-llama/
  * Hindi
  * Thai

+ Note that other languages may work as well, but only the above languages are guaranteed to work.
+
  Requirements
  ------------
admin_manual/ai/overview.rst (+76)

@@ -175,6 +175,82 @@ Apps can integrate their content with Context Chat to make it available for quer
  * *files*
  * `Analytics <https://apps.nextcloud.com/apps/analytics>`_

+ .. _ai-overview_improve-ai-task-pickup-speed:
+
+ Improve AI task pickup speed
+ ----------------------------
+
+ Most AI tasks will be run as part of the background job system in Nextcloud, which only runs jobs every 5 minutes by default.
+ To pick up scheduled jobs faster you can set up background job workers that process AI tasks as soon as they are scheduled.
+ If the PHP code or the Nextcloud settings are changed while a worker is running, those changes won't be effective inside the runner. For that reason, the worker needs to be restarted regularly. This is done with a timeout of N seconds, which means any changes to the settings or the code will be picked up after at most N seconds. This timeout does not, in any way, affect the processing or the timeout of the AI tasks.
+
+ Screen or tmux session
+ ^^^^^^^^^^^^^^^^^^^^^^
+
+ Run the following occ command inside a screen or tmux session, preferably 4 or more times for parallel processing of multiple requests by different users or the same user (and as a requirement for some apps like context_chat).
+ It is best to run one command per screen session or per tmux window/pane to keep the logs visible and the workers easily restartable.
+
+ .. code-block::
+
+     set -e; while true; do sudo -u www-data occ background-job:worker -v -t 60 "OC\TaskProcessing\SynchronousBackgroundJob"; done
+
+ You may want to adjust the number of workers and the timeout (in seconds) to your needs.
+ The logs of a worker can be checked by attaching to its screen or tmux session.
+
+ Systemd service
+ ^^^^^^^^^^^^^^^
+
+ 1. Create a systemd service file in ``/etc/systemd/system/nextcloud-ai-worker@.service`` with the following content:
+
+ .. code-block::
+
+     [Unit]
+     Description=Nextcloud AI worker %i
+     After=network.target
+
+     [Service]
+     ExecStart=/opt/nextcloud-ai-worker/taskprocessing.sh %i
+     Restart=always
+
+     [Install]
+     WantedBy=multi-user.target
+
+ 2. Create a shell script in ``/opt/nextcloud-ai-worker/taskprocessing.sh`` with the following content and make sure to make it executable:
+
+ .. code-block::
+
+     #!/bin/sh
+     echo "Starting Nextcloud AI Worker $1"
+     cd /path/to/nextcloud
+     sudo -u www-data php occ background-job:worker -t 60 'OC\TaskProcessing\SynchronousBackgroundJob'
+
+ You may want to adjust the timeout (in seconds) to your needs.
+
+ 3. Enable and start the service 4 or more times:
+
+ .. code-block::
+
+     for i in {1..4}; do systemctl enable --now nextcloud-ai-worker@$i.service; done
+
+ The status of a worker can be checked with (replace 1 with the worker number):
+
+ .. code-block::
+
+     systemctl status [email protected]
+
+ The list of workers can be checked with:
+
+ .. code-block::
+
+     systemctl list-units --type=service | grep nextcloud-ai-worker
+
+ The complete logs of a worker can be followed with (replace 1 with the worker number):
+
+ .. code-block::
+
+     journalctl -xeu [email protected] -f
+
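The screen/tmux and systemd setups both implement the same supervision pattern: run the worker with a timeout, let it exit, and immediately start it again so updated code and settings are loaded. A minimal Python sketch of that restart loop, with a stub command standing in for the occ worker:

```python
import subprocess
import sys

# Stub worker standing in for:
#   sudo -u www-data php occ background-job:worker -t 60 'OC\TaskProcessing\SynchronousBackgroundJob'
WORKER_CMD = [sys.executable, "-c", "print('worker batch done')"]

def supervise(restarts: int) -> int:
    """Run the worker in a loop, restarting it each time it exits.

    A real deployment loops forever (the -t timeout makes the worker exit
    so code/settings changes are picked up); we cap iterations here.
    """
    runs = 0
    for _ in range(restarts):
        result = subprocess.run(WORKER_CMD, capture_output=True, text=True)
        if result.returncode != 0:  # mirrors `set -e`: stop on failure
            break
        runs += 1
    return runs

print(supervise(3))  # 3
```

In production, systemd's ``Restart=always`` plays the role of this loop.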
Frequently Asked Questions
179255
--------------------------
180256

admin_manual/installation/nginx-root.conf.sample (+1 −1)

@@ -184,7 +184,7 @@ server {
      access_log off; # Optional: Don't log access to assets
  }

- location ~ \.woff2?$ {
+ location ~ \.(otf|woff2?)$ {
      try_files $uri /index.php$request_uri;
      expires 7d; # Cache-Control policy borrowed from `.htaccess`
      access_log off; # Optional: Don't log access to assets
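The updated ``location`` regex now matches ``.otf`` files in addition to ``.woff``/``.woff2``. A quick check of the pattern against some sample URIs (the URIs are made up for illustration):

```python
import re

# Same pattern as the new nginx location block: \.(otf|woff2?)$
pattern = re.compile(r"\.(otf|woff2?)$")

for uri in ["/fonts/a.woff", "/fonts/a.woff2", "/fonts/a.otf", "/core/img/logo.svg"]:
    print(uri, bool(pattern.search(uri)))
# /fonts/a.woff True
# /fonts/a.woff2 True
# /fonts/a.otf True
# /core/img/logo.svg False
```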

admin_manual/installation/nginx-subdir.conf.sample (+1 −1)

@@ -181,7 +181,7 @@ server {
      access_log off; # Optional: Don't log access to assets
  }

- location ~ \.woff2?$ {
+ location ~ \.(otf|woff2?)$ {
      try_files $uri /nextcloud/index.php$request_uri;
      expires 7d; # Cache-Control policy borrowed from `.htaccess`
      access_log off; # Optional: Don't log access to assets

admin_manual/occ_command.rst (+23 −1)

@@ -514,7 +514,9 @@ A set of commands to create and manage addressbooks and calendars::

   dav
    dav:create-addressbook            Create a dav addressbook
+   dav:list-addressbooks             List all addressbooks of a user
    dav:create-calendar               Create a dav calendar
+   dav:create-subscription           Create a dav calendar subscription
    dav:delete-calendar               Delete a dav calendar
    dav:fix-missing-caldav-changes    Insert missing calendarchanges rows for existing events
    dav:list-calendars                List all calendars of a user

@@ -536,6 +538,20 @@ This example creates a new calendar for molly::

  Molly will immediately see these in the Calendar and Contacts apps.

+ The syntax for ``dav:create-subscription`` is
+ ``dav:create-subscription [user] [name] [url] [optional color]``. This example creates the subscription
+ calendar ``Lunar Calendar`` for the user molly::
+
+     sudo -u www-data php occ dav:create-subscription molly "Lunar Calendar" webcal://cantonbecker.com/astronomy-calendar/astrocal.ics
+
+ Molly will immediately see this new subscription calendar in the Calendar app.
+
+ Optionally, a color for the new subscription calendar can be passed as a HEX color code::
+
+     sudo -u www-data php occ dav:create-subscription molly "Lunar Calendar" webcal://cantonbecker.com/astronomy-calendar/astrocal.ics "#ff5733"
+
+ If not set, the theming default color will be used.
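The optional color argument above is a HEX color code such as ``#ff5733``. A sketch of validating that format before calling occ, assuming the 6-digit ``#RRGGBB`` form shown in the example (the helper is ours, not part of occ):

```python
import re

# Assumed format: the 6-digit #RRGGBB form shown in the occ example.
HEX_COLOR = re.compile(r"^#[0-9a-fA-F]{6}$")

def is_hex_color(value: str) -> bool:
    """True for HEX color codes like "#ff5733"."""
    return bool(HEX_COLOR.match(value))

print(is_hex_color("#ff5733"))  # True
print(is_hex_color("ff5733"))   # False, missing the leading '#'
```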
554+
539555
``dav:delete-calendar [--birthday] [-f|--force] <uid> [<name>]`` deletes the
540556
calendar named ``name`` (or the birthday calendar if ``--birthday`` is
541557
specified) of the user ``uid``. You can use the force option ``-f`` or
@@ -549,11 +565,17 @@ This example will delete the birthday calendar of user molly::
549565

550566
sudo -u www-data php occ dav:delete-calendar --birthday molly
551567

552-
``dav:lists-calendars [user]`` will display a table listing the calendars for a given user.
568+
``dav:list-calendars [user]`` and ``dav:list-addressbooks [user]`` will display a
569+
table listing the calendars or addressbooks for a given user.
570+
553571
This example will list all calendars for user annie::
554572

555573
sudo -u www-data php occ dav:list-calendars annie
556574

575+
This example will list all addressbooks for user annie::
576+
577+
sudo -u www-data php occ dav:list-addressbooks annie
578+
557579
``dav:dav:fix-missing-caldav-changes [user]`` tries to restore calendar sync changes when data in the calendarchanges table has been lost. If the user ID is omitted, the command runs for all users. This can take a while.
558580

559581
``dav::move-calendar [name] [sourceuid] [destinationuid]`` allows the admin

admin_manual/release_notes/upgrade_to_30.rst (+1)

@@ -8,6 +8,7 @@ System requirements
  * PHP 8.1 is now deprecated but still supported.
  * PHP 8.0 is no longer supported.
  * PostgreSQL 9.4 is no longer supported.
+ * MariaDB 10.3 and 10.5 are no longer supported.

  Web server configuration
  ------------------------

admin_manual/windmill_workflows/index.rst (+8 −1)

@@ -9,7 +9,7 @@ Installation

  * Install Windmill

-   * Either as a standalone install or via the Windmill External App in Nextcloud (see :ref:`External Apps<ai-app_api>`)
+   * Either as a standalone install or via the External App "Flow" in Nextcloud (see :ref:`External Apps<ai-app_api>`)

  * Enable the ``webhook_listeners`` app that comes with Nextcloud

@@ -41,6 +41,13 @@ The magic listener script

  The first script (after the "Input" block) in any workflow you build that should listen to a Nextcloud webhook must be ``CORE:LISTEN_TO_EVENT``. It must be an empty script with two parameters that you should fill statically: ``events``, which is a list of event IDs to listen to, and ``filters``, a filter condition that allows more fine-grained filtering of which events should be used. The filter condition as well as the available events with their payloads are documented in :ref:`the webhook_listeners documentation<webhook_listeners>`.

+ You can copy the following Deno script for this:
+
+ .. code-block:: typescript
+
+     export async function main(events: string[], filters: object) { }
+
  Nextcloud Scripts
  -----------------
