diff --git a/admin_manual/ai/app_context_chat.rst b/admin_manual/ai/app_context_chat.rst
index 9fc97f2cf04..ad021ca3472 100644
--- a/admin_manual/ai/app_context_chat.rst
+++ b/admin_manual/ai/app_context_chat.rst
@@ -31,6 +31,10 @@ Requirements
 * CPU Sizing

   * At least 12GB of system RAM
+  * Below version 3: 10-20 cores; the more (physical) cores, the faster the prompt processing will be. Overall performance also increases with memory bandwidth (more memory sticks and/or a higher DDR version)
+  * Since version 3, this app uses the configured Text To Text Free prompt provider instead of running its own language model, so only 4-8 cores are needed for the embedding model
+
+* A dedicated machine is recommended

 Space usage
 ~~~~~~~~~~~
diff --git a/admin_manual/ai/app_stt_whisper2.rst b/admin_manual/ai/app_stt_whisper2.rst
index d2444facf65..9850e609d59 100644
--- a/admin_manual/ai/app_stt_whisper2.rst
+++ b/admin_manual/ai/app_stt_whisper2.rst
@@ -25,6 +25,7 @@ Requirements

   * The more cores you have and the more powerful the CPU the better, we recommend 10-20 cores
   * The app will hog all cores by default, so it is usually better to run it on a separate machine
+  * 4GB of RAM for the app

 Installation
 ------------
diff --git a/admin_manual/ai/app_translate.rst b/admin_manual/ai/app_translate.rst
index e4ea8b26523..3ebcf8e92e2 100644
--- a/admin_manual/ai/app_translate.rst
+++ b/admin_manual/ai/app_translate.rst
@@ -20,9 +20,11 @@ Requirements
 ------------

 * Minimal Nextcloud version: 26
-* x86 CPU
+* x86 CPU with 4-8 cores for the app to use (the more cores, the faster it will be)
+* 2GB of RAM for the app should be enough
 * GNU lib C (musl is not supported)
 * This app does not support using GPU for processing and may thus not be performing ideally for long texts
+* The workload will run on the web server workers

 (*Note*: Nextcloud AIO is currently not supported due to it using musl)