File: admin_manual/ai/ai_as_a_service.rst (+17 −6)
@@ -4,10 +4,10 @@ AI as a Service

 .. _ai-ai_as_a_service:

-At Nextcloud we focus on creating on-premise AI apps that run fully self-hosted on your own servers in order to preserve your privacy and data sovereignty.
+At Nextcloud, we focus on creating on-premise AI apps that run fully self-hosted on your own servers in order to preserve your privacy and data sovereignty.
 However, you can also offload these resource-heavy tasks to an "AI as a Service" provider offering API access in exchange for payment.
 Examples of such providers are `OpenAI <https://platform.openai.com/>`_, with its ChatGPT APIs providing language model access
-among other APIs as well as `Replicate <https://replicate.com/>`_.
+among other APIs, as well as `Replicate <https://replicate.com/>`_ and `IBM watsonx <https://www.ibm.com/watsonx>`_.

 Installation
 ------------
@@ -18,9 +18,11 @@ In order to use these providers you will need to install the respective app from

 * ``integration_replicate``

-You can then add your API token and rate limits in the administration settings and set the providers live in the "Artificial intelligence" section of the admins settings.
+* ``integration_watsonx``

-Optionally but recommended, setup background workers for faster pickup of tasks. See :ref:`the relevant section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for more information.
+You can then add your account information, set rate limits, and set the providers live in the "Artificial intelligence" section of the administration settings.
+
+Optionally (but recommended), set up background workers for faster pickup of tasks. See :ref:`the relevant section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for more information.

 OpenAI integration
 ------------------
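The installation steps above can be sketched as occ commands. This is an illustrative sketch, not from the manual: it assumes a standard Nextcloud layout under ``/var/www/nextcloud`` with ``www-data`` as the web server user, and that your Nextcloud version ships the ``occ app:install`` subcommand (otherwise install the apps from the app store UI).

```shell
# Hypothetical sketch: install the provider apps from the command line.
# Paths and the web server user are placeholders for your setup.
cd /var/www/nextcloud

sudo -u www-data php occ app:install integration_openai
sudo -u www-data php occ app:install integration_replicate
sudo -u www-data php occ app:install integration_watsonx

# Confirm the apps are present and enabled
sudo -u www-data php occ app:list | grep integration_
```

API tokens, rate limits, and going live are then configured in the "Artificial intelligence" section of the administration settings, as described above.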
@@ -29,11 +31,20 @@ With this application, you can also connect to a self-hosted LocalAI or Ollama i
 for example `IONOS AI Model Hub <https://docs.ionos.com/cloud/ai/ai-model-hub>`_,
 `Plusserver <https://www.plusserver.com/en/ai-platform/>`_, `Groqcloud <https://console.groq.com>`_, `MistralAI <https://mistral.ai>`_ or `Together AI <https://together.ai>`_.

-Do note however, that we test the Assistant tasks that this app implements only with OpenAI models and only against the OpenAI API, we thus cannot guarantee other models and APIs will work.
+Do note, however, that we test the Assistant tasks that this app implements only with OpenAI models and only against the OpenAI API; we thus cannot guarantee other models and APIs will work.
 Some APIs claiming to be compatible with OpenAI might not be fully compatible so we cannot guarantee that they will work with this app.

+IBM watsonx.ai integration
+--------------------------
+
+With this application, you can also connect to a self-hosted cluster running the IBM watsonx.ai software.
+
+Do note, however, that we test the Assistant tasks that this app implements only with the provided foundation models and only against IBM Cloud servers.
+We thus cannot guarantee that other models or server instances will work.
+
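For context on what "an API similar to the OpenAI API" means in the section above, the request shape such endpoints accept can be sketched as follows. The endpoint URL and model name are placeholders, not values the integration app actually uses:

```shell
# Sketch of the JSON body an OpenAI-compatible endpoint (e.g. LocalAI,
# Ollama) expects on POST <base-url>/v1/chat/completions.
# "example-model" is a placeholder model name.
PAYLOAD='{"model": "example-model", "messages": [{"role": "user", "content": "Hello"}]}'
echo "$PAYLOAD"

# To send it against a hypothetical local endpoint:
# curl -H "Content-Type: application/json" -d "$PAYLOAD" http://localhost:8080/v1/chat/completions
```

Providers that deviate from this shape (different routes, missing fields, incompatible streaming) are exactly the cases the manual warns cannot be guaranteed to work.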

 Improve performance
 -------------------

-Prompts from integration_openai and integration_replicate can have a delay of 5 minutes. This can be optimized and more information can be found in :ref:`the relevant section in AI Overview <ai-overview_improve-ai-task-pickup-speed>`.
+Prompts from these apps can have a delay of up to 5 minutes.
+This can be optimized; more information can be found in :ref:`the relevant section in AI Overview <ai-overview_improve-ai-task-pickup-speed>`.
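The background-worker setup that the "Improve performance" section defers to the AI Overview can be sketched roughly as below. This is a hedged sketch, assuming a Nextcloud version whose occ ships the ``background-job:worker`` subcommand; the linked AI Overview section is the authoritative reference for worker class names and recommended worker counts.

```shell
# Hypothetical sketch: run a dedicated task-processing worker so AI tasks
# are picked up faster than the default 5-minute cron interval.
# Path, user, and worker class are assumptions for illustration.
cd /var/www/nextcloud
sudo -u www-data php occ background-job:worker 'OC\TaskProcessing\SynchronousBackgroundJob'
```

In practice such a worker is kept running (and restarted) by a supervisor such as a systemd service rather than launched by hand.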
File: admin_manual/ai/app_assistant.rst (+1)
@@ -66,6 +66,7 @@ In order to make use of text processing features in the assistant, you will need

 * :ref:`llm2<ai-app-llm2>` - Runs open source AI language models locally on your own server hardware (Customer support available upon request)
 * *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
+* *integration_watsonx* - Integrates with the IBM watsonx.ai API to provide AI functionality from IBM Cloud servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)

 These apps currently implement the following Assistant Tasks:
File: admin_manual/ai/overview.rst (+3)
@@ -33,6 +33,8 @@ Nextcloud uses modularity to separate raw AI functionality from the Graphical Us
 "","`OpenAI and LocalAI integration (via Plusserver) <https://apps.nextcloud.com/apps/integration_openai>`_","Orange","No","Yes","No","No"
 "","`OpenAI and LocalAI integration (via Groqcloud) <https://apps.nextcloud.com/apps/integration_openai>`_","Orange","No","Yes","No","No"
 "","`OpenAI and LocalAI integration (via MistralAI) <https://apps.nextcloud.com/apps/integration_openai>`_","Orange","No","Yes","No","No"
+"","`IBM watsonx.ai integration (via IBM watsonx.ai as a Service) <https://apps.nextcloud.com/apps/integration_watsonx>`_","Yellow","No","Yes - e.g. Granite models by IBM","Yes","No"
+"","`IBM watsonx.ai integration (via IBM watsonx.ai software) <https://apps.nextcloud.com/apps/integration_watsonx>`_","Yellow","No","Yes - e.g. Granite models by IBM","Yes","Yes"
 "","`OpenAI and LocalAI integration (via OpenAI API) <https://apps.nextcloud.com/apps/integration_openai>`_","Red","No","No","No","No"
@@ -113,6 +115,7 @@ Backend apps

 * :ref:`llm2<ai-app-llm2>` - Runs open source AI LLM models on your own server hardware (Customer support available upon request)
 * `OpenAI and LocalAI integration (via OpenAI API) <https://apps.nextcloud.com/apps/integration_openai>`_ - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
+* `IBM watsonx.ai integration (via IBM watsonx.ai as a Service) <https://apps.nextcloud.com/apps/integration_watsonx>`_ - Integrates with the IBM watsonx.ai API to provide AI functionality from IBM Cloud servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)