
Commit b184dc7

Merge pull request #13029 from nextcloud/feat/integration_watsonx
feat(admin): add integration_watsonx app information
2 parents 02176a8 + 7581ca9

File tree: 4 files changed, +23 −6 lines

admin_manual/ai/ai_as_a_service.rst (+17 −6)
@@ -4,10 +4,10 @@ AI as a Service
 
 .. _ai-ai_as_a_service:
 
-At Nextcloud we focus on creating on-premise AI apps that run fully self-hosted on your own servers in order to preserve your privacy and data sovereignty.
+At Nextcloud, we focus on creating on-premise AI apps that run fully self-hosted on your own servers in order to preserve your privacy and data sovereignty.
 However, you can also offload these resource-heavy tasks to an "AI as a Service" provider offering API access in exchange for payment.
 Examples of such providers are `OpenAI <https://platform.openai.com/>`_, with its ChatGPT APIs providing language model access
-among other APIs as well as `Replicate <https://replicate.com/>`_.
+among other APIs, as well as `Replicate <https://replicate.com/>`_ and `IBM watsonx <https://www.ibm.com/watsonx>`_.
 
 Installation
 ------------
@@ -18,9 +18,11 @@ In order to use these providers you will need to install the respective app from
 
 * ``integration_replicate``
 
-You can then add your API token and rate limits in the administration settings and set the providers live in the "Artificial intelligence" section of the admins settings.
+* ``integration_watsonx``
 
-Optionally but recommended, setup background workers for faster pickup of tasks. See :ref:`the relevant section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for more information.
+You can then add your account information, set rate limits, and set the providers live in the "Artificial intelligence" section of the administration settings.
+
+Optionally (but recommended), set up background workers for faster pickup of tasks. See :ref:`the relevant section in AI Overview<ai-overview_improve-ai-task-pickup-speed>` for more information.
 
 OpenAI integration
 ------------------
@@ -29,11 +31,20 @@ With this application, you can also connect to a self-hosted LocalAI or Ollama i
 for example `IONOS AI Model Hub <https://docs.ionos.com/cloud/ai/ai-model-hub>`_,
 `Plusserver <https://www.plusserver.com/en/ai-platform/>`_, `Groqcloud <https://console.groq.com>`_, `MistralAI <https://mistral.ai>`_ or `Together AI <https://together.ai>`_.
 
-Do note however, that we test the Assistant tasks that this app implements only with OpenAI models and only against the OpenAI API, we thus cannot guarantee other models and APIs will work.
+Do note, however, that we test the Assistant tasks that this app implements only with OpenAI models and only against the OpenAI API; we thus cannot guarantee that other models and APIs will work.
 Some APIs claiming to be compatible with OpenAI might not be fully compatible so we cannot guarantee that they will work with this app.
 
+IBM watsonx.ai integration
+--------------------------
+
+With this application, you can also connect to a self-hosted cluster running the IBM watsonx.ai software.
+
+Do note, however, that we test the Assistant tasks that this app implements only with the provided foundation models and only against IBM Cloud servers.
+We thus cannot guarantee that other models or server instances will work.
+
 
 Improve performance
 -------------------
 
-Prompts from integration_openai and integration_replicate can have a delay of 5 minutes. This can be optimized and more information can be found in :ref:`the relevant section in AI Overview <ai-overview_improve-ai-task-pickup-speed>`.
+Prompts from these apps can have a delay of up to 5 minutes.
+This can be optimized; more information can be found in :ref:`the relevant section in AI Overview <ai-overview_improve-ai-task-pickup-speed>`.
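
The installation and background-worker recommendations in the diff above can be sketched on the command line. This is a hedged sketch, not part of the commit: it assumes a typical Debian-style setup where the web server user is `www-data` and `occ` lives in the Nextcloud root; the app ids are the ones listed in the documentation.

```shell
# Install and enable the AI-as-a-Service provider apps from the app store
# (run from the Nextcloud installation directory, as the web server user).
sudo -u www-data php occ app:install integration_openai
sudo -u www-data php occ app:install integration_replicate
sudo -u www-data php occ app:install integration_watsonx

# Recommended: run dedicated background workers so AI tasks are picked up
# quickly instead of waiting up to 5 minutes for the next cron run.
# The job class and -t polling interval follow the Nextcloud AI overview
# documentation; adjust the number of parallel workers to your hardware.
sudo -u www-data php occ background-job:worker -t 60 'OC\TaskProcessing\SynchronousBackgroundJob'
```

API tokens and rate limits are then entered per provider in the "Artificial intelligence" administration settings, as described in the changed text above.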

admin_manual/ai/app_assistant.rst (+1)
@@ -66,6 +66,7 @@ In order to make use of text processing features in the assistant, you will need
 
 * :ref:`llm2<ai-app-llm2>` - Runs open source AI language models locally on your own server hardware (Customer support available upon request)
 * *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
+* *integration_watsonx* - Integrates with the IBM watsonx.ai API to provide AI functionality from IBM Cloud servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
 
 These apps currently implement the following Assistant Tasks:
admin_manual/ai/app_summary_bot.rst (+2)
@@ -44,6 +44,8 @@ Installation
 
 - `Nextcloud OpenAI and LocalAI integration app <https://apps.nextcloud.com/apps/integration_openai>`_
 
+- `Nextcloud IBM watsonx.ai integration app <https://apps.nextcloud.com/apps/integration_watsonx>`_
+
 
 Setup (via App Store)
 ~~~~~~~~~~~~~~~~~~~~~

admin_manual/ai/overview.rst (+3)
@@ -33,6 +33,8 @@ Nextcloud uses modularity to separate raw AI functionality from the Graphical Us
 "","`OpenAI and LocalAI integration (via Plusserver) <https://apps.nextcloud.com/apps/integration_openai>`_","Orange","No","Yes","No","No"
 "","`OpenAI and LocalAI integration (via Groqcloud) <https://apps.nextcloud.com/apps/integration_openai>`_","Orange","No","Yes","No","No"
 "","`OpenAI and LocalAI integration (via MistralAI) <https://apps.nextcloud.com/apps/integration_openai>`_","Orange","No","Yes","No","No"
+"","`IBM watsonx.ai integration (via IBM watsonx.ai as a Service) <https://apps.nextcloud.com/apps/integration_watsonx>`_","Yellow","No","Yes - e.g. Granite models by IBM","Yes","No"
+"","`IBM watsonx.ai integration (via IBM watsonx.ai software) <https://apps.nextcloud.com/apps/integration_watsonx>`_","Yellow","No","Yes - e.g. Granite models by IBM","Yes","Yes"
 "Machine translation","`Local Machine Translation 2 (ExApp) <https://apps.nextcloud.com/apps/translate2>`_","Green","Yes","Yes - MADLAD models by Google","Yes","Yes"
 "","`DeepL integration <https://apps.nextcloud.com/apps/integration_deepl>`_","Red","No","No","No","No"
 "","`OpenAI and LocalAI integration (via OpenAI API) <https://apps.nextcloud.com/apps/integration_openai>`_","Red","No","No","No","No"

@@ -113,6 +115,7 @@ Backend apps
 
 * :ref:`llm2<ai-app-llm2>` - Runs open source AI LLM models on your own server hardware (Customer support available upon request)
 * `OpenAI and LocalAI integration (via OpenAI API) <https://apps.nextcloud.com/apps/integration_openai>`_ - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
+* `IBM watsonx.ai integration (via IBM watsonx.ai as a Service) <https://apps.nextcloud.com/apps/integration_watsonx>`_ - Integrates with the IBM watsonx.ai API to provide AI functionality from IBM Cloud servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
 
 
 Machine translation
