diff --git a/docs/about/changelog.md b/docs/about/changelog.md
index e4ee855..1a02306 100644
--- a/docs/about/changelog.md
+++ b/docs/about/changelog.md
@@ -13,12 +13,27 @@ Major features and changes are noted here. To review all updates, see the
Related: [Upgrade CodeGate](../how-to/install.md#upgrade-codegate)
-- **New integration: Open Interpreter** - xx Feb\
- 2025 CodeGate v0.1.16 introduces support for
+- **Model muxing** - 7 Feb, 2025\
+ With CodeGate v0.1.17 you can use the new `/v1/mux` endpoint to configure
+ model selection based on your workspace! Learn more in the
+ [model muxing guide](../features/muxing.md).
+
+- **OpenRouter endpoint** - 7 Feb, 2025\
+ CodeGate v0.1.17 adds a dedicated `/openrouter` provider endpoint for
+ OpenRouter users. This endpoint currently works with Continue, Cline, and Kodu
+ (Claude Coder).
+
+- **New integration: Open Interpreter** - 4 Feb, 2025\
+ CodeGate v0.1.16 added support for
[Open Interpreter](https://github.com/openinterpreter/open-interpreter) with
OpenAI-compatible APIs. Review the
[integration guide](../integrations/open-interpreter.mdx) to get started.
+
+- **New integration: Claude Coder** - 28 Jan, 2025\
+ CodeGate v0.1.14 also introduced support for Kodu's
+ [Claude Coder](https://www.kodu.ai/extension) extension. See the
+ [integration guide](../integrations/kodu.mdx) to learn more.
+
- **New integration: Cline** - 28 Jan, 2025\
CodeGate version 0.1.14 adds support for [Cline](https://cline.bot/) with
Anthropic, OpenAI, Ollama, and LM Studio. See the
diff --git a/docs/features/muxing.md b/docs/features/muxing.md
new file mode 100644
index 0000000..33bbc0a
--- /dev/null
+++ b/docs/features/muxing.md
@@ -0,0 +1,129 @@
+---
+title: Model muxing
+description: Configure a per-workspace LLM
+sidebar_position: 35
+---
+
+## Overview
+
+_Model muxing_ (or multiplexing) allows you to configure your AI assistant once
+and use [CodeGate workspaces](./workspaces.mdx) to switch between LLM providers
+and models without reconfiguring your development environment. This feature is
+especially useful when you're working on multiple projects or tasks that require
+different AI models.
+
+For each CodeGate workspace, you can select the AI provider and model
+combination you want to use. Then, configure your AI coding tool to use the
+CodeGate muxing endpoint `http://localhost:8989/v1/mux` as an OpenAI-compatible
+API provider.
+
+To change the model currently in use, simply switch your active CodeGate
+workspace.
+
+```mermaid
+flowchart LR
+ Client(AI Assistant/Agent)
+ CodeGate{CodeGate}
+ WS1[Workspace-A]
+ WS2[Workspace-B]
+ WS3[Workspace-C]
+ LLM1(OpenAI/
+      o3-mini)
+ LLM2(Ollama/
+      deepseek-r1)
+ LLM3(OpenRouter/
+      claude-35-sonnet)
+
+ Client ---|/v1/mux| CodeGate
+ CodeGate --> WS1
+ CodeGate --> WS2
+ CodeGate --> WS3
+ WS1 --> |api| LLM1
+ WS2 --> |api| LLM2
+ WS3 --> |api| LLM3
+```
+
+## Use cases
+
+- You have a project that requires a specific model for a particular task, but
+ you also need to switch between different models during the course of your
+ work.
+- You want to experiment with different LLM providers and models without having
+ to reconfigure your AI assistant/agent every time you switch.
+- Your AI coding assistant doesn't support a particular provider or model that
+ you want to use. CodeGate's muxing provides an OpenAI-compatible abstraction
+ layer.
+- You're working on a sensitive project and want to use a local model, but still
+ have the flexibility to switch to hosted models for other work.
+- You want to control your LLM provider spend by using lower-cost models for
+ some tasks that don't require the power of more advanced (and expensive)
+ reasoning models.
+
+## Configure muxing
+
+To use muxing with your AI coding assistant, you need to add one or more AI
+providers to CodeGate, then select the model you want to use for each workspace.
+
+CodeGate supports the following LLM providers for muxing:
+
+- Anthropic
+- llama.cpp
+- LM Studio
+- Ollama
+- OpenAI (and compatible APIs)
+- OpenRouter
+- vLLM
+
+### Add a provider
+
+1. In the [CodeGate dashboard](http://localhost:9090), open the **Providers**
+ page from the **Settings** menu.
+1. Click **Add Provider**.
+1. Enter a display name for the provider, then select the type from the
+ drop-down list. The default endpoint and authentication type are filled in
+ automatically.
+1. If you are using a non-default endpoint, update the **Endpoint** value.
+1. Optionally, add a **Description** for the provider.
+1. If the provider requires authentication, select the **API Key**
+ authentication option and enter your key.
+
+When you save the settings, CodeGate connects to the provider to retrieve the
+available models.
+
+:::note
+
+For locally-hosted models, you must use `http://host.docker.internal` instead of
+`http://localhost`. For example, enter `http://host.docker.internal:11434` for a
+local Ollama server.
+
+:::
+
+### Select the model for a workspace
+
+Open the settings of one of your [workspaces](./workspaces.mdx) from the
+Workspace selection menu or the
+[Manage Workspaces](http://localhost:9090/workspaces) screen.
+
+In the **Preferred Model** section, select the model to use with the workspace.
+
+### Manage existing providers
+
+To edit a provider's settings, click the **Manage** button next to the provider
+in the list. For providers that require authentication, you can leave the API
+key field blank to preserve the current value.
+
+To delete a provider, click the trash icon next to it. If this provider was in
+use by any workspaces, you will need to update their settings to choose a
+different provider/model.
+
+### Refresh available models
+
+To refresh the list of models available from a provider, click the **Manage**
+button next to the provider in the Providers list, then save it without making
+any changes.
+
+## Configure your client
+
+Set the OpenAI-compatible API base URL of your AI coding assistant/agent
+to `http://localhost:8989/v1/mux`. If your client requires a model name and/or
+API key, you can enter any values since CodeGate manages the model selection and
+authentication.
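+
+For illustration, here's a minimal request you could send to the muxing
+endpoint with `curl`. This is a sketch that assumes the endpoint accepts a
+standard OpenAI-style chat completions request at `/v1/mux/chat/completions`;
+the model name and API key are placeholder values, since CodeGate selects the
+real model based on your active workspace.
+
+```bash
+# Hypothetical request to the CodeGate muxing endpoint (OpenAI-compatible).
+# The model and API key values are ignored; the active workspace decides both.
+curl http://localhost:8989/v1/mux/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer fake-value-not-used" \
+  -d '{
+    "model": "fake-value-not-used",
+    "messages": [{"role": "user", "content": "Hello from CodeGate muxing"}]
+  }'
+```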
+
+For specific instructions, see the
+[integration guide](../integrations/index.mdx) for your client.
diff --git a/docs/features/workspaces.mdx b/docs/features/workspaces.mdx
index 3fdd664..08dc34f 100644
--- a/docs/features/workspaces.mdx
+++ b/docs/features/workspaces.mdx
@@ -25,9 +25,13 @@ Workspaces offer several key features:
- **Custom instructions**: Customize your interactions with LLMs by augmenting
your AI assistant's system prompt, enabling tailored responses and behaviors
- for different types of tasks. CodeGate includes a library of community prompts
- that can be easily customized for specific tasks. You can also create your
- own.
+ for different types of tasks. Choose from CodeGate's library of community
+ prompts or create your own.
+
+- [**Model muxing**](./muxing.md): Select the LLM provider/model for each
+  workspace, allowing you to configure your AI assistant/agent once and switch
+  between different models on the fly. This is useful when working on multiple
+  projects or tasks that require different AI models.
- **Prompt and alert history**: Your LLM interactions (prompt history) and
CodeGate security detections (alert history) are recorded in the active
@@ -112,7 +116,8 @@ In the workspace list, open the menu (**...**) next to a workspace to
**Activate**, **Edit**, or **Archive** the workspace.
**Edit** opens the workspace settings page. From here you can rename the
-workspace, set the custom prompt instructions, or archive the workspace.
+workspace, select the LLM provider and model (see [Model muxing](./muxing.md)),
+set the custom prompt instructions, or archive the workspace.
**Archived** workspaces can be restored or permanently deleted from the
workspace list or workspace settings screen.
diff --git a/docs/how-to/configure.md b/docs/how-to/configure.md
index 66dac3b..809e2f8 100644
--- a/docs/how-to/configure.md
+++ b/docs/how-to/configure.md
@@ -1,14 +1,15 @@
---
-title: Configure CodeGate
+title: Advanced configuration
description: Customizing CodeGate's application settings
-sidebar_position: 20
+sidebar_position: 30
---
## Customize CodeGate's behavior
-The CodeGate container runs with default settings to support Ollama, Anthropic,
-and OpenAI APIs with typical settings. To customize the behavior, you can add
-extra configuration parameters to the container as environment variables:
+The CodeGate container runs with defaults that work with supported LLM providers
+using typical settings. To customize CodeGate's application settings like
+provider endpoints and logging level, you can add extra configuration parameters
+to the container as environment variables:
```bash {2}
docker run --name codegate -d -p 8989:8989 -p 9090:9090 \
@@ -31,22 +32,13 @@ CodeGate supports the following parameters:
| `CODEGATE_OPENAI_URL` | `https://api.openai.com/v1` | Specifies the OpenAI engine API endpoint URL. |
| `CODEGATE_VLLM_URL` | `http://localhost:8000` | Specifies the URL of the vLLM server to use. |
-## Example: Use CodeGate with OpenRouter
+## Example: Use CodeGate with a remote Ollama server
-[OpenRouter](https://openrouter.ai/) is an interface to many large language
-models. CodeGate's vLLM provider works with OpenRouter's API when used with the
-Continue IDE plugin.
-
-To use OpenRouter, set the vLLM URL when you launch CodeGate:
+Set the Ollama server's URL when you launch CodeGate:
```bash {2}
docker run --name codegate -d -p 8989:8989 -p 9090:9090 \
- -e CODEGATE_VLLM_URL=https://openrouter.ai/api \
+ -e CODEGATE_OLLAMA_URL=https://my.ollama-server.example \
--mount type=volume,src=codegate_volume,dst=/app/codegate_volume \
--restart unless-stopped ghcr.io/stacklok/codegate
```
-
-Then,
-[configure the Continue IDE plugin](../integrations/continue.mdx?provider=vllm)
-to use CodeGate's vLLM endpoint (`http://localhost:8989/vllm`) along with the
-model you'd like to use and your OpenRouter API key.
diff --git a/docs/how-to/dashboard.md b/docs/how-to/dashboard.md
index a900588..f8c4735 100644
--- a/docs/how-to/dashboard.md
+++ b/docs/how-to/dashboard.md
@@ -1,7 +1,7 @@
---
title: Access the dashboard
description: View alerts and usage history
-sidebar_position: 30
+sidebar_position: 20
---
## Enable dashboard access
diff --git a/docs/index.md b/docs/index.md
index 1bc2b39..81beeca 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -31,6 +31,20 @@ sequenceDiagram
deactivate CodeGate
```
+## Key features
+
+CodeGate includes several key features for privacy, security, and coding
+efficiency, including:
+
+- [Secrets encryption](./features/secrets-encryption.md) to protect your
+ sensitive credentials
+- [Dependency risk awareness](./features/dependency-risk.md) to update the LLM's
+ knowledge of malicious or deprecated open source packages
+- [Model muxing](./features/muxing.md) to quickly select the best LLM
+ provider/model for your current task
+- [Workspaces](./features/workspaces.mdx) to organize and customize your LLM
+ interactions
+
## Supported environments
CodeGate supports several development environments and AI providers.
@@ -41,8 +55,8 @@ AI coding assistants / IDEs:
- **[Cline](./integrations/cline.mdx)** in Visual Studio Code
- CodeGate supports Ollama, Anthropic, OpenAI-compatible APIs, and LM Studio
- with Cline
+ CodeGate supports Ollama, Anthropic, OpenAI and compatible APIs, OpenRouter,
+ and LM Studio with Cline
- **[Continue](./integrations/continue.mdx)** with Visual Studio Code and
JetBrains IDEs
@@ -50,11 +64,14 @@ AI coding assistants / IDEs:
CodeGate supports the following AI model providers with Continue:
- Local / self-managed: Ollama, llama.cpp, vLLM
- - Hosted: Anthropic, OpenAI and OpenAI-compatible APIs like OpenRouter
+ - Hosted: Anthropic, OpenAI and compatible APIs, and OpenRouter
- **[GitHub Copilot](./integrations/copilot.mdx)** with Visual Studio Code
(JetBrains coming soon!)
+- **[Kodu / Claude Coder](./integrations/kodu.mdx)** in Visual Studio Code with
+ OpenAI-compatible APIs
+
- **[Open Interpreter](./integrations/open-interpreter.mdx)** with
OpenAI-compatible APIs
diff --git a/docs/integrations/aider.mdx b/docs/integrations/aider.mdx
index 57bf34f..5970caf 100644
--- a/docs/integrations/aider.mdx
+++ b/docs/integrations/aider.mdx
@@ -17,6 +17,9 @@ CodeGate works with the following AI model providers through aider:
- Hosted:
- [OpenAI](https://openai.com/api/) and OpenAI-compatible APIs
+You can also configure [CodeGate muxing](../features/muxing.md) to select your
+provider and model using [workspaces](../features/workspaces.mdx).
+
:::note
This guide assumes you have already installed aider using their
diff --git a/docs/integrations/cline.mdx b/docs/integrations/cline.mdx
index 03b2ffd..21d6db4 100644
--- a/docs/integrations/cline.mdx
+++ b/docs/integrations/cline.mdx
@@ -18,7 +18,11 @@ CodeGate works with the following AI model providers through Cline:
- [LM Studio](https://lmstudio.ai/)
- Hosted:
- [Anthropic](https://www.anthropic.com/api)
- - [OpenAI](https://openai.com/api/) and OpenAI-compatible APIs
+ - [OpenAI](https://openai.com/api/) and compatible APIs
+ - [OpenRouter](https://openrouter.ai/)
+
+You can also configure [CodeGate muxing](../features/muxing.md) to select your
+provider and model using [workspaces](../features/workspaces.mdx).
## Install the Cline extension
@@ -42,10 +46,36 @@ in the VS Code documentation.
import ClineProviders from '../partials/_cline-providers.mdx';
+:::note
+
+Cline has two modes: Plan and Act. Each mode can be uniquely configured with a
+different provider and model, so you need to configure both.
+
+:::
+
To configure Cline to send requests through CodeGate:
-1. Open the Cline extension sidebar from the VS Code Activity Bar and open its
- settings using the gear icon.
+1. Open the Cline extension sidebar from the VS Code Activity Bar. Note your
+ current mode, Plan or Act.
+
+
+
+
+1. Open the Cline settings using the gear icon.
-1. Click **Done** to save the settings.
+1. Click **Done** to save the settings for your current mode.
+
+1. Switch your Cline mode from Act to Plan or vice-versa, open the settings, and
+ repeat the configuration for your desired provider and model.
## Verify configuration
diff --git a/docs/integrations/continue.mdx b/docs/integrations/continue.mdx
index 0454949..27fd6a4 100644
--- a/docs/integrations/continue.mdx
+++ b/docs/integrations/continue.mdx
@@ -18,12 +18,15 @@ CodeGate works with the following AI model providers through Continue:
- Local / self-managed:
- [Ollama](https://ollama.com/)
- - [llama.cpp](https://github.com/ggerganov/llama.cpp)
- [vLLM](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html)
+ - [llama.cpp](https://github.com/ggerganov/llama.cpp) (advanced)
- Hosted:
- - [OpenRouter](https://openrouter.ai/)
- [Anthropic](https://www.anthropic.com/api)
- [OpenAI](https://openai.com/api/)
+ - [OpenRouter](https://openrouter.ai/)
+
+You can also configure [CodeGate muxing](../features/muxing.md) to select your
+provider and model using [workspaces](../features/workspaces.mdx).
## Install the Continue plugin
@@ -92,8 +95,9 @@ To configure Continue to send requests through CodeGate:
"apiBase": "http://127.0.0.1:8989/"
```
- Replace `/<provider>` with one of: `/anthropic`, `/ollama`, `/openai`, or
- `/vllm` to match your LLM provider.
+ Replace `/<provider>` with one of: `/v1/mux` (for CodeGate muxing),
+ `/anthropic`, `/ollama`, `/openai`, `/openrouter`, or `/vllm` to match your
+ LLM provider.
If you used a different API port when launching the CodeGate container,
replace `8989` with your custom port number.
@@ -111,49 +115,37 @@ provider. Replace the values in ALL_CAPS. The configuration syntax is the same
for VS Code and JetBrains IDEs.
-
-
-You need Ollama installed on your local system with the server running
-(`ollama serve`) to use this provider.
-
-CodeGate connects to `http://host.docker.internal:11434` by default. If you
-changed the default Ollama server port or to connect to a remote Ollama
-instance, launch CodeGate with the `CODEGATE_OLLAMA_URL` environment variable
-set to the correct URL. See [Configure CodeGate](../how-to/configure.md).
+
-Replace `MODEL_NAME` with the names of model(s) you have installed locally using
-`ollama pull`. See Continue's
-[Ollama provider documentation](https://docs.continue.dev/customize/model-providers/ollama).
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate
+dashboard.
-We recommend the [Qwen2.5-Coder](https://ollama.com/library/qwen2.5-coder)
-series of models. Our minimum recommendation is:
-
-- `qwen2.5-coder:7b` for chat
-- `qwen2.5-coder:1.5b` for autocomplete
-
-These models balance performance and quality for typical systems with at least 4
-CPU cores and 16GB of RAM. If you have more compute resources available, our
-experimentation shows that larger models do yield better results.
+Configure Continue as shown. Note that the `model` and `apiKey` settings are
+required by Continue, but their values are not used.
```json title="~/.continue/config.json"
{
"models": [
{
- "title": "CodeGate-Ollama",
- "provider": "ollama",
- "model": "MODEL_NAME",
- "apiBase": "http://localhost:8989/ollama"
+ "title": "CodeGate-Mux",
+ "provider": "openai",
+ "model": "fake-value-not-used",
+ "apiKey": "fake-value-not-used",
+ "apiBase": "http://localhost:8989/v1/mux"
}
],
"modelRoles": {
- "default": "CodeGate-Ollama",
- "summarize": "CodeGate-Ollama"
+ "default": "CodeGate-Mux",
+ "summarize": "CodeGate-Mux"
},
"tabAutocompleteModel": {
- "title": "CodeGate-Ollama-Autocomplete",
- "provider": "ollama",
- "model": "MODEL_NAME",
- "apiBase": "http://localhost:8989/ollama"
+ "title": "CodeGate-Mux-Autocomplete",
+ "provider": "openai",
+ "model": "fake-value-not-used",
+ "apiKey": "fake-value-not-used",
+ "apiBase": "http://localhost:8989/v1/mux"
}
}
```
@@ -195,6 +187,54 @@ Replace `YOUR_API_KEY` with your
}
```
+
+
+
+You need Ollama installed on your local system with the server running
+(`ollama serve`) to use this provider.
+
+CodeGate connects to `http://host.docker.internal:11434` by default. If you
+changed the default Ollama server port or want to connect to a remote Ollama
+instance, launch CodeGate with the `CODEGATE_OLLAMA_URL` environment variable
+set to the correct URL. See [Configure CodeGate](../how-to/configure.md).
+
+Replace `MODEL_NAME` with the names of model(s) you have installed locally using
+`ollama pull`. See Continue's
+[Ollama provider documentation](https://docs.continue.dev/customize/model-providers/ollama).
+
+We recommend the [Qwen2.5-Coder](https://ollama.com/library/qwen2.5-coder)
+series of models. Our minimum recommendation is:
+
+- `qwen2.5-coder:7b` for chat
+- `qwen2.5-coder:1.5b` for autocomplete
+
+These models balance performance and quality for typical systems with at least 4
+CPU cores and 16GB of RAM. If you have more compute resources available, our
+experimentation shows that larger models do yield better results.
+
+```json title="~/.continue/config.json"
+{
+ "models": [
+ {
+ "title": "CodeGate-Ollama",
+ "provider": "ollama",
+ "model": "MODEL_NAME",
+ "apiBase": "http://localhost:8989/ollama"
+ }
+ ],
+ "modelRoles": {
+ "default": "CodeGate-Ollama",
+ "summarize": "CodeGate-Ollama"
+ },
+ "tabAutocompleteModel": {
+ "title": "CodeGate-Ollama-Autocomplete",
+ "provider": "ollama",
+ "model": "MODEL_NAME",
+ "apiBase": "http://localhost:8989/ollama"
+ }
+}
+```
+
@@ -234,13 +274,26 @@ Replace `YOUR_API_KEY` with your
-CodeGate's vLLM provider supports OpenRouter, a unified interface for hundreds
-of commercial and open source models. You need an
-[OpenRouter](https://openrouter.ai/) account to use this provider.
+OpenRouter is a unified interface for hundreds of commercial and open source
+models. You need an [OpenRouter](https://openrouter.ai/) account to use this
+provider.
+
+:::info Known issues
+
+**Auto-completion support**: currently, CodeGate's `/openrouter` endpoint does
+not work with Continue's `tabAutocompleteModel` setting for fill-in-the-middle
+(FIM). We are
+[working to resolve this issue](https://github.com/stacklok/codegate/issues/980).
+
+**DeepSeek models**: there is a bug in the current release version of Continue
+affecting DeepSeek models (ex: `deepseek/deepseek-r1`); to work around it, run
+the pre-release version of the Continue extension.
+
+:::
Replace `MODEL_NAME` with one of the
[available models](https://openrouter.ai/models), for example
-`qwen/qwen-2.5-coder-32b-instruct`.
+`anthropic/claude-3.5-sonnet`.
Replace `YOUR_API_KEY` with your
[OpenRouter API key](https://openrouter.ai/keys).
@@ -250,20 +303,56 @@ Replace `YOUR_API_KEY` with your
"models": [
{
"title": "CodeGate-OpenRouter",
- "provider": "vllm",
+ "provider": "openrouter",
"model": "MODEL_NAME",
"apiKey": "YOUR_API_KEY",
- "apiBase": "http://localhost:8989/vllm"
+ "apiBase": "http://localhost:8989/openrouter"
}
],
"modelRoles": {
"default": "CodeGate-OpenRouter",
"summarize": "CodeGate-OpenRouter"
+ }
+}
+```
+
+
+
+
+You need a
+[vLLM server](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html)
+running locally or access to a remote server to use this provider.
+
+CodeGate connects to `http://localhost:8000` by default. If you changed the
+default vLLM server port or want to connect to a remote vLLM instance, launch
+CodeGate with the `CODEGATE_VLLM_URL` environment variable set to the correct
+URL. See [Configure CodeGate](../how-to/configure.md).
+
+A vLLM server hosts a single model. Continue automatically selects the available
+model, so the `model` parameter is not required. See Continue's
+[vLLM provider guide](https://docs.continue.dev/customize/model-providers/more/vllm)
+for more information.
+
+If your server requires an API key, replace `YOUR_API_KEY` with the key.
+Otherwise, remove the `apiKey` parameter from both sections.
+
+```json title="~/.continue/config.json"
+{
+ "models": [
+ {
+ "title": "CodeGate-vLLM",
+ "provider": "vllm",
+ "apiKey": "YOUR_API_KEY",
+ "apiBase": "http://localhost:8989/vllm"
+ }
+ ],
+ "modelRoles": {
+ "default": "CodeGate-vLLM",
+ "summarize": "CodeGate-vLLM"
},
"tabAutocompleteModel": {
- "title": "CodeGate-OpenRouter-Autocomplete",
+ "title": "CodeGate-vLLM-Autocomplete",
"provider": "vllm",
- "model": "MODEL_NAME",
"apiKey": "YOUR_API_KEY",
"apiBase": "http://localhost:8989/vllm"
}
@@ -331,49 +420,6 @@ In the Continue config file, replace `MODEL_NAME` with the file name without the
}
```
-
-
-
-You need a
-[vLLM server](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html)
-running locally or access to a remote server to use this provider.
-
-CodeGate connects to `http://localhost:8000` by default. If you changed the
-default Ollama server port or to connect to a remote Ollama instance, launch
-CodeGate with the `CODEGATE_VLLM_URL` environment variable set to the correct
-URL. See [Configure CodeGate](../how-to/configure.md).
-
-A vLLM server hosts a single model. Continue automatically selects the available
-model, so the `model` parameter is not required. See Continue's
-[vLLM provider guide](https://docs.continue.dev/customize/model-providers/more/vllm)
-for more information.
-
-If your server requires an API key, replace `YOUR_API_KEY` with the key.
-Otherwise, remove the `apiKey` parameter from both sections.
-
-```json title="~/.continue/config.json"
-{
- "models": [
- {
- "title": "CodeGate-vLLM",
- "provider": "vllm",
- "apiKey": "YOUR_API_KEY",
- "apiBase": "http://localhost:8989/vllm"
- }
- ],
- "modelRoles": {
- "default": "CodeGate-vLLM",
- "summarize": "CodeGate-vLLM"
- },
- "tabAutocompleteModel": {
- "title": "CodeGate-vLLM-Autocomplete",
- "provider": "vllm",
- "apiKey": "YOUR_API_KEY",
- "apiBase": "http://localhost:8989/vllm"
- }
-}
-```
-
diff --git a/docs/integrations/copilot.mdx b/docs/integrations/copilot.mdx
index 0a528d8..1bd2988 100644
--- a/docs/integrations/copilot.mdx
+++ b/docs/integrations/copilot.mdx
@@ -235,7 +235,7 @@ Copilot chat and type `codegate version`. You should receive a response like
"CodeGate version 0.1.13".
+
+1. Select your provider and configure as detailed here:
+
+
+
+1. Click **Save Settings** and confirm that you want to apply the model.
+
+## Verify configuration
+
+To verify that you've successfully connected Claude Coder / Kodu to CodeGate,
+start a new task in the Claude Coder sidebar and type `codegate version`. You
+should receive a response like "CodeGate version: v0.1.15":
+
+
+
+Start a new task and try asking CodeGate about a known malicious Python package:
+
+```plain title="Claude Coder chat"
+Tell me how to use the invokehttp package from PyPI
+```
+
+CodeGate responds with a warning and a link to the Stacklok Insight report about
+this package:
+
+```plain title="Claude Coder chat"
+Warning: CodeGate detected one or more malicious, deprecated or archived packages.
+
+ • invokehttp: https://www.insight.stacklok.com/report/pypi/invokehttp
+
+The `invokehttp` package from PyPI has been identified as malicious and should
+not be used. Please avoid using this package and consider using a trusted
+alternative such as `requests` for making HTTP requests in Python.
+
+Here is an example of how to use the `requests` package:
+
+...
+```
+
+## Next steps
+
+Learn more about CodeGate's features and how to use them:
+
+- [Access the dashboard](../how-to/dashboard.md)
+- [CodeGate features](../features/index.mdx)
+
+## Remove CodeGate
+
+If you decide to stop using CodeGate, follow these steps to remove it and revert
+your environment.
+
+1. Remove the custom base URL from your Claude Coder provider settings.
+
+1. Stop and remove the CodeGate container:
+
+ ```bash
+ docker stop codegate && docker rm codegate
+ ```
+
+1. If you launched CodeGate with a persistent volume, delete it to remove the
+ CodeGate database and other files:
+
+ ```bash
+ docker volume rm codegate_volume
+ ```
diff --git a/docs/integrations/open-interpreter.mdx b/docs/integrations/open-interpreter.mdx
index 96ca7ba..49cf839 100644
--- a/docs/integrations/open-interpreter.mdx
+++ b/docs/integrations/open-interpreter.mdx
@@ -11,9 +11,12 @@ import TabItem from '@theme/TabItem';
[Open Interpreter](https://github.com/openinterpreter/open-interpreter) lets
LLMs run code locally through a ChatGPT-like interface in your terminal.
-CodeGate works with [OpenAI](https://openai.com/api/) and OpenAI-compatible APIs
+CodeGate works with [OpenAI](https://openai.com/api/) and compatible APIs
through Open Interpreter.
+You can also configure [CodeGate muxing](../features/muxing.md) to select your
+provider and model using [workspaces](../features/workspaces.mdx).
+
:::note
This guide assumes you have already installed Open Interpreter using their
@@ -26,34 +29,67 @@ This guide assumes you have already installed Open Interpreter using their
To configure Open Interpreter to send requests through CodeGate, run
`interpreter` with the
[API base setting](https://docs.openinterpreter.com/settings/all-settings#api-base)
-set to CodeGate's local API port, `http://localhost:8989/openai`.
+set to CodeGate's local API port, `http://localhost:8989/`.
-By default, CodeGate connects to the [OpenAI API](https://openai.com/api/). To
-use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
-[configuration parameter](../how-to/configure.md#config-parameters) when you run
-CodeGate.
+
+
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate dashboard.
-
-
- ```bash
- interpreter --api_base http://localhost:8989/openai --api_key YOUR_API_KEY --model MODEL_NAME
- ```
+When you run `interpreter`, the API key parameter is required but the value is
+not used. The `--model` setting must start with `openai/` but the actual model
+is determined by your CodeGate workspace.
-
-
- If you are running Open Interpreter's v1.0
- [development branch](https://github.com/OpenInterpreter/open-interpreter/tree/development):
+
+
+ ```bash
+ interpreter --api_base http://localhost:8989/v1/mux --api_key fake-value-not-used --model openai/fake-value-not-used
+ ```
- ```bash
- interpreter --api-base http://localhost:8989/openai --api-key YOUR_API_KEY --model MODEL_NAME
- ```
+
+
+ If you are running Open Interpreter's v1.0
+ [development branch](https://github.com/OpenInterpreter/open-interpreter/tree/development):
-
-
+ ```bash
+ interpreter --api-base http://localhost:8989/v1/mux --api-key fake-value-not-used --model openai/fake-value-not-used
+ ```
+
+
+
+
+
+
+You need an [OpenAI API](https://openai.com/api/) account to use this provider.
+To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
+[configuration parameter](../how-to/configure.md) when you launch CodeGate.
+
+
+
+ ```bash
+ interpreter --api_base http://localhost:8989/openai --api_key YOUR_API_KEY --model MODEL_NAME
+ ```
+
+
+
+ If you are running Open Interpreter's v1.0
+ [development branch](https://github.com/OpenInterpreter/open-interpreter/tree/development):
+
+ ```bash
+ interpreter --api-base http://localhost:8989/openai --api-key YOUR_API_KEY --model MODEL_NAME
+ ```
+
+
+
+
Replace `YOUR_API_KEY` with your OpenAI API key, and `MODEL_NAME` with your
desired model, like `openai/gpt-4o-mini`.
+
+
+
:::info
The `--model` parameter value must start with `openai/` for CodeGate to properly
diff --git a/docs/partials/_aider-providers.mdx b/docs/partials/_aider-providers.mdx
index 9d887b7..bff3593 100644
--- a/docs/partials/_aider-providers.mdx
+++ b/docs/partials/_aider-providers.mdx
@@ -5,56 +5,24 @@ import TabItem from '@theme/TabItem';
import LocalModelRecommendation from './_local-model-recommendation.md';
-
-
+
+
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate dashboard.
-You need an [OpenAI API](https://openai.com/api/) account to use this provider.
-To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
-[configuration parameter](../how-to/configure.md#config-parameters).
-
-Before you run aider, set environment variables for your API key and to set the
-API base URL to CodeGate's API port. Alternately, use one of aider's other
-[supported configuration methods](https://aider.chat/docs/config/api-keys.html)
-to set the corresponding values.
-
-
-
-
-```bash
-export OPENAI_API_KEY=<YOUR_API_KEY>
-export OPENAI_API_BASE=http://localhost:8989/openai
-```
-
-:::note
-
-To persist these variables, add them to your shell profile (e.g., `~/.bashrc` or
-`~/.zshrc`).
-
-:::
+Run aider with the OpenAI base URL set to `http://localhost:8989/v1/mux`. You
+can do this with the `OPENAI_API_BASE` environment variable or on the command
+line as shown below.
-
-
+The `--openai-api-key` parameter is required but the value is not used. The
+`--model` setting must start with `openai/` but the actual model is determined
+by your CodeGate workspace.
```bash
-setx OPENAI_API_KEY <YOUR_API_KEY>
-setx OPENAI_API_BASE http://localhost:8989/openai
+aider --openai-api-base http://localhost:8989/v1/mux --openai-api-key fake-value-not-used --model openai/fake-value-not-used
```
-:::note
-
-Restart your shell after running `setx`.
-
-:::
-
-
-
-
-Replace `<YOUR_API_KEY>` with your
-[OpenAI API key](https://platform.openai.com/api-keys).
-
-Then run `aider` as normal. For more information, see the
-[aider docs for connecting to OpenAI](https://aider.chat/docs/llms/openai.html).
-
@@ -116,5 +84,55 @@ locally using `ollama pull`.
For more information, see the
[aider docs for connecting to Ollama](https://aider.chat/docs/llms/ollama.html).
+
+
+
+You need an [OpenAI API](https://openai.com/api/) account to use this provider.
+To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
+[configuration parameter](../how-to/configure.md#config-parameters).
+
+Before you run aider, set environment variables for your API key and set the
+API base URL to CodeGate's API port. Alternatively, use one of aider's other
+[supported configuration methods](https://aider.chat/docs/config/api-keys.html)
+to set the corresponding values.
+
+
+
+
+```bash
+export OPENAI_API_KEY=<YOUR_API_KEY>
+export OPENAI_API_BASE=http://localhost:8989/openai
+```
+
+:::note
+
+To persist these variables, add them to your shell profile (e.g., `~/.bashrc` or
+`~/.zshrc`).
+
+:::
+
+
+
+
+```bash
+setx OPENAI_API_KEY <YOUR_API_KEY>
+setx OPENAI_API_BASE http://localhost:8989/openai
+```
+
+:::note
+
+Restart your shell after running `setx`.
+
+:::
+
+
+
+
+Replace `<YOUR_API_KEY>` with your
+[OpenAI API key](https://platform.openai.com/api-keys).
+
+Then run `aider` as normal. For more information, see the
+[aider docs for connecting to OpenAI](https://aider.chat/docs/llms/openai.html).
+
diff --git a/docs/partials/_cline-providers.mdx b/docs/partials/_cline-providers.mdx
index 08a2a01..8f5d87d 100644
--- a/docs/partials/_cline-providers.mdx
+++ b/docs/partials/_cline-providers.mdx
@@ -7,8 +7,28 @@ import ThemedImage from '@theme/ThemedImage';
import LocalModelRecommendation from './_local-model-recommendation.md';
-
-
+
+
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate dashboard.
+
+In the Cline settings, choose **OpenAI Compatible** as your provider. Set the
+**Base URL** to `http://localhost:8989/v1/mux`. Enter anything you want into the
+API key and model ID fields; these are required but not used since the actual
+provider and model are determined by your CodeGate workspace.
+
+
+
+
+
You need an [Anthropic API](https://www.anthropic.com/api) account to use this
provider.
@@ -30,24 +50,47 @@ To enable CodeGate, enable **Use custom base URL** and enter
/>
-
+
-You need an [OpenAI API](https://openai.com/api/) account to use this provider.
-To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
-[configuration parameter](../how-to/configure.md) when you launch CodeGate.
+You need LM Studio installed on your local system with a server running from LM
+Studio's **Developer** tab to use this provider. See the
+[LM Studio docs](https://lmstudio.ai/docs/api/server) for more information.
-In the Cline settings, choose **OpenAI Compatible** as your provider, enter your
-OpenAI API key, and set your preferred model (example: `gpt-4o-mini`).
+Cline uses large prompts, so you will likely need to increase the context length
+for the model you've loaded in LM Studio. In the Developer tab, select the model
+you'll use with CodeGate, open the **Load** tab on the right and increase the
+**Context Length** to _at least_ 18k (18,432) tokens, then reload the model.
-To enable CodeGate, set the **Base URL** to `https://localhost:8989/openai`.
+
+
+CodeGate connects to `http://host.docker.internal:1234` by default. If you
+changed the default LM Studio server port, launch CodeGate with the
+`CODEGATE_LM_STUDIO_URL` environment variable set to the correct URL. See
+[Configure CodeGate](/how-to/configure.md).
+
+In the Cline settings, choose LM Studio as your provider and set the **Base
+URL** to `http://localhost:8989/lm_studio`.
+
+Set the **Model ID** to `lm_studio/<model_name>`, where `<model_name>` is the
+name of the model you're serving through LM Studio (shown in the Developer tab),
+for example `lm_studio/qwen2.5-coder-7b-instruct`.
+
+
@@ -79,47 +122,46 @@ locally using `ollama pull`.
/>
-
+
-You need LM Studio installed on your local system with a server running from LM
-Studio's **Developer** tab to use this provider. See the
-[LM Studio docs](https://lmstudio.ai/docs/api/server) for more information.
+You need an [OpenAI API](https://openai.com/api/) account to use this provider.
+To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
+[configuration parameter](../how-to/configure.md) when you launch CodeGate.
-Cline uses large prompts, so you will likely need to increase the context length
-for the model you've loaded in LM Studio. In the Developer tab, select the model
-you'll use with CodeGate, open the **Load** tab on the right and increase the
-**Context Length** to _at least_ 18k (18,432) tokens, then reload the model.
+In the Cline settings, choose **OpenAI Compatible** as your provider, enter your
+OpenAI API key, and set your preferred model (example: `gpt-4o-mini`).
+
+To enable CodeGate, set the **Base URL** to `http://localhost:8989/openai`.
-CodeGate connects to `http://host.docker.internal:1234` by default. If you
-changed the default LM Studio server port, launch CodeGate with the
-`CODEGATE_LM_STUDIO_URL` environment variable set to the correct URL. See
-[Configure CodeGate](/how-to/configure.md).
+
+
-In the Cline settings, choose LM Studio as your provider and set the **Base
-URL** to `http://localhost:8989/lm_studio`.
+You need an [OpenRouter](https://openrouter.ai/) account to use this provider.
-Set the **Model ID** to `lm_studio/<model_name>`, where `<model_name>` is the
-name of the model you're serving through LM Studio (shown in the Developer tab),
-for example `lm_studio/qwen2.5-coder-7b-instruct`.
+In the Cline settings, choose **OpenAI Compatible** as your provider (NOT
+OpenRouter), enter your
+[OpenRouter API key](https://openrouter.ai/settings/keys), and set your
+[preferred model](https://openrouter.ai/models) (example:
+`anthropic/claude-3.5-sonnet`).
-
+To enable CodeGate, set the **Base URL** to `http://localhost:8989/openrouter`.
diff --git a/docs/partials/_kodu-providers.mdx b/docs/partials/_kodu-providers.mdx
new file mode 100644
index 0000000..0e13dfc
--- /dev/null
+++ b/docs/partials/_kodu-providers.mdx
@@ -0,0 +1,45 @@
+{/* This content is pulled out as an include because Prettier can't handle the indentation needed to get this to appear in the right spot under a list item. */}
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate dashboard.
+
+In the **Provider Settings**, select **OpenAI Compatible**. Set the
+**Base URL** to `http://localhost:8989/v1/mux`.
+
+Enter anything you want into the Model ID and API key fields; these are not used
+since the actual provider and model are determined by your CodeGate workspace.
+
+
+
+
+You need an [OpenAI API](https://openai.com/api/) account to use this provider.
+To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
+[configuration parameter](../how-to/configure.md) when you launch CodeGate.
+
+In the **Provider Settings**, select **OpenAI Compatible**. Set the
+**Base URL** to `http://localhost:8989/openai`.
+
+Enter the **Model ID** and your
+[OpenAI API key](https://platform.openai.com/api-keys). A reasoning model like
+`o1-mini` or `o3-mini` is recommended.
+
+
+
+
+You need an [OpenRouter](https://openrouter.ai/) account to use this provider.
+
+In the **Provider Settings**, select **OpenAI Compatible**. Set the
+**Base URL** to `http://localhost:8989/openrouter`.
+
+Enter your [preferred model](https://openrouter.ai/models) for the **Model ID**
+(example: `anthropic/claude-3.5-sonnet`) and add your
+[OpenRouter API key](https://openrouter.ai/keys).
+
+
+
diff --git a/eslint.config.mjs b/eslint.config.mjs
index 9b7d706..ac144b2 100644
--- a/eslint.config.mjs
+++ b/eslint.config.mjs
@@ -10,10 +10,13 @@ export default [
{ ignores: ['.docusaurus/', 'build/', 'node_modules/'] },
{ files: ['**/*.{js,mjs,cjs,ts,jsx,tsx}'] },
{ languageOptions: { globals: globals.node } },
+
pluginJs.configs.recommended,
...tseslint.configs.recommended,
pluginReact.configs.flat.recommended,
eslintConfigPrettier,
+
+ // Configs for .mdx files
{
...mdx.flat,
processor: mdx.createRemarkProcessor({
@@ -23,8 +26,19 @@ export default [
rules: {
...mdx.flat.rules,
'react/no-unescaped-entities': 'off',
+ 'react/jsx-no-undef': ['error', { allowGlobals: true }],
+ },
+ languageOptions: {
+ ...mdx.flat.languageOptions,
+ globals: {
+ ...mdx.flat.languageOptions.globals,
+ // Add global components from src/theme/MDXComponents.tsx here
+ Columns: 'readonly',
+ Column: 'readonly',
+ },
},
},
+
{
settings: {
react: {
diff --git a/src/components/Column/index.tsx b/src/components/Column/index.tsx
new file mode 100644
index 0000000..b96a064
--- /dev/null
+++ b/src/components/Column/index.tsx
@@ -0,0 +1,21 @@
+import React, { ReactNode, CSSProperties } from 'react';
+// Import clsx library for conditional classes.
+import clsx from 'clsx';
+
+interface ColumnProps {
+ children: ReactNode;
+ className?: string;
+ style?: CSSProperties;
+}
+
+// Define the Column component as a function
+// with children, className, style as properties
+// See https://infima.dev/docs/ to learn more.
+// The style prop only affects the element inside the column, but we could also have made the same distinction as for the classes.
+export default function Column({ children, className, style }: ColumnProps) {
+  return (
+    <div className={clsx('col', className)} style={style}>
+      {children}
+    </div>
+  );
+}
diff --git a/src/components/Columns/index.tsx b/src/components/Columns/index.tsx
new file mode 100644
index 0000000..0771cef
--- /dev/null
+++ b/src/components/Columns/index.tsx
@@ -0,0 +1,24 @@
+import React, { ReactNode, CSSProperties } from 'react';
+// Import clsx library for conditional classes.
+import clsx from 'clsx';
+
+interface ColumnsProps {
+ children: ReactNode;
+ className?: string;
+ style?: CSSProperties;
+}
+
+// Define the Columns component as a function
+// with children, className, and style as properties
+// className will allow you to pass either your custom classes or the native infima classes https://infima.dev/docs/layout/grid.
+// Style" will allow you to either pass your custom styles directly, which can be an alternative to the "styles.module.css" file in certain cases.
+export default function Columns({ children, className, style }: ColumnsProps) {
+ return (
+ // This section encompasses the columns that we will integrate with children from a dedicated component to allow the addition of columns as needed
+
+ );
+}
diff --git a/src/theme/MDXComponents.tsx b/src/theme/MDXComponents.tsx
new file mode 100644
index 0000000..43bad93
--- /dev/null
+++ b/src/theme/MDXComponents.tsx
@@ -0,0 +1,18 @@
+/*
+ Extra components to load into the global scope.
+ See https://docusaurus.io/docs/markdown-features/react#mdx-component-scope
+
+ To avoid linting errors, add these to the `languageOptions.globals` section
+ for mdx files in the `eslint.config.mjs` file
+*/
+
+import MDXComponents from '@theme-original/MDXComponents';
+import Columns from '@site/src/components/Columns';
+import Column from '@site/src/components/Column';
+
+export default {
+ // Reusing the default mapping
+ ...MDXComponents,
+ Columns,
+ Column,
+};
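+
+/*
+  For illustration only: a hypothetical MDX page could then use these
+  globally-registered components without importing them, for example:
+
+    <Columns>
+      <Column className="text--center">First column</Column>
+      <Column className="text--center">Second column</Column>
+    </Columns>
+*/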
diff --git a/static/img/integrations/cline-mode-act-dark.webp b/static/img/integrations/cline-mode-act-dark.webp
new file mode 100644
index 0000000..01e9f5f
Binary files /dev/null and b/static/img/integrations/cline-mode-act-dark.webp differ
diff --git a/static/img/integrations/cline-mode-act-light.webp b/static/img/integrations/cline-mode-act-light.webp
new file mode 100644
index 0000000..278b72a
Binary files /dev/null and b/static/img/integrations/cline-mode-act-light.webp differ
diff --git a/static/img/integrations/cline-mode-plan-dark.webp b/static/img/integrations/cline-mode-plan-dark.webp
new file mode 100644
index 0000000..90ac005
Binary files /dev/null and b/static/img/integrations/cline-mode-plan-dark.webp differ
diff --git a/static/img/integrations/cline-mode-plan-light.webp b/static/img/integrations/cline-mode-plan-light.webp
new file mode 100644
index 0000000..968d06f
Binary files /dev/null and b/static/img/integrations/cline-mode-plan-light.webp differ
diff --git a/static/img/integrations/cline-provider-mux-dark.webp b/static/img/integrations/cline-provider-mux-dark.webp
new file mode 100644
index 0000000..ad20e6a
Binary files /dev/null and b/static/img/integrations/cline-provider-mux-dark.webp differ
diff --git a/static/img/integrations/cline-provider-mux-light.webp b/static/img/integrations/cline-provider-mux-light.webp
new file mode 100644
index 0000000..966d4ed
Binary files /dev/null and b/static/img/integrations/cline-provider-mux-light.webp differ
diff --git a/static/img/integrations/cline-provider-openrouter-dark.webp b/static/img/integrations/cline-provider-openrouter-dark.webp
new file mode 100644
index 0000000..66b7ecc
Binary files /dev/null and b/static/img/integrations/cline-provider-openrouter-dark.webp differ
diff --git a/static/img/integrations/cline-provider-openrouter-light.webp b/static/img/integrations/cline-provider-openrouter-light.webp
new file mode 100644
index 0000000..d53e5f1
Binary files /dev/null and b/static/img/integrations/cline-provider-openrouter-light.webp differ
diff --git a/static/img/integrations/kodu-codegate-version-dark.webp b/static/img/integrations/kodu-codegate-version-dark.webp
new file mode 100644
index 0000000..2aac989
Binary files /dev/null and b/static/img/integrations/kodu-codegate-version-dark.webp differ
diff --git a/static/img/integrations/kodu-codegate-version-light.webp b/static/img/integrations/kodu-codegate-version-light.webp
new file mode 100644
index 0000000..2a6274b
Binary files /dev/null and b/static/img/integrations/kodu-codegate-version-light.webp differ
diff --git a/static/img/integrations/kodu-settings-dark.webp b/static/img/integrations/kodu-settings-dark.webp
new file mode 100644
index 0000000..3fca9bd
Binary files /dev/null and b/static/img/integrations/kodu-settings-dark.webp differ
diff --git a/static/img/integrations/kodu-settings-light.webp b/static/img/integrations/kodu-settings-light.webp
new file mode 100644
index 0000000..c0e73ec
Binary files /dev/null and b/static/img/integrations/kodu-settings-light.webp differ