feat(karpor): add AI proxy parameters to Karpor's value #104

Merged · 4 commits · Feb 8, 2025
4 changes: 2 additions & 2 deletions charts/karpor/Chart.yaml
@@ -1,8 +1,8 @@
 apiVersion: v2
 name: karpor
-version: 0.7.3
+version: 0.7.4
 type: application
-appVersion: 0.6.1
+appVersion: 0.6.2
 description: A modern kubernetes visualization tool (Karpor).
 home: https://github.com/KusionStack/karpor
 icon: https://kusionstack.io/karpor/assets/logo/logo.svg
7 changes: 6 additions & 1 deletion charts/karpor/README.md
@@ -70,11 +70,16 @@ The Karpor Server Component is the main backend server. It itself is an `apiserver`,

 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
-| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","temperature":1,"topP":1}` | AI configuration section. The AI analysis feature requires that [authToken, baseUrl] be assigned values. |
+| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","proxy":{"enabled":false,"httpProxy":"","httpsProxy":"","noProxy":""},"temperature":1,"topP":1}` | AI configuration section. The AI analysis feature requires that [authToken, baseUrl] be assigned values. |
 | server.ai.authToken | string | `""` | Authentication token for accessing the AI service. |
 | server.ai.backend | string | `"openai"` | Backend service or platform that the AI model is hosted on. Available options: <br/>- `"openai"`: OpenAI API (default)<br/>- `"azureopenai"`: Azure OpenAI Service<br/>- `"huggingface"`: Hugging Face API<br/> If the backend you are using is compatible with OpenAI, there is no need to change anything here. |
 | server.ai.baseUrl | string | `""` | Base URL of the AI service, e.g., "https://api.openai.com/v1". |
 | server.ai.model | string | `"gpt-3.5-turbo"` | Name or identifier of the AI model to be used, e.g., "gpt-3.5-turbo". |
+| server.ai.proxy | object | `{"enabled":false,"httpProxy":"","httpsProxy":"","noProxy":""}` | Proxy configuration for AI service connections. |
+| server.ai.proxy.enabled | bool | `false` | Enable proxy settings for AI service connections. When false, proxy settings will be ignored. |
+| server.ai.proxy.httpProxy | string | `""` | HTTP proxy URL for AI service connections (e.g., "http://proxy.example.com:8080"). |
+| server.ai.proxy.httpsProxy | string | `""` | HTTPS proxy URL for AI service connections (e.g., "https://proxy.example.com:8080"). |
+| server.ai.proxy.noProxy | string | `""` | No-proxy list for AI service connections (e.g., "localhost,127.0.0.1,example.com"). |
 | server.ai.temperature | float | `1` | Temperature parameter for the AI model. This controls the randomness of the output, where a higher value (e.g., 1.0) makes the output more random, and a lower value (e.g., 0.0) makes it more deterministic. |
 | server.ai.topP | float | `1` | Top-p (nucleus sampling) parameter for the AI model. This controls the probability mass to consider for sampling, where a higher value leads to greater diversity in the generated content (typically ranging from 0 to 1). |
 | server.enableRbac | bool | `false` | Enable RBAC authorization if set to true. |
12 changes: 12 additions & 0 deletions charts/karpor/templates/karpor-server.yml
@@ -65,6 +65,18 @@ spec:
 {{- if .Values.server.ai.topP }}
 - --ai-top-p={{ .Values.server.ai.topP }}
 {{- end }}
+{{- if .Values.server.ai.proxy.enabled }}
+- --ai-proxy-enabled=true
+{{- if .Values.server.ai.proxy.httpProxy }}
+- --ai-http-proxy={{ .Values.server.ai.proxy.httpProxy }}
+{{- end }}
+{{- if .Values.server.ai.proxy.httpsProxy }}
+- --ai-https-proxy={{ .Values.server.ai.proxy.httpsProxy }}
+{{- end }}
+{{- if .Values.server.ai.proxy.noProxy }}
+- --ai-no-proxy={{ .Values.server.ai.proxy.noProxy }}
+{{- end }}
+{{- end }}
 {{- end }}
 command:
 - /karpor
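To make the template change above concrete: when `server.ai.proxy.enabled` is true and all three proxy values are set, the server container gains four extra flags. The following is a sketch of the rendered args (Deployment boilerplate elided; the proxy hosts are hypothetical placeholders, not chart defaults):

```yaml
# Rendered container args (sketch) with proxy enabled
args:
  - --ai-proxy-enabled=true
  - --ai-http-proxy=http://proxy.example.com:8080
  - --ai-https-proxy=https://proxy.example.com:8080
  - --ai-no-proxy=localhost,127.0.0.1
```

Note the gating: `--ai-proxy-enabled=true` wraps the whole block, and each URL flag is emitted only when its value is non-empty, so an empty `httpsProxy`, for example, simply produces no `--ai-https-proxy` flag.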
10 changes: 10 additions & 0 deletions charts/karpor/values.yaml
@@ -60,6 +60,16 @@ server:
 # -- Top-p (nucleus sampling) parameter for the AI model. This controls the probability mass to consider for
 # sampling, where a higher value leads to greater diversity in the generated content (typically ranging from 0 to 1).
 topP: 1.0
+# -- Proxy configuration for AI service connections
+proxy:
+  # -- Enable proxy settings for AI service connections. When false, proxy settings will be ignored.
+  enabled: false
+  # -- HTTP proxy URL for AI service connections (e.g., "http://proxy.example.com:8080")
+  httpProxy: ""
+  # -- HTTPS proxy URL for AI service connections (e.g., "https://proxy.example.com:8080")
+  httpsProxy: ""
+  # -- No-proxy list for AI service connections (e.g., "localhost,127.0.0.1,example.com")
+  noProxy: ""
 
 # Configuration for Karpor syncer
 syncer:
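Putting the new values together, the proxy can be enabled with a small override file. A minimal sketch — the token, proxy endpoints, and the release/repo names in the comment are placeholders for illustration, not values shipped with the chart:

```yaml
# my-values.yaml — enable the AI proxy settings introduced in this PR
server:
  ai:
    authToken: "<your-token>"            # required for the AI analysis feature
    baseUrl: "https://api.openai.com/v1" # required for the AI analysis feature
    proxy:
      enabled: true
      httpProxy: "http://proxy.example.com:8080"
      httpsProxy: "https://proxy.example.com:8080"
      noProxy: "localhost,127.0.0.1"

# Applied with something like:
#   helm upgrade --install karpor <chart-ref> -f my-values.yaml
```

Because the template checks `proxy.enabled` first, leaving `enabled: false` means the three URL fields are ignored entirely, so the override can be kept in place and toggled with a single flag.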