test: add unit test for src/store/user/slices/modelList/selectors/keyVaults.ts #5733

Status: Open. Wants to merge 4 commits into base: main.

2 changes: 1 addition & 1 deletion Dockerfile
@@ -212,7 +212,7 @@ ENV \
# Upstage
UPSTAGE_API_KEY="" UPSTAGE_MODEL_LIST="" \
# Wenxin
WENXIN_ACCESS_KEY="" WENXIN_SECRET_KEY="" WENXIN_MODEL_LIST="" \
WENXIN_API_KEY="" WENXIN_MODEL_LIST="" \
# xAI
XAI_API_KEY="" XAI_MODEL_LIST="" XAI_PROXY_URL="" \
# 01.AI
2 changes: 1 addition & 1 deletion Dockerfile.database
@@ -247,7 +247,7 @@ ENV \
# Upstage
UPSTAGE_API_KEY="" UPSTAGE_MODEL_LIST="" \
# Wenxin
WENXIN_ACCESS_KEY="" WENXIN_SECRET_KEY="" WENXIN_MODEL_LIST="" \
WENXIN_API_KEY="" WENXIN_MODEL_LIST="" \
# xAI
XAI_API_KEY="" XAI_MODEL_LIST="" XAI_PROXY_URL="" \
# 01.AI
1 change: 0 additions & 1 deletion package.json
@@ -111,7 +111,6 @@
"@aws-sdk/s3-request-presigner": "^3.723.0",
"@azure/core-rest-pipeline": "1.16.0",
"@azure/openai": "1.0.0-beta.12",
"@baiducloud/qianfan": "^0.1.9",
"@cfworker/json-schema": "^4.1.0",
"@clerk/localizations": "^3.9.6",
"@clerk/nextjs": "^6.10.6",
27 changes: 0 additions & 27 deletions src/app/(backend)/webapi/chat/wenxin/route.test.ts

This file was deleted.

30 changes: 0 additions & 30 deletions src/app/(backend)/webapi/chat/wenxin/route.ts

This file was deleted.

44 changes: 0 additions & 44 deletions src/app/(main)/settings/llm/ProviderList/Wenxin/index.tsx

This file was deleted.

6 changes: 2 additions & 4 deletions src/app/(main)/settings/llm/ProviderList/providers.tsx
@@ -27,6 +27,7 @@ import {
TaichuProviderCard,
TogetherAIProviderCard,
UpstageProviderCard,
+  WenxinProviderCard,
XAIProviderCard,
ZeroOneProviderCard,
ZhiPuProviderCard,
@@ -40,7 +41,6 @@ import { useGithubProvider } from './Github';
import { useHuggingFaceProvider } from './HuggingFace';
import { useOllamaProvider } from './Ollama';
import { useOpenAIProvider } from './OpenAI';
-import { useWenxinProvider } from './Wenxin';

export const useProviderList = (): ProviderItem[] => {
const AzureProvider = useAzureProvider();
@@ -50,7 +50,6 @@ export const useProviderList = (): ProviderItem[] => {
const CloudflareProvider = useCloudflareProvider();
const GithubProvider = useGithubProvider();
const HuggingFaceProvider = useHuggingFaceProvider();
-  const WenxinProvider = useWenxinProvider();

return useMemo(
() => [
@@ -75,7 +74,7 @@ export const useProviderList = (): ProviderItem[] => {
UpstageProviderCard,
XAIProviderCard,
QwenProviderCard,
-      WenxinProvider,
+      WenxinProviderCard,
HunyuanProviderCard,
SparkProviderCard,
ZhiPuProviderCard,
@@ -99,7 +98,6 @@ export const useProviderList = (): ProviderItem[] => {
BedrockProvider,
CloudflareProvider,
GithubProvider,
-      WenxinProvider,
HuggingFaceProvider,
],
);
61 changes: 0 additions & 61 deletions src/app/(main)/settings/provider/(detail)/wenxin/page.tsx

This file was deleted.

24 changes: 12 additions & 12 deletions src/config/aiModels/wenxin.ts
@@ -7,7 +7,7 @@ const wenxinChatModels: AIChatModelCard[] = [
'百度自研的旗舰级大规模⼤语⾔模型,覆盖海量中英文语料,具有强大的通用能力,可满足绝大部分对话问答、创作生成、插件应用场景要求;支持自动对接百度搜索插件,保障问答信息时效。',
displayName: 'ERNIE 3.5 8K',
enabled: true,
-    id: 'ERNIE-3.5-8K',
+    id: 'ernie-3.5-8k',
pricing: {
currency: 'CNY',
input: 0.8,
@@ -20,7 +20,7 @@ const wenxinChatModels: AIChatModelCard[] = [
description:
'百度自研的旗舰级大规模⼤语⾔模型,覆盖海量中英文语料,具有强大的通用能力,可满足绝大部分对话问答、创作生成、插件应用场景要求;支持自动对接百度搜索插件,保障问答信息时效。',
displayName: 'ERNIE 3.5 8K Preview',
-    id: 'ERNIE-3.5-8K-Preview',
+    id: 'ernie-3.5-8k-preview',
pricing: {
currency: 'CNY',
input: 0.8,
@@ -34,7 +34,7 @@ const wenxinChatModels: AIChatModelCard[] = [
'百度自研的旗舰级大规模⼤语⾔模型,覆盖海量中英文语料,具有强大的通用能力,可满足绝大部分对话问答、创作生成、插件应用场景要求;支持自动对接百度搜索插件,保障问答信息时效。',
displayName: 'ERNIE 3.5 128K',
enabled: true,
-    id: 'ERNIE-3.5-128K',
+    id: 'ernie-3.5-128k',
pricing: {
currency: 'CNY',
input: 0.8,
@@ -48,7 +48,7 @@ const wenxinChatModels: AIChatModelCard[] = [
'百度自研的旗舰级超大规模⼤语⾔模型,相较ERNIE 3.5实现了模型能力全面升级,广泛适用于各领域复杂任务场景;支持自动对接百度搜索插件,保障问答信息时效。',
displayName: 'ERNIE 4.0 8K',
enabled: true,
-    id: 'ERNIE-4.0-8K-Latest',
+    id: 'ernie-4.0-8k-latest',
pricing: {
currency: 'CNY',
input: 30,
@@ -61,7 +61,7 @@ const wenxinChatModels: AIChatModelCard[] = [
description:
'百度自研的旗舰级超大规模⼤语⾔模型,相较ERNIE 3.5实现了模型能力全面升级,广泛适用于各领域复杂任务场景;支持自动对接百度搜索插件,保障问答信息时效。',
displayName: 'ERNIE 4.0 8K Preview',
-    id: 'ERNIE-4.0-8K-Preview',
+    id: 'ernie-4.0-8k-preview',
pricing: {
currency: 'CNY',
input: 30,
@@ -75,7 +75,7 @@ const wenxinChatModels: AIChatModelCard[] = [
'百度自研的旗舰级超大规模⼤语⾔模型,综合效果表现出色,广泛适用于各领域复杂任务场景;支持自动对接百度搜索插件,保障问答信息时效。相较于ERNIE 4.0在性能表现上更优秀',
displayName: 'ERNIE 4.0 Turbo 8K',
enabled: true,
-    id: 'ERNIE-4.0-Turbo-8K-Latest',
+    id: 'ernie-4.0-turbo-8k-latest',
pricing: {
currency: 'CNY',
input: 20,
@@ -89,7 +89,7 @@ const wenxinChatModels: AIChatModelCard[] = [
'百度自研的旗舰级超大规模⼤语⾔模型,综合效果表现出色,广泛适用于各领域复杂任务场景;支持自动对接百度搜索插件,保障问答信息时效。相较于ERNIE 4.0在性能表现上更优秀',
displayName: 'ERNIE 4.0 Turbo 128K',
enabled: true,
-    id: 'ERNIE-4.0-Turbo-128K',
+    id: 'ernie-4.0-turbo-128k',
pricing: {
currency: 'CNY',
input: 20,
@@ -102,7 +102,7 @@ const wenxinChatModels: AIChatModelCard[] = [
description:
'百度自研的旗舰级超大规模⼤语⾔模型,综合效果表现出色,广泛适用于各领域复杂任务场景;支持自动对接百度搜索插件,保障问答信息时效。相较于ERNIE 4.0在性能表现上更优秀',
displayName: 'ERNIE 4.0 Turbo 8K Preview',
-    id: 'ERNIE-4.0-Turbo-8K-Preview',
+    id: 'ernie-4.0-turbo-8k-preview',
pricing: {
currency: 'CNY',
input: 20,
@@ -116,7 +116,7 @@ const wenxinChatModels: AIChatModelCard[] = [
'百度自研的轻量级大语言模型,兼顾优异的模型效果与推理性能,效果比ERNIE Lite更优,适合低算力AI加速卡推理使用。',
displayName: 'ERNIE Lite Pro 128K',
enabled: true,
-    id: 'ERNIE-Lite-Pro-128K',
+    id: 'ernie-lite-pro-128k',
pricing: {
currency: 'CNY',
input: 0.2,
@@ -130,7 +130,7 @@ const wenxinChatModels: AIChatModelCard[] = [
'百度2024年最新发布的自研高性能大语言模型,通用能力优异,效果比ERNIE Speed更优,适合作为基座模型进行精调,更好地处理特定场景问题,同时具备极佳的推理性能。',
displayName: 'ERNIE Speed Pro 128K',
enabled: true,
-    id: 'ERNIE-Speed-Pro-128K',
+    id: 'ernie-speed-pro-128k',
pricing: {
currency: 'CNY',
input: 0.3,
@@ -143,7 +143,7 @@ const wenxinChatModels: AIChatModelCard[] = [
description:
'百度2024年最新发布的自研高性能大语言模型,通用能力优异,适合作为基座模型进行精调,更好地处理特定场景问题,同时具备极佳的推理性能。',
displayName: 'ERNIE Speed 128K',
-    id: 'ERNIE-Speed-128K',
+    id: 'ernie-speed-128k',
pricing: {
currency: 'CNY',
input: 0,
@@ -156,7 +156,7 @@ const wenxinChatModels: AIChatModelCard[] = [
description:
'百度自研的垂直场景大语言模型,适合游戏NPC、客服对话、对话角色扮演等应用场景,人设风格更为鲜明、一致,指令遵循能力更强,推理性能更优。',
displayName: 'ERNIE Character 8K',
-    id: 'ERNIE-Character-8K',
+    id: 'ernie-character-8k',
pricing: {
currency: 'CNY',
input: 4,
8 changes: 3 additions & 5 deletions src/config/llm.ts
@@ -64,8 +64,7 @@ export const getLLMConfig = () => {
AWS_SESSION_TOKEN: z.string().optional(),

ENABLED_WENXIN: z.boolean(),
-      WENXIN_ACCESS_KEY: z.string().optional(),
-      WENXIN_SECRET_KEY: z.string().optional(),
+      WENXIN_API_KEY: z.string().optional(),

ENABLED_OLLAMA: z.boolean(),

@@ -186,9 +185,8 @@ export const getLLMConfig = () => {
AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY,
AWS_SESSION_TOKEN: process.env.AWS_SESSION_TOKEN,

-      ENABLED_WENXIN: !!process.env.WENXIN_ACCESS_KEY && !!process.env.WENXIN_SECRET_KEY,
-      WENXIN_ACCESS_KEY: process.env.WENXIN_ACCESS_KEY,
-      WENXIN_SECRET_KEY: process.env.WENXIN_SECRET_KEY,
+      ENABLED_WENXIN: !!process.env.WENXIN_API_KEY,
+      WENXIN_API_KEY: process.env.WENXIN_API_KEY,

ENABLED_OLLAMA: process.env.ENABLED_OLLAMA !== '0',

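Note: the src/config/llm.ts change above reduces Wenxin enablement to a single environment-variable check, replacing the old ACCESS_KEY/SECRET_KEY pair. The snippet below is only a minimal, self-contained sketch of that pattern; it is not the actual getLLMConfig implementation (which defines these fields alongside all other providers), and the getWenxinConfig helper name is illustrative.

```ts
import { z } from 'zod';

// Sketch of the simplified Wenxin env config: the provider is considered
// enabled whenever WENXIN_API_KEY is present, mirroring the diff above.
const wenxinEnvSchema = z.object({
  ENABLED_WENXIN: z.boolean(),
  WENXIN_API_KEY: z.string().optional(),
});

export const getWenxinConfig = () =>
  wenxinEnvSchema.parse({
    ENABLED_WENXIN: !!process.env.WENXIN_API_KEY,
    WENXIN_API_KEY: process.env.WENXIN_API_KEY,
  });
```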