
Conversation

@lss233 (Owner) commented May 4, 2025


Summary by Sourcery

Enhance model capability annotation and detection system by introducing a comprehensive model type and ability classification mechanism

New Features:

  • Introduced a new ModelType enum to classify different model types
  • Created detailed ability enums for different model types (LLM, Embedding, Image, Audio)
  • Implemented model ability detection and matching mechanisms

Enhancements:

  • Refactored model detection and registration process
  • Added more granular model capability tracking
  • Improved model selection and filtering methods

Chores:

  • Updated multiple files to support new model type and ability system
  • Removed deprecated ability detection methods

lss233 added 2 commits April 27, 2025 00:46
- Introduced a new ModelConfig class to encapsulate model configuration details, including ID, type, and ability.
- Updated LLMBackendConfig to use a list of ModelConfig objects for supported models, allowing for richer model information.
- Implemented model validation and migration logic to convert legacy model formats to the new ModelConfig structure.
- Enhanced auto-detection methods in various LLM adapters to return ModelConfig instances, improving model handling and integration.
- Updated related methods and tests to ensure compatibility with the new model configuration structure.
- Included the LLMAbility import in llm_registry.py to ensure proper functionality of LLM backend registration.
- Updated the LLMBackendRegistry class to accept additional arguments in the registration method for enhanced flexibility.
sourcery-ai bot (Contributor) commented May 4, 2025


Reviewer's Guide

This pull request refactors the model capability representation by introducing ModelType and ModelAbility enums and a ModelConfig data structure. It updates the LLMManager and various LLM adapters to use this new structure for storing, managing, and auto-detecting model capabilities, replacing the previous string-based model list and basic ability flags.

Sequence Diagram for Model Auto-Detection and Loading

```mermaid
sequenceDiagram
    participant Client
    participant Adapter as LLMBackendAdapter
    participant ExtAPI as External Model API
    participant Utils
    participant LLMManager

    Client->>Adapter: auto_detect_models()
    activate Adapter
    Adapter->>ExtAPI: Request model list (e.g., GET /models)
    activate ExtAPI
    ExtAPI-->>Adapter: Raw model data
    deactivate ExtAPI
    loop For each raw model
        Adapter->>Utils: guess_..._model(model_id)
        activate Utils
        Utils-->>Adapter: (ModelType, ability_bitmask)
        deactivate Utils
        Adapter->>Adapter: Create ModelConfig(id, type, ability)
    end
    Adapter-->>Client: List~ModelConfig~
    deactivate Adapter

    Client->>LLMManager: load_backend(backend_config)
    activate LLMManager
    Note right of LLMManager: Backend config contains List<ModelConfig>
    LLMManager->>LLMManager: Store models in model_info cache
    deactivate LLMManager
```
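In Python, the detection loop in the diagram might look like the sketch below. `ModelType`, `ModelConfig`, and the guess helper mirror names from this PR, but `guess_model`'s rules, the sample ability bits, and the enum members are illustrative assumptions, not the PR's actual mapping:

```python
from dataclasses import dataclass
from enum import Enum


class ModelType(Enum):
    LLM = "llm"
    Embedding = "embedding"


@dataclass
class ModelConfig:
    id: str
    type: ModelType
    ability: int  # bitmask of abilities


def guess_model(model_id: str) -> tuple[ModelType, int]:
    # Illustrative stand-in for guess_openai_model / guess_qwen_model:
    # infer type and ability bitmask from substrings of the model ID.
    if "embedding" in model_id:
        return ModelType.Embedding, 0b01
    return ModelType.LLM, 0b10


def auto_detect_models(raw_ids: list[str]) -> list[ModelConfig]:
    # For each raw model returned by the provider API, guess its type
    # and ability bitmask and wrap the result in a ModelConfig.
    return [
        ModelConfig(id=model_id, type=model_type, ability=ability)
        for model_id in raw_ids
        for model_type, ability in [guess_model(model_id)]
    ]
```

The real adapters call a provider endpoint such as GET /models first; here the raw ID list is passed in directly to keep the sketch self-contained.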

Sequence Diagram for Querying Models by Capability

```mermaid
sequenceDiagram
    participant Requester as Requesting Code
    participant LLMManager
    participant ModelInfo as model_info Cache

    Requester->>LLMManager: get_supported_models(type, ability)
    activate LLMManager
    LLMManager->>ModelInfo: Iterate over cached ModelConfig
    activate ModelInfo
    loop For each ModelConfig
        ModelInfo->>ModelInfo: Check if config.type == type
        ModelInfo->>ModelInfo: Check if ability.is_capable(config.ability)
        alt Matches type and ability
            ModelInfo->>LLMManager: Add model_id to result list
        end
    end
    ModelInfo-->>LLMManager: Done
    deactivate ModelInfo
    LLMManager-->>Requester: Return List~str~ (matching model IDs)
    deactivate LLMManager
```
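The query loop above can be sketched as follows. `model_info` and `get_supported_models` are names from this PR; the standalone class shape, the enum members, and the sample ability bits are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum


class ModelType(Enum):
    LLM = "llm"
    Embedding = "embedding"


@dataclass
class ModelConfig:
    id: str
    type: ModelType
    ability: int


class LLMManager:
    def __init__(self):
        # model_id -> ModelConfig cache, filled when backends are loaded.
        self.model_info: dict[str, ModelConfig] = {}

    def get_supported_models(self, model_type: ModelType, ability: int) -> list[str]:
        # Keep IDs of cached models whose type matches and whose ability
        # bitmask contains every requested ability bit.
        return [
            model_id
            for model_id, config in self.model_info.items()
            if config.type is model_type and (config.ability & ability) == ability
        ]
```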

Class Diagram for Model Capability Refactoring

```mermaid
classDiagram
    direction LR

    class ModelType {
        <<Enumeration>>
        LLM
        Embedding
        ImageGeneration
        Audio
        +from_str(str) ModelType
    }

    class ModelAbility {
        <<Interface>>
        +is_capable(int ability) bool
    }

    class LLMAbility {
        <<Enumeration>>
        +Unknown: 0
        +Chat: 1 << 1
        +TextInput: 1 << 2
        +TextOutput: 1 << 3
        +ImageInput: 1 << 4
        +ImageOutput: 1 << 5
        +AudioInput: 1 << 6
        +AudioOutput: 1 << 7
        +FunctionCalling: 1 << 8
        +TextChat
        +is_capable(int ability) bool
    }
    ModelAbility <|.. LLMAbility
    note for LLMAbility "Other ability enums (Embedding, Image, Audio) follow similar pattern"

    class ModelConfig {
        +id: str
        +type: str
        +ability: int
    }

    class LLMBackendConfig {
        +adapter: str
        +config: Dict
        +enable: bool
        +models: List~ModelConfig~
        +migrate_models_format() validator
    }
    LLMBackendConfig o-- "*" ModelConfig : contains
    note for LLMBackendConfig "models field changed from List<str> to List<ModelConfig>. Migration validator added."

    class LLMManager {
        -model_info: Dict~str, ModelConfig~
        +load_backend(str)
        +unload_backend(str)
        +get_supported_models(ModelType, ModelAbility) List~str~
        +get_models_by_type(ModelType) List~str~
        +get_models_by_ability(ModelType, ModelAbility) str
    }
    LLMManager o-- "*" ModelConfig : stores

    class LLMBackendRegistry {
        - _adapters: Dict
        - _configs: Dict
        - _ability_registry: Dict # REMOVED
        +register(str, Type, Type) # REMOVED ability param
        - get_adapter_by_ability() # REMOVED
        - search_adapter_by_ability() # REMOVED
    }
    note for LLMBackendRegistry "Ability registration and lookup removed."

    class LLMBackendAdapter {
        <<Abstract>>
        +config: BaseModel
        +auto_detect_models() List~ModelConfig~
    }
    note for LLMBackendAdapter "auto_detect_models now returns List<ModelConfig>"

    class OpenAIAdapter {
        +config: OpenAIConfig
        +auto_detect_models() List~ModelConfig~
        +get_models() list~str~
    }
    LLMBackendAdapter <|-- OpenAIAdapter

    class Utils {
        <<Module>>
        +guess_openai_model(str) Tuple
        +guess_qwen_model(str) Tuple
    }
    OpenAIAdapter ..> Utils : uses
    AlibabaCloudAdapter ..> Utils : uses
```
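The bit layout in the class diagram maps naturally onto Python's `IntFlag`. The following is a sketch of the idea rather than the PR's exact implementation; only a subset of the bits is shown, and `is_capable` is written as a free function for brevity:

```python
from enum import IntFlag


class LLMAbility(IntFlag):
    # Bit positions mirror the class diagram above.
    Unknown = 0
    Chat = 1 << 1
    TextInput = 1 << 2
    TextOutput = 1 << 3
    ImageInput = 1 << 4
    FunctionCalling = 1 << 8
    # Composite flag: a model that can hold a plain text conversation.
    TextChat = Chat | TextInput | TextOutput


def is_capable(required: int, ability: int) -> bool:
    # True when every bit of `required` is set in `ability`.
    return (ability & required) == required
```

A multimodal chat model would then carry something like `LLMAbility.TextChat | LLMAbility.ImageInput`, and a capability query only matches when all requested bits are present.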

File-Level Changes

Introduced a structured way to define model types and capabilities.
  • Added ModelType and various ModelAbility enums.
  • Added a ModelConfig Pydantic model to hold the model ID, type, and ability bitmask.
  • Updated LLMBackendConfig to use List[ModelConfig] for models.
  • Added a validator to LLMBackendConfig to automatically migrate old string-based model configurations.
  Files: kirara_ai/llm/model_types.py, kirara_ai/config/global_config.py
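In the PR this migration runs as a Pydantic validator on `LLMBackendConfig`; the sketch below shows the same normalization with plain dataclasses so it stands alone, and the default `type`/`ability` values are assumptions:

```python
from dataclasses import dataclass


@dataclass
class ModelConfig:
    id: str
    type: str = "llm"   # assumed default for migrated legacy entries
    ability: int = 0    # unknown ability until detected


def migrate_models_format(models: list) -> list[ModelConfig]:
    # Mirrors the validator's job: legacy configs stored models as plain
    # strings; wrap those in ModelConfig and pass already-structured
    # entries through unchanged.
    migrated = []
    for entry in models:
        if isinstance(entry, str):
            migrated.append(ModelConfig(id=entry))
        else:
            migrated.append(entry)
    return migrated
```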
Updated LLMManager to handle the new ModelConfig structure.
  • Modified load_backend, unload_backend, and is_backend_available to work with ModelConfig.
  • Added a model_info dictionary to store ModelConfig entries.
  • Added get_supported_models to find models by type and specific ability.
  • Added get_models_by_ability and get_models_by_type.
  • Deprecated get_llm_id_by_ability.
  Files: kirara_ai/llm/llm_manager.py
Implemented capability auto-detection and guessing logic in adapters.
  • Added guess_openai_model and guess_qwen_model utility functions.
  • Modified auto_detect_models in various adapters (OpenAI, Volcengine, AlibabaCloud, Gemini, Ollama, OpenRouter) to return List[ModelConfig].
  • Implemented logic within adapters or utility functions to infer ModelType and ability based on the model ID or API metadata.
  • Added unit tests for guess_openai_model.
  Files:
    kirara_ai/plugins/llm_preset_adapters/utils.py
    kirara_ai/plugins/llm_preset_adapters/volcengine_adapter.py
    kirara_ai/plugins/llm_preset_adapters/openrouter_adapter.py
    kirara_ai/plugins/llm_preset_adapters/openai_adapter.py
    kirara_ai/plugins/llm_preset_adapters/alibabacloud_adapter.py
    kirara_ai/plugins/llm_preset_adapters/gemini_adapter.py
    kirara_ai/plugins/llm_preset_adapters/ollama_adapter.py
    kirara_ai/llm/adapter.py
    kirara_ai/plugins/llm_preset_adapters/tests/test_utils.py
Updated API endpoints and registry to align with the new capability structure.
  • Modified the /api/llm/auto_detect_models endpoint to return ModelConfigListResponse.
  • Removed the ability parameter from LLMBackendRegistry.register.
  • Updated adapter registrations to remove the now-unused ability argument.
  Files: kirara_ai/web/api/llm/routes.py, kirara_ai/web/api/llm/models.py, kirara_ai/llm/llm_registry.py, kirara_ai/plugins/llm_preset_adapters/__init__.py
Minor updates to workflow rules, system utilities, and the web app.
  • Added a container parameter to match methods in dispatch rules.
  • Adjusted memory usage reporting in system utils.
  • Added the SO_REUSEADDR socket option to web app startup.
  Files: kirara_ai/workflow/core/dispatch/rules/sender_rules.py, kirara_ai/workflow/core/dispatch/rules/system_rules.py, kirara_ai/web/api/system/utils.py, kirara_ai/web/app.py
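For context on the SO_REUSEADDR change: the option lets a server rebind its port immediately after a restart instead of failing while the old socket sits in TIME_WAIT. A minimal, illustrative example of setting it before binding (not the app's actual startup code):

```python
import socket


def make_listener(port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Must be set before bind(): allow reuse of a local address that is
    # still in TIME_WAIT from a previous run of the server.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen()
    return sock
```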

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


github-actions bot commented May 4, 2025

MyPy type check passed ✅

The lines of code modified by this PR passed type checking.

sourcery-ai bot (Contributor) left a comment


Hey @lss233 - I've reviewed your changes and found some issues that need to be addressed.

Blocking issues:

  • The OpenAI adapter configuration contains an API key. (link)

  • The model capability guessing logic based on string matching in model IDs (guess_openai_model, guess_qwen_model) could be brittle; consider fetching capabilities directly from provider APIs where available or relying more on explicit configuration.

  • The methods for determining model capabilities in auto_detect_models vary significantly across adapters; strive for a more consistent approach where possible.

Here's what I looked at during the review
  • 🟡 General issues: 1 issue found
  • 🔴 Security: 1 blocking issue
  • 🟡 Testing: 1 issue found
  • 🟡 Complexity: 1 issue found
  • 🟢 Documentation: all looks good

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

sourcery-ai bot commented on the diff near `def test_guess_openai_model():`

issue (testing): Missing tests for guess_qwen_model function.

Add a test_guess_qwen_model suite mirroring test_guess_openai_model to validate various Qwen model IDs and expected types/abilities.
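A sketch of such a suite follows. The sample Qwen model IDs and the `classify` stand-in (used here so the sketch runs on its own) are assumptions; the real test would import `guess_qwen_model` and assert on its `(ModelType, ability)` tuples instead:

```python
# Table of model IDs and the type we expect the guesser to infer.
# These pairings are illustrative, not the PR's actual mapping.
CASES = [
    ("qwen-max", "llm"),
    ("qwen-vl-plus", "llm"),
    ("text-embedding-v3", "embedding"),
]


def classify(model_id: str) -> str:
    # Stand-in for guess_qwen_model's type decision, kept trivial so
    # the sketch is self-contained and runnable.
    return "embedding" if "embedding" in model_id else "llm"


def test_guess_qwen_model():
    for model_id, expected_type in CASES:
        assert classify(model_id) == expected_type
```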

sourcery-ai bot commented on the diff near `def guess_openai_model(model_id: str) -> Tuple[ModelType, int] | None:`


issue (complexity): Consider refactoring the nested if-else logic in guess_openai_model and guess_qwen_model into a declarative list of rules to improve readability and maintainability.

Consider extracting the branching logic into a declarative list of rules instead of deeply nested if-else blocks. This can reduce complexity and make each case easier to inspect and modify. For example, you could refactor guess_openai_model like so:

```python
from typing import Tuple

# The ability enums were introduced in kirara_ai/llm/model_types.py
# (import path assumed from the file-level changes above).
from kirara_ai.llm.model_types import (
    EmbeddingModelAbility,
    ImageModelAbility,
    LLMAbility,
    ModelType,
)


def guess_openai_model(model_id: str) -> Tuple[ModelType, int] | None:
    model_id = model_id.lower()

    def embedding_rule(m: str):
        return "embedding" in m, (
            ModelType.Embedding,
            EmbeddingModelAbility.TextEmbedding.value | EmbeddingModelAbility.Batch.value,
        )

    def image_rule(m: str):
        if "dall-e" in m or "gpt-image" in m:
            ability = ImageModelAbility.TextToImage.value
            if "dall-e-2" in m or "gpt-image" in m:
                ability |= ImageModelAbility.ImageEdit.value | ImageModelAbility.Inpainting.value
            return True, (ModelType.ImageGeneration, ability)
        return False, None

    # Define additional rules in similar fashion ...

    rules = [embedding_rule, image_rule]
    # Append additional rules as needed.

    for rule in rules:
        match, result = rule(model_id)
        if match:
            return result

    # Fall back to your original LLM logic as default.
    ability = LLMAbility.TextChat.value
    # ... remaining conditions applied declaratively if possible.
    return (ModelType.LLM, ability)
```

You can apply a similar restructure for guess_qwen_model. This approach:

  • Reduces deep nesting: Each rule is self-contained.
  • Improves maintainability: New rules can be added as functions or lambda entries.
  • Keeps functionality intact: The logic remains the same, only structured differently.

Try refactoring a few sections at a time and verify behavior with unit tests to ensure existing functionality is preserved.

lss233 and others added 3 commits May 5, 2025 01:55
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
… TelegramAdapter

- Added try-except blocks to handle exceptions during resource cleanup in MCPServer and TelegramAdapter.
- Enhanced logging to capture errors that occur during shutdown processes, improving overall reliability and debuggability.
- Changed the expected memory usage percentage in the test from 2.5 to 0.5 to reflect accurate resource utilization.
- Ensured that the test remains aligned with the current system status reporting.
@lss233 lss233 force-pushed the feature/llm_ability branch from c8de649 to 4bddfe2 Compare May 4, 2025 21:29
@lss233 lss233 enabled auto-merge May 4, 2025 21:29
codecov bot commented May 4, 2025

Codecov Report

Attention: Patch coverage is 79.90431% with 42 lines in your changes missing coverage. Please review.

Project coverage is 66.17%. Comparing base (55e8837) to head (5e47776).

✅ All tests successful. No failed tests found.

Files with missing lines Patch % Lines
kirara_ai/media/manager.py 31.81% 15 Missing ⚠️
kirara_ai/llm/llm_manager.py 65.51% 10 Missing ⚠️
kirara_ai/llm/model_types.py 89.65% 6 Missing ⚠️
kirara_ai/web/api/llm/routes.py 20.00% 4 Missing ⚠️
kirara_ai/web/api/media/routes.py 93.87% 3 Missing ⚠️
kirara_ai/mcp/server.py 50.00% 2 Missing ⚠️
kirara_ai/web/app.py 0.00% 1 Missing ⚠️
...ara_ai/workflow/implementations/blocks/llm/chat.py 50.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1474      +/-   ##
==========================================
+ Coverage   65.84%   66.17%   +0.33%     
==========================================
  Files         161      162       +1     
  Lines        8148     8299     +151     
==========================================
+ Hits         5365     5492     +127     
- Misses       2783     2807      +24     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

lss233 added 2 commits May 5, 2025 19:37
- Introduced MediaConfig class to manage media-related settings, including cleanup duration and auto removal of unreferenced files.
- Enhanced MediaManager with a setup_cleanup_task method to schedule automatic cleanup of unreferenced media files based on configuration.
- Added new API endpoints for retrieving system information and updating media configuration, allowing for dynamic management of media settings.
- Implemented tests for the new media management features to ensure functionality and reliability.
… logging

- Added logging for errors during adapter stopping and starting processes to improve debuggability.
- Enhanced the update_adapter function to handle adapter renaming and type validation more robustly.
- Implemented checks for existing adapter names and types, returning appropriate error messages for invalid requests.
- Updated the response structure to reflect the new adapter name and running status after updates.
@lss233 lss233 disabled auto-merge May 5, 2025 16:55
@lss233 lss233 merged commit bc892dd into master May 5, 2025
2 of 6 checks passed
@lss233 lss233 deleted the feature/llm_ability branch May 5, 2025 16:55
