feat(llm): enhance LiteLLMCaller to support custom API key handling and environment variable expansion #118
Pull request overview
This pull request enhances the LiteLLMCaller class to improve API key handling for custom LLM endpoints and environment variable expansion. The changes enable better support for custom OpenAI-compatible LLM endpoints by ensuring API keys are consistently passed both as kwargs to LiteLLM and as provider-specific environment variables.
Key changes:
- Modified `LiteLLMCaller._get_model_kwargs()` to always pass `api_key` in kwargs for all providers, not just OpenRouter
- Added fallback logic to set the `OPENAI_API_KEY` environment variable for custom endpoints, assuming most custom endpoints are OpenAI-compatible (see the sketch after this list)
- Added comprehensive test coverage for custom endpoints with environment variable and literal API keys, extra headers, and backward compatibility for known providers
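For orientation, a minimal sketch of the kwargs/env-var logic these bullets describe might look like the following. The helper `_is_known_provider` and the exact structure are assumptions for illustration; the real code lives in `backend/modules/llm/litellm_caller.py`.

```python
import os

def _get_model_kwargs(self, model_key: str) -> dict:
    """Sketch only: build per-call kwargs for LiteLLM from the model config."""
    cfg = self.llm_config.models[model_key]
    kwargs = {"model": cfg.model_name}

    if cfg.model_url:
        kwargs["api_base"] = cfg.model_url

    if cfg.api_key:
        # Always pass the key directly in kwargs, for every provider.
        kwargs["api_key"] = cfg.api_key
        # Custom (unrecognized) endpoints are assumed to be OpenAI-compatible,
        # so also expose the key through OPENAI_API_KEY as a fallback.
        if not self._is_known_provider(cfg.model_url):  # hypothetical helper
            os.environ["OPENAI_API_KEY"] = cfg.api_key

    return kwargs
```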
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| backend/modules/llm/litellm_caller.py | Modified API key handling to always pass api_key in kwargs for all providers and added OPENAI_API_KEY fallback for custom endpoints |
| backend/tests/test_llm_env_expansion.py | Added four new test methods covering custom endpoint scenarios with env var/literal API keys, extra headers, and backward compatibility verification for known providers |
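To illustrate the "env var" scenario the new tests cover, a custom endpoint could reference its key through an environment variable instead of a literal. The `${...}` expansion syntax below is an assumption for illustration; this page does not show the actual config format.

```python
import os

# Assumed setup: the key lives in the environment, not in the config.
os.environ["CUSTOM_LLM_API_KEY"] = "sk-from-env"

llm_config = LLMConfig(
    models={
        "custom-model": ModelConfig(
            model_name="custom-model-name",
            model_url="https://custom-llm.example.com/v1",
            api_key="${CUSTOM_LLM_API_KEY}",  # expanded to the env value (assumed syntax)
        )
    }
)
```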
From `backend/tests/test_llm_env_expansion.py`:

```python
def test_custom_endpoint_with_literal_api_key(self):
    """Custom endpoint should pass api_key in kwargs when using literal value."""
    # Create LLM config for custom endpoint with literal api_key
    llm_config = LLMConfig(
        models={
            "custom-model": ModelConfig(
                model_name="custom-model-name",
                model_url="https://custom-llm.example.com/v1",
                api_key="sk-literal-custom-key"
            )
        }
    )

    # Create LiteLLMCaller
    caller = LiteLLMCaller(llm_config, debug_mode=True)

    # Get model kwargs
    model_kwargs = caller._get_model_kwargs("custom-model")

    # Verify that api_key is in kwargs (critical for custom endpoints)
    assert "api_key" in model_kwargs
    assert model_kwargs["api_key"] == "sk-literal-custom-key"

    # Verify that api_base is set for custom endpoint
    assert "api_base" in model_kwargs
    assert model_kwargs["api_base"] == "https://custom-llm.example.com/v1"
```
Copilot AI commented on Nov 24, 2025:
Test isolation issue: This test modifies os.environ["OPENAI_API_KEY"] via the implementation code (line 107 in litellm_caller.py) but doesn't clean it up, which could pollute other tests. Consider using monkeypatch fixture to ensure the environment is restored after the test, similar to test_custom_endpoint_with_env_var_api_key.
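A hedged sketch of the suggested fix, using pytest's built-in `monkeypatch` fixture: `monkeypatch.setenv` records the pre-test state of `OPENAI_API_KEY`, so pytest restores (or removes) it at teardown even though the implementation mutates `os.environ` during the test.

```python
def test_custom_endpoint_with_literal_api_key(self, monkeypatch):
    """Custom endpoint should pass api_key in kwargs when using literal value."""
    # Register OPENAI_API_KEY with monkeypatch so its pre-test state is
    # recorded and restored at teardown, even though _get_model_kwargs()
    # writes to os.environ during the test.
    monkeypatch.setenv("OPENAI_API_KEY", "pre-existing-value")

    llm_config = LLMConfig(
        models={
            "custom-model": ModelConfig(
                model_name="custom-model-name",
                model_url="https://custom-llm.example.com/v1",
                api_key="sk-literal-custom-key",
            )
        }
    )
    caller = LiteLLMCaller(llm_config, debug_mode=True)
    model_kwargs = caller._get_model_kwargs("custom-model")

    assert model_kwargs["api_key"] == "sk-literal-custom-key"
    assert model_kwargs["api_base"] == "https://custom-llm.example.com/v1"
```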
```python
def test_known_providers_still_get_api_key_in_kwargs(self):
    """Verify that known providers also get api_key in kwargs (backward compatibility)."""
    # Test OpenAI
    llm_config = LLMConfig(
        models={
            "openai-model": ModelConfig(
                model_name="gpt-4",
                model_url="https://api.openai.com/v1",
                api_key="sk-openai-test"
            )
        }
    )
    caller = LiteLLMCaller(llm_config, debug_mode=True)
    model_kwargs = caller._get_model_kwargs("openai-model")

    # OpenAI should get api_key in kwargs
    assert "api_key" in model_kwargs
    assert model_kwargs["api_key"] == "sk-openai-test"

    # Test OpenRouter
    llm_config = LLMConfig(
        models={
            "openrouter-model": ModelConfig(
                model_name="meta-llama/llama-3-70b",
                model_url="https://openrouter.ai/api/v1",
                api_key="sk-or-test"
            )
        }
    )
    caller = LiteLLMCaller(llm_config, debug_mode=True)
    model_kwargs = caller._get_model_kwargs("openrouter-model")

    # OpenRouter should get api_key in kwargs
    assert "api_key" in model_kwargs
    assert model_kwargs["api_key"] == "sk-or-test"
```
Copilot AI commented on Nov 24, 2025:
Test isolation issue: This test modifies environment variables (OPENAI_API_KEY, OPENROUTER_API_KEY) via the implementation code but doesn't use monkeypatch to clean them up. This could pollute other tests. Consider adding monkeypatch as a parameter and using it to set/verify environment variables to ensure proper cleanup.
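The same pattern would cover both variables here; a sketch of the suggested change follows (only the setup differs, and the assertions stay as written above).

```python
def test_known_providers_still_get_api_key_in_kwargs(self, monkeypatch):
    """Verify that known providers also get api_key in kwargs (backward compatibility)."""
    # Record both variables up front so pytest restores their pre-test
    # state at teardown, regardless of what the implementation writes.
    monkeypatch.setenv("OPENAI_API_KEY", "sentinel-openai")
    monkeypatch.setenv("OPENROUTER_API_KEY", "sentinel-openrouter")
    # ... remainder of the test body unchanged ...
```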