Conversation

@garland3
Collaborator

…nd environment variable expansion

@ktpedre

Copilot AI review requested due to automatic review settings November 24, 2025 23:47
@github-actions

🔒 Security Scan Results

Security Scan Summary

Scan Results

Python SAST (Bandit)

⚠️ Security issues found in Python code

Recommendations

  • Review all SARIF files uploaded to GitHub Security tab
  • Address high and critical severity vulnerabilities immediately
  • Run npm audit fix and pip-audit locally to fix dependencies
  • Consider implementing additional security controls

Copilot finished reviewing on behalf of garland3 November 24, 2025 23:51

Copilot AI left a comment


Pull request overview

This pull request enhances the LiteLLMCaller class to improve API key handling for custom LLM endpoints and environment variable expansion. The changes enable better support for custom OpenAI-compatible LLM endpoints by ensuring API keys are consistently passed both as kwargs to LiteLLM and as provider-specific environment variables.

Key changes:

  • Modified LiteLLMCaller._get_model_kwargs() to always pass api_key in kwargs for all providers, not just OpenRouter
  • Added fallback logic to set OPENAI_API_KEY environment variable for custom endpoints (assuming most custom endpoints are OpenAI-compatible)
  • Added comprehensive test coverage for custom endpoints with environment variable and literal API keys, extra headers, and backward compatibility for known providers
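The kwargs behavior these bullets describe can be sketched as follows. This is a hypothetical illustration only: the real logic lives in `LiteLLMCaller._get_model_kwargs()` in backend/modules/llm/litellm_caller.py, and the standalone `get_model_kwargs` function and `KNOWN_PROVIDER_ENV_VARS` table below are invented names for the sake of the example.

```python
import os

# Hypothetical sketch of the change described above; illustrative only,
# not the actual implementation.
KNOWN_PROVIDER_ENV_VARS = {
    "https://api.openai.com/v1": "OPENAI_API_KEY",
    "https://openrouter.ai/api/v1": "OPENROUTER_API_KEY",
}

def get_model_kwargs(model_url: str, api_key: str) -> dict:
    # api_key is now always passed in kwargs, for every provider,
    # not just OpenRouter.
    kwargs = {"api_key": api_key, "api_base": model_url}
    env_var = KNOWN_PROVIDER_ENV_VARS.get(model_url)
    if env_var is not None:
        # Known provider: export its provider-specific variable.
        os.environ[env_var] = api_key
    else:
        # Custom endpoint: fall back to OPENAI_API_KEY, assuming the
        # endpoint is OpenAI-compatible.
        os.environ.setdefault("OPENAI_API_KEY", api_key)
    return kwargs
```

Passing the key both ways covers LiteLLM code paths that read kwargs as well as those that resolve credentials from the environment.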

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
backend/modules/llm/litellm_caller.py Modified API key handling to always pass api_key in kwargs for all providers and added OPENAI_API_KEY fallback for custom endpoints
backend/tests/test_llm_env_expansion.py Added four new test methods covering custom endpoint scenarios with env var/literal API keys, extra headers, and backward compatibility verification for known providers

Comment on lines +194 to +219
def test_custom_endpoint_with_literal_api_key(self):
    """Custom endpoint should pass api_key in kwargs when using literal value."""
    # Create LLM config for custom endpoint with literal api_key
    llm_config = LLMConfig(
        models={
            "custom-model": ModelConfig(
                model_name="custom-model-name",
                model_url="https://custom-llm.example.com/v1",
                api_key="sk-literal-custom-key"
            )
        }
    )

    # Create LiteLLMCaller
    caller = LiteLLMCaller(llm_config, debug_mode=True)

    # Get model kwargs
    model_kwargs = caller._get_model_kwargs("custom-model")

    # Verify that api_key is in kwargs (critical for custom endpoints)
    assert "api_key" in model_kwargs
    assert model_kwargs["api_key"] == "sk-literal-custom-key"

    # Verify that api_base is set for custom endpoint
    assert "api_base" in model_kwargs
    assert model_kwargs["api_base"] == "https://custom-llm.example.com/v1"

Copilot AI Nov 24, 2025


Test isolation issue: This test modifies os.environ["OPENAI_API_KEY"] via the implementation code (line 107 in litellm_caller.py) but doesn't clean it up, which could pollute other tests. Consider using monkeypatch fixture to ensure the environment is restored after the test, similar to test_custom_endpoint_with_env_var_api_key.
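One way to apply this suggestion, as a sketch: snapshot `os.environ` with the monkeypatch fixture so any keys the implementation writes are discarded on teardown. The `_implementation_side_effect` helper below is a hypothetical stand-in for the write that `_get_model_kwargs()` performs at litellm_caller.py line 107; the real test would call the `LiteLLMCaller` method instead.

```python
import os
import pytest  # only needed here for the manual driver below

def _implementation_side_effect(key: str) -> None:
    # Hypothetical stand-in for litellm_caller.py line 107, which
    # exports OPENAI_API_KEY as a side effect of _get_model_kwargs().
    os.environ["OPENAI_API_KEY"] = key

def test_custom_endpoint_with_literal_api_key(monkeypatch):
    """Same assertions as the original test, with env rollback."""
    # Replace os.environ with a copy; monkeypatch restores the real
    # mapping on teardown, discarding keys the implementation wrote.
    monkeypatch.setattr(os, "environ", os.environ.copy())

    _implementation_side_effect("sk-literal-custom-key")

    assert os.environ["OPENAI_API_KEY"] == "sk-literal-custom-key"

# Manual driver (under pytest, the fixture is injected automatically):
if __name__ == "__main__":
    mp = pytest.MonkeyPatch()
    try:
        test_custom_environment = test_custom_endpoint_with_literal_api_key
        test_custom_environment(mp)
    finally:
        mp.undo()
```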

Comment on lines 259 to 293
def test_known_providers_still_get_api_key_in_kwargs(self):
    """Verify that known providers also get api_key in kwargs (backward compatibility)."""
    # Test OpenAI
    llm_config = LLMConfig(
        models={
            "openai-model": ModelConfig(
                model_name="gpt-4",
                model_url="https://api.openai.com/v1",
                api_key="sk-openai-test"
            )
        }
    )
    caller = LiteLLMCaller(llm_config, debug_mode=True)
    model_kwargs = caller._get_model_kwargs("openai-model")

    # OpenAI should get api_key in kwargs
    assert "api_key" in model_kwargs
    assert model_kwargs["api_key"] == "sk-openai-test"

    # Test OpenRouter
    llm_config = LLMConfig(
        models={
            "openrouter-model": ModelConfig(
                model_name="meta-llama/llama-3-70b",
                model_url="https://openrouter.ai/api/v1",
                api_key="sk-or-test"
            )
        }
    )
    caller = LiteLLMCaller(llm_config, debug_mode=True)
    model_kwargs = caller._get_model_kwargs("openrouter-model")

    # OpenRouter should get api_key in kwargs
    assert "api_key" in model_kwargs
    assert model_kwargs["api_key"] == "sk-or-test"

Copilot AI Nov 24, 2025


Test isolation issue: This test modifies environment variables (OPENAI_API_KEY, OPENROUTER_API_KEY) via the implementation code but doesn't use monkeypatch to clean them up. This could pollute other tests. Consider adding monkeypatch as a parameter and using it to set/verify environment variables to ensure proper cleanup.
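If a stdlib-only alternative to monkeypatch is preferred, `unittest.mock.patch.dict` achieves the same rollback: it snapshots `os.environ` on entry and restores it on exit. A sketch, with `_implementation_side_effect` as a hypothetical stand-in for the env writes `_get_model_kwargs()` performs for known providers:

```python
import os
from unittest.mock import patch

def _implementation_side_effect() -> None:
    # Hypothetical stand-in for the env-var writes the real
    # _get_model_kwargs() performs for known providers.
    os.environ["OPENAI_API_KEY"] = "sk-openai-test"
    os.environ["OPENROUTER_API_KEY"] = "sk-or-test"

def test_known_providers_env_is_restored():
    # patch.dict snapshots os.environ on entry and restores it on
    # exit, so keys added inside the block are removed afterwards.
    with patch.dict(os.environ):
        _implementation_side_effect()
        assert os.environ["OPENAI_API_KEY"] == "sk-openai-test"
        assert os.environ["OPENROUTER_API_KEY"] == "sk-or-test"
    # Outside the block, both variables are back to their prior state.
```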


@garland3 garland3 merged commit 0823345 into main Nov 26, 2025
9 checks passed
@garland3 garland3 deleted the fix/custom-llm-api-key-handling branch November 26, 2025 03:20
