feat: Add OpenAI integration with model configuration preparation #1


Merged: 1 commit, Jan 26, 2025
12 changes: 12 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,17 @@
# Changelog

## [0.1.3] - 2025-01-26

### Added
- OpenAI integration support with prepare_model_config functionality
- Test suite for OpenAI integration features
- Example implementation for OpenAI chat completions

### Changed
- Enhanced model configuration preparation with better validation
- Improved error handling for invalid memory formats
- Updated documentation with OpenAI integration examples

## [0.1.2] - 2024-03-19

### Added
49 changes: 49 additions & 0 deletions README.md
@@ -12,8 +12,10 @@ A Python library for managing and using prompts with Promptix Studio integration
- 🔄 **Version Control** - Track changes with live/draft states for each prompt
- 🔌 **Simple Integration** - Easy-to-use Python interface
- 📝 **Variable Substitution** - Dynamic prompts using `{{variable_name}}` syntax
- 🤖 **LLM Integration** - Direct integration with OpenAI and other LLM providers
- 🏃 **Local First** - No external API dependencies
- 🎨 **Web Interface** - Edit and manage prompts through a modern UI
- 🔍 **Schema Validation** - Automatic validation of prompt variables and structure

## Installation

@@ -54,6 +56,37 @@ support_prompt = Promptix.get_prompt(
)
```

## OpenAI Integration

Promptix provides seamless integration with OpenAI's chat models:

```python
from promptix import Promptix
import openai

client = openai.OpenAI()

# Prepare model configuration with conversation memory
memory = [
    {"role": "user", "content": "I'm having trouble resetting my password"},
    {"role": "assistant", "content": "I understand you're having password reset issues. Could you tell me what happens when you try?"}
]

model_config = Promptix.prepare_model_config(
    prompt_template="CustomerSupport",
    user_name="John Doe",
    issue_type="password reset",
    technical_level="intermediate",
    interaction_history="2 previous tickets about 2FA setup",
    issue_description="User is unable to reset their password after multiple attempts",
    custom_data={"product_version": "2.1.0", "subscription_tier": "standard"},
    memory=memory,
)

# Use the configuration with OpenAI
response = client.chat.completions.create(**model_config)
```

## Advanced Usage

### Version Control
@@ -73,6 +106,22 @@ prompt_latest = Promptix.get_prompt(
)
```

### Schema Validation

Promptix automatically validates your prompt variables against defined schemas:

```python
# Schema validation ensures correct variable types and values
try:
    prompt = Promptix.get_prompt(
        prompt_template="CustomerSupport",
        user_name="John",
        technical_level="expert"  # Will raise error if not in ["beginner", "intermediate", "advanced"]
    )
except ValueError as e:
    print(f"Validation Error: {str(e)}")
```

### Error Handling

```python
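The `{{variable_name}}` substitution advertised in the README's feature list can be pictured with a minimal stand-in renderer. This is a sketch, not Promptix's actual implementation (which, as prompts.json shows, also supports `{% if %}` conditional blocks):

```python
import re

def render(template: str, **variables) -> str:
    """Replace each {{name}} placeholder with the matching keyword argument."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise ValueError(f"Missing variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

print(render(
    "Hello {{user_name}}, your issue: {{issue_type}}",
    user_name="John", issue_type="password reset",
))  # → Hello John, your issue: password reset
```

Raising on a missing variable (rather than leaving the placeholder in place) matches the schema-validation behavior the README describes.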
35 changes: 35 additions & 0 deletions examples/openai-integration.py
@@ -0,0 +1,35 @@
from promptix import Promptix
import openai

def main():
    client = openai.OpenAI()

    # New prepare_model_config example that returns model config
    print("Using prepare_model_config:")
    memory = [
        {"role": "user", "content": "I'm having trouble resetting my password"},
        {"role": "assistant", "content": "I understand you're having password reset issues. Could you tell me what happens when you try?"}
    ]

    model_config = Promptix.prepare_model_config(
        prompt_template="CustomerSupport",
        user_name="John Doe",
        issue_type="password reset",
        technical_level="intermediate",
        interaction_history="2 previous tickets about 2FA setup",
        product_version="2.1.0",
        issue_description="User is unable to reset their password after multiple attempts",
        custom_data={"product_version": "2.1.0", "subscription_tier": "standard"},
        memory=memory,
    )

    response = client.chat.completions.create(**model_config)

    print("Model Config:", model_config)
    print("\nResponse:", response)

if __name__ == "__main__":
    main()
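The example above sends `model_config` straight to the OpenAI client. The heart of what `prepare_model_config` returns is an OpenAI chat-format message list: the rendered system prompt first, then the conversation memory. A standalone sketch of that assembly, using assumed literal values in place of Promptix's rendered output:

```python
# Assumed stand-in for the system prompt Promptix would render
system_message = "You are a customer support agent helping with a password reset issue."
memory = [
    {"role": "user", "content": "I'm having trouble resetting my password"},
    {"role": "assistant", "content": "Could you tell me what happens when you try?"},
]

# System prompt first, prior turns after it, matching the OpenAI chat format
messages = [{"role": "system", "content": system_message}] + memory
for msg in messages:
    print(msg["role"])  # → system, user, assistant
```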
2 changes: 1 addition & 1 deletion prompts.json
@@ -99,7 +99,7 @@
"system_message": "You are a customer support agent helping with a {{issue_type}} issue.\n\n{% if priority == 'high' %}URGENT: This requires immediate attention!\n{% endif %}\n\nUser: {{user_name}}\nTechnical Level: {{technical_level}}\nProduct Version: {{custom_data.product_version}}\n{% if custom_data.subscription_tier == 'premium' %}Premium Support Level\n{% endif %}\n\nPlease assist with the following issue:\n{{issue_description}}",
"temperature": 0.7,
"max_tokens": 2000,
-"top_p": 1,
+"top_p": null,
"frequency_penalty": 0,
"presence_penalty": 0,
"tools": [],
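The `top_p: null` change above pairs with the new `prepare_model_config` logic in base.py, which copies a sampling parameter into the outgoing request only when it is present and not null. A standalone sketch of that filtering (the dict literal is illustrative, not the real prompts.json):

```python
# Illustrative version data; "top_p": None mirrors null in prompts.json
version_data = {
    "model": "gpt-4",  # hypothetical model name
    "temperature": 0.7,
    "max_tokens": 2000,
    "top_p": None,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}

optional_params = ["temperature", "max_tokens", "top_p",
                   "frequency_penalty", "presence_penalty"]

model_config = {"model": version_data["model"]}
for name in optional_params:
    if version_data.get(name) is not None:  # null values are simply omitted
        model_config[name] = version_data[name]

print("top_p" in model_config)  # → False
```

Omitting the key entirely lets the API fall back to its own default instead of pinning `top_p` to a value.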
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

[project]
name = "promptix"
-version = "0.1.2"
+version = "0.1.3"
description = "A simple library for managing and using prompts locally with Promptix Studio"
readme = "README.md"
requires-python = ">=3.8"
138 changes: 116 additions & 22 deletions src/promptix/core/base.py
@@ -97,31 +97,19 @@ def _validate_variables(

@classmethod
def _find_live_version(cls, versions: Dict[str, Any]) -> Optional[str]:
-    """Find the 'latest' live version based on 'last_modified' or version naming."""
-    # Filter only versions where is_live == True
-    live_versions = {k: v for k, v in versions.items() if v.get("is_live", False)}
+    """Find the live version. Only one version should be live at a time."""
+    # Find versions where is_live == True
+    live_versions = [k for k, v in versions.items() if v.get("is_live", False)]

     if not live_versions:
         return None

+    if len(live_versions) > 1:
+        raise ValueError(
+            f"Multiple live versions found: {live_versions}. Only one version can be live at a time."
+        )
-    # Strategy: pick the version with the largest "last_modified" timestamp
-    # (Alternate: pick the lexicographically largest version name, etc.)
-    # We'll parse the "last_modified" as an ISO string if possible.
-    def parse_iso(dt_str: str) -> float:
-        # Convert "YYYY-MM-DDTHH:MM:SS" into a float (timestamp) for easy comparison
-        import datetime
-        try:
-            return datetime.datetime.fromisoformat(dt_str).timestamp()
-        except Exception:
-            # fallback if parse fails
-            return 0.0
-
-    live_versions_list = list(live_versions.items())
-    live_versions_list.sort(
-        key=lambda x: parse_iso(x[1].get("last_modified", "1970-01-01T00:00:00")),
-        reverse=True
-    )
-    # Return the key of the version with the newest last_modified
-    return live_versions_list[0][0]  # (version_key, version_data)
+    return live_versions[0]
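The rewritten `_find_live_version` trades the old `last_modified` tie-breaking for a strict invariant: at most one version may be live. A standalone sketch of the new behavior (plain function, no Promptix import needed):

```python
from typing import Any, Dict, Optional

def find_live_version(versions: Dict[str, Any]) -> Optional[str]:
    """Return the key of the single live version, or None if nothing is live."""
    live = [k for k, v in versions.items() if v.get("is_live", False)]
    if not live:
        return None
    if len(live) > 1:
        # Two live versions is a data error, not a tie to break silently
        raise ValueError(
            f"Multiple live versions found: {live}. Only one version can be live at a time."
        )
    return live[0]

print(find_live_version({"v1": {"is_live": False}, "v2": {"is_live": True}}))  # → v2
```

Failing loudly here surfaces a misconfigured prompts.json at call time instead of silently serving whichever version happened to sort first.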

@classmethod
def get_prompt(cls, prompt_template: str, version: Optional[str] = None, **variables) -> str:
@@ -191,3 +179,109 @@ def get_prompt(cls, prompt_template: str, version: Optional[str] = None, **varia
    result = result.replace("\\n", "\n")

    return result


@classmethod
def prepare_model_config(cls, prompt_template: str, memory: List[Dict[str, str]], version: Optional[str] = None, **variables) -> Dict[str, Any]:
    """Prepare a model configuration ready for OpenAI chat completion API.

    Args:
        prompt_template (str): The name of the prompt template to use
        memory (List[Dict[str, str]]): List of previous messages in the conversation
        version (Optional[str]): Specific version to use (e.g. "v1").
            If None, uses the latest live version.
        **variables: Variable key-value pairs to fill in the prompt template

    Returns:
        Dict[str, Any]: Configuration dictionary for OpenAI chat completion API

    Raises:
        ValueError: If the prompt template is not found, required variables are missing, or system message is empty
        TypeError: If a variable doesn't match the schema type or memory format is invalid
    """
    # Validate memory format
    if not isinstance(memory, list):
        raise TypeError("Memory must be a list of message dictionaries")

    for msg in memory:
        if not isinstance(msg, dict):
            raise TypeError("Each memory item must be a dictionary")
        if "role" not in msg or "content" not in msg:
            raise ValueError("Each memory item must have 'role' and 'content' keys")
        if msg["role"] not in ["user", "assistant", "system"]:
            raise ValueError("Message role must be 'user', 'assistant', or 'system'")
        if not isinstance(msg["content"], str):
            raise TypeError("Message content must be a string")
        if not msg["content"].strip():
            raise ValueError("Message content cannot be empty")

    # Get the system message using existing get_prompt method
    system_message = cls.get_prompt(prompt_template, version, **variables)

    if not system_message.strip():
        raise ValueError("System message cannot be empty")

    # Get the prompt configuration
    if not cls._prompts:
        cls._load_prompts()

    if prompt_template not in cls._prompts:
        raise ValueError(f"Prompt template '{prompt_template}' not found in prompts.json.")

    prompt_data = cls._prompts[prompt_template]
    versions = prompt_data.get("versions", {})

    # Determine which version to use
    if version:
        if version not in versions:
            raise ValueError(f"Version '{version}' not found for prompt '{prompt_template}'.")
        version_data = versions[version]
    else:
        live_version_key = cls._find_live_version(versions)
        if not live_version_key:
            raise ValueError(f"No live version found for prompt '{prompt_template}'.")
        version_data = versions[live_version_key]

    # Initialize the base configuration with required parameters
    model_config = {
        "messages": [{"role": "system", "content": system_message}]
    }
    model_config["messages"].extend(memory)

    # Model is required for OpenAI API
    if "model" not in version_data:
        raise ValueError(f"Model must be specified in the version data for prompt '{prompt_template}'")
    model_config["model"] = version_data["model"]

    # Add optional configuration parameters only if they are present and not null
    optional_params = [
        ("temperature", (int, float)),
        ("max_tokens", int),
        ("top_p", (int, float)),
        ("frequency_penalty", (int, float)),
        ("presence_penalty", (int, float))
    ]

    for param_name, expected_type in optional_params:
        if param_name in version_data and version_data[param_name] is not None:
            value = version_data[param_name]
            if not isinstance(value, expected_type):
                raise ValueError(f"{param_name} must be of type {expected_type}")
            model_config[param_name] = value

    # Add tools configuration if present and non-empty
    if "tools" in version_data and version_data["tools"]:
        tools = version_data["tools"]
        if not isinstance(tools, list):
            raise ValueError("Tools configuration must be a list")
        model_config["tools"] = tools

        # If tools are present, also set tool_choice if specified
        if "tool_choice" in version_data:
            model_config["tool_choice"] = version_data["tool_choice"]

    return model_config
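The memory checks at the top of `prepare_model_config` can be exercised in isolation. This sketch mirrors the validation rules from the diff as a free-standing function (same rules, not the library's code):

```python
from typing import Dict, List

def validate_memory(memory: List[Dict[str, str]]) -> None:
    """Raise TypeError or ValueError for malformed conversation memory."""
    if not isinstance(memory, list):
        raise TypeError("Memory must be a list of message dictionaries")
    for msg in memory:
        if not isinstance(msg, dict):
            raise TypeError("Each memory item must be a dictionary")
        if "role" not in msg or "content" not in msg:
            raise ValueError("Each memory item must have 'role' and 'content' keys")
        if msg["role"] not in ("user", "assistant", "system"):
            raise ValueError("Message role must be 'user', 'assistant', or 'system'")
        if not isinstance(msg["content"], str):
            raise TypeError("Message content must be a string")
        if not msg["content"].strip():
            raise ValueError("Message content cannot be empty")

validate_memory([{"role": "user", "content": "Hi"}])  # valid memory passes silently
```

Note the convention the diff follows: wrong Python types raise `TypeError`, while structurally or semantically bad values raise `ValueError`; an empty list is valid, since a conversation may have no prior turns.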
63 changes: 63 additions & 0 deletions tests/test_openai.py
@@ -0,0 +1,63 @@
import pytest
from promptix import Promptix

def test_prepare_model_config_basic():
    memory = [
        {"role": "user", "content": "Test message"},
        {"role": "assistant", "content": "Test response"}
    ]

    model_config = Promptix.prepare_model_config(
        prompt_template="CustomerSupport",
        user_name="Test User",
        issue_type="test",
        technical_level="beginner",
        interaction_history="none",
        product_version="1.0.0",
        issue_description="Test description",
        custom_data={"product_version": "1.0.0", "subscription_tier": "standard"},
        memory=memory,
    )

    assert isinstance(model_config, dict)
    assert "messages" in model_config
    assert "model" in model_config
    assert len(model_config["messages"]) > 0

def test_prepare_model_config_memory_validation():
    with pytest.raises(ValueError):
        Promptix.prepare_model_config(
            prompt_template="CustomerSupport",
            user_name="Test User",
            memory=[{"invalid": "format"}]  # Invalid memory format
        )

def test_prepare_model_config_required_fields():
    with pytest.raises(ValueError):
        Promptix.prepare_model_config(
            prompt_template="CustomerSupport",
            memory=[],
            user_name="Test User",
            issue_type="invalid"
        )

def test_prepare_model_config_custom_data():
    memory = [
        {"role": "user", "content": "Test message"}
    ]

    model_config = Promptix.prepare_model_config(
        prompt_template="CustomerSupport",
        user_name="Test User",
        issue_type="general",
        issue_description="Test issue",
        technical_level="intermediate",
        memory=memory,
        custom_data={
            "special_field": "test_value",
            "priority": "high"
        }
    )

    assert isinstance(model_config, dict)
    assert "messages" in model_config