
Commit b49d02d

feat: Add OpenAI integration with model configuration preparation (#1)
- Implement OpenAI integration support with `prepare_model_config` method
- Add comprehensive validation for memory and model configuration
- Create test suite for OpenAI integration features
- Update README with OpenAI integration examples
- Bump version to 0.1.3
- Add example implementation for OpenAI chat completions
1 parent d247e7b commit b49d02d

File tree

7 files changed (+277 −24 lines)

CHANGELOG.md (+12)

@@ -1,5 +1,17 @@
 # Changelog
 
+## [0.1.3] - 2024-01-26
+
+### Added
+- OpenAI integration support with prepare_model_config functionality
+- Test suite for OpenAI integration features
+- Example implementation for OpenAI chat completions
+
+### Changed
+- Enhanced model configuration preparation with better validation
+- Improved error handling for invalid memory formats
+- Updated documentation with OpenAI integration examples
+
 ## [0.1.2] - 2024-03-19
 
 ### Added

README.md (+49)

@@ -12,8 +12,10 @@ A Python library for managing and using prompts with Promptix Studio integration
 - 🔄 **Version Control** - Track changes with live/draft states for each prompt
 - 🔌 **Simple Integration** - Easy-to-use Python interface
 - 📝 **Variable Substitution** - Dynamic prompts using `{{variable_name}}` syntax
+- 🤖 **LLM Integration** - Direct integration with OpenAI and other LLM providers
 - 🏃 **Local First** - No external API dependencies
 - 🎨 **Web Interface** - Edit and manage prompts through a modern UI
+- 🔍 **Schema Validation** - Automatic validation of prompt variables and structure
 
 ## Installation
 
@@ -54,6 +56,37 @@ support_prompt = Promptix.get_prompt(
 )
 ```
 
+## OpenAI Integration
+
+Promptix provides seamless integration with OpenAI's chat models:
+
+```python
+from promptix import Promptix
+import openai
+
+client = openai.OpenAI()
+
+# Prepare model configuration with conversation memory
+memory = [
+    {"role": "user", "content": "I'm having trouble resetting my password"},
+    {"role": "assistant", "content": "I understand you're having password reset issues. Could you tell me what happens when you try?"}
+]
+
+model_config = Promptix.prepare_model_config(
+    prompt_template="CustomerSupport",
+    user_name="John Doe",
+    issue_type="password reset",
+    technical_level="intermediate",
+    interaction_history="2 previous tickets about 2FA setup",
+    issue_description="User is unable to reset their password after multiple attempts",
+    custom_data={"product_version": "2.1.0", "subscription_tier": "standard"},
+    memory=memory,
+)
+
+# Use the configuration with OpenAI
+response = client.chat.completions.create(**model_config)
+```
+
 ## Advanced Usage
 
 ### Version Control
@@ -73,6 +106,22 @@ prompt_latest = Promptix.get_prompt(
 )
 ```
 
+### Schema Validation
+
+Promptix automatically validates your prompt variables against defined schemas:
+
+```python
+# Schema validation ensures correct variable types and values
+try:
+    prompt = Promptix.get_prompt(
+        prompt_template="CustomerSupport",
+        user_name="John",
+        technical_level="expert"  # Will raise an error if not in ["beginner", "intermediate", "advanced"]
+    )
+except ValueError as e:
+    print(f"Validation Error: {str(e)}")
+```
+
 ### Error Handling
 
 ```python

examples/openai-integration.py (+35)

@@ -0,0 +1,35 @@
+from promptix import Promptix
+import openai
+
+def main():
+
+    client = openai.OpenAI()
+
+    # New prepare_model_config example that returns a model config
+    print("Using prepare_model_config:")
+    memory = [
+        {"role": "user", "content": "I'm having trouble resetting my password"},
+        {"role": "assistant", "content": "I understand you're having password reset issues. Could you tell me what happens when you try?"}
+    ]
+
+    model_config = Promptix.prepare_model_config(
+        prompt_template="CustomerSupport",
+        user_name="John Doe",
+        issue_type="password reset",
+        technical_level="intermediate",
+        interaction_history="2 previous tickets about 2FA setup",
+        product_version="2.1.0",
+        issue_description="User is unable to reset their password after multiple attempts",
+        custom_data={"product_version": "2.1.0", "subscription_tier": "standard"},
+        memory=memory,
+    )
+
+    response = client.chat.completions.create(**model_config)
+
+    print("Model Config:", model_config)
+    print("\nResponse:", response)
+
+if __name__ == "__main__":
+    main()

prompts.json (+1 −1)

@@ -99,7 +99,7 @@
     "system_message": "You are a customer support agent helping with a {{issue_type}} issue.\n\n{% if priority == 'high' %}URGENT: This requires immediate attention!\n{% endif %}\n\nUser: {{user_name}}\nTechnical Level: {{technical_level}}\nProduct Version: {{custom_data.product_version}}\n{% if custom_data.subscription_tier == 'premium' %}Premium Support Level\n{% endif %}\n\nPlease assist with the following issue:\n{{issue_description}}",
     "temperature": 0.7,
     "max_tokens": 2000,
-    "top_p": 1,
+    "top_p": null,
     "frequency_penalty": 0,
     "presence_penalty": 0,
     "tools": [],
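The `top_p: null` change above works together with the new `prepare_model_config` code in this commit, which copies a parameter into the OpenAI payload only when it is present and not null, so a `null` value lets the API fall back to its own default. A minimal standalone sketch of that filtering rule (the `version_data` dict and the `"gpt-4"` model name are placeholders for illustration, not values from this repository's prompts.json):

```python
# Sketch: a parameter set to null in prompts.json is dropped from the final config.
# `version_data` stands in for one parsed version entry.
version_data = {
    "model": "gpt-4",        # placeholder model name
    "temperature": 0.7,
    "max_tokens": 2000,
    "top_p": None,           # "top_p": null in JSON -> omitted below
    "frequency_penalty": 0,
    "presence_penalty": 0,
}

optional_params = ["temperature", "max_tokens", "top_p",
                   "frequency_penalty", "presence_penalty"]

model_config = {"model": version_data["model"]}
for name in optional_params:
    if version_data.get(name) is not None:  # null values are skipped
        model_config[name] = version_data[name]

print(model_config)  # top_p is absent; zero-valued penalties are kept
```

Note that a value of `0` is kept (the check is `is not None`, not truthiness), which matters for `frequency_penalty` and `presence_penalty`.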

pyproject.toml (+1 −1)

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "promptix"
-version = "0.1.2"
+version = "0.1.3"
 description = "A simple library for managing and using prompts locally with Promptix Studio"
 readme = "README.md"
 requires-python = ">=3.8"

src/promptix/core/base.py (+116 −22)

@@ -97,31 +97,19 @@ def _validate_variables(
 
     @classmethod
     def _find_live_version(cls, versions: Dict[str, Any]) -> Optional[str]:
-        """Find the 'latest' live version based on 'last_modified' or version naming."""
-        # Filter only versions where is_live == True
-        live_versions = {k: v for k, v in versions.items() if v.get("is_live", False)}
+        """Find the live version. Only one version should be live at a time."""
+        # Find versions where is_live == True
+        live_versions = [k for k, v in versions.items() if v.get("is_live", False)]
+
         if not live_versions:
             return None
+
+        if len(live_versions) > 1:
+            raise ValueError(
+                f"Multiple live versions found: {live_versions}. Only one version can be live at a time."
+            )
 
-        # Strategy: pick the version with the largest "last_modified" timestamp
-        # (Alternate: pick the lexicographically largest version name, etc.)
-        # We'll parse the "last_modified" as an ISO string if possible.
-        def parse_iso(dt_str: str) -> float:
-            # Convert "YYYY-MM-DDTHH:MM:SS" into a float (timestamp) for easy comparison
-            import datetime
-            try:
-                return datetime.datetime.fromisoformat(dt_str).timestamp()
-            except Exception:
-                # fallback if parse fails
-                return 0.0
-
-        live_versions_list = list(live_versions.items())
-        live_versions_list.sort(
-            key=lambda x: parse_iso(x[1].get("last_modified", "1970-01-01T00:00:00")),
-            reverse=True
-        )
-        # Return the key of the version with the newest last_modified
-        return live_versions_list[0][0]  # (version_key, version_data)
+        return live_versions[0]
 
     @classmethod
     def get_prompt(cls, prompt_template: str, version: Optional[str] = None, **variables) -> str:
@@ -191,3 +179,107 @@ def get_prompt(cls, prompt_template: str, version: Optional[str] = None, **varia
         result = result.replace("\\n", "\n")
 
         return result
+
+    @classmethod
+    def prepare_model_config(cls, prompt_template: str, memory: List[Dict[str, str]], version: Optional[str] = None, **variables) -> Dict[str, Any]:
+        """Prepare a model configuration ready for the OpenAI chat completion API.
+
+        Args:
+            prompt_template (str): The name of the prompt template to use
+            memory (List[Dict[str, str]]): List of previous messages in the conversation
+            version (Optional[str]): Specific version to use (e.g. "v1").
+                If None, uses the latest live version.
+            **variables: Variable key-value pairs to fill in the prompt template
+
+        Returns:
+            Dict[str, Any]: Configuration dictionary for the OpenAI chat completion API
+
+        Raises:
+            ValueError: If the prompt template is not found, required variables are missing, or the system message is empty
+            TypeError: If a variable doesn't match the schema type or the memory format is invalid
+        """
+        # Validate memory format
+        if not isinstance(memory, list):
+            raise TypeError("Memory must be a list of message dictionaries")
+
+        for msg in memory:
+            if not isinstance(msg, dict):
+                raise TypeError("Each memory item must be a dictionary")
+            if "role" not in msg or "content" not in msg:
+                raise ValueError("Each memory item must have 'role' and 'content' keys")
+            if msg["role"] not in ["user", "assistant", "system"]:
+                raise ValueError("Message role must be 'user', 'assistant', or 'system'")
+            if not isinstance(msg["content"], str):
+                raise TypeError("Message content must be a string")
+            if not msg["content"].strip():
+                raise ValueError("Message content cannot be empty")
+
+        # Get the system message using the existing get_prompt method
+        system_message = cls.get_prompt(prompt_template, version, **variables)
+
+        if not system_message.strip():
+            raise ValueError("System message cannot be empty")
+
+        # Get the prompt configuration
+        if not cls._prompts:
+            cls._load_prompts()
+
+        if prompt_template not in cls._prompts:
+            raise ValueError(f"Prompt template '{prompt_template}' not found in prompts.json.")
+
+        prompt_data = cls._prompts[prompt_template]
+        versions = prompt_data.get("versions", {})
+
+        # Determine which version to use
+        version_data = None
+        if version:
+            if version not in versions:
+                raise ValueError(f"Version '{version}' not found for prompt '{prompt_template}'.")
+            version_data = versions[version]
+        else:
+            live_version_key = cls._find_live_version(versions)
+            if not live_version_key:
+                raise ValueError(f"No live version found for prompt '{prompt_template}'.")
+            version_data = versions[live_version_key]
+
+        # Initialize the base configuration with required parameters
+        model_config = {
+            "messages": [{"role": "system", "content": system_message}]
+        }
+        model_config["messages"].extend(memory)
+
+        # Model is required for the OpenAI API
+        if "model" not in version_data:
+            raise ValueError(f"Model must be specified in the version data for prompt '{prompt_template}'")
+        model_config["model"] = version_data["model"]
+
+        # Add optional configuration parameters only if they are present and not null
+        optional_params = [
+            ("temperature", (int, float)),
+            ("max_tokens", int),
+            ("top_p", (int, float)),
+            ("frequency_penalty", (int, float)),
+            ("presence_penalty", (int, float))
+        ]
+
+        for param_name, expected_type in optional_params:
+            if param_name in version_data and version_data[param_name] is not None:
+                value = version_data[param_name]
+                if not isinstance(value, expected_type):
+                    raise ValueError(f"{param_name} must be of type {expected_type}")
+                model_config[param_name] = value
+
+        # Add tools configuration if present and non-empty
+        if "tools" in version_data and version_data["tools"]:
+            tools = version_data["tools"]
+            if not isinstance(tools, list):
+                raise ValueError("Tools configuration must be a list")
+            model_config["tools"] = tools
+
+            # If tools are present, also set tool_choice if specified
+            if "tool_choice" in version_data:
+                model_config["tool_choice"] = version_data["tool_choice"]
+
+        return model_config
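The `_find_live_version` rewrite above replaces timestamp-based tie-breaking with a strict invariant: zero or one live version, and anything else is an error. A self-contained sketch of that rule (a plain function mirroring the diff, not the actual Promptix classmethod):

```python
from typing import Any, Dict, Optional

def find_live_version(versions: Dict[str, Any]) -> Optional[str]:
    """Return the single live version key, None if there is none,
    and raise if more than one version is marked live."""
    live = [k for k, v in versions.items() if v.get("is_live", False)]
    if not live:
        return None
    if len(live) > 1:
        raise ValueError(
            f"Multiple live versions found: {live}. Only one version can be live at a time."
        )
    return live[0]

versions = {
    "v1": {"is_live": False},
    "v2": {"is_live": True},
}
print(find_live_version(versions))  # prints "v2"
```

Compared with the old `last_modified` sort, this surfaces data problems (two live versions) loudly instead of silently picking one, at the cost of requiring prompts.json to be kept consistent.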

tests/test_openai.py (+63)

@@ -0,0 +1,63 @@
+import pytest
+from promptix import Promptix
+
+def test_prepare_model_config_basic():
+    memory = [
+        {"role": "user", "content": "Test message"},
+        {"role": "assistant", "content": "Test response"}
+    ]
+
+    model_config = Promptix.prepare_model_config(
+        prompt_template="CustomerSupport",
+        user_name="Test User",
+        issue_type="test",
+        technical_level="beginner",
+        interaction_history="none",
+        product_version="1.0.0",
+        issue_description="Test description",
+        custom_data={"product_version": "1.0.0", "subscription_tier": "standard"},
+        memory=memory,
+    )
+
+    assert isinstance(model_config, dict)
+    assert "messages" in model_config
+    assert "model" in model_config
+    assert len(model_config["messages"]) > 0
+
+def test_prepare_model_config_memory_validation():
+    with pytest.raises(ValueError):
+        Promptix.prepare_model_config(
+            prompt_template="CustomerSupport",
+            user_name="Test User",
+            memory=[{"invalid": "format"}]  # Invalid memory format
+        )
+
+def test_prepare_model_config_required_fields():
+    with pytest.raises(ValueError):
+        Promptix.prepare_model_config(
+            prompt_template="CustomerSupport",
+            memory=[],
+            user_name="Test User",
+            issue_type="invalid"
+        )
+
+def test_prepare_model_config_custom_data():
+    memory = [
+        {"role": "user", "content": "Test message"}
+    ]
+
+    model_config = Promptix.prepare_model_config(
+        prompt_template="CustomerSupport",
+        user_name="Test User",
+        issue_type="general",
+        issue_description="Test issue",
+        technical_level="intermediate",
+        memory=memory,
+        custom_data={
+            "special_field": "test_value",
+            "priority": "high"
+        }
+    )
+
+    assert isinstance(model_config, dict)
+    assert "messages" in model_config
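The memory checks exercised by `test_prepare_model_config_memory_validation` can be demonstrated without loading any prompt templates. This standalone sketch mirrors the validation loop added to `base.py` in this commit (`validate_memory` is a local name for the sketch, not a Promptix API):

```python
def validate_memory(memory):
    """Mirror of the commit's memory validation: a list of role/content dicts."""
    if not isinstance(memory, list):
        raise TypeError("Memory must be a list of message dictionaries")
    for msg in memory:
        if not isinstance(msg, dict):
            raise TypeError("Each memory item must be a dictionary")
        if "role" not in msg or "content" not in msg:
            raise ValueError("Each memory item must have 'role' and 'content' keys")
        if msg["role"] not in ["user", "assistant", "system"]:
            raise ValueError("Message role must be 'user', 'assistant', or 'system'")
        if not isinstance(msg["content"], str):
            raise TypeError("Message content must be a string")
        if not msg["content"].strip():
            raise ValueError("Message content cannot be empty")

validate_memory([{"role": "user", "content": "hi"}])  # passes silently
try:
    validate_memory([{"invalid": "format"}])          # missing role/content keys
except ValueError as e:
    print("rejected:", e)
```

Note the split between `TypeError` (wrong shape: not a list, not a dict, non-string content) and `ValueError` (wrong values: missing keys, unknown role, empty content), which is why the test above expects `ValueError` for `{"invalid": "format"}`.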
