release: 1.60.0 #2044

Merged
merged 4 commits into from
Jan 22, 2025
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.59.9"
".": "1.60.0"
}
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 69
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-b5b0e2c794b012919701c3fd43286af10fa25d33ceb8a881bec2636028f446e0.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-3904ef6b29a89c98f93a9b7da19879695f3c440564be6384db7af1b734611ede.yml
18 changes: 18 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,23 @@
# Changelog

## 1.60.0 (2025-01-22)

Full Changelog: [v1.59.9...v1.60.0](https://github.com/openai/openai-python/compare/v1.59.9...v1.60.0)

### Features

* **api:** update enum values, comments, and examples ([#2045](https://github.com/openai/openai-python/issues/2045)) ([e8205fd](https://github.com/openai/openai-python/commit/e8205fd58f0d677f476c577a8d9afb90f5710506))


### Chores

* **internal:** minor style changes ([#2043](https://github.com/openai/openai-python/issues/2043)) ([89a9dd8](https://github.com/openai/openai-python/commit/89a9dd821eaf5300ad11b0270b61fdfa4fd6e9b6))


### Documentation

* **readme:** mention failed requests in request IDs ([5f7c30b](https://github.com/openai/openai-python/commit/5f7c30bc006ffb666c324011a68aae357cb33e35))

## 1.59.9 (2025-01-20)

Full Changelog: [v1.59.8...v1.59.9](https://github.com/openai/openai-python/compare/v1.59.8...v1.59.9)
15 changes: 15 additions & 0 deletions README.md
@@ -499,6 +499,21 @@ Note that unlike other properties that use an `_` prefix, the `_request_id` property
*is* public. Unless documented otherwise, *all* other `_` prefix properties,
methods and modules are *private*.

> [!IMPORTANT]
> If you need to access request IDs for failed requests you must catch the `APIStatusError` exception

```python
import openai

try:
completion = await client.chat.completions.create(
messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4"
)
except openai.APIStatusError as exc:
print(exc.request_id) # req_123
raise exc
```
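The pattern in the new README snippet can be sketched offline with a stand-in for `openai.APIStatusError` (a hypothetical minimal class, not the real SDK type), showing that the request ID survives into the exception handler without needing a network call:

```python
# Minimal stand-in for openai.APIStatusError (illustrative only).
class APIStatusError(Exception):
    def __init__(self, message: str, request_id: str) -> None:
        super().__init__(message)
        self.request_id = request_id


def fake_completion_call() -> None:
    # Simulate the API rejecting the request.
    raise APIStatusError("400 Bad Request", request_id="req_123")


def request_id_of_failure() -> str:
    try:
        fake_completion_call()
    except APIStatusError as exc:
        return exc.request_id  # same attribute the SDK exposes
    return ""


print(request_id_of_failure())  # req_123
```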


### Retries

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "1.59.9"
version = "1.60.0"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
4 changes: 2 additions & 2 deletions src/openai/_legacy_response.py
@@ -205,6 +205,8 @@ def _parse(self, *, to: type[_T] | None = None) -> R | _T:
if cast_to and is_annotated_type(cast_to):
cast_to = extract_type_arg(cast_to, 0)

+origin = get_origin(cast_to) or cast_to

if self._stream:
if to:
if not is_stream_class_type(to):
@@ -261,8 +263,6 @@ def _parse(self, *, to: type[_T] | None = None) -> R | _T:
if cast_to == bool:
return cast(R, response.text.lower() == "true")

-origin = get_origin(cast_to) or cast_to

if inspect.isclass(origin) and issubclass(origin, HttpxBinaryResponseContent):
return cast(R, cast_to(response)) # type: ignore

4 changes: 2 additions & 2 deletions src/openai/_response.py
@@ -136,6 +136,8 @@ def _parse(self, *, to: type[_T] | None = None) -> R | _T:
if cast_to and is_annotated_type(cast_to):
cast_to = extract_type_arg(cast_to, 0)

+origin = get_origin(cast_to) or cast_to

if self._is_sse_stream:
if to:
if not is_stream_class_type(to):
@@ -195,8 +197,6 @@ def _parse(self, *, to: type[_T] | None = None) -> R | _T:
if cast_to == bool:
return cast(R, response.text.lower() == "true")

-origin = get_origin(cast_to) or cast_to

# handle the legacy binary response case
if inspect.isclass(cast_to) and cast_to.__name__ == "HttpxBinaryResponseContent":
return cast(R, cast_to(response)) # type: ignore
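The hoisted line in both `_parse` methods relies on `typing.get_origin`, which returns the bare container for parameterized generics and `None` for plain classes, hence the `or cast_to` fallback. A quick sketch of that behavior:

```python
from typing import List, get_origin

# For parameterized generics, get_origin returns the bare container type.
assert get_origin(List[int]) is list
assert get_origin(dict[str, int]) is dict

# For ordinary classes it returns None, so `or` falls back to the class itself.
cast_to = int
origin = get_origin(cast_to) or cast_to
assert origin is int
```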
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "1.59.9" # x-release-please-version
__version__ = "1.60.0" # x-release-please-version
16 changes: 8 additions & 8 deletions src/openai/resources/audio/speech.py
@@ -53,7 +53,7 @@ def create(
*,
input: str,
model: Union[str, SpeechModel],
voice: Literal["alloy", "echo", "fable", "onyx", "nova", "shimmer"],
voice: Literal["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"],
response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
speed: float | NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -73,9 +73,9 @@ def create(
One of the available [TTS models](https://platform.openai.com/docs/models#tts):
`tts-1` or `tts-1-hd`

-voice: The voice to use when generating the audio. Supported voices are `alloy`,
-`echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are
-available in the
+voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
+`coral`, `echo`, `fable`, `onyx`, `nova`, `sage` and `shimmer`. Previews of the
+voices are available in the
[Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech#voice-options).

response_format: The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`,
@@ -137,7 +137,7 @@ async def create(
*,
input: str,
model: Union[str, SpeechModel],
voice: Literal["alloy", "echo", "fable", "onyx", "nova", "shimmer"],
voice: Literal["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"],
response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
speed: float | NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -157,9 +157,9 @@ async def create(
One of the available [TTS models](https://platform.openai.com/docs/models#tts):
`tts-1` or `tts-1-hd`

-voice: The voice to use when generating the audio. Supported voices are `alloy`,
-`echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are
-available in the
+voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
+`coral`, `echo`, `fable`, `onyx`, `nova`, `sage` and `shimmer`. Previews of the
+voices are available in the
[Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech#voice-options).

response_format: The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`,
48 changes: 28 additions & 20 deletions src/openai/resources/beta/realtime/sessions.py
@@ -46,18 +46,19 @@ def with_streaming_response(self) -> SessionsWithStreamingResponse:
def create(
self,
*,
+input_audio_format: Literal["pcm16", "g711_ulaw", "g711_alaw"] | NotGiven = NOT_GIVEN,
+input_audio_transcription: session_create_params.InputAudioTranscription | NotGiven = NOT_GIVEN,
+instructions: str | NotGiven = NOT_GIVEN,
+max_response_output_tokens: Union[int, Literal["inf"]] | NotGiven = NOT_GIVEN,
+modalities: List[Literal["text", "audio"]] | NotGiven = NOT_GIVEN,
model: Literal[
"gpt-4o-realtime-preview",
"gpt-4o-realtime-preview-2024-10-01",
"gpt-4o-realtime-preview-2024-12-17",
"gpt-4o-mini-realtime-preview",
"gpt-4o-mini-realtime-preview-2024-12-17",
-],
-input_audio_format: Literal["pcm16", "g711_ulaw", "g711_alaw"] | NotGiven = NOT_GIVEN,
-input_audio_transcription: session_create_params.InputAudioTranscription | NotGiven = NOT_GIVEN,
-instructions: str | NotGiven = NOT_GIVEN,
-max_response_output_tokens: Union[int, Literal["inf"]] | NotGiven = NOT_GIVEN,
-modalities: List[Literal["text", "audio"]] | NotGiven = NOT_GIVEN,
+]
+| NotGiven = NOT_GIVEN,
output_audio_format: Literal["pcm16", "g711_ulaw", "g711_alaw"] | NotGiven = NOT_GIVEN,
temperature: float | NotGiven = NOT_GIVEN,
tool_choice: str | NotGiven = NOT_GIVEN,
@@ -81,9 +82,9 @@ def create(
the Realtime API.

Args:
-model: The Realtime model used for this session.
-
-input_audio_format: The format of input audio. Options are `pcm16`, `g711_ulaw`, or `g711_alaw`.
+input_audio_format: The format of input audio. Options are `pcm16`, `g711_ulaw`, or `g711_alaw`. For
+`pcm16`, input audio must be 16-bit PCM at a 24kHz sample rate, single channel
+(mono), and little-endian byte order.

input_audio_transcription: Configuration for input audio transcription, defaults to off and can be set to
`null` to turn off once on. Input audio transcription is not native to the
@@ -110,7 +111,10 @@ def create(
modalities: The set of modalities the model can respond with. To disable audio, set this to
["text"].

+model: The Realtime model used for this session.
+
output_audio_format: The format of output audio. Options are `pcm16`, `g711_ulaw`, or `g711_alaw`.
+For `pcm16`, output audio is sampled at a rate of 24kHz.

temperature: Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8.

@@ -140,12 +144,12 @@ def create(
"/realtime/sessions",
body=maybe_transform(
{
"model": model,
"input_audio_format": input_audio_format,
"input_audio_transcription": input_audio_transcription,
"instructions": instructions,
"max_response_output_tokens": max_response_output_tokens,
"modalities": modalities,
"model": model,
"output_audio_format": output_audio_format,
"temperature": temperature,
"tool_choice": tool_choice,
@@ -185,18 +189,19 @@ def with_streaming_response(self) -> AsyncSessionsWithStreamingResponse:
async def create(
self,
*,
+input_audio_format: Literal["pcm16", "g711_ulaw", "g711_alaw"] | NotGiven = NOT_GIVEN,
+input_audio_transcription: session_create_params.InputAudioTranscription | NotGiven = NOT_GIVEN,
+instructions: str | NotGiven = NOT_GIVEN,
+max_response_output_tokens: Union[int, Literal["inf"]] | NotGiven = NOT_GIVEN,
+modalities: List[Literal["text", "audio"]] | NotGiven = NOT_GIVEN,
model: Literal[
"gpt-4o-realtime-preview",
"gpt-4o-realtime-preview-2024-10-01",
"gpt-4o-realtime-preview-2024-12-17",
"gpt-4o-mini-realtime-preview",
"gpt-4o-mini-realtime-preview-2024-12-17",
-],
-input_audio_format: Literal["pcm16", "g711_ulaw", "g711_alaw"] | NotGiven = NOT_GIVEN,
-input_audio_transcription: session_create_params.InputAudioTranscription | NotGiven = NOT_GIVEN,
-instructions: str | NotGiven = NOT_GIVEN,
-max_response_output_tokens: Union[int, Literal["inf"]] | NotGiven = NOT_GIVEN,
-modalities: List[Literal["text", "audio"]] | NotGiven = NOT_GIVEN,
+]
+| NotGiven = NOT_GIVEN,
output_audio_format: Literal["pcm16", "g711_ulaw", "g711_alaw"] | NotGiven = NOT_GIVEN,
temperature: float | NotGiven = NOT_GIVEN,
tool_choice: str | NotGiven = NOT_GIVEN,
@@ -220,9 +225,9 @@ async def create(
the Realtime API.

Args:
-model: The Realtime model used for this session.
-
-input_audio_format: The format of input audio. Options are `pcm16`, `g711_ulaw`, or `g711_alaw`.
+input_audio_format: The format of input audio. Options are `pcm16`, `g711_ulaw`, or `g711_alaw`. For
+`pcm16`, input audio must be 16-bit PCM at a 24kHz sample rate, single channel
+(mono), and little-endian byte order.

input_audio_transcription: Configuration for input audio transcription, defaults to off and can be set to
`null` to turn off once on. Input audio transcription is not native to the
@@ -249,7 +254,10 @@ async def create(
modalities: The set of modalities the model can respond with. To disable audio, set this to
["text"].

+model: The Realtime model used for this session.
+
output_audio_format: The format of output audio. Options are `pcm16`, `g711_ulaw`, or `g711_alaw`.
+For `pcm16`, output audio is sampled at a rate of 24kHz.

temperature: Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8.

@@ -279,12 +287,12 @@ async def create(
"/realtime/sessions",
body=await async_maybe_transform(
{
"model": model,
"input_audio_format": input_audio_format,
"input_audio_transcription": input_audio_transcription,
"instructions": instructions,
"max_response_output_tokens": max_response_output_tokens,
"modalities": modalities,
"model": model,
"output_audio_format": output_audio_format,
"temperature": temperature,
"tool_choice": tool_choice,
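The effect of moving `model` to `| NotGiven = NOT_GIVEN` is that callers may now omit it entirely; transforms like `maybe_transform` drop sentinel values before the request body is serialized. A simplified sketch of that mechanism (the real `NOT_GIVEN` and `maybe_transform` live in the SDK's internals; this local model is an assumption for illustration):

```python
class NotGiven:
    """Sentinel distinguishing 'omitted' from an explicit None."""

    def __repr__(self) -> str:
        return "NOT_GIVEN"


NOT_GIVEN = NotGiven()


def drop_not_given(params: dict) -> dict:
    # Mimics how unset params are excluded from the request body.
    return {k: v for k, v in params.items() if not isinstance(v, NotGiven)}


body = drop_not_given(
    {
        "model": NOT_GIVEN,  # now optional, so it disappears from the body
        "temperature": 0.8,
        "modalities": ["text"],
    }
)
assert body == {"temperature": 0.8, "modalities": ["text"]}
```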
18 changes: 0 additions & 18 deletions src/openai/resources/chat/completions.py
@@ -251,9 +251,6 @@ def create(
tier with a lower uptime SLA and no latency guarentee.
- When not set, the default behavior is 'auto'.

-When this parameter is set, the response body will include the `service_tier`
-utilized.

stop: Up to 4 sequences where the API will stop generating further tokens.

store: Whether or not to store the output of this chat completion request for use in
@@ -509,9 +506,6 @@ def create(
tier with a lower uptime SLA and no latency guarentee.
- When not set, the default behavior is 'auto'.

-When this parameter is set, the response body will include the `service_tier`
-utilized.

stop: Up to 4 sequences where the API will stop generating further tokens.

store: Whether or not to store the output of this chat completion request for use in
@@ -760,9 +754,6 @@ def create(
tier with a lower uptime SLA and no latency guarentee.
- When not set, the default behavior is 'auto'.

-When this parameter is set, the response body will include the `service_tier`
-utilized.

stop: Up to 4 sequences where the API will stop generating further tokens.

store: Whether or not to store the output of this chat completion request for use in
@@ -1112,9 +1103,6 @@ async def create(
tier with a lower uptime SLA and no latency guarentee.
- When not set, the default behavior is 'auto'.

-When this parameter is set, the response body will include the `service_tier`
-utilized.

stop: Up to 4 sequences where the API will stop generating further tokens.

store: Whether or not to store the output of this chat completion request for use in
@@ -1370,9 +1358,6 @@ async def create(
tier with a lower uptime SLA and no latency guarentee.
- When not set, the default behavior is 'auto'.

-When this parameter is set, the response body will include the `service_tier`
-utilized.

stop: Up to 4 sequences where the API will stop generating further tokens.

store: Whether or not to store the output of this chat completion request for use in
@@ -1621,9 +1606,6 @@ async def create(
tier with a lower uptime SLA and no latency guarentee.
- When not set, the default behavior is 'auto'.

-When this parameter is set, the response body will include the `service_tier`
-utilized.

stop: Up to 4 sequences where the API will stop generating further tokens.

store: Whether or not to store the output of this chat completion request for use in
6 changes: 4 additions & 2 deletions src/openai/resources/embeddings.py
@@ -68,7 +68,8 @@ def create(
`text-embedding-ada-002`), cannot be an empty string, and any array must be 2048
dimensions or less.
[Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
-for counting tokens.
+for counting tokens. Some models may also impose a limit on total number of
+tokens summed across inputs.

model: ID of the model to use. You can use the
[List models](https://platform.openai.com/docs/api-reference/models/list) API to
@@ -180,7 +181,8 @@ async def create(
`text-embedding-ada-002`), cannot be an empty string, and any array must be 2048
dimensions or less.
[Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
-for counting tokens.
+for counting tokens. Some models may also impose a limit on total number of
+tokens summed across inputs.

model: ID of the model to use. You can use the
[List models](https://platform.openai.com/docs/api-reference/models/list) API to
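The new note about tokens summed across inputs suggests a client-side check before calling the embeddings endpoint. In this sketch a naive whitespace tokenizer stands in for `tiktoken` so it runs offline; the real tokenizer and the actual limit are model-specific assumptions:

```python
from typing import Callable, List


def total_tokens(inputs: List[str], count: Callable[[str], int]) -> int:
    # Sum token counts across all inputs, mirroring the summed-limit note.
    return sum(count(text) for text in inputs)


def batch_within_limit(inputs: List[str], count: Callable[[str], int], limit: int) -> bool:
    return total_tokens(inputs, count) <= limit


# Naive stand-in tokenizer; swap in a tiktoken encoding for real counts.
naive = lambda text: len(text.split())

batch = ["hello world", "embed this sentence too"]
assert total_tokens(batch, naive) == 6
assert batch_within_limit(batch, naive, limit=8192)
```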
6 changes: 3 additions & 3 deletions src/openai/types/audio/speech_create_params.py
@@ -20,11 +20,11 @@ class SpeechCreateParams(TypedDict, total=False):
`tts-1` or `tts-1-hd`
"""

-voice: Required[Literal["alloy", "echo", "fable", "onyx", "nova", "shimmer"]]
+voice: Required[Literal["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"]]
"""The voice to use when generating the audio.

-Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
-Previews of the voices are available in the
+Supported voices are `alloy`, `ash`, `coral`, `echo`, `fable`, `onyx`, `nova`,
+`sage` and `shimmer`. Previews of the voices are available in the
[Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech#voice-options).
"""

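One way to inspect the expanded voice set is `typing.get_args` on a Literal like the one above (reconstructed locally here for illustration, not imported from the SDK):

```python
from typing import Literal, get_args

# Local copy of the updated Literal for illustration.
Voice = Literal["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"]

SUPPORTED_VOICES = set(get_args(Voice))

# The three voices this release adds:
assert {"ash", "coral", "sage"} <= SUPPORTED_VOICES
assert len(SUPPORTED_VOICES) == 9
```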
@@ -20,9 +20,10 @@ class ConversationItemCreateEvent(BaseModel):
"""Optional client-generated ID used to identify this event."""

previous_item_id: Optional[str] = None
"""The ID of the preceding item after which the new item will be inserted.

If not set, the new item will be appended to the end of the conversation. If
set, it allows an item to be inserted mid-conversation. If the ID cannot be
found, an error will be returned and the item will not be added.
"""
The ID of the preceding item after which the new item will be inserted. If not
set, the new item will be appended to the end of the conversation. If set to
`root`, the new item will be added to the beginning of the conversation. If set
to an existing ID, it allows an item to be inserted mid-conversation. If the ID
cannot be found, an error will be returned and the item will not be added.
"""