Add a configuration option to make callback logging synchronous #8202
base: main
Conversation
Signed-off-by: B-Step62 <[email protected]>
Signed-off-by: B-Step62 <[email protected]>
Force-pushed from ab616dc to f356096
target=self.run_success_logging_and_cache_storage,
args=(response, cache_hit),
).start()  # log response
if litellm.sync_logging:
having so many if/else blocks re: logging in the codebase can lead to bugs
can we use a more general pattern / function here which can ensure consistent behaviour? @B-Step62 @ishaan-jaff
@krrishdholakia Sure, I can pull this combo out into a shared utility function. Is that what you suggested?
@krrishdholakia I've updated the PR to encapsulate the conditional logic into a single place. Would you mind taking another look? Thank you in advance.
Signed-off-by: B-Step62 <[email protected]>
complete_streaming_response, None, None, cache_hit
)
else:
executor.submit(
note: I don't have clear context for why the ThreadPoolExecutor is used in some places only, so I did not try to remove this if/else and kept the behavior the same.
Title
LiteLLM executes success handlers in a background thread. This is generally preferred to avoid overhead in the main application; however, we sometimes want to invoke callbacks synchronously.
One example is debugging with inline trace rendering. MLflow supports rendering trace objects directly in Jupyter notebooks (ref). In that case, the trace must be completed by the end of the cell, so a non-blocking callback does not work well.
This PR adds a configuration option
litellm.sync_logging
to make callback execution blocking for such debugging purposes. The default behavior remains the same (non-blocking), so there is no downside. Other tools such as LangChain and LlamaIndex let developers control this as well.
Type
🆕 New Feature
Changes
[REQUIRED] Testing - Attach a screenshot of any new tests passing locally
Sync call
Sync streaming
Async call
Async streaming