Add a simple Gradio UI for Open Deep Research #525
base: main
Conversation
@@ -455,7 +454,7 @@ def planning_step(self, task, is_first_step: bool, step: int) -> None:
         """
         if is_first_step:
             message_prompt_facts = {
-                "role": MessageRole.SYSTEM,
+                "role": MessageRole.USER,
Won't this actually break other models?..
OK I did some searching and it seems like there was a change recently that broke Gemini compatibility:
The above change makes the request to model() not include any USER message, as it moves the task description into a SYSTEM message. This change could have been made for a) increasing accuracy or b) simplifying the prompting implementation. In any case, the fact that some models do not support being sent a single system message was probably unknown and understandably overlooked. I can revert my change, revert the above change, or wait for instructions.
Regarding breaking compatibility, I know most models (if not all) support being sent a single user message. The opposite is not true: at least Gemini models do not yet support this.
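For context, here is a minimal LiteLLM sketch of the incompatibility under discussion; the model id and prompt text are illustrative placeholders, not taken from the PR:

```python
import litellm  # assumes litellm is installed and a Gemini API key is configured

prompt = "List what you know about the task."  # placeholder prompt

# A request whose only message is a SYSTEM message is rejected by Gemini,
# which complains that "contents" is empty:
# litellm.completion(
#     model="gemini/gemini-2.0-flash",
#     messages=[{"role": "system", "content": prompt}],
# )

# Sending the same text as a single USER message works across providers:
response = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```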
src/smolagents/models.py (Outdated)
@@ -686,6 +691,12 @@ def __call__(
     ) -> ChatMessage:
         import litellm

+        # IMPORTANT - Set this to TRUE to add the function to the prompt for Non OpenAI LLMs
+        if litellm.supports_function_calling(model=self.model_id) == True:
if litellm.supports_function_calling(model=self.model_id):
or, if you want to be very specific:
if litellm.supports_function_calling(model=self.model_id) is True:
I'm new to this project so I'm not sure if some idioms are acceptable or not.
What do you think of:
litellm.add_function_to_prompt = not litellm.supports_function_calling(model=self.model_id)
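For what it's worth, here is a standalone sketch of how that one-liner behaves; the model id is a placeholder:

```python
import litellm

model_id = "gemini/gemini-2.0-flash"  # placeholder; any LiteLLM model id works here

# Inline the function schema into the prompt only for models without native
# function calling; models that do support it keep native tool calls.
litellm.add_function_to_prompt = not litellm.supports_function_calling(model=model_id)
```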
Hi,
This PR makes a few changes to get Open Deep Research working smoothly with Gemini 2.0 Flash through LiteLLM, and sets up a basic Gradio demo to show it off. I focused on Gemini 2.0 Flash specifically because it's a cheap way to experiment with the library.
Here's a breakdown of what I did:
Gemini 2.0 Flash has a couple of quirks that needed addressing:
- Changed the planning_step method in src/smolagents/agents.py to send the initial_facts prompt as a user message instead of a system message. This resolves an error where Gemini would complain about contents being empty.
- litellm.add_function_to_prompt is now set to True or False depending on the selected model.
- Guarded model_output, which sometimes returns None and causes an error; the code now returns early in this case (sketched below).
- Used getattr to avoid errors when last_input_token_count is None (sketched below).
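A rough sketch of the last two guards, with names following the description above; the actual call sites in the PR may differ:

```python
def call_model_safely(model, messages):
    """Illustrative guard, not the actual smolagents code."""
    model_output = model(messages)
    if model_output is None:
        # The backend occasionally returns None; return early instead of crashing.
        return None, 0

    # last_input_token_count may be missing or None on some model classes.
    input_tokens = getattr(model, "last_input_token_count", None) or 0
    return model_output, input_tokens
```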
Tool Calling Fixes: The changes make sure LiteLLM can create a valid tool-calling request for Gemini, avoiding an empty parameters dict.
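Roughly the kind of payload this concerns; the tool name and schema below are made up for illustration, the point being that the function declaration sent to Gemini should not carry an empty parameters object:

```python
# Hypothetical tool declaration in the OpenAI/LiteLLM "tools" format.
tool_for_gemini = {
    "type": "function",
    "function": {
        "name": "web_search",  # example name, not necessarily the PR's tool
        "description": "Search the web and return a short summary.",
        "parameters": {  # a minimal but valid JSON schema instead of an empty dict
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query."},
            },
            "required": ["query"],
        },
    },
}
```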
Gradio Demo App: I've included a simple Gradio application.
Peek.2025-02-06.18-50.mp4
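For anyone who has not used Gradio before, the demo boils down to something like this sketch; the agent call is a placeholder, and the real wiring lives in the PR:

```python
import gradio as gr

def run_research(question: str) -> str:
    # Placeholder: the real app would call the Open Deep Research agent here,
    # e.g. an agent backed by Gemini 2.0 Flash through LiteLLM.
    return f"(agent answer for: {question})"

demo = gr.Interface(
    fn=run_research,
    inputs=gr.Textbox(label="Research question"),
    outputs=gr.Textbox(label="Answer"),
    title="Open Deep Research",
)

if __name__ == "__main__":
    demo.launch()
```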
Updated code_agent.yaml and toolcalling_agent.yaml to improve accuracy when calling managed agents.
Blocked by this PR on LiteLLM, but one can do export LITELLM_LOCAL_MODEL_COST_MAP=True and set the config locally.
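If you prefer to set it from Python rather than the shell, something like this should work, assuming LiteLLM reads the variable when it loads its cost map:

```python
import os

# Ask LiteLLM to use its bundled model-cost map instead of fetching it remotely.
os.environ["LITELLM_LOCAL_MODEL_COST_MAP"] = "True"

import litellm  # set the variable before importing so it is seen at load time
```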