
[BUG] "Error in generating tool call with model: 'NoneType' object is not subscriptable" when using FinalAnswerTool #404

Open
Sonson6 opened this issue Jan 28, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@Sonson6

Sonson6 commented Jan 28, 2025

Describe the bug
I am trying to reproduce the agentic RAG cookbook example (https://github.com/huggingface/cookbook/blob/main/notebooks/en/agent_rag.ipynb), but I noticed that the FinalAnswerTool was almost always throwing an error. I tried with:

  • gpt4o (AzureOpenAI)
  • mistral-large-2407 (OpenAI-compatible API)

I also replaced ToolCallingAgent with CodeAgent; the error log is different, but it still fails most of the time (I would say about 2/3 of the time), and even when it works the agent needs multiple steps to get an answer.

I decided to remove every RAG-related component from my pipeline and kept only the FinalAnswerTool in my agent, to make sure the RAG logic has nothing to do with this. The error remains the same.

Code to reproduce the error

from typing import Dict, List, Optional

from openai import OpenAI

from smolagents import ChatMessage, Model, Tool, ToolCallingAgent
from smolagents.models import parse_tool_args_if_needed

# CUSTOMIZE SERVER OBJECT
class CustomOpenAIServerModel(Model):
    """This model connects to an OpenAI-compatible API server.
 
    Parameters:
        model_id (`str`):
            The model identifier to use on the server (e.g. "gpt-3.5-turbo").
        api_base (`str`, *optional*):
            The base URL of the OpenAI-compatible API server.
        api_key (`str`, *optional*):
            The API key to use for authentication.
        custom_role_conversions (`dict[str, str]`, *optional*):
            Custom role conversion mapping to convert message roles in others.
            Useful for specific models that do not support specific message roles like "system".
        **kwargs:
            Additional keyword arguments to pass to the OpenAI API.
    """
 
    def __init__(
        self,
        model_id: str,
        client: OpenAI,
        custom_role_conversions: Optional[Dict[str, str]] = None,
        **kwargs,
    ):
        try:
            import openai
        except ModuleNotFoundError:
            raise ModuleNotFoundError(
                "Please install 'openai' extra to use OpenAIServerModel: `pip install 'smolagents[openai]'`"
            ) from None
 
        super().__init__(**kwargs)
        self.model_id = model_id
        self.client = client
        self.custom_role_conversions = custom_role_conversions
 
    def __call__(
        self,
        messages: List[Dict[str, str]],
        stop_sequences: Optional[List[str]] = None,
        grammar: Optional[str] = None,
        tools_to_call_from: Optional[List[Tool]] = None,
        **kwargs,
    ) -> ChatMessage:
        completion_kwargs = self._prepare_completion_kwargs(
            messages=messages,
            stop_sequences=stop_sequences,
            grammar=grammar,
            tools_to_call_from=tools_to_call_from,
            model=self.model_id,
            custom_role_conversions=self.custom_role_conversions,
            convert_images_to_image_urls=True,
            **kwargs,
        )
 
        response = self.client.chat.completions.create(**completion_kwargs)
        self.last_input_token_count = response.usage.prompt_tokens
        self.last_output_token_count = response.usage.completion_tokens
 
        message = ChatMessage.from_dict(
            response.choices[0].message.model_dump(include={"role", "content", "tool_calls"})
        )
        if tools_to_call_from is not None:
            return parse_tool_args_if_needed(message)
        return message
 
client = OpenAI(base_url=my_base_url, api_key="fake_key", organization="my_orga_id")

model = CustomOpenAIServerModel(model_id="mistral-large", client = client)
 
agent = ToolCallingAgent(
    tools=[], model=model
)
 
agent_output = agent.run("Qui a construit la tour Eiffel ?")  # "Who built the Eiffel Tower?"

Error logs (if any)

The error is:

Error in generating tool call with model: 'NoneType' object is not subscriptable
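
My best guess (not verified against the smolagents source) is that the server sometimes returns a chat message whose tool_calls field is None, and the tool-calling path then indexes into it. As a sketch only, a guard at the end of the custom __call__ above would at least surface the raw model output instead of the opaque error:

        if tools_to_call_from is not None:
            # Hypothetical guard (not part of smolagents): fail with a readable
            # message when the model did not emit any tool call.
            if message.tool_calls is None:
                raise ValueError(
                    f"Model '{self.model_id}' returned no tool call. Raw content: {message.content!r}"
                )
            return parse_tool_args_if_needed(message)
        return message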

Expected behavior
I expect the question to be answered easily, as it is a simple question and the FinalAnswerTool is provided by default by the smolagents package.
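
If I understand correctly, the agent registers the default final_answer tool even when tools=[] is passed; a quick sanity check (assuming agent.tools is exposed as a dict keyed by tool name) is:

print(agent.tools.keys())  # expected to include 'final_answer'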

Package versions:
smolagents 1.5.1

Additional context
My previous issue (#381) was closed and I can't reopen it. However, my problem was not really related to the OpenAIServerModel class, as I had already found a workaround by customizing it.
My issue is with ToolCallingAgent (and also CodeAgent, which I tried following your suggestion): I am still getting the error roughly 50% of the time, even though CodeAgent works more often than ToolCallingAgent (see the sketch below).
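
For reference, the CodeAgent variant is a drop-in swap on the same custom model, roughly:

from smolagents import CodeAgent

# Same CustomOpenAIServerModel instance as in the repro above.
code_agent = CodeAgent(tools=[], model=model)
code_agent_output = code_agent.run("Qui a construit la tour Eiffel ?")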

I can provide more information if needed.

@Sonson6 Sonson6 added the bug Something isn't working label Jan 28, 2025
@Sonson6 Sonson6 changed the title [BUG] [BUG] "Error in generating tool call with model: 'NoneType' object is not subscriptable" when using FinalAnswerTool Jan 28, 2025
@seanw7

seanw7 commented Feb 5, 2025

I appear to be running into the same issue, and it's not entirely clear how to resolve or troubleshoot it. Attaching a few screenshots of what it looks like in Arize Phoenix.

[Two screenshots from Arize Phoenix attached]
