Allow client_tools to be defined only once #142

Open · wants to merge 1 commit into main
Conversation

MichaelClifford

What does this PR do?

This PR addresses an issue I noticed where client_tools has to be declared twice, in two different ways, in order to work properly: once in the AgentConfig with something like tool.get_tool_definition(), and again in the Agent itself. See the example below.

agent_config = AgentConfig(
    client_tools=[tool.get_tool_definition() for tool in client_tools],
    ...
)

agent = Agent(
    client=client,
    agent_config=agent_config,
    client_tools=client_tools,
)

This PR updates the Agent class's initialization to set agent_config["client_tools"] from the Agent's own client_tools parameter, so the user only needs to declare client_tools once and no longer has to write the .get_tool_definition() list comprehension.
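
For reference, here is a minimal sketch of the kind of change, assuming Agent.__init__ takes client, agent_config, and client_tools as in the snippets above. The names and structure are illustrative assumptions, not the actual diff:

# Hypothetical sketch; the actual implementation in this PR may differ.
class Agent:
    def __init__(self, client, agent_config, client_tools=()):
        # Populate agent_config["client_tools"] from the client_tools
        # parameter, so callers no longer have to set it themselves.
        if client_tools:
            agent_config["client_tools"] = [
                tool.get_tool_definition() for tool in client_tools
            ]
        self.client = client
        self.agent_config = agent_config
        self.client_tools = client_tools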

Test Plan

I've confirmed that these code changes work as expected using the llamastack/distribution-ollama:latest image as the local Llama Stack server. You can run the code snippet below to verify.

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.client_tool import client_tool
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.types.agent_create_params import AgentConfig


client = LlamaStackClient(base_url="http://localhost:8321")

@client_tool
def torchtune(query: str = "torchtune"):
    """
    Answer information about torchtune.

    :param query: The query to use for querying the internet
    :returns: Information about torchtune
    """
    dummy_response = """
            torchtune is a PyTorch library for easily authoring, finetuning and experimenting with LLMs.

            torchtune provides:

            PyTorch implementations of popular LLMs from Llama, Gemma, Mistral, Phi, and Qwen model families
            Hackable training recipes for full finetuning, LoRA, QLoRA, DPO, PPO, QAT, knowledge distillation, and more
            Out-of-the-box memory efficiency, performance improvements, and scaling with the latest PyTorch APIs
            YAML configs for easily configuring training, evaluation, quantization or inference recipes
            Built-in support for many popular dataset formats and prompt templates
    """
    return dummy_response

agent_config = AgentConfig(
    model="meta-llama/Llama-3.1-8B-Instruct",
    enable_session_persistence=False,
    instructions="You are a helpful assistant.",
    tool_choice="auto",
    tool_prompt_format="json",
)

agent = Agent(
    client=client,
    agent_config=agent_config,
    client_tools=[torchtune],
)

session_id = agent.create_session("test")
response = agent.create_turn(
    messages=[{"role": "user", "content": "What is torchtune?"}],
    session_id=session_id,
)

for r in EventLogger().log(response):
    r.print()

You should see output like the one below, in which the CustomTool has been called correctly.

inference> {"type": "function", "name": "torchtune", "parameters": {"query": "What is torchtune?"}}
CustomTool> "\n            torchtune is a PyTorch library for easily authoring, finetuning and experimenting with LLMs.\n\n            torchtune provides:\n\n            PyTorch implementations of popular LLMs from Llama, Gemma, Mistral, Phi, and Qwen model families\n            Hackable training recipes for full finetuning, LoRA, QLoRA, DPO, PPO, QAT, knowledge distillation, and more\n            Out-of-the-box memory efficiency, performance improvements, and scaling with the latest PyTorch APIs\n            YAML configs for easily configuring training, evaluation, quantization or inference recipes\n            Built-in support for many popular dataset formats and prompt templates\n    "
inference> This response is based on the provided function `torchtune` which returns information about torchtune.

@yanxi0830
Contributor

Thanks! LGTM to improve SDK ergonomics.
