
[Draft] add MCP notebooks #2970


Merged: 15 commits, Jun 16, 2025
6 changes: 6 additions & 0 deletions .ci/skipped_notebooks.yml
@@ -556,3 +556,9 @@
        - macos-13
        - ubuntu-22.04
        - windows-2019
- notebook: notebooks/llm-agent-mcp/llm-agent-mcp.ipynb
  skips:
    - os:
        - macos-13
        - ubuntu-22.04
        - windows-2019
1 change: 1 addition & 0 deletions .ci/spellcheck/.pyspelling.wordlist.txt
@@ -549,6 +549,7 @@ matplotlib
MathVista
MatMul
MBs
MCP
md
MediaPipe
medprob
33 changes: 33 additions & 0 deletions notebooks/llm-agent-mcp/README.md
@@ -0,0 +1,33 @@
# Create MCP Agent using OpenVINO and Qwen-Agent

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:

- A growing list of pre-built integrations that your LLM can directly plug into
- The flexibility to switch between LLM providers and vendors
- Best practices for securing your data within your infrastructure
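
For intuition, the sketch below shows what a minimal MCP server can look like using the official [`mcp` Python SDK](https://github.com/modelcontextprotocol/python-sdk). The server name `demo-tools` and the `add` tool are made up for illustration; they are not part of this notebook.

```python
# Minimal, illustrative MCP server: it exposes one tool over stdio so that an
# MCP client (such as an agent) can spawn the process and call the tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")  # hypothetical server name


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


if __name__ == "__main__":
    mcp.run(transport="stdio")
```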

![Image](https://github.com/user-attachments/assets/dfe1aa42-cae9-4356-be81-f010462d78a8)

[Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) is a framework for developing LLM applications based on the instruction following, tool usage, planning, and memory capabilities of Qwen. It also comes with example applications such as Browser Assistant, Code Interpreter, and Custom Assistant.

This notebook explores how to create an MCP Agent step by step using OpenVINO and Qwen-Agent.
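
At a high level, the agent wires these two pieces together: Qwen-Agent hosts the model served through OpenVINO and connects it to one or more MCP servers. The rough sketch below illustrates the idea; the model directory, configuration keys, and the `time` MCP server entry are assumptions for illustration, not code taken from the notebook.

```python
# Illustrative wiring of an OpenVINO-served Qwen model to an MCP server
# through Qwen-Agent; the names and paths below are placeholders.
from qwen_agent.agents import Assistant

llm_cfg = {
    "ov_model_dir": "Qwen2.5-7B-int4-ov",  # hypothetical converted-model directory
    "model_type": "openvino",              # Qwen-Agent's OpenVINO LLM backend
    "device": "CPU",
}

tools = [
    {
        "mcpServers": {
            # Each entry describes how the agent spawns an MCP server; the
            # server's tools are then exposed to the model.
            "time": {"command": "uvx", "args": ["mcp-server-time"]},
        }
    }
]

bot = Assistant(llm=llm_cfg, function_list=tools, name="OpenVINO MCP Agent")
```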

### Notebook Contents

The tutorial consists of the following steps:

- Install prerequisites
- Download and convert the model from a public source using the [OpenVINO integration with Hugging Face Optimum](https://huggingface.co/blog/openvino)
- Compress model weights to INT4 or INT8 precision using [NNCF](https://github.com/openvinotoolkit/nncf) (see the sketch after this list)
- Create an Agent
- Interactive Demo
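
The conversion and compression steps can be sketched with the Optimum API; the model ID and 4-bit settings below are illustrative defaults, not the exact values used in the notebook.

```python
# Download a model from the Hugging Face Hub, convert it to OpenVINO IR, and
# compress its weights to INT4 with NNCF via Optimum; values are illustrative.
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

quant_config = OVWeightQuantizationConfig(bits=4)  # NNCF INT4 weight compression
model = OVModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",  # hypothetical model ID
    export=True,                 # convert from PyTorch to OpenVINO IR on load
    quantization_config=quant_config,
)
model.save_pretrained("qwen2.5-7b-int4-ov")
```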


## Installation Instructions

This is a self-contained example that relies solely on its own code.<br/>
We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.
For details, please refer to [Installation Guide](../../README.md).
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=5b5a4db0-7875-4bfb-bdbd-01698b5b1a77&file=notebooks/llm-agent-mcp/README.md" />
141 changes: 141 additions & 0 deletions notebooks/llm-agent-mcp/gradio_helper.py
@@ -0,0 +1,141 @@
import os
from typing import List, Optional

from qwen_agent.gui.utils import convert_history_to_chatbot
from qwen_agent.llm.schema import Message
from qwen_agent.gui import WebUI


class OpenVINOUI(WebUI):
    """Qwen-Agent WebUI variant that adds a microphone audio input to the chat form."""
    def run(
        self,
        messages: Optional[List[Message]] = None,
        share: bool = False,
        server_name: Optional[str] = None,
        server_port: Optional[int] = None,
        concurrency_limit: int = 10,
        enable_mention: bool = False,
        **kwargs,
    ):
        self.run_kwargs = kwargs

from qwen_agent.gui.gradio_dep import gr, mgr, ms

customTheme = gr.themes.Default(
primary_hue=gr.themes.utils.colors.blue,
radius_size=gr.themes.utils.sizes.radius_none,
)

with gr.Blocks(
css=os.path.join(os.path.dirname(__file__), "assets/appBot.css"),
theme=customTheme,
) as demo:
history = gr.State([])
with ms.Application():
with gr.Row(elem_classes="container"):
with gr.Column(scale=4):
chatbot = mgr.Chatbot(
value=convert_history_to_chatbot(messages=messages),
avatar_images=[
self.user_config,
self.agent_config_list,
],
height=850,
avatar_image_width=80,
flushing=False,
show_copy_button=True,
latex_delimiters=[
{"left": "\\(", "right": "\\)", "display": True},
{"left": "\\begin{equation}", "right": "\\end{equation}", "display": True},
{"left": "\\begin{align}", "right": "\\end{align}", "display": True},
{"left": "\\begin{alignat}", "right": "\\end{alignat}", "display": True},
{"left": "\\begin{gather}", "right": "\\end{gather}", "display": True},
{"left": "\\begin{CD}", "right": "\\end{CD}", "display": True},
{"left": "\\[", "right": "\\]", "display": True},
],
)

input = mgr.MultimodalInput(
placeholder=self.input_placeholder,
)
                        # Microphone capture; the recorded file path is passed to add_text.
                        audio_input = gr.Audio(sources=["microphone"], type="filepath")

with gr.Column(scale=1):
if len(self.agent_list) > 1:
agent_selector = gr.Dropdown(
[(agent.name, i) for i, agent in enumerate(self.agent_list)],
label="Agents",
info="Select an Agent",
value=0,
interactive=True,
)

agent_info_block = self._create_agent_info_block()

agent_plugins_block = self._create_agent_plugins_block()

if self.prompt_suggestions:
gr.Examples(
label="Example",
examples=self.prompt_suggestions,
inputs=[input],
)

if len(self.agent_list) > 1:
agent_selector.change(
fn=self.change_agent,
inputs=[agent_selector],
outputs=[agent_selector, agent_info_block, agent_plugins_block],
queue=False,
)

input_promise = input.submit(
fn=self.add_text,
inputs=[input, audio_input, chatbot, history],
outputs=[input, audio_input, chatbot, history],
queue=False,
)

if len(self.agent_list) > 1 and enable_mention:
input_promise = input_promise.then(
self.add_mention,
[chatbot, agent_selector],
[chatbot, agent_selector],
).then(
self.agent_run,
[chatbot, history, agent_selector],
[chatbot, history, agent_selector],
)
else:
input_promise = input_promise.then(
self.agent_run,
[chatbot, history],
[chatbot, history],
)

input_promise.then(self.flushed, None, [input])

demo.load(None)

demo.queue(default_concurrency_limit=concurrency_limit).launch(share=share, server_name=server_name, server_port=server_port)

def _create_agent_plugins_block(self, agent_index=0):
from qwen_agent.gui.gradio_dep import gr

agent_interactive = self.agent_list[agent_index]

if agent_interactive.function_map:
capabilities = [key for key in agent_interactive.function_map.keys()]
return gr.CheckboxGroup(
label="Plugins",
value=capabilities,
choices=capabilities,
interactive=False,
)

else:
return gr.CheckboxGroup(
label="Plugins",
value=[],
choices=[],
interactive=False,
)
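
For context, a hypothetical launch of this helper might look like the following, assuming `bot` is a Qwen-Agent agent such as the `Assistant` built in the notebook; the host, port, and suggestion text are placeholders.

```python
# Hypothetical usage of OpenVINOUI; "bot" is an agent constructed elsewhere.
from gradio_helper import OpenVINOUI

ui = OpenVINOUI(bot, chatbot_config={"prompt.suggestions": ["What time is it?"]})
ui.run(server_name="127.0.0.1", server_port=7860)
```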