Open Multi-Agent Canvas

Open Multi-Agent Canvas, created by CopilotKit, is an open-source multi-agent chat interface that lets you manage multiple agents in one dynamic conversation. It's built with Next.js, LangGraph, and CopilotKit to help with travel planning, research, and general-purpose tasks through MCP servers.

Existing Agents

Check out these awesome agents (they live in separate repositories). You can run them separately or deploy them on LangSmith:

Additionally, this project now includes built-in agents:

  • MCP Agent: A general-purpose agent capable of handling various tasks through configurable MCP servers.
  • Knowledge Agent: A specialized agent for visualizing, querying, and managing knowledge graphs with PostgreSQL database integration.

Copilot Cloud is required to run this project

You'll need a Copilot Cloud API key to run the frontend. Get one at cloud.copilotkit.ai.

Architecture Overview

The Multi-Agent Canvas consists of two main components:

  1. Frontend (Next.js):

    • Provides the user interface for interacting with agents
    • Manages agent state using CopilotKit
    • Communicates with the backend via Server-Sent Events (SSE)
  2. Backend (LangGraph):

    • Runs the agent workflows using LangGraph
    • Provides API endpoints for the frontend
    • Integrates with various tools and services
    • Includes a custom server with health and diagnostic endpoints

Server Implementation

The backend includes two server implementations:

  1. Custom Server (Recommended):

    • Enhanced FastAPI server with health endpoint
    • Improved error handling and diagnostics
    • Automatic fallback to simple health server
  2. Standard LangGraph Server:

    • Default LangGraph development server
    • Provides graph visualization and debugging tools

The start-all.ps1 script attempts to start the custom server first, and if that fails, it falls back to the standard server.
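To make the idea concrete, here is a minimal sketch of a FastAPI app exposing a /health endpoint on port 8124, in the spirit of the custom server. This is an illustration only; the repository's actual server adds more diagnostics and fallback logic.

# health_sketch.py -- minimal illustration only, not the repository's actual server
from fastapi import FastAPI
import uvicorn

app = FastAPI(title="KB Multi-Agent backend (sketch)")

@app.get("/health")
def health() -> dict:
    # The frontend polls this route to confirm the backend is up.
    return {"status": "ok"}

if __name__ == "__main__":
    # Port 8124 matches NEXT_PUBLIC_BACKEND_URL used elsewhere in this README.
    uvicorn.run(app, host="127.0.0.1", port=8124)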

Quick Start 🚀

1. Prerequisites

Make sure you have:

  • pnpm for frontend dependencies
  • Poetry for backend dependencies
  • Python (v3.10 or later)
  • Node.js (v18 or later)
  • Docker (optional, for containerized deployment)

2. API Keys

  • Copilot Cloud for the frontend
  • OpenAI API key for the backend (optional)

3. Redis Installation (Optional but Recommended)

For improved performance and caching, install Redis:

# Install Redis dependencies
./scripts/install-redis-deps.ps1  # or ./scripts/install-redis-deps.bat

This script will:

  • Install the Redis Python package
  • Install the OpenAI Agents SDK
  • Start a Redis container if Docker is installed

You can also run Redis manually:

# Using Docker
docker run --name kb-redis -p 6379:6379 -d redis

# Or install Redis directly on your system
# See https://redis.io/download for instructions
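If you want to confirm Redis is reachable before starting the backend, a quick check from Python (assuming the Redis package installed by the script above) looks like this:

# check_redis.py -- sketch: confirm the Redis container is reachable
import redis

# Matches the default REDIS_URL shown in the backend configuration later in this README.
client = redis.Redis.from_url("redis://localhost:6379/0")
print("Redis reachable:", client.ping())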

3a. Docker Setup (Alternative)

Alternatively, you can use Docker to run the entire application stack:

# Initialize Docker environment
./start-app.ps1 -Action init -Docker

# Start the application with Docker
./start-app.ps1 -Docker

This will start all components (frontend, backend, Redis, and database) in Docker containers.

For more information about using Docker, see DOCKER.md.

4. Starting the Application

We've simplified the application management to a single script that handles all operations:

# Start the application (standard mode)
./start-app.ps1

# Start with interactive mode (includes Supabase)
./start-app.ps1 -Interactive

# Force restart (when console windows aren't visible)
./start-app.ps1 -Force

# Stop all components
./start-app.ps1 -Action stop

# Restart specific component
./start-app.ps1 -Action restart -Component backend

# Check status
./start-app.ps1 -Action status -Detailed

This unified script will:

  • Check for required dependencies
  • Start the backend with a health endpoint
  • Start the frontend
  • Initialize the database (PostgreSQL or Supabase)
  • Verify the integration between all components

Parameters:

  • -Action: start, stop, restart, or status (default: start)
  • -Component: all, backend, frontend, or database (default: all)
  • -Interactive: Enable interactive mode with Supabase
  • -Force: Force restart, creating new console windows
  • -Detailed: Show detailed status information
  • -Docker: Use Docker for deployment

Examples:

# Basic operations
./start-app.ps1                                  # Start everything
./start-app.ps1 -Interactive                     # Start with Supabase integration
./start-app.ps1 -Action stop                     # Stop all components

# Force restart (creates new console windows)
./start-app.ps1 -Force                           # Force restart all components
./start-app.ps1 -Component backend -Force        # Force restart just the backend

# Component-specific operations
./start-app.ps1 -Component backend               # Start just the backend
./start-app.ps1 -Component frontend              # Start just the frontend
./start-app.ps1 -Component database              # Initialize the database
./start-app.ps1 -Action restart -Component backend # Restart the backend

# Status and maintenance
./start-app.ps1 -Action status -Detailed         # Detailed status check

# Docker operations
./start-app.ps1 -Docker                          # Start everything with Docker
./start-app.ps1 -Action stop -Docker             # Stop Docker containers
./start-app.ps1 -Action restart -Docker          # Restart Docker containers
./start-app.ps1 -Action status -Docker           # Check Docker container status
./start-app.ps1 -Action init -Docker             # Initialize Docker environment

Database Integration

The Knowledge Agent now includes database integration for persistent storage of knowledge graph data. You can choose between PostgreSQL or Supabase as your database backend.

Option 1: PostgreSQL (Default)

To set up PostgreSQL:

  1. Install the required dependencies:
./scripts/install-db-deps.ps1  # or ./scripts/install-db-deps.bat
  2. Initialize the database:
./scripts/manage-app.ps1 -Action start -Component database

The PostgreSQL connection is configured in the agent/.env file with the following variables:

DATABASE_BACKEND=postgres
DATABASE_URL='prisma+postgres://accelerate.prisma-data.net/?api_key=YOUR_API_KEY'
DIRECT_URL='postgres://username:password@hostname/database?sslmode=require'
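As a quick sanity check, you can connect to the direct Postgres URL from Python. The sketch below assumes the psycopg2 driver is installed in the agent environment and that DIRECT_URL is set in your environment:

# check_postgres.py -- sketch: verify the DIRECT_URL connection works
import os
import psycopg2

conn = psycopg2.connect(os.environ["DIRECT_URL"])  # direct (non-Accelerate) DSN from agent/.env
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()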

Option 2: Supabase

To set up Supabase:

  1. Create a Supabase project at supabase.com

  2. Install the required dependencies:

./scripts/install-supabase-deps.ps1  # or ./scripts/install-supabase-deps.bat
  3. Configure your environment variables in both frontend/.env and agent/.env:

    Frontend .env:

    NEXT_PUBLIC_SUPABASE_URL=https://your-project-id.supabase.co
    NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
    

    Backend .env:

    DATABASE_BACKEND=supabase
    SUPABASE_URL=https://your-project-id.supabase.co
    SUPABASE_SERVICE_KEY=your-service-key
    
  4. Initialize Supabase tables and default user (either method works):

./scripts/init-supabase.ps1                                      # Manual initialization
./scripts/manage-app.ps1 -Action start -Component supabase       # Using the management script
./start-app-interactive.ps1                                      # Using the interactive startup script
  5. Run the SQL script in scripts/update-supabase-tables.sql in the Supabase SQL Editor to ensure all tables are properly configured.

The initialization script will:

  • Check if all required tables exist with the correct structure
  • Create a default user for testing
  • Provide instructions for any missing tables or columns
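To confirm the service key can reach your project before starting the app, a short check with the supabase Python client (installed by the dependency script above) might look like the sketch below; the users table name is only an example of a table created by the init script.

# check_supabase.py -- sketch: confirm the backend credentials can reach your project
import os
from supabase import create_client

client = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])
rows = client.table("users").select("*").limit(1).execute()  # table name is illustrative
print(rows.data)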

For more detailed instructions, see STARTUP.md.

5. Manual Setup

Running the Frontend

Rename the example.env file in the frontend folder to .env:

NEXT_PUBLIC_COPILOT_CLOUD_API_KEY=...
NEXT_PUBLIC_BACKEND_URL=http://localhost:8124

Install dependencies:

cd frontend
pnpm i

Need a Copilot Cloud API key? Get one at cloud.copilotkit.ai.

Then, fire up the Next.js project:

pnpm run dev  # for development
# or
pnpm run build && pnpm run start  # for production

Agent Features

MCP Agent


The MCP Agent allows you to connect to various MCP-compatible servers and select different agent frameworks:

  1. Configuring Custom MCP Servers:

    • Click the "MCP Servers" button in the top right of the interface
    • Add servers via the configuration panel:
      • Standard IO: Run commands locally (e.g., Python scripts)
      • SSE: Connect to external MCP-compatible servers (via Server-Sent Events)
  2. Public MCP Servers:

  3. Framework Selection:

    • Click the settings icon in the MCP Agent interface
    • Select from available frameworks:
      • LangGraph: Default framework using LangGraph for agent workflows
      • OpenAI Agents: Alternative framework using OpenAI Agents SDK
      • Hybrid: Uses OpenAI Agents first, falls back to LangGraph if there's an error
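The Hybrid option is essentially a try/fall-back pattern. The sketch below illustrates that pattern only; run_with_openai_agents and run_with_langgraph are hypothetical stand-ins, not functions from this repository.

# hybrid_sketch.py -- illustration of the fallback pattern behind Hybrid mode
def run_with_openai_agents(prompt: str) -> str:
    # Hypothetical stand-in for the OpenAI Agents SDK path.
    raise RuntimeError("OpenAI Agents unavailable in this sketch")

def run_with_langgraph(prompt: str) -> str:
    # Hypothetical stand-in for the LangGraph path.
    return f"[langgraph] handled: {prompt}"

def run_hybrid(prompt: str) -> str:
    try:
        return run_with_openai_agents(prompt)      # preferred framework
    except Exception as err:                       # any failure triggers the fallback
        print(f"OpenAI Agents failed ({err}); falling back to LangGraph")
        return run_with_langgraph(prompt)

print(run_hybrid("plan a weekend in Lisbon"))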

Knowledge Agent

The Knowledge Agent provides a powerful interface for working with knowledge graphs:

  1. Interactive Graph Visualization:

    • Visualize entities and relationships in an interactive force-directed graph
    • Zoom, pan, and explore the knowledge structure visually
    • Filter by entity types and relationship types
  2. Entity Management:

    • Add new entities to the knowledge graph
    • Edit existing entities and their properties
    • View detailed information about entities
  3. Knowledge Querying:

    • Search for entities and concepts in the knowledge graph
    • Find relationships between entities
    • Explore the knowledge structure through natural language queries
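For orientation, the sketch below shows one hypothetical way entities and relationships could be represented. The actual schema is defined by the agent code and the database tables, so treat the field names as illustrative only.

# kg_sketch.py -- hypothetical entity/relationship shapes (illustration only)
from dataclasses import dataclass, field

@dataclass
class Entity:
    id: str
    name: str
    entity_type: str                       # supports filtering by entity type
    properties: dict = field(default_factory=dict)

@dataclass
class Relationship:
    source_id: str
    target_id: str
    relation_type: str                     # supports filtering by relationship type

entities = [Entity("e1", "LangGraph", "framework"), Entity("e2", "CopilotKit", "library")]
relationships = [Relationship("e1", "e2", "integrates_with")]
print(len(entities), "entities,", len(relationships), "relationships")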

Running the MCP Agent Backend

Rename the example.env file in the agent folder to .env:

OPENAI_API_KEY=...
LANGSMITH_API_KEY=...

# Framework Configuration
# Options: langgraph, openai_agents, hybrid
FRAMEWORK=langgraph
DEFAULT_MODEL=gpt-4o

# Optional Logfire configuration
LOGFIRE_PROJECT=kb-multi-agent
LOGFIRE_TOKEN=...
LOGGING_ENABLED=true

# Redis Configuration
REDIS_ENABLED=true
REDIS_URL=redis://localhost:6379/0

# Database Configuration
DATABASE_BACKEND=supabase
SUPABASE_URL=...
SUPABASE_SERVICE_KEY=...

If you want to use the included MCP Agent with the built-in math and knowledge servers:

cd agent
poetry install

# Standard server (no health endpoint)
poetry run demo

# OR use the improved server with health endpoint (recommended)
poetry run custom-server

The custom server provides a health endpoint at http://localhost:8124/health that the frontend can use to verify the backend is running. It also includes additional diagnostic endpoints like /routes and a root endpoint with server information.
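A quick way to exercise these endpoints from Python (assuming the requests package is available and the endpoints return JSON) is:

# check_backend.py -- sketch: poll the diagnostic endpoints described above
import requests

BASE = "http://localhost:8124"
for path in ("/health", "/routes", "/"):
    resp = requests.get(f"{BASE}{path}", timeout=5)
    print(path, resp.status_code, resp.json())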

Enhanced Server Features

  1. Robust Error Handling: The custom server includes comprehensive error handling and fallback mechanisms to ensure reliability.
  2. Diagnostic Endpoints:
    • /health: Check if the server is running
    • /routes: List all available API routes
    • /: Root endpoint with server information
    • /config/mode: Set the framework mode (langgraph, openai_agents, hybrid)
  3. Automatic Fallback: If the custom server fails to start, it automatically falls back to a simple health server to maintain basic functionality.
  4. Framework Selection: The server supports multiple agent frameworks:
    • LangGraph: Default framework using LangGraph for agent workflows
    • OpenAI Agents: Alternative framework using OpenAI Agents SDK
    • Hybrid: Uses OpenAI Agents first, falls back to LangGraph if there's an error

Running a tunnel

Open another terminal and select Remote Endpoint, then select Local Development. Once this is done, copy the generated command into your terminal and change the port to match the LangGraph server (8124).

Testing the Backend

Test scripts are included to verify that the backend is working correctly:

# Test basic server functionality
cd agent
python test_server.py

# Test framework integration
python test_framework_integration.py

# Test with a specific framework
python test_server.py --framework openai_agents

You can also use the PowerShell script to test the framework integration:

./scripts/test-framework.ps1

These scripts test the health endpoint, routes endpoint, root endpoint, and framework selection to ensure the server is functioning properly.

Backend Integrations

The backend includes several modular integrations that can be configured via environment variables:

  1. Memory Management:

    • mem0: Integration with mem0 for persistent memory storage
    • memorysaver: Default in-memory storage using LangGraph's MemorySaver
  2. LLM Integration:

    • litellm: Unified interface for various LLM providers through LiteLLM
  3. Agent-to-Agent Communication:

    • inmemory: In-process communication between agents
  4. Knowledge Graph:

    • graphiti: Integration with knowledge graph services
  5. Logging and Tracing:

    • logfire: Logging and tracing via Logfire, configured with the LOGFIRE_PROJECT, LOGFIRE_TOKEN, and LOGGING_ENABLED environment variables
  6. Redis Integration:

    • Caching for LLM responses to improve performance and reduce API costs (see the caching sketch after this list)
    • Session management for persistent user sessions
    • Rate limiting for API endpoints
    • Pub/Sub for real-time communication between components
    • See Redis Integration for detailed setup instructions
  7. OpenAI Agents SDK Integration:

    • Full integration with OpenAI's official Agents SDK
    • Support for advanced features like tracing, voice, and parallel execution
    • Team support for creating and managing agent teams
    • Tool conversion between LangChain and OpenAI Agents SDK
    • See OpenAI Agents SDK Integration for detailed setup instructions
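As a rough illustration of the caching described in item 6 above, the sketch below stores LLM responses in Redis keyed by a hash of the prompt. call_llm is a hypothetical stand-in, and the project's real cache may use different keys and TTLs.

# llm_cache_sketch.py -- simplified Redis cache for LLM responses (illustration only)
import hashlib
import redis

client = redis.Redis.from_url("redis://localhost:6379/0")

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call (e.g. routed through LiteLLM).
    return f"echo: {prompt}"

def cached_completion(prompt: str, ttl_seconds: int = 3600) -> str:
    key = "llmcache:" + hashlib.sha256(prompt.encode()).hexdigest()
    cached = client.get(key)
    if cached is not None:
        return cached.decode()                 # cache hit: skip the LLM call
    result = call_llm(prompt)
    client.setex(key, ttl_seconds, result)     # cache miss: store with an expiry
    return result

print(cached_completion("What is a knowledge graph?"))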

Troubleshooting

Quick Fix Scripts

We've created several scripts to help diagnose and fix common issues:

  1. Simple Start Script:

    ./scripts/simple-start.ps1

    This script starts both the backend and frontend in minimal mode, bypassing any complex initialization.

  2. Backend Diagnostic Script:

    ./scripts/diagnose-backend.ps1

    Checks for common backend issues and provides detailed diagnostics.

  3. Frontend Fix Script:

    ./scripts/fix-frontend.ps1

    Clears Next.js cache and rebuilds the frontend to fix chunk loading errors.

  4. Poetry Permissions Fix Script:

    ./scripts/fix-poetry-permissions.ps1

    Resolves permission issues with Poetry's cache directory by configuring Poetry to use a local cache directory in the project.

Port Conflicts

If you encounter port conflicts when starting the backend server, you may see an error like:

ERROR: [Errno 10048] error while attempting to bind on address ('127.0.0.1', 8124): [winerror 10048] only one usage of each socket address is allowed

To resolve this:

  1. Find the process using the port:

    netstat -ano | findstr :8124
  2. Terminate the process:

    taskkill /F /PID <process_id>
  3. Alternatively, use our fix script:

    ./scripts/fix-backend-offline.ps1

Backend Connection Issues

If the frontend cannot connect to the backend:

  1. Verify the backend is running:

    curl http://localhost:8124/health
  2. Check that the frontend's .env file has the correct backend URL:

    NEXT_PUBLIC_BACKEND_URL=http://localhost:8124
  3. Run the test script to verify the backend is working correctly:

    cd agent && python test_server.py
  4. If the backend still won't start, try running the health server directly:

    cd agent && poetry run python -m mcp_agent.health_server

Frontend Issues

Chunk Loading Errors

If you encounter chunk loading errors in the frontend, you may see errors like:

ChunkLoadError: Loading chunk [...] failed

To fix this:

  1. Clear the Next.js cache:

    cd frontend
    rm -rf .next
    rm -rf node_modules/.cache
  2. Rebuild the application:

    pnpm install
    pnpm run build
    pnpm run dev
  3. Alternatively, use our fix script:

    ./scripts/fix-frontend.ps1

Next.js Server Component Errors

If you see errors related to Server Components, such as:

Error: `ssr: false` is not allowed with `next/dynamic` in Server Components

This is because Next.js App Router uses Server Components by default. To fix:

  1. Move dynamic imports with ssr: false to Client Components
  2. Add the "use client" directive to components that use browser-specific features
  3. Create client component wrappers for components that need to be used in Server Components

Poetry Permission Issues

If you encounter permission errors when running poetry install, you may see an error like:

[Errno 13] Permission denied: 'C:\\Users\\username\\AppData\\Local\\pypoetry\\Cache\\cache\\repositories\\PyPI\\_http\\...'

This is typically caused by permission issues with Poetry's cache directory. To fix this:

  1. Run our fix script:

    ./scripts/fix-poetry-permissions.ps1
  2. This script will:

    • Configure Poetry to use a local cache directory in the project
    • Clear any existing cache
    • Run the installation with the --no-cache flag
  3. If you still encounter issues, try running PowerShell as an administrator.

Offline Mode

The application is designed to work in offline mode when Supabase or other external services are not available:

  1. The backend will run in fallback mode with limited functionality
  2. The frontend will use localStorage for temporary storage
  3. Basic chat and agent functionality will still work

To start in offline mode:

./scripts/simple-start.ps1

Documentation

License

Distributed under the MIT License. See LICENSE for more info.
