A beautiful, interactive chat bridge that connects two AI assistants through a retro Windows 95-style web GUI and an enhanced CLI experience! Watch AI assistants converse with real-time streaming, facial expressions, comprehensive transcripts, and SQLite storage.

- Intuitive 4-Step Setup: Persona → Provider → Settings → Start flow
- Real-Time Streaming: Watch AI responses appear live as they're generated
- Retro Computing Aesthetic: Classic retro-inspired interface with beveled buttons, classic color schemes, and throwback design
- Immersive Experience: Window-like interface, scrollbars, and vintage computer styling
- Dual Provider Selection: Choose any combination of AI providers (OpenAI, Anthropic, Gemini, OpenRouter, etc.)
- Advanced Controls: Adjustable max rounds and per-agent temperature settings
- Instant Conversations: Click to launch WebSocket-streaming AI dialogues
- Modern Interface: React + TypeScript + Tailwind CSS for a smooth, professional experience
- Modal Persona Selection: Interactive persona selection with descriptions and previews
- Live Status: Real-time connection indicators and typing animations
- Classic Color Scheme: Classic grays, blues, and system colors throughout
- Quick Startup Script: New `start_web_gui.sh` for easy single-command startup
- Web GUI: Modern browser interface (recommended for the full visual experience)
- CLI Mode: Traditional command-line interface (always reliable)
- Hybrid: Switch between interfaces based on your needs
- Window-Like Menus: Classic window management with title bars and buttons
- 3D Button Effects: Outset/inset button styling for authentic retro feel
- Scrollbar Styling: Classic gray scrollbars throughout the interface
- Bubble Messages: Vintage chat bubble design with retro colors
- Animated Elements: Pulsing status indicators and smooth transitions
- Responsive Design: Adapts beautifully to desktop, tablet, and mobile
- WebSocket Streaming: Ultra-fast real-time message delivery
- Visual Feedback: Typing indicators, connection status, hover effects
- Round Markers - Conversation turns now include visible round numbers in transcripts for easy tracking
- Persona Names - Speaker labels now display persona names (e.g., "Steel Worker") instead of generic "Agent A"/"Agent B" labels
- Enhanced Persona Library - Added DeepSeek, ADHD Kid, and Complainer personas for diverse conversation dynamics
- Improved Roles Management - Updated roles_manager.py with better persona handling and configuration options
- Additional Utilities - Added check_port.py for database connectivity testing
- Version Management - Centralized version string in version.py for better release tracking
- Stop Word Detection Toggle - Enable/disable conversation termination control through the interactive menu
- Enhanced Session Transcripts - Comprehensive session configuration tracking in transcript headers
- Custom Role Creation - Create fully customizable AI roles with user-defined settings
- Enhanced Role Modes - Multiple preset personas including Scientist, Philosopher, Comedian, Steel Worker, DeepSeek Strategist, ADHD Kid, and more
- Advanced Stop Word Control - Reduced stop-word weighting for more nuanced conversation control
- Beautiful colorful interface with styled menus and progress indicators
- Single unified script combining all previous functionality
- Interactive mode with guided setup and provider selection
- Persona system supporting custom AI personalities from `roles.json`
- Comprehensive roles management - create, edit, and manage personas interactively
- Provider connectivity testing - ping and diagnose AI provider connections
- Quick launcher with preset configurations
- Enhanced security with proper API key management
- Stop Word Detection Toggle - enable/disable conversation termination on stop words
- Utility Scripts - standalone roles manager, port connectivity checker, and comprehensive certification suite
The new retro web interface provides a nostalgic AI conversation experience:
```shell
# One-Command Startup (Recommended)
./start.sh

# Development mode with hot-reload
./start.sh --dev

# Stop all services
./stop.sh
```

What gets started:
- FastAPI backend server (includes MCP HTTP endpoints)
- Retro web GUI at http://localhost:8000
- MCP memory system (HTTP or stdio mode)
- Live status monitoring and logs
See QUICKSTART.md for detailed instructions
Alternative: Manual startup
```shell
# Backend only
python main.py

# Frontend development
cd web_gui/frontend
npm run dev
# Open: http://localhost:5173
```

Retro GUI Features:
- Retro Aesthetic: Classic retro-inspired design with beveled buttons, classic colors, and a window-like interface
- Live Streaming: Watch AI responses appear in real-time as they're generated
- 32 Personas: Scientist, Philosopher, Comedian, Steel Worker, and many more
- Multiple Providers: OpenAI, Anthropic, Gemini, OpenRouter, Ollama, LM Studio
- Advanced Controls: Temperature, max rounds, and conversation settings
- Responsive: Works on desktop, tablet, and mobile devices
Option 1: Interactive Launcher (Recommended) - `python launch.py`
Option 2: Roles Manager (Standalone) - `python roles_manager.py`
Option 3: Direct Interactive Mode - `python chat_bridge.py`
Option 4: Command Line - `python chat_bridge.py --provider-a openai --provider-b anthropic --starter "What is consciousness?"`

- `python roles_manager.py` - Dedicated interface for creating, editing, and managing AI personas independently of the main chat bridge.
- `python check_port.py` - Database connectivity tester for MySQL/MariaDB connections with detailed diagnostics.
- `python certify.py` - Comprehensive automated testing and certification system for validating the entire Chat Bridge installation.
- Multi-provider bridge - choose any combination of OpenAI, Anthropic, Gemini, DeepSeek, Ollama, LM Studio, or OpenRouter for Agent A and Agent B.
- Turbo defaults - out of the box the scripts target GPT-4o Mini, Claude 3.5 Sonnet (Oct 2024), Gemini 2.5 Flash, llama3.1 8B (Ollama), and LM Studio's meta-llama3 instruct build.
- OpenRouter integration - access 200+ AI models through a unified API with categorized browsing and model discovery.
- MCP Memory System - HTTP-based memory integration provides contextual awareness across conversations using RESTful API endpoints.
- Interactive setup - each run offers a multiple-choice picker for providers and models alongside CLI flags and environment overrides.
- Streaming transcripts - watch tokens arrive live, capture the Markdown transcript, and persist structured logs in SQLite plus optional `.log` files.
- Loop + stop guards - configurable stop phrases and repetition detection end the chat gracefully.
- Versioned releases - the project now exposes a semantic version (`--version`) so you can keep track of updates.
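The loop and stop guards could be implemented along these lines. This is an illustrative sketch only; the helper names (`hit_stop_word`, `is_repetitive`) are hypothetical and not the project's actual API:

```python
def hit_stop_word(message: str, stop_words: list[str], enabled: bool = True) -> bool:
    """Return True when a configured stop phrase appears in the message."""
    if not enabled:
        return False
    lowered = message.lower()
    return any(phrase.lower() in lowered for phrase in stop_words)


def is_repetitive(history: list[str], window: int = 3) -> bool:
    """Crude repetition guard: the same normalized reply repeats in the recent window."""
    recent = [h.strip().lower() for h in history[-window:]]
    return len(recent) != len(set(recent))
```

With `stop_words` set to `["wrap up", "end chat", "terminate"]`, a reply like "Let's wrap up now" would end the session.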
For CLI (Python only):
- Python 3.10 or newer
- Dependencies: `httpx`, `python-dotenv`, `google-generativeai`, `inquirer` (install via `pip install -r requirements.txt`)

For Web GUI (Full Experience):
- All Python requirements above
- Web GUI Backend: `fastapi`, `uvicorn[standard]`, `websockets`, `pydantic` (install via `pip install -r web_gui/backend/requirements.txt`)
- Web GUI Frontend: Node.js 16+, npm (install via `cd web_gui/frontend && npm install`)
- Web GUI Build: Modern browser with WebSocket support
Create a `.env` file in the project root with your API keys:

```shell
# Primary API Keys
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AIza...
DEEPSEEK_API_KEY=sk-...

# OpenRouter - Access 200+ models through unified API
OPENROUTER_API_KEY=sk-or-v1-...
OPENROUTER_MODEL=openai/gpt-4o-mini
OPENROUTER_APP_NAME="Chat Bridge"  # Optional: appears in OpenRouter logs
OPENROUTER_REFERER="https://github.com/yourusername/chat-bridge"  # Optional

# Model Configuration
OPENAI_MODEL=gpt-4o-mini
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022
GEMINI_MODEL=gemini-2.5-flash
DEEPSEEK_MODEL=deepseek-chat

# Local Model Hosts (optional)
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3.1:8b-instruct
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF

# MCP Memory System (HTTP-based)
MCP_BASE_URL=http://localhost:8000  # FastAPI server for conversation memory
```

Run `python chat_bridge.py` and you'll see beautiful colored menus guiding you through:
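Before launching, you can sanity-check that the expected keys are present. The helper below is an illustrative sketch, not part of the project; only the environment variable names come from the template above:

```python
import os

# Env var expected for each hosted provider (names from the .env template)
KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}


def missing_keys(providers: list[str], env=None) -> list[str]:
    """Return the providers whose API key variable is unset or blank."""
    env = os.environ if env is None else env
    return [p for p in providers if not env.get(KEY_VARS[p], "").strip()]
```

For example, `missing_keys(["openai", "anthropic"])` tells you which of those two keys still need to be set before a conversation can start.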
- Start Conversation - Simple Setup (Step-by-step) - New streamlined configuration flow
- Manage Roles & Personas - Interactive roles.json configuration and persona creation
- Test Provider Connectivity - Diagnose and test AI provider connections
- Exit - Gracefully exit the application
The new simplified turn-based configuration walks you through setting up each agent, first Agent A and then Agent B:

Agent A:
- Step 1: Select Persona - Choose from the persona library or skip for defaults
- Step 2: Select Provider - Choose an AI provider (OpenAI, Anthropic, Gemini, etc.)
- Step 3: Select Model - Choose from dynamically fetched available models
- Step 4: Set Temperature - Enter a temperature (default: 0.6 for balanced responses)

Agent B:
- Step 1: Select Persona - Choose from the persona library or skip for defaults
- Step 2: Select Provider - Choose an AI provider (OpenAI, Anthropic, Gemini, etc.)
- Step 3: Select Model - Choose from dynamically fetched available models
- Step 4: Set Temperature - Enter a temperature (default: 0.6 for balanced responses)

Finally:
- Conversation Starter - Enter your discussion topic
- Live Conversation - Watch the AI assistants converse with real-time streaming
Skip the interactive setup by providing all parameters on the command line:

```shell
python chat_bridge.py --provider-a openai --provider-b anthropic --max-rounds 40 --mem-rounds 12
```

- `--provider-a` / `--provider-b` - select providers for agents A and B
- `--model-a` / `--model-b` - model overrides for each agent
- `--max-rounds` - maximum conversation rounds (default: 30)
- `--mem-rounds` - context memory rounds (default: 8)
- `--temp-a` / `--temp-b` - sampling temperatures (default: 0.7)
- `--roles` - path to personas JSON file
- `--starter` - conversation starter (skips interactive mode)
- `--version` - show version and exit

Legacy aliases: `--openai-model`, `--anthropic-model`
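These flags map naturally onto Python's `argparse`. The sketch below is illustrative, not the project's actual parser; it only mirrors the flags and defaults documented above:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical parser mirroring the documented chat_bridge flags."""
    p = argparse.ArgumentParser(prog="chat_bridge")
    p.add_argument("--provider-a")
    p.add_argument("--provider-b")
    p.add_argument("--model-a")
    p.add_argument("--model-b")
    p.add_argument("--max-rounds", type=int, default=30)
    p.add_argument("--mem-rounds", type=int, default=8)
    p.add_argument("--temp-a", type=float, default=0.7)
    p.add_argument("--temp-b", type=float, default=0.7)
    p.add_argument("--roles", default="roles.json")
    p.add_argument("--starter")
    return p


# Example: non-interactive invocation with overrides
args = build_parser().parse_args(
    ["--provider-a", "openai", "--provider-b", "anthropic", "--max-rounds", "40"]
)
```

Flags not supplied fall back to the documented defaults (`max_rounds=30` becomes `40` here; `mem_rounds` stays `8`).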
Choose from 4 preset role modes or create your own custom role:
- Scientist - Evidence-based, analytical, methodical approach
- Philosopher - Deep thinking, ethical reasoning, existential exploration
- Comedian - Witty, observational, entertaining responses
- Steel Worker - Practical, hands-on, blue-collar wisdom and experience
Create fully customized AI roles with complete control over:
- Role Name - Define your custom role identity
- AI Provider - Choose from OpenAI, Anthropic, Gemini, Ollama, or LM Studio
- Model Override - Specify custom models if needed
- System Prompt - Complete control over AI personality and behavior
- Guidelines - Multiple behavioral instructions and rules
- Temperature - Custom creativity level (0.0-2.0)
- Notes - Optional role descriptions
- Permanent Saving - Save custom roles to `roles.json` for future use
The Chat Bridge includes a comprehensive roles management interface accessible from the main menu:
- Create New Personas - Interactive wizard for persona creation
- Edit Existing Personas - Modify system prompts, guidelines, and settings
- Edit Default Agents - Configure Agent A and Agent B defaults
- Temperature Settings - Adjust creativity levels for each agent
- Stop Words Management - Configure conversation termination phrases
- Stop Word Detection Toggle - Enable/disable stop word detection during conversations
- Import/Export - Backup and restore configurations
- Reset to Defaults - Restore original settings
Create custom AI personalities in `roles.json`:

```json
{
  "agent_a": {
    "provider": "openai",
    "system": "You are ChatGPT. Be concise, truthful, and witty.",
    "guidelines": ["Cite sources", "Use clear structure"]
  },
  "agent_b": {
    "provider": "anthropic",
    "system": "You are Claude. Be thoughtful and reflective.",
    "guidelines": ["Consider multiple perspectives", "Express uncertainty"]
  },
  "persona_library": {
    "scientist": {
      "provider": "openai",
      "system": "You are a research scientist. Approach topics with rigorous scientific methodology...",
      "guidelines": [
        "Base conclusions on empirical evidence",
        "Use the scientific method framework",
        "Acknowledge limitations and uncertainties"
      ]
    },
    "philosopher": {
      "provider": "anthropic",
      "system": "You are a philosopher. Engage with deep questions about existence...",
      "guidelines": [
        "Question assumptions deeply",
        "Explore multiple perspectives",
        "Embrace complexity and nuance"
      ]
    },
    "comedian": {
      "provider": "openai",
      "system": "You are a comedian. Find humor in everyday situations...",
      "guidelines": [
        "Look for absurdity and unexpected connections",
        "Use wordplay and clever observations",
        "Balance entertainment with insight"
      ]
    },
    "steel_worker": {
      "provider": "anthropic",
      "system": "You are a steel worker. Speak from experience with hands-on work...",
      "guidelines": [
        "Emphasize practical solutions",
        "Value hard work and reliability",
        "Focus on what actually works"
      ]
    }
  },
  "temp_a": 0.6,
  "temp_b": 0.7,
  "stop_words": ["wrap up", "end chat", "terminate"],
  "stop_word_detection_enabled": true
}
```

Diagnose connection issues and verify API keys before starting conversations. The enhanced error reporting system provides detailed troubleshooting guidance for each provider.
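A roles file in this shape can be loaded and sanity-checked with a few lines of Python. This is an illustrative sketch; the project's own loader may differ, and only the `persona_library`, `provider`, and `system` field names come from the example:

```python
import json


def validate_roles(cfg: dict) -> list[str]:
    """Return the names of personas missing required fields."""
    bad = []
    for name, persona in cfg.get("persona_library", {}).items():
        if not all(k in persona for k in ("provider", "system")):
            bad.append(name)
    return bad


def load_roles(path: str = "roles.json") -> dict:
    """Load a roles file and fail fast on malformed personas."""
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)
    problems = validate_roles(cfg)
    if problems:
        raise ValueError(f"personas missing provider/system: {problems}")
    return cfg
```

Failing fast here is friendlier than discovering a missing system prompt mid-conversation.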
- Test All Providers - Comprehensive connectivity check for all configured providers
- Test Specific Provider - Detailed diagnostics for individual providers
- System Diagnostics - Environment variables and configuration overview
- Real-time Results - Response times and connection status
- Enhanced Error Diagnosis - Specific troubleshooting recommendations with step-by-step solutions
- Troubleshooting Tips - Contextual help based on specific error types
- API Key Validity - Authentication with each provider
- Model Accessibility - Default model availability
- Response Time - Network latency measurement
- Local Services - Ollama/LM Studio server status
- Connection Health - Network connectivity verification
```text
PROVIDER CONNECTIVITY TEST

Testing OpenAI...
  ✅ API key valid, model accessible (245ms)
Testing Anthropic...
  ❌ Invalid API key

PROVIDER STATUS SUMMARY
Overall Status: 1/2 providers online

ONLINE PROVIDERS:
  • OpenAI (gpt-4o-mini) - 245ms

PROVIDERS WITH ISSUES:
  • Anthropic: Invalid API key

RECOMMENDATIONS:
  • Check your API keys and network connectivity
  • Consider using available providers for conversations
```
- Colorful menus - Beautiful ANSI colors and formatting
- Real-time progress - Live conversation streaming
- Styled output - Clear agent identification and formatting
- Quick launcher - Preset configurations for common scenarios
Every session produces:
- `transcripts/<timestamp>__<starter-slug>.md` - Enhanced Markdown transcript with complete session configuration, round markers, and persona names
- `logs/<timestamp>__<starter-slug>.log` - optional structured per-session log
- `chat_bridge.log` - global append-only log capturing request IDs and errors
- `bridge.db` - SQLite database containing metadata plus turn-by-turn content
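Since `bridge.db` is plain SQLite, you can inspect it with Python's standard library. The schema is project-defined and not documented here, so this sketch just lists whatever tables exist:

```python
import sqlite3


def list_tables(db_path: str = "bridge.db") -> list[str]:
    """List the tables in the session database."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]
```

From there, `SELECT * FROM <table> LIMIT 5` in the `sqlite3` CLI is usually enough to explore the turn-by-turn content.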
- Round Markers - Each conversation turn is prefixed with `**Round N**` for easy tracking and navigation
- Persona Names - When personas are selected, they appear as speaker labels (e.g., "Steel Worker" instead of "Agent A")
- Session Configuration Header - Complete configuration details including providers, models, temperatures
- Agent Configuration - Detailed settings for both agents including personas and system prompts
- Session Settings - Max rounds, memory rounds, and stop word detection status
- Stop Words List - Active stop words with current detection status
- Timestamps - Each turn includes precise timestamps for debugging and analysis
- Structured Format - Clear sections for easy navigation and analysis
Legacy transcripts from earlier experiments may be stored in chatlogs/; current scripts
write to transcripts/ automatically.
- Increase `--max-rounds` (e.g. `--max-rounds 200`).
- Raise `--mem-rounds` if you want each model to retain more context (values between 12-20 work well).
- Monitor token budgets: OpenAI GPT-4o Mini typically caps around 128k tokens, Anthropic Claude models around 200k, and Gemini 2.5 Flash around 1M context (depending on release).
Run the automated certification script to validate your entire Chat Bridge installation:
```shell
python certify.py
```

Enhanced Features:
- Detailed provider identification with specific AI model names (GPT-4o Mini, Claude 3.5 Sonnet, Gemini 2.5 Flash, etc.)
- Comprehensive timestamps for all test operations
- Enhanced reporting with provider-specific statistics
- Structured JSON reports saved to `certification_report_YYYYMMDD_HHMMSS.json`
The certification covers:
- Module imports and dependencies
- File structure validation
- Database operations (SQLite)
- Provider connectivity (OpenAI, Anthropic, Gemini, Ollama, LM Studio)
- Roles and personas system
- Error handling and recovery
Use the built-in Provider Connectivity Test from the main menu to quickly diagnose issues:
- Check API key validity
- Test network connectivity
- Verify local services (Ollama/LM Studio)
- View environment configuration
- The scripts abort if either assistant hits a configured stop phrase.
- A stall longer than 90 seconds triggers a timeout and ends the session gracefully.
- Check the per-session log and the global `chat_bridge.log` for request IDs and errors.
- Missing API keys raise clear runtime errors - set them in `.env` or your shell.
OpenAI:
- Invalid API Key (401): Verify `OPENAI_API_KEY` is set correctly, check credits, ensure the key hasn't expired
- Access Forbidden (403): API key lacks model permissions, try a different model (e.g., gpt-4o-mini)
- Rate Limited (429): Wait for reset, check usage limits in the OpenAI dashboard, consider upgrading your plan
- Network Issues: Check internet connection, verify firewall/proxy settings

Anthropic:
- Invalid API Key (401): Verify `ANTHROPIC_API_KEY` is set correctly, ensure the key is valid and active
- Access Forbidden (403): Check API key permissions, verify account status
- Rate Limited (429): Wait before retrying, check usage limits, consider an API tier upgrade

Gemini:
- Invalid API Key (401): Verify `GEMINI_API_KEY`, enable the Gemini API in Google Cloud Console
- Rate Limited (429): Wait for reset, check quota in Google Cloud Console, enable billing for higher limits
- Access Forbidden (403): Enable the Gemini API, check permissions, verify billing is enabled

DeepSeek:
- Invalid API Key (401): Verify `DEEPSEEK_API_KEY` is set correctly, ensure the key is valid and active
- Access Forbidden (403): Check API key permissions, verify account status
- Rate Limited (429): Wait before retrying, check usage limits, consider an API tier upgrade
- Network Issues: Verify the connection to the DeepSeek API endpoint

Ollama:
- Connection Refused: Start Ollama with `ollama serve` or `systemctl start ollama`
- Model Not Found (404): Pull the model with `ollama pull llama3.1:8b-instruct`, check `OLLAMA_MODEL`
- Port Issues: Verify Ollama runs on port 11434, check the `OLLAMA_HOST` variable
- Firewall: Ensure the firewall allows connections to the Ollama port

LM Studio:
- Connection Refused: Start the LM Studio application and load a model
- Server Not Started: Enable the local server in LM Studio (usually port 1234)
- API Endpoint (404): Verify the server is running, check that a model is loaded
- Port Conflicts: Check whether another application uses port 1234, verify `LMSTUDIO_BASE_URL`

OpenRouter:
- Invalid API Key (401): Verify `OPENROUTER_API_KEY`, ensure the key is valid at openrouter.ai/keys
- Provider Filtering (404): The model's provider is blocked in your OpenRouter settings; visit https://openrouter.ai/settings/preferences to adjust
- Model Not Found (404): Check the model ID format (e.g., `openai/gpt-4o-mini`, `anthropic/claude-3-5-sonnet`)
- Rate Limited (429): Wait before retrying, check credits at openrouter.ai/account
- Network Issues: Verify the connection to the openrouter.ai API endpoint
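The HTTP status codes recurring in the troubleshooting notes above map to a small set of diagnoses, which could be centralized in a helper like this (an illustrative sketch, not the project's actual error reporter):

```python
# Generic diagnosis per HTTP status; providers share these failure modes
DIAGNOSES = {
    401: "Invalid API key - check the provider's *_API_KEY in .env",
    403: "Access forbidden - key lacks permission for this model or account",
    404: "Model or endpoint not found - check the model ID and base URL",
    429: "Rate limited - wait for the quota window to reset",
}


def diagnose(status_code: int) -> str:
    """Translate an HTTP status code into a human-readable hint."""
    return DIAGNOSES.get(status_code, f"Unexpected HTTP status {status_code}")
```

A real implementation would layer provider-specific advice (dashboard links, `ollama pull` commands) on top of these generic messages.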
The MCP (Memory, Continuity, Protocol) system provides conversation memory via HTTP-based RESTful API:
Starting MCP:

```shell
# Start the FastAPI server with MCP endpoints
python main.py

# Or use uvicorn for development
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

MCP Features:
- HTTP-based integration: RESTful API endpoints for memory operations
- Unified database: SQLAlchemy-powered storage with SQLite backend
- 6 endpoints: Health, stats, recent chats, search, contextual memory, conversation details
- Continuous memory: Fresh context retrieved on every conversation turn
- Graceful degradation: Conversations work without MCP if server unavailable
Using MCP in conversations:

```shell
# Enable MCP memory integration (requires MCP server running)
python chat_bridge.py --enable-mcp

# Check MCP status
curl http://localhost:8000/api/mcp/health
curl http://localhost:8000/api/mcp/stats
```

MCP Troubleshooting:
- Ensure the FastAPI server is running: `curl http://localhost:8000/health`
- Check MCP endpoints are accessible: `curl http://localhost:8000/api/mcp/health`
- Verify the database exists with data: `curl http://localhost:8000/api/mcp/stats`
- MCP integration gracefully degrades if the server is unavailable
- Check server logs if MCP queries fail
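The same health check can be scripted from Python. This sketch uses only the standard library (so it works before the project's dependencies are installed) and returns `False` for graceful degradation when the server is unreachable; the `/api/mcp/health` path is the endpoint documented above:

```python
from urllib.error import URLError
from urllib.request import urlopen


def mcp_available(base_url: str = "http://localhost:8000", timeout: float = 2.0) -> bool:
    """Return True if the MCP health endpoint responds with HTTP 200."""
    try:
        with urlopen(f"{base_url}/api/mcp/health", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Server down or unreachable: degrade gracefully, as the bridge does
        return False
```

A caller can skip memory retrieval entirely when this returns `False` instead of crashing the conversation.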
Happy bridging!