Secure Flutter desktop app connecting Auth0 authentication with local Ollama AI models via encrypted tunneling. Access your private AI instances remotely while keeping data on your hardware.

CloudToLocalLLM

Cloud Run Deployment: Your Personal AI Powerhouse


Website: https://cloudtolocalllm.online Web App: https://app.cloudtolocalllm.online

Overview

CloudToLocalLLM is a Flutter-based application that bridges cloud-based AI services and local AI models. It provides a secure and efficient way to interact with multiple AI models while keeping control of your data and privacy in your hands.

Key Features

  • Hybrid AI Architecture: Seamlessly switch between cloud-based and local AI models
  • Privacy-First Design: Keep sensitive data local while leveraging cloud AI when needed
  • Cross-Platform Support: Available on Windows, Linux, and Web platforms
  • Secure Authentication: OAuth2-based authentication with encrypted token storage
  • Real-Time Communication: WebSocket-based tunneling for instant AI responses
  • Model Flexibility: Support for OpenAI, Anthropic, and local Ollama models
  • User-Friendly Interface: Intuitive Flutter-based UI with responsive design

Quick Start

Prerequisites

  • Flutter SDK (3.8 or higher)
  • Node.js (for development and testing)
  • Git (for version control)
  • Ollama (optional, for local AI models)

Installation

  1. Clone the repository:

    git clone https://github.com/imrightguy/CloudToLocalLLM.git
    cd CloudToLocalLLM
  2. Install dependencies:

    flutter pub get
    npm install
  3. Run the application:

    # For desktop (Windows/Linux)
    flutter run -d windows
    flutter run -d linux
    
    # For web
    flutter run -d chrome

Architecture

CloudToLocalLLM's architecture pairs cloud connectivity with local execution:

Cloud Integration

  • OAuth2 Authentication: Secure authentication with major cloud providers
  • API Gateway: Centralized API management and routing
  • WebSocket Tunneling: Real-time communication between client and cloud services
  • Load Balancing: Intelligent distribution of requests across multiple AI providers
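The tunneling layer must survive dropped connections without hammering the relay. A minimal sketch of that reconnect policy (function names and the backoff parameters are illustrative, not the app's actual implementation):

```javascript
// Exponential backoff with a cap keeps reconnect attempts polite:
// 500ms, 1s, 2s, 4s, ... capped at 30s.
function backoffMs(attempt, baseMs = 500, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Minimal tunnel loop (URL is illustrative; uses the native WebSocket
// available in browsers and Node.js 22+):
function connectTunnel(url, { onMessage }) {
  let attempt = 0;
  function open() {
    const ws = new WebSocket(url);
    ws.onopen = () => { attempt = 0; };             // reset backoff on success
    ws.onmessage = (ev) => onMessage(JSON.parse(ev.data));
    ws.onclose = () => setTimeout(open, backoffMs(attempt++));
  }
  open();
}
```

Resetting the attempt counter on a successful open means transient blips recover quickly while sustained outages back off.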

Local AI Support

  • Ollama Integration: Direct integration with local Ollama models
  • Model Management: Easy installation and switching between local models
  • Privacy Protection: All local processing stays on your device
  • Offline Capability: Continue working even without internet connection
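Ollama exposes a local HTTP API on port 11434 by default; talking to a local model is a single POST. A sketch using Ollama's documented `/api/generate` endpoint (the helper function is illustrative):

```javascript
// Build a request against Ollama's local HTTP API (default port 11434).
// Endpoint and body shape follow Ollama's documented REST API.
function ollamaGenerateRequest(model, prompt, host = "http://localhost:11434") {
  return {
    url: `${host}/api/generate`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

// Usage (requires a running Ollama instance with the model pulled):
// const { url, options } = ollamaGenerateRequest("llama3.2:1b", "Hello!");
// const { response } = await (await fetch(url, options)).json();
```

Because the request never leaves `localhost`, prompts and responses stay on your machine.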

Security Features

  • End-to-End Encryption: All communications are encrypted
  • Token Management: Secure storage and automatic refresh of authentication tokens
  • Data Isolation: Clear separation between local and cloud data
  • Audit Logging: Comprehensive logging for security monitoring
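Automatic token refresh works best when done proactively, before a request fails with a 401. A sketch of that decision rule (the 60-second clock-skew margin is an assumption, not the app's configured value):

```javascript
// Refresh proactively: treat a token as stale slightly before its real
// expiry so in-flight requests never carry an expired token.
function shouldRefresh(expiresAtMs, nowMs = Date.now(), skewMs = 60_000) {
  return nowMs >= expiresAtMs - skewMs;
}
```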

Development

Development Environment Setup

Windows Development

# Run the automated setup script
.\scripts\powershell\Setup-WindowsDevelopmentEnvironment.ps1

# Or install manually:
choco install flutter nodejs git docker-desktop

Linux Development

# Install Flutter
sudo snap install flutter --classic

# Install Node.js
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install other dependencies
sudo apt-get install git docker.io

Version Management

CloudToLocalLLM uses automated version management with documentation updates:

# Windows
.\scripts\powershell\version_manager.ps1 increment patch

# Linux
./scripts/version_manager.sh increment patch

Testing

# Run Flutter tests
flutter test

# Run e2e tests
npm test

# Run specific test suites
npm run test:auth
npm run test:tunnel

Building

# Build for Windows
flutter build windows --release

# Build for Linux
flutter build linux --release

# Build for Web
flutter build web --release

Security

Cloud Run deployment uses keyless authentication via GitHub OIDC and Google Cloud Workload Identity Federation (WIF). This avoids long‑lived service account keys and is our recommended best practice. See the Cloud Run OIDC/WIF Guide.

Deployment

CloudToLocalLLM uses a comprehensive CI/CD pipeline that separates local desktop builds from cloud infrastructure deployment.

🖥️ Desktop Application Builds

For building and releasing desktop applications:

# Build desktop apps and create GitHub release
.\scripts\powershell\Deploy-CloudToLocalLLM.ps1

# Increment minor version
.\scripts\powershell\Deploy-CloudToLocalLLM.ps1 -VersionIncrement minor

# Dry run to test without making changes
.\scripts\powershell\Deploy-CloudToLocalLLM.ps1 -DryRun

What it does:

  • Builds Windows, macOS, and Linux desktop applications
  • Creates GitHub releases with cross-platform binaries
  • Generates SHA256 checksums for security verification
  • Pushes build artifacts to releases/v* branch

☁️ Cloud Infrastructure Deployment

Cloud deployment is automatically handled by GitHub Actions when you push to the main branch:

# Make changes and push to main
git add .
git commit -m "Feature: Add new functionality"
git push origin main

What happens automatically:

  • Builds Docker containers for web, API, and streaming services
  • Deploys to Google Cloud Run at app.cloudtolocalllm.online
  • Configures environment variables; uses OIDC/WIF repository Variables for Cloud Run authentication (no service account keys)
  • Performs health checks and verification

📋 CI/CD Pipeline Overview

| Trigger | Action | Result |
| --- | --- | --- |
| Push to main | Cloud deployment | Services deployed to Google Cloud Run |
| PowerShell script | Desktop build | GitHub release with desktop binaries |
| Push to releases/v* | Cross-platform build | Multi-platform desktop packages |

📚 Documentation

Configuration

Environment Variables

Create a .env file in the project root:

# API Configuration
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key

# Server Configuration
SERVER_HOST=localhost
SERVER_PORT=3000

# Database Configuration
DATABASE_URL=your_database_url

# OAuth Configuration
OAUTH_CLIENT_ID=your_client_id
OAUTH_CLIENT_SECRET=your_client_secret
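The Node.js side would typically load this file with the `dotenv` package; the minimal parser below just illustrates the `KEY=value` format used above (blank lines and `#` comments are skipped):

```javascript
// Minimal .env parser sketch. In a real setup, prefer the `dotenv`
// package, which also handles quoting and export prefixes.
function parseEnv(text) {
  const env = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;                           // skip malformed lines
    env[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return env;
}
```

Keep the `.env` file out of version control; the secrets above must never be committed.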

Local AI Models

To use local AI models with Ollama:

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Download models
ollama pull llama3.2:1b
ollama pull codellama:7b
ollama pull mistral:7b

API Documentation

Authentication Endpoints

  • POST /auth/login - Initiate OAuth login
  • POST /auth/callback - Handle OAuth callback
  • POST /auth/refresh - Refresh authentication token
  • POST /auth/logout - Logout and invalidate tokens

AI Model Endpoints

  • POST /api/chat - Send chat message to AI model
  • GET /api/models - List available AI models
  • POST /api/models/switch - Switch active AI model
  • GET /api/models/status - Get model status and health
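A client call to the chat endpoint above might look like the following sketch; the base URL and body field names (`message`, `model`) are assumptions for illustration, not confirmed by the API:

```javascript
// Illustrative request builder for POST /api/chat. Field names are
// assumed; consult the actual API for the real payload shape.
function chatRequest(message, model, base = "https://app.cloudtolocalllm.online") {
  return {
    url: `${base}/api/chat`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message, model }),
    },
  };
}

// Usage:
// const { url, options } = chatRequest("Hello", "llama3.2:1b");
// const reply = await (await fetch(url, options)).json();
```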

WebSocket Events

  • connection - Establish WebSocket connection
  • message - Send/receive chat messages
  • model_switch - Switch AI model in real-time
  • status_update - Receive status updates
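A client can route the events above through a simple handler table. This sketch assumes each frame is JSON with an `event` field naming the event type; that framing is an assumption, not confirmed by the docs:

```javascript
// Handler table keyed by the event names listed above.
const handlers = {
  message: (data) => { /* render chat message */ },
  model_switch: (data) => { /* update active-model indicator */ },
  status_update: (data) => { /* update status bar */ },
};

// Dispatch a raw WebSocket frame to its handler.
// Returns false for unknown events so callers can log them.
function dispatch(frame) {
  const { event, ...data } = JSON.parse(frame);
  const handler = handlers[event];
  if (!handler) return false;
  handler(data);
  return true;
}
```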

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Workflow

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

Code Style

  • Follow Flutter/Dart conventions
  • Use meaningful variable and function names
  • Add comments for complex logic
  • Ensure all tests pass

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

Acknowledgments

  • Flutter Team for the amazing cross-platform framework
  • Ollama for local AI model support
  • OpenAI and Anthropic for cloud AI services
  • Community Contributors for their valuable contributions

Made with ❤️ by the CloudToLocalLLM Team
