A scalable computer vision platform for real-time model inference, video streaming, and production deployment.
Nexus AI Platform provides a complete solution for computer vision applications with support for multiple model types including YOLO, classification, segmentation, and custom models. Built with modern technologies and designed for production workloads.
- Multi-Model Support: YOLOv8 detection, segmentation, tracking, and custom models
- Scalable Processing: Async queue-based inference engine with priority scheduling
- Real-Time Streaming: RTSP camera integration with WebSocket connectivity
- Modern Interface: React TypeScript frontend with dark theme support
- Enterprise Ready: JWT authentication, role-based access, and comprehensive monitoring
- FastAPI REST API with OpenAPI documentation
- MongoDB data persistence with Redis caching
- Celery background task processing
- Real-time WebSocket connections
- Health checks and metrics collection
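The health checks mentioned above can be probed with a few lines of stdlib Python. This is a minimal sketch: the `/health` path is a guess based on common FastAPI conventions and is not confirmed by this README.

```python
from urllib.request import urlopen
from urllib.error import URLError

API_BASE = "http://localhost:8000"  # default API address from the Quick Start

def check_health(base_url=API_BASE, timeout=5.0):
    """Return True if the API's health endpoint answers with HTTP 200.

    The /health path is assumed (a common FastAPI convention); adjust it
    to match the actual route exposed by the backend.
    """
    try:
        with urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

This is handy in deployment scripts or container health probes, where a boolean "is the API up?" check is all that is needed.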
- React 18 with TypeScript and Tailwind CSS
- Radix UI components for accessibility
- Real-time dashboard with live metrics
- Camera management interface
- Model inference visualization with canvas rendering
Secure authentication with modern dark theme
Real-time system monitoring and analytics
Live camera feeds and streaming controls
AI model configuration and inference results
Real-time object detection with bounding boxes and confidence scores
Detailed inference results with visualization and metrics
Comprehensive system configuration
- Docker and Docker Compose
- 4GB+ RAM recommended
- NVIDIA GPU (optional, for CUDA acceleration)
1. Clone the repository

   ```bash
   git clone https://github.com/NutrinoDaya/Nexus-AI-Platform.git
   cd Nexus-AI-Platform
   ```

2. Configure the environment

   ```bash
   cp .env.example .env
   # Edit .env with your settings
   ```

3. Start the platform

   ```bash
   docker-compose up -d
   ```

4. Access the application

   - Frontend: http://localhost:3000
   - API: http://localhost:8000
   - API Docs: http://localhost:8000/docs
```bash
# Register a user
curl -X POST "http://localhost:8000/api/v1/auth/register" \
  -H "Content-Type: application/json" \
  -d '{"username": "user", "email": "user@example.com", "password": "password"}'

# Log in
curl -X POST "http://localhost:8000/api/v1/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"username": "user", "password": "password"}'
```

```bash
# Load a YOLO model
curl -X POST "http://localhost:8000/api/v1/yolo/models/load" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -F "model_path=yolov8n.pt" \
  -F "model_id=yolo_nano"

# Run detection
curl -X POST "http://localhost:8000/api/v1/yolo/detect" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -F "image=@image.jpg" \
  -F "model_id=yolo_nano" \
  -F "confidence_threshold=0.5"
```
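The same flow can be scripted from Python. Below is a minimal client sketch using the third-party `requests` package; the endpoints mirror the curl calls above, but the response field names (e.g. `access_token`) are assumptions, not confirmed by this README.

```python
import requests  # third-party: pip install requests

BASE = "http://localhost:8000/api/v1"

def auth_header(token):
    """Authorization header used by the protected endpoints."""
    return {"Authorization": f"Bearer {token}"}

def login(username, password):
    """Log in and return the JWT. The 'access_token' field name is assumed."""
    r = requests.post(f"{BASE}/auth/login",
                      json={"username": username, "password": password})
    r.raise_for_status()
    return r.json()["access_token"]

def load_model(token, model_path="yolov8n.pt", model_id="yolo_nano"):
    """Load a YOLO model from model_path and register it under model_id."""
    r = requests.post(f"{BASE}/yolo/models/load",
                      headers=auth_header(token),
                      data={"model_path": model_path, "model_id": model_id})
    r.raise_for_status()
    return r.json()

def detect(token, image_path, model_id="yolo_nano", confidence=0.5):
    """Run detection on a single image file and return the JSON result."""
    with open(image_path, "rb") as f:
        r = requests.post(f"{BASE}/yolo/detect",
                          headers=auth_header(token),
                          files={"image": f},
                          data={"model_id": model_id,
                                "confidence_threshold": str(confidence)})
    r.raise_for_status()
    return r.json()
```

Typical usage: `token = login("user", "password")`, then `load_model(token)` once, followed by `detect(token, "image.jpg")` per image.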
```bash
# Async processing for high load
curl -X POST "http://localhost:8000/api/v1/yolo/detect" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -F "image=@image.jpg" \
  -F "model_id=yolo_nano" \
  -F "async_processing=true" \
  -F "priority=2"

# Check job status
curl "http://localhost:8000/api/v1/yolo/jobs/{job_id}" \
  -H "Authorization: Bearer YOUR_TOKEN"

# Queue statistics
curl "http://localhost:8000/api/v1/yolo/queue/stats" \
  -H "Authorization: Bearer YOUR_TOKEN"
```

```bash
INFERENCE_MAX_WORKERS=4    # Number of inference workers
INFERENCE_QUEUE_SIZE=256   # Maximum queue size
INFERENCE_DEVICE=cuda      # Processing device (cuda/cpu)
API_WORKERS=8              # FastAPI worker processes
```

- Model caching and preloading
- Connection pooling for databases
- Redis caching for frequent queries
- Gzip compression for API responses
- Multi-stage Docker builds
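When a detection request is submitted with `async_processing=true`, the client gets a job id back and polls the job-status endpoint shown above. A polling sketch in Python, using the third-party `requests` package; the terminal status values (`completed`, `failed`) and the response's `status` field are assumptions:

```python
import time
import requests  # third-party: pip install requests

BASE = "http://localhost:8000/api/v1"

def job_url(job_id):
    """URL of the job-status endpoint for a given job id."""
    return f"{BASE}/yolo/jobs/{job_id}"

def wait_for_job(token, job_id, poll_interval=1.0, timeout=60.0):
    """Poll the job endpoint until it reports a terminal status.

    The 'status' field and its terminal values ('completed', 'failed')
    are assumed; adjust them to the backend's actual job schema.
    """
    headers = {"Authorization": f"Bearer {token}"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        r = requests.get(job_url(job_id), headers=headers)
        r.raise_for_status()
        job = r.json()
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} not finished after {timeout}s")
```

A fixed poll interval keeps the sketch simple; for large queues, exponential backoff would reduce load on the API.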
```bash
cd backend
pip install -r requirements.txt
uvicorn api.main:app --reload --host 0.0.0.0 --port 8000
```

```bash
cd frontend
npm install
npm run dev
```

```
Nexus-AI-Platform/
├── backend/              # FastAPI application
│   ├── api/              # API routes and endpoints
│   ├── core/             # Configuration and utilities
│   ├── models/           # Database models
│   ├── services/         # Business logic
│   └── tasks/            # Background tasks
├── frontend/             # React TypeScript app
│   ├── src/
│   │   ├── components/   # UI components
│   │   ├── pages/        # Application pages
│   │   └── lib/          # API client and utilities
├── config/               # Configuration files
├── docs/                 # Documentation
└── scripts/              # Deployment utilities
```
Backend
- FastAPI (Python web framework)
- Ultralytics YOLO (Computer vision)
- MongoDB (Database)
- Redis (Caching)
- Celery (Task queue)
Frontend
- React 18 (UI library)
- TypeScript (Type safety)
- Tailwind CSS (Styling)
- Radix UI (Components)
- Vite (Build tool)
Infrastructure
- Docker & Docker Compose
- NGINX (Reverse proxy)
- Prometheus (Metrics)
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Create an issue for bug reports or feature requests
- Check the `/docs` directory for detailed documentation
- Review the API documentation at the `/docs` endpoint while the platform is running






