# Archon

An AI-powered architecture review tool that analyzes software architecture documents and provides detailed, actionable feedback.

## 3-Stage LLM Analysis Pipeline

1. Extract a structured architecture model from the documents
2. Detect issues across 6 categories (scalability, reliability, security, data, observability, devex)
3. Generate a comprehensive Markdown report
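Conceptually, the three stages compose like this. The function names and types below are illustrative stubs only, not the actual code in `packages/backend/src/services/` (which makes LLM calls via the Groq client):

```typescript
// Hypothetical sketch of the 3-stage pipeline -- stubs that only
// illustrate the data flow between the stages.
type ArchModel = { components: string[] };
type Issue = { category: string; detail: string };

// Stage 1: extract a structured model. The real pipeline is an LLM call that
// returns JSON; here we just pull out the "### " component headings.
async function extractModel(doc: string): Promise<ArchModel> {
  const components = doc
    .split("\n")
    .filter((line) => line.startsWith("### "))
    .map((line) => line.slice(4).trim());
  return { components };
}

// Stage 2: detect issues. The real pipeline prompts the LLM across the six
// categories (scalability, reliability, security, data, observability, devex).
async function detectIssues(model: ArchModel): Promise<Issue[]> {
  return model.components.map((c) => ({
    category: "reliability",
    detail: `No failover strategy documented for ${c}`,
  }));
}

// Stage 3: assemble the Markdown report.
function renderReport(issues: Issue[]): string {
  return ["# Architecture Review", ...issues.map((i) => `- **${i.category}**: ${i.detail}`)].join("\n");
}

export async function runReview(doc: string): Promise<string> {
  const model = await extractModel(doc);
  const issues = await detectIssues(model);
  return renderReport(issues);
}
```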
## Tech Stack

- Backend: Fastify + TypeScript
- Frontend: Next.js 14 (App Router) + React
- LLM: Groq (fast inference with multiple models: Compound, Llama 3.3 70B, Llama 3.1 8B, Qwen3 32B, GPT-OSS 120B)
- Monorepo: pnpm workspaces
## Project Structure

```
archon/
├── packages/
│   ├── shared/            # Shared TypeScript types
│   ├── backend/           # Fastify API server
│   │   ├── src/
│   │   │   ├── llm/       # Groq client & prompts
│   │   │   ├── services/  # Review pipeline & storage
│   │   │   ├── routes/    # API endpoints
│   │   │   └── index.ts
│   └── frontend/          # Next.js web UI
│       ├── src/app/
│       │   ├── page.tsx   # Main review UI
│       │   └── layout.tsx
└── pnpm-workspace.yaml
```
## Prerequisites

- Node.js 18+
- pnpm (`npm install -g pnpm`)
- Groq API key (get one at https://console.groq.com)
## Setup

1. Install dependencies:

   ```bash
   pnpm install
   ```

2. Build shared types:

   ```bash
   cd packages/shared
   pnpm build
   ```

3. Configure the backend:

   ```bash
   cd packages/backend
   cp .env.example .env
   # Edit .env and add your GROQ_API_KEY
   ```

4. Configure the frontend:

   ```bash
   cd packages/frontend
   cp .env.local.example .env.local
   # Edit .env.local if needed (defaults to http://localhost:3001)
   ```
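After copying the example file, the backend's `.env` needs at least the Groq key. A minimal sketch (the key value is a placeholder; keep any other variables that `.env.example` defines):

```shell
# packages/backend/.env
GROQ_API_KEY=gsk_your_key_here
```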
## Running

Run both dev servers from the repo root:

```bash
pnpm dev
```

Or start them in separate terminals.

Terminal 1 - Backend:

```bash
cd packages/backend
pnpm dev
```

Terminal 2 - Frontend:

```bash
cd packages/frontend
pnpm dev
```

- Backend API: http://localhost:3001
- Frontend UI: http://localhost:3000
## Usage

1. Open http://localhost:3000
2. Paste your architecture document (Markdown format) into the textarea
3. Select your preferred AI model (each has separate rate limits):
   - Groq Compound - default; best quality, multi-model routing
   - Llama 3.3 70B - balanced quality and speed
   - Llama 3.1 8B - fastest inference, smallest model
   - Qwen3 32B - multilingual support
   - GPT-OSS 120B - most capable model
4. Optionally add a GitHub repository URL (not yet implemented)
5. Click "Generate Architecture Review"
6. Wait 30-60 seconds for the AI analysis
7. View the comprehensive architecture review
## API

Create a new architecture review.

Request body:

```json
{
  "architectureText": "# My Architecture\n...",
  "repoUrl": "https://github.com/user/repo",
  "model": "groq/compound"
}
```

`repoUrl` and `model` are optional; `model` defaults to `groq/compound`.

Response: `ArchitectureReview` object (201 Created)
Retrieve a stored review by ID.

Response: `ArchitectureReview` object (200 OK)

List all reviews (for debugging).

Response: array of `ArchitectureReview` objects

Health check endpoint.
## Example Input

```markdown
# E-Commerce Platform Architecture

## Overview
A microservices-based e-commerce platform handling 10K requests/day.

## Components

### User Service
- Tech: Node.js + Express
- Database: PostgreSQL
- Handles user authentication and profiles

### Product Service
- Tech: Python + FastAPI
- Database: MongoDB
- Manages product catalog

### Order Service
- Tech: Node.js + Fastify
- Database: PostgreSQL
- Processes orders and payments via Stripe API

## Infrastructure
- Load Balancer: AWS ALB
- Cache: Redis
- Message Queue: RabbitMQ

## Deployment
Docker containers on AWS ECS
```

## Extending

All types are defined in `packages/shared/src/types.ts` and shared between frontend and backend.
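As a rough illustration only (the field names below are hypothetical; the real schema is whatever `packages/shared/src/types.ts` declares):

```typescript
// Illustrative shape only -- see packages/shared/src/types.ts for the
// actual type definitions shared by frontend and backend.
interface Finding {
  category: "scalability" | "reliability" | "security" | "data" | "observability" | "devex";
  severity: "low" | "medium" | "high";
  description: string;
}

interface ArchitectureReview {
  id: string;
  createdAt: string;      // ISO timestamp
  model: string;          // e.g. "groq/compound"
  findings: Finding[];
  reportMarkdown: string; // the generated Markdown report
}

export const exampleReview: ArchitectureReview = {
  id: "rev_1",
  createdAt: new Date().toISOString(),
  model: "groq/compound",
  findings: [
    { category: "reliability", severity: "high", description: "Single Redis instance is a SPOF." },
  ],
  reportMarkdown: "# Architecture Review\n...",
};
```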
To add a new LLM provider, implement the `LLMClient` interface in `packages/backend/src/llm/client.ts`:

```typescript
export interface LLMClient {
  complete(messages: LLMMessage[], options?: CompletionOptions): Promise<string>;
}
```

Storage is currently in-memory. To add a database:
1. Implement a new class following the `ReviewStorage` interface
2. Replace the storage instance in `packages/backend/src/index.ts`
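A minimal sketch of that pattern, assuming a `save`/`get`/`list` shape for `ReviewStorage` (the real interface in the backend may differ):

```typescript
// The ReviewStorage shape below is assumed from context -- check the actual
// definition in packages/backend/src/services/ before implementing.
interface StoredReview {
  id: string;
  [key: string]: unknown;
}

interface ReviewStorage {
  save(review: StoredReview): Promise<void>;
  get(id: string): Promise<StoredReview | undefined>;
  list(): Promise<StoredReview[]>;
}

// Drop-in replacement pattern: same interface, different backing store.
// Swap the Map for a database client (pg, mongodb) without touching callers.
export class MapReviewStorage implements ReviewStorage {
  private reviews = new Map<string, StoredReview>();

  async save(review: StoredReview): Promise<void> {
    this.reviews.set(review.id, review);
  }

  async get(id: string): Promise<StoredReview | undefined> {
    return this.reviews.get(id);
  }

  async list(): Promise<StoredReview[]> {
    return [...this.reviews.values()];
  }
}
```

Because every method is async, callers are unaffected when the Map is later replaced by real database queries.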
## Deployment

This project is configured for deployment on Vercel with separate frontend and backend deployments.
Frontend:

1. Push your code to GitHub
2. Go to the Vercel Dashboard
3. Import your GitHub repository
4. Set Root Directory to `packages/frontend`
5. Framework Preset: Next.js (auto-detected)
6. Set environment variable `NEXT_PUBLIC_API_URL` to your backend URL (e.g. https://your-backend.vercel.app)
7. Click Deploy
Backend:

1. Go to the Vercel Dashboard
2. Import your GitHub repository again
3. Set Root Directory to `packages/backend`
4. Set environment variable `GROQ_API_KEY` to your Groq API key
5. Click Deploy
After both deployments:

1. Update the frontend's `NEXT_PUBLIC_API_URL` to point to your backend URL
2. Redeploy the frontend
For long-running backend processes, you can deploy the backend to Railway or Render instead.

Railway:

```bash
# Install the Railway CLI
npm install -g @railway/cli

# Log in and deploy
railway login
railway init
railway up
```

Render:

1. Connect your GitHub repo
2. Create a new Web Service
3. Set Root Directory: `packages/backend`
4. Build Command: `pnpm install && pnpm build`
5. Start Command: `pnpm start`
6. Add the `GROQ_API_KEY` environment variable
## Roadmap

- GitHub repository analysis (clone, parse code, generate a CodeProfile)
- Support for architecture diagrams (PlantUML, Mermaid rendering)
- Persistent storage (PostgreSQL, MongoDB)
- User authentication
- Review history and comparison
- Export reports as PDF
- Real-time progress updates via WebSocket
- CI/CD integration (GitHub Actions, GitLab CI)

## License

MIT