Automatically optimizes your resume for specific job listings using local LLM models (via Ollama) or Llama via the Groq API.
The hosted version can be found at https://ats-buddy.onrender.com/.
**Due to free hosting, it may take up to a minute for the services to spin up on the initial request after periods of inactivity.**
- 🧬 Multiple AI Backends: Uses Ollama for local LLM processing or Groq API for cloud processing
- 🕷️ Web Scraping: Automatically extracts job descriptions from URLs
- 🔒 Privacy-Focused: Local processing option - no data sent to external APIs when using Ollama
ATS-Buddy follows a client-server architecture with AI-powered resume optimization. The system consists of three main components: a web frontend for user interaction, a REST API backend for processing, and optional local AI model hosting via Ollama.
- User Interaction: Users paste their resume text or upload a PDF, and provide a job posting URL through the web interface
- Job Description Extraction: The backend scrapes and parses the job posting to extract key requirements
- AI-Powered Optimization: The system analyzes the resume against job requirements using either local LLM models (via Ollama) or cloud API (Groq)
- Result Delivery: Optimized resume with highlighted changes is returned to the user
- Single-page React application built with Next.js
- Handles user input (resume text, job URL, AI backend selection)
- Displays optimization results with downloadable PDF & change tracking
- FastAPI-based REST server handling all business logic
- Web scraping pipeline using Playwright for dynamic content and BeautifulSoup for HTML parsing (see the sketch after this list)
- Dual AI backend support:
- Local Mode: Integrates with Ollama for privacy-focused, offline processing
- Cloud Mode: Uses Groq API for faster, cloud-based optimization
- Resume parsing and PDF generation capabilities
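A minimal sketch of that scraping step, assuming Playwright's sync API and a generic cleanup pass (`fetch_job_description` is illustrative, not the project's actual function):

```python
# Sketch: render a job posting with Playwright, then strip boilerplate tags
# with BeautifulSoup. Simplified; not ATS-Buddy's exact pipeline.
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

def fetch_job_description(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")  # let dynamic content render
        html = page.content()
        browser.close()
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "header", "footer"]):
        tag.decompose()  # drop non-content elements before text extraction
    return soup.get_text(separator="\n", strip=True)
```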
- Ollama: Runs local LLM models (e.g., llama3.2) in Docker containers for complete data privacy
- Groq API: Cloud-based inference for rapid processing with rate limits (6,000 requests/day free tier)
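Both modes reduce to a single prompt/completion call; here is a hedged sketch of the backend selection, using the public Ollama HTTP API and Groq's OpenAI-compatible endpoint (the helper and model names are illustrative assumptions, not ATS-Buddy's internals):

```python
# Dual-backend sketch: the same prompt goes to Groq's cloud endpoint or to a
# local Ollama server. run_llm and the model names are assumptions.
from typing import Optional

import requests

def run_llm(prompt: str, use_groq: bool, groq_api_key: Optional[str] = None) -> str:
    if use_groq:
        # Cloud mode: Groq's OpenAI-compatible chat completions endpoint.
        resp = requests.post(
            "https://api.groq.com/openai/v1/chat/completions",
            headers={"Authorization": f"Bearer {groq_api_key}"},
            json={
                "model": "llama-3.1-8b-instant",  # example Groq-hosted model
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    # Local mode: Ollama's HTTP API; no data leaves the machine.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
        timeout=300,  # local inference can take a while
    )
    resp.raise_for_status()
    return resp.json()["response"]
```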
Application Endpoints:
- 🎨 Frontend: http://localhost:3000
- 🔧 Backend API: http://localhost:8000
- 🧠 Ollama: http://localhost:11434 (when using local AI)
See the end of this README for the full list of technologies used.
Before running this application outside of Docker, make sure you have:
- Node.js (v18 or higher)
- Python (v3.8 or higher)
- Docker (if running the local LLM via the provided Dockerfiles)
For faster resume optimization (~5-10 seconds), you can use Groq's free API:
- Go to console.groq.com
- Sign up for a free account (no credit card required)
- Generate an API key
- Use the API key in the frontend interface, or add it to `frontend/.env.local` as `NEXT_PUBLIC_DEFAULT_GROQ_API_KEY`
The easiest way to run ATS-Buddy is using Docker. This approach is completely self-contained and handles all dependencies automatically.
- Clone the repository

```bash
git clone https://github.com/Jared-Krajewski/ATS-Buddy.git
cd ATS-Buddy
```

- Quick start (everything in one command)

```bash
make start
```

For contributors and developers who want to work on ATS-Buddy:
Quick setup (recommended):
```bash
# Clone and set up development environment
git clone https://github.com/Jared-Krajewski/ATS-Buddy.git
cd ATS-Buddy
make dev-setup   # Install linting tools, pre-commit hooks, dev dependencies
make dev-groq    # Start development with Docker and hot reload
```

This sets up:
- Pre-commit hooks (automatic code formatting on commit)
- Local linting tools (for `make lint` and `make format` commands)
- All dependencies needed for development workflows
Development mode features, available with `make dev-groq` via `compose.dev-groq.yaml`:
- 🔥 Hot reload - Live code changes for both frontend and backend without rebuilding containers (via volume mounts)
- ⚡ Faster startup - No Ollama container needed; uses the Groq API for fast, cloud-processed results
- 💻 Lower resource usage - No local AI model requirements
If you prefer to run the application without Docker, follow these steps:
Manual setup including Ollama:
Backend:

```bash
cd backend

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install Playwright browsers
playwright install chromium
```

Frontend:

```bash
cd frontend

# Install dependencies
npm install
```

Start Ollama:

```bash
ollama serve
```

Start the backend:

```bash
cd backend
source venv/bin/activate
python main.py
```

Start the frontend:

```bash
cd frontend
npm run dev
```

Available Makefile commands:

```bash
# Basic operations
make start # Start all services
make stop # Stop all services
make restart # Restart all services
make logs # View logs from all services
# Development
make dev # Start in development mode (hot reload)
make dev-rebuild # Rebuild development services
# Groq API optimized development
make dev-groq # Start lightweight setup (frontend + backend only, optimized for Groq API)
make dev-groq-stop # Stop Groq development services
make dev-groq-logs # View Groq development logs
make dev-groq-rebuild # Rebuild Groq development services
# Management
make health # Check service health
make status # Show container status
# AI Model management
make pull-model # Download llama3.2 model
make list-models # List available models
# Troubleshooting
make clean # Clean everything, keeps build cache
make clean-images # Remove dangling Docker images
make clean-build-cache # Clear Docker build cache
make clobber # Nuclear option: Remove ALL Docker resources
make reset # Full reset and restart
# Individual service management
make start-ollama # Start only Ollama service
make start-backend # Start only backend service
make start-frontend # Start only frontend service
make rebuild-frontend # Rebuild and restart only frontend
# Monitoring and utilities
make backup-models # Backup AI models to local directory
make urls # Show all service URLs
# Development setup
make dev-setup # Set up local development tools (linting, formatting, pre-commit hooks)
# Help and information
make help             # Show all available commands
```

With the backend running, visit the ATS-Buddy API documentation (FastAPI's interactive docs, typically at http://localhost:8000/docs).
Health check endpoint that verifies Ollama connectivity and available models.
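For example, probing it from Python (the `/health` path is an assumption; confirm the exact route in the interactive docs):

```python
import requests

# Route assumed to be /health; adjust if the docs show otherwise.
print(requests.get("http://localhost:8000/health", timeout=10).json())
```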
Main optimization endpoint.
Request body:
```json
{
  "resume_text": "Your resume content...",
  "job_url": "https://example.com/job-posting",
  "use_groq": false,
  "groq_api_key": "optional_groq_api_key"
}
```

Response:
```json
{
  "optimized_resume": "Optimized resume content...",
  "job_description": "Extracted job description...",
  "changes_made": ["List of changes made..."]
}
```

Returns a list of available Ollama models.
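For example, a minimal client call to the optimization endpoint with Python's `requests` (the `/optimize` path is an assumption inferred from the shapes above; check the interactive docs for the real route):

```python
import requests

# Path assumed; the request/response bodies mirror the documented shapes.
resp = requests.post(
    "http://localhost:8000/optimize",
    json={
        "resume_text": "Your resume content...",
        "job_url": "https://example.com/job-posting",
        "use_groq": False,  # set True and supply groq_api_key for cloud mode
    },
    timeout=180,  # local Ollama runs can take 30-90 seconds
)
resp.raise_for_status()
result = resp.json()
print(result["changes_made"])
```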
Copy `.env.example` to `.env` and configure as needed:

```bash
cp .env.example .env
```

- Model Selection: Change the `DEFAULT_MODEL` in `backend/resume_optimizer.py`
- Port: Modify the port in `backend/main.py`
- CORS: Update allowed origins in `backend/main.py` for production
- API URL: Update the backend URL in `frontend/src/components/ResumeOptimizer.tsx`
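For instance, the model swap is a one-line edit (illustrative snippet; only the constant name and file come from the note above):

```python
# backend/resume_optimizer.py (illustrative; surrounding code assumed)
DEFAULT_MODEL = "llama3.2"  # any model pulled via `make pull-model` or `ollama pull`
```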
The frontend uses environment variables for testing configuration. To set up your development environment:
- Copy the example file:

  ```bash
  cd frontend
  cp .env.example .env.local
  ```

- Edit `.env.local` with your values:

  ```bash
  # Default job URL for testing (optional)
  NEXT_PUBLIC_DEFAULT_JOB_URL=https://your-test-job-url.com

  # Default Groq API key for testing (optional)
  NEXT_PUBLIC_DEFAULT_GROQ_API_KEY=your_groq_api_key_here

  # Backend API URL
  NEXT_PUBLIC_API_URL=http://localhost:8000
  ```
- `NEXT_PUBLIC_DEFAULT_JOB_URL` (optional): Pre-fills the job URL field
- `NEXT_PUBLIC_DEFAULT_GROQ_API_KEY` (optional): Pre-fills the Groq API key field and enables Groq by default
- `NEXT_PUBLIC_API_URL`: Backend API endpoint (defaults to `http://localhost:8000`)
- The `.env.local` file is ignored by git and won't be committed to the repository
- If `NEXT_PUBLIC_DEFAULT_GROQ_API_KEY` is provided, Groq mode will be enabled by default
- All environment variables are optional; the app will work without them
This project uses Husky for automated code quality checks before commits:
Pre-commit checks:
- Frontend files: ESLint automatically runs and fixes issues
- Backend files: Black formatting check
- Full linting: Available via `make lint` (includes flake8 + mypy; may hang for several seconds)
- Automatic: Runs automatically on `git commit`, no manual action needed
Manual code formatting:
```bash
# Using Makefile (recommended)
make lint # Check both frontend and backend
make lint-frontend # Check only frontend (ESLint + Prettier + TypeScript)
make lint-backend # Check only backend (Black + flake8 + mypy)
make format # Auto-fix all formatting issues
make lint-fix # Same as make format
# Frontend
cd frontend
npm run lint # Check and fix TypeScript/JavaScript
npm run format # Format with Prettier
# Backend
cd backend
python -m black . # Format Python code
python -m flake8 .   # Check style issues
```

After building the application on your development machine:
- To get your local IP, run in a terminal:

  ```bash
  ifconfig | grep "inet " | grep -v 127.0.0.1 | head -1
  ```

- Then on a mobile browser, navigate to `yourLocalIP:3000` (e.g., `192.168.0.77:3000`)
Once everything is running, open your browser to http://localhost:3000 and:
- Enter your resume: Paste your complete resume text into the left text area
- Add job URL: Enter the URL of the job listing you want to optimize for
- Choose AI backend: Select between local Ollama or Groq API (cloud processing)
- Local Mode: Uses smaller local models but will have longer processing time (~30-90 seconds)
- Groq API: Uses a free web API key (~5-10 second results, 6,000 requests/day)
- Click "Optimize Resume": The AI will analyze the job description and customize your resume
- Review results: See the optimized resume and categorized list of changes made
- Copy or download: Use the optimized resume for your application
- "No Ollama models available"
  - Make sure Ollama is running: `ollama serve`
  - Download a model: `ollama pull llama3.2`
- Web scraping fails
  - Some sites may block automated requests
  - Try different job URLs
- Frontend can't connect to backend
  - Ensure the backend is running on port 8000
  - Verify the API URL in the frontend
- Groq API issues
  - Verify your API key is correct (starts with `gsk_`)
  - Check you haven't exceeded the daily limit (6,000 requests)
  - Groq changed their free tier 😢
- Failed build for `make dev`
  - Fix Docker file sharing:
    - Open Docker Desktop
    - Go to Settings → Resources → File Sharing
    - Add ATS-Buddy to the shared paths
    - Click Apply & Restart
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes and test them
- Commit your changes: `git commit -m 'Add feature'`
- Push to the branch: `git push origin feature-name`
- Submit a pull request
This project is open source and available under the MIT License.