
ATS-Buddy

Automatically optimizes your resume for specific job listings using local LLM models (via Ollama) or Llama models via the Groq API.

  • The hosted version can be found at https://ats-buddy.onrender.com/.

    Due to free hosting, it may take up to a minute for the services to spin up on the initial request after periods of inactivity.


Features

  • 🧬 Multiple AI Backends: Uses Ollama for local LLM processing or Groq API for cloud processing
  • 🕷️ Web Scraping: Automatically extracts job descriptions from URLs
  • 🔒 Privacy-Focused: Local processing option - no data sent to external APIs when using Ollama

Architecture

ATS-Buddy follows a client-server architecture with AI-powered resume optimization. The system consists of three main components: a web frontend for user interaction, a REST API backend for processing, and optional local AI model hosting via Ollama.

System Overview

  1. User Interaction: Users paste their resume text or upload a PDF, and provide a job posting URL through the web interface
  2. Job Description Extraction: The backend scrapes and parses the job posting to extract key requirements
  3. AI-Powered Optimization: The system analyzes the resume against job requirements using either local LLM models (via Ollama) or cloud API (Groq)
  4. Result Delivery: Optimized resume with highlighted changes is returned to the user

Frontend (Web Interface)

  • Single-page React application built with Next.js
  • Handles user input (resume text, job URL, AI backend selection)
  • Displays optimization results with downloadable PDF & change tracking

Backend (API & Processing)

  • FastAPI-based REST server handling all business logic
  • Web scraping pipeline using Playwright for dynamic content and BeautifulSoup for HTML parsing (see the sketch after this list)
  • Dual AI backend support:
    • Local Mode: Integrates with Ollama for privacy-focused, offline processing
    • Cloud Mode: Uses Groq API for faster, cloud-based optimization
  • Resume parsing and PDF generation capabilities
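
As a rough illustration of the scraping pipeline (a minimal sketch, not the repository's actual scraper; the function name and selector-free get_text extraction are assumptions):

# Minimal scraping sketch: Playwright renders the page, BeautifulSoup parses it.
# Illustrative only -- the real pipeline in this repo may differ.
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

def fetch_job_description(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")  # let JS-heavy job boards render
        html = page.content()
        browser.close()
    # Strip markup and collapse whitespace to get plain job-posting text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)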

AI Integration

  • Ollama: Runs local LLM models (e.g., llama3.2) in Docker containers for complete data privacy
  • Groq API: Cloud-based inference for rapid processing with rate limits (6,000 requests/day free tier)
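
Both backends expose simple HTTP interfaces. A minimal sketch of calling each (the endpoint shapes are the public Ollama and Groq APIs; the prompt and Groq model name are placeholders, not this project's actual code):

import requests

def call_ollama(prompt: str, model: str = "llama3.2") -> str:
    # Ollama's local REST API; no data leaves the machine
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
    )
    r.raise_for_status()
    return r.json()["response"]

def call_groq(prompt: str, api_key: str, model: str = "llama-3.1-8b-instant") -> str:
    # Groq's OpenAI-compatible chat completions endpoint
    r = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]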

Application endpoints are documented in the API Endpoints section below.

See the end of this README for the full list of technologies used.

Prerequisites

Before running this application outside of Docker, make sure you have:

  1. Node.js (v18 or higher)
  2. Python (v3.8 or higher)
  3. Docker (if running the local LLM via the provided Dockerfiles)

Getting a Groq API Key (optional, for cloud-based processing)

For faster resume optimization (~5-10 seconds), you can use Groq's free API:

  1. Go to console.groq.com
  2. Sign up for a free account (no credit card required)
  3. Generate an API key
  4. Use the API key in the frontend interface or add it to .env.local as NEXT_PUBLIC_DEFAULT_GROQ_API_KEY

Quick Start with Docker (Recommended)

The easiest way to run ATS-Buddy is using Docker. This approach is completely self-contained and handles all dependencies automatically.

Run with Docker

  1. Clone the repository
git clone https://github.com/Jared-Krajewski/ATS-Buddy.git
cd ATS-Buddy
  2. Quick start (everything in one command)
make start

Development

For contributors and developers who want to work on ATS-Buddy:

Quick setup (recommended):

# Clone and set up development environment
git clone https://github.com/Jared-Krajewski/ATS-Buddy.git
cd ATS-Buddy
make dev-setup    # Install linting tools, pre-commit hooks, dev dependencies
make dev-groq     # Start development with Docker and hot reload

This sets up:

  • Pre-commit hooks (automatic code formatting on commit)
  • Local linting tools (for make lint, make format commands)
  • All dependencies needed for development workflows

Development mode features available with make dev-groq via compose.dev-groq.yaml:

  • 🔥 Hot reload - Live code changes for both frontend and backend without rebuilding containers (via volume mounts)
  • ⚡ Faster startup - No Ollama container needed; uses the Groq API for fast cloud processing
  • 💻 Lower resource usage - No local AI model requirements

Manual Setup (Alternative)

If you prefer to run the application without Docker, follow these steps:

Manual setup including Ollama:

1. Set up the Backend

cd backend

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install Playwright browsers
playwright install chromium

2. Set up the Frontend

cd frontend

# Install dependencies
npm install

Running the Application

3. Start Ollama (if not already running)

ollama serve

4. Start the Backend Server

cd backend
source venv/bin/activate
python main.py

5. Start the Frontend

cd frontend
npm run dev

Makefile Commands

# Basic operations
make start          # Start all services
make stop           # Stop all services
make restart        # Restart all services
make logs           # View logs from all services

# Development
make dev            # Start in development mode (hot reload)
make dev-rebuild    # Rebuild development services

# Groq API optimized development
make dev-groq       # Start lightweight setup (frontend + backend only, optimized for Groq API)
make dev-groq-stop  # Stop Groq development services
make dev-groq-logs  # View Groq development logs
make dev-groq-rebuild # Rebuild Groq development services

# Management
make health         # Check service health
make status         # Show container status

# AI Model management
make pull-model     # Download llama3.2 model
make list-models    # List available models

# Troubleshooting
make clean          # Clean everything, keeps build cache
make clean-images   # Remove dangling Docker images
make clean-build-cache # Clear Docker build cache
make clobber        # Nuclear option: Remove ALL Docker resources
make reset          # Full reset and restart

# Individual service management
make start-ollama   # Start only Ollama service
make start-backend  # Start only backend service
make start-frontend # Start only frontend service
make rebuild-frontend # Rebuild and restart only frontend

# Monitoring and utilities
make backup-models  # Backup AI models to local directory
make urls           # Show all service URLs

# Development setup
make dev-setup      # Set up local development tools (linting, formatting, pre-commit hooks)

# Help and information
make help           # Show all available commands

API Endpoints

With the backend running, interactive API documentation is generated automatically by FastAPI (served at /docs by default, e.g. http://localhost:8000/docs).

GET /health

Health check endpoint that verifies Ollama connectivity and available models.
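
For example, with the backend on its default port (a quick sanity check; the exact response shape depends on the backend's implementation):

import requests

# Assumes the backend is running locally on port 8000
print(requests.get("http://localhost:8000/health").json())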

POST /optimize-resume

Main optimization endpoint.

Request body:

{
  "resume_text": "Your resume content...",
  "job_url": "https://example.com/job-posting",
  "use_groq": false,
  "groq_api_key": "optional_groq_api_key"
}

Response:

{
  "optimized_resume": "Optimized resume content...",
  "job_description": "Extracted job description...",
  "changes_made": ["List of changes made..."]
}
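
A minimal Python client for this endpoint, built from the request and response bodies documented above (the local URL and timeout value are assumptions):

import requests

payload = {
    "resume_text": "Your resume content...",
    "job_url": "https://example.com/job-posting",
    "use_groq": False,       # True to use Groq instead of local Ollama
    "groq_api_key": None,    # only needed when use_groq is True
}
# Local optimization can take a while (~30-90 s), so allow a generous timeout
resp = requests.post("http://localhost:8000/optimize-resume", json=payload, timeout=180)
resp.raise_for_status()
result = resp.json()
print(result["optimized_resume"])
print(result["changes_made"])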

GET /models

Returns list of available Ollama models.

Configuration

Environment Variables

Copy .env.example to .env and configure as needed:

cp .env.example .env

Backend Configuration

  • Model Selection: Change the DEFAULT_MODEL in backend/resume_optimizer.py
  • Port: Modify the port in backend/main.py
  • CORS: Update allowed origins in backend/main.py for production (see the sketch below)
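
For the CORS change, a typical FastAPI setup looks like this (a sketch; the origin list is illustrative, not copied from backend/main.py):

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Restrict origins in production instead of allowing "*"
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000", "https://your-production-domain.com"],
    allow_methods=["*"],
    allow_headers=["*"],
)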

Frontend Configuration

  • API URL: Update the backend URL in frontend/src/components/ResumeOptimizer.tsx

Frontend Environment Variables

The frontend uses environment variables for testing configuration. To set up your development environment:

  1. Copy the example file:

    cd frontend
    cp .env.example .env.local
  2. Edit .env.local with your values:

    # Default job URL for testing (optional)
    NEXT_PUBLIC_DEFAULT_JOB_URL=https://your-test-job-url.com
    
    # Default Groq API key for testing (optional)
    NEXT_PUBLIC_DEFAULT_GROQ_API_KEY=your_groq_api_key_here
    
    # Backend API URL
    NEXT_PUBLIC_API_URL=http://localhost:8000

Available Environment Variables

  • NEXT_PUBLIC_DEFAULT_JOB_URL (optional): Pre-fills the job URL field
  • NEXT_PUBLIC_DEFAULT_GROQ_API_KEY (optional): Pre-fills the Groq API key field and enables Groq by default
  • NEXT_PUBLIC_API_URL: Backend API endpoint (defaults to http://localhost:8000)

Notes

  • The .env.local file is ignored by git and won't be committed to the repository
  • If NEXT_PUBLIC_DEFAULT_GROQ_API_KEY is provided, Groq mode will be enabled by default
  • All environment variables are optional - the app will work without them

Code Quality & Pre-commit Hooks

This project uses Husky for automated code quality checks before commits:

Pre-commit checks:

  • Frontend files: ESLint automatically runs and fixes issues
  • Backend files: Black formatting check
  • Full linting: Available via make lint (includes flake8 + mypy; may hang for several seconds)
  • Automatic: Runs automatically on git commit, no manual action needed

Manual code formatting:

# Using Makefile (recommended)
make lint           # Check both frontend and backend
make lint-frontend  # Check only frontend (ESLint + Prettier + TypeScript)
make lint-backend   # Check only backend (Black + flake8 + mypy)
make format         # Auto-fix all formatting issues
make lint-fix       # Same as make format

# Frontend
cd frontend
npm run lint    # Check and fix TypeScript/JavaScript
npm run format  # Format with Prettier

# Backend
cd backend
python -m black .      # Format Python code
python -m flake8 .     # Check style issues

Mobile Test

After building the application on your development machine:

  1. Get your local IP by running in a terminal:
ifconfig | grep "inet " | grep -v 127.0.0.1 | head -1
  2. Then in a mobile browser, navigate to yourLocalIP:3000, e.g. 192.168.0.77:3000

App Usage

Once everything is running, open your browser to http://localhost:3000 and:

  1. Enter your resume: Paste your complete resume text into the left text area
  2. Add job URL: Enter the URL of the job listing you want to optimize for
  3. Choose AI backend: Select between local Ollama or Groq API (cloud processing)
    • Local Mode: Uses smaller local models with longer processing times (~30-90 seconds)
    • Groq API: Uses a free API key (~5-10 second results, 6,000 requests/day)
  4. Click "Optimize Resume": The AI will analyze the job description and customize your resume
  5. Review results: See the optimized resume and categorized list of changes made
  6. Copy or download: Use the optimized resume for your application

Troubleshooting

Common Issues

  1. "No Ollama models available"

    • Make sure Ollama is running: ollama serve
    • Download a model: ollama pull llama3.2
  2. Web scraping fails

    • Some sites may block automated requests
    • Try different job URLs
  3. Frontend can't connect to backend

    • Ensure backend is running on port 8000
    • Verify the API URL in the frontend
  4. Groq API issues

    • Verify your API key is correct (starts with gsk_)
    • Check you haven't exceeded the daily limit (6,000 requests)
    • Groq changed their free tier 😢
  5. Failed build for make dev

    • Fix Docker File Sharing
      1. Open Docker Desktop
      2. Go to Settings → Resources → File Sharing
      3. Add ATS-Buddy to the shared paths
      4. Click Apply & Restart

Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature-name
  3. Make your changes and test them
  4. Commit your changes: git commit -m 'Add feature'
  5. Push to the branch: git push origin feature-name
  6. Submit a pull request

License

This project is open source and available under the MIT License.

Tech used

Frontend

  • Next.js, React, TypeScript

Backend

  • Python, FastAPI, Playwright, BeautifulSoup

Dev/code quality

  • ESLint, Prettier, Husky, pre-commit, Black, flake8, mypy

Other Tools

  • Docker, Ollama, Groq API, Make
