Production-ready Amazon scraping scripts in Python using Selenium for extracting structured ecommerce data.
Includes scrapers for product pages (ASIN /dp/), search results (SERP /s?k=...), and category/browse pages (node=...).
All scrapers in this directory are built for reliability at scale, with built-in proxy rotation, retries, and anti-bot handling via ScrapeOps. They were generated automatically from real Amazon URLs using AI, are designed to survive common Amazon anti-bot defenses, and are intended as reference-quality scrapers, not demos or proofs of concept.
See also:
`../README.md` (Python overview) and `../../../README.md` (Amazon scrapers overview).
| Scraper | Best for | Start here |
|---|---|---|
| Product pages (`/dp/{ASIN}`) | Full product details (price, rating, images, seller, etc.) | product/product_data/README.md |
| Search results (`/s?k=...`) | SERP analysis + many products per page | product/product_search/README.md |
| Category pages (`node=` / browse) | Category/browse listing extraction | product/product_category/README.md |
| Reviews | Review extraction | Coming soon |
| Sellers | Seller profile extraction | Coming soon |
- 📦 Available Scrapers
- 🚀 Quick Start
- 🛠️ Why Selenium?
- 📋 Common Use Cases
- 🔑 Get ScrapeOps API Key
- ⚙️ Dependencies & Setup
- 📚 Scraper Documentation
- 🔄 Alternative Implementations
## 📦 Available Scrapers

This directory contains the following Amazon scrapers:
- **Product Data Scraper** — Extract product data from Amazon product pages
  - Individual product pages (ASIN-based) with full details, reviews, ratings, specifications
  - Uses Selenium with undetected-chromedriver for browser automation
  - Handles dynamic content and JavaScript-rendered pages
- **Product Search Scraper** — Extract search results from Amazon SERP
  - Search results pages (SERP) with multiple products, pagination, sponsored ads
  - Extracts organic products, sponsored placements, search metadata, and related searches
  - Supports pagination for scraping multiple pages of results
- **Product Category Scraper** — Extract category/browse page data
  - Category/browse pages (node ID-based) with category information, products, and navigation links
  - Extracts category metadata, products listed on category pages, subcategories, and filters
- **Reviews Scraper** — Extract product reviews and ratings
  - Coming soon — Scraper implementation in development
- **Sellers Scraper** — Extract seller information and profiles
  - Coming soon — Scraper implementation in development
- `product/` — Product page + search results + category scrapers (code + examples)
  - `product_data/` — Product page scraper (ASIN-based)
  - `product_search/` — Search results scraper (SERP)
  - `product_category/` — Category page scraper (node ID-based)
- `reviews/` — Reviews documentation (scraper implementation coming soon)
- `sellers/` — Sellers documentation (scraper implementation coming soon)
## 🚀 Quick Start

- ✅ Python 3.7+
- ✅ ScrapeOps API key (Get one free)
- ✅ Dependencies: `undetected-chromedriver`, `selenium-wire`, `selenium`
1. Install dependencies:

   ```shell
   pip install undetected-chromedriver selenium-wire selenium
   ```

   Note: `undetected-chromedriver` automatically downloads and manages ChromeDriver, so no separate browser installation step is needed.
2. Get your ScrapeOps API key:

   - Sign up at ScrapeOps (free account)
   - Copy your API key from the dashboard
   - Set it as an environment variable:

     ```shell
     # macOS/Linux
     export SCRAPEOPS_API_KEY="your-api-key"

     # Windows PowerShell
     $env:SCRAPEOPS_API_KEY="your-api-key"
     ```

   - Or edit the `API_KEY` variable in the scraper file directly
3. Run a scraper:

   ```shell
   # Product Page Scraper
   python product/product_data/amazon.com_scraper_product_v1.py

   # Search Results Scraper
   python product/product_search/amazon.com_scraper_product_search_v1.py

   # Category Page Scraper
   python product/product_category/amazon.com_scraper_product_category_v1.py
   ```
👉 Start with product/product_data/README.md for usage, schemas, and examples.
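Under the hood, the v1 scripts combine undetected-chromedriver with selenium-wire to route browser traffic through the ScrapeOps proxy. The sketch below shows the general shape of that setup; the proxy host, port, and credential format are assumptions, so confirm them against your ScrapeOps dashboard and the scraper source.

```python
# Sketch only: the proxy endpoint/credential format below is an assumption,
# not taken from the scraper source -- verify against your ScrapeOps dashboard.
import os


def build_seleniumwire_options(api_key):
    """Build selenium-wire options that route browser traffic through a proxy."""
    proxy = f"http://scrapeops:{api_key}@proxy.scrapeops.io:5353"
    return {
        "proxy": {
            "http": proxy,
            "https": proxy,
            "no_proxy": "localhost,127.0.0.1",  # keep local traffic direct
        }
    }


def fetch_page_title(url):
    """Open a page through the proxied, undetected Chrome and return its title."""
    # Imported lazily so the config helper above stays usable without a browser.
    import seleniumwire.undetected_chromedriver as uc

    options = build_seleniumwire_options(os.environ["SCRAPEOPS_API_KEY"])
    driver = uc.Chrome(seleniumwire_options=options)
    try:
        driver.get(url)
        return driver.title
    finally:
        driver.quit()
```

The production scrapers layer retries and parsing on top of this; see the per-scraper READMEs for the real configuration.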
## 🛠️ Why Selenium?

Selenium is an excellent choice for Amazon scraping because:
- Browser Automation — Full browser automation with Chrome, Firefox, or other browsers
- JavaScript Support — Handles JavaScript-heavy sites and dynamic content
- Undetected ChromeDriver — Uses undetected-chromedriver to avoid detection
- Realistic Behavior — Mimics real browser behavior for better success rates
- Threading Support — ThreadPoolExecutor for efficient concurrent scraping
- Production-Ready — Robust error handling and retry mechanisms
- Wide Browser Support — Supports multiple browsers (Chrome, Firefox, Edge, etc.)
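The threading point above boils down to a standard `ThreadPoolExecutor` pattern. A minimal sketch, where `scrape_one` is a hypothetical stand-in for a function that drives a browser against one URL:

```python
# Sketch of the concurrency pattern; scrape_one is a hypothetical stand-in
# for a function that launches a driver, loads the page, and parses it.
from concurrent.futures import ThreadPoolExecutor


def scrape_one(url):
    # Placeholder result; a real implementation would return parsed page data.
    return {"url": url, "status": "ok"}


def scrape_many(urls, max_workers=3):
    """Scrape URLs concurrently; each worker thread runs its own scrape call."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order even though workers finish out of order
        return list(pool.map(scrape_one, urls))
```

Each browser instance is heavyweight, so keep `max_workers` small relative to available memory and CPU.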
| Framework | Best For | Complexity |
|---|---|---|
| BeautifulSoup | Simple HTML parsing, quick scripts | ⭐ Low |
| Scrapy | Large-scale crawling, pipelines | ⭐⭐ Medium |
| Playwright | Modern browser automation, async support | ⭐⭐⭐ High |
| Selenium | Browser automation, legacy support | ⭐⭐⭐ High |
Selenium is ideal when you need browser automation with wide browser support or when working with legacy systems that require Selenium WebDriver.
All scrapers in this directory use Selenium with undetected-chromedriver and seleniumwire for proxy support, and output structured JSONL (see per-scraper docs).
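Because each scraper emits JSONL (one JSON object per line), downstream code can consume the output generically. A small sketch; the filename and `price` field in the usage comment are illustrative, not the documented schema:

```python
# Generic JSONL reader; the filename and field names in the usage example
# are illustrative -- see each scraper's README for the real output schema.
import json


def load_jsonl(path):
    """Parse a JSONL file into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# Hypothetical usage:
# records = load_jsonl("amazon_product_search_results.jsonl")
# cheapest = min(records, key=lambda r: r.get("price") or float("inf"))
```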
## 📋 Common Use Cases

- Amazon price monitoring and tracking
- Product catalog ingestion
- Competitive pricing analysis
- Review and rating aggregation
- Search results analysis (SERP)
- Product discovery and catalog building
- Category hierarchy mapping and navigation
- Market research and trend analysis
- Ecommerce data pipelines
- Category-based product listings
- Dynamic content extraction (JavaScript-rendered pages)
- Browser-based scraping for anti-bot protected sites
## 🔑 Get ScrapeOps API Key

All Selenium scrapers require a ScrapeOps API key to access the proxy service.
- Visit the ScrapeOps registration page
- Sign up for a free account
- Navigate to your dashboard to retrieve your API key
**Method 1: Direct Assignment (Quick Start)**

1. Open the scraper file you want to use
2. Locate the `API_KEY` variable near the top of the file
3. Replace the placeholder with your actual ScrapeOps API key:

   ```python
   API_KEY = "your-actual-api-key-here"
   ```
**Method 2: Environment Variable (Recommended for Production)**
For better security, use environment variables:
1. Set the environment variable:

   ```shell
   # macOS/Linux
   export SCRAPEOPS_API_KEY="your-actual-api-key-here"

   # Windows PowerShell
   $env:SCRAPEOPS_API_KEY="your-actual-api-key-here"
   ```
2. Modify the code to read from the environment:

   ```python
   import os

   API_KEY = os.getenv("SCRAPEOPS_API_KEY", "your-default-key")
   ```
Note: Some v1 scripts read from a hardcoded `API_KEY` constant. If so, either edit `API_KEY` directly or update the script to use `os.getenv("SCRAPEOPS_API_KEY")`.
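When updating a script to use the environment variable, it can help to fail fast if the key is missing rather than sending an empty key to the proxy. A sketch, not code from the v1 scripts:

```python
# Sketch: fail fast when the key is absent instead of sending an empty key
# to the proxy service. Not taken from the v1 scripts.
import os


def get_api_key():
    key = os.getenv("SCRAPEOPS_API_KEY")
    if not key:
        raise RuntimeError(
            "SCRAPEOPS_API_KEY is not set. Export it or edit API_KEY in the script."
        )
    return key
```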
## ⚙️ Dependencies & Setup

All Selenium scrapers require the following Python packages:

```shell
pip install undetected-chromedriver selenium-wire selenium
```

- `undetected-chromedriver` — Undetected ChromeDriver that automatically downloads and manages ChromeDriver
- `selenium-wire` — Selenium extension for capturing and modifying network requests (imported as `seleniumwire`; used for proxy support)
- `selenium` — Browser automation library for web scraping
undetected-chromedriver automatically downloads and manages ChromeDriver, so no separate installation step is needed. The first time you run a scraper, it will automatically download the appropriate ChromeDriver version.
Using pip:

```shell
pip install undetected-chromedriver selenium-wire selenium
```

Using requirements.txt:

```shell
pip install -r requirements.txt
```

(If a `requirements.txt` file is available in the scraper directory.)
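If you prefer pinned dependencies, a minimal `requirements.txt` for these scrapers might look like the following; the version floors are illustrative, not taken from this repository:

```text
# Illustrative pins -- adjust to the versions you have tested.
undetected-chromedriver>=3.5
selenium-wire>=5.1
selenium>=4.0
```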
## 📚 Scraper Documentation

Comprehensive documentation for the product page, search results, and category page scrapers:
- **Product Data Scraper README** — Complete guide for product page scraping
  - Product page scraping (ASIN-based)
  - Full product details extraction
  - Output schemas and examples
  - Configuration and usage examples
  - Browser automation setup
- **Product Search Scraper README** — Complete guide for search results scraping
  - Search results scraping (SERP)
  - Multiple products per page extraction
  - Pagination strategies
  - Sponsored products and related searches
  - Output schemas and examples
- **Product Category Scraper README** — Complete guide for category page scraping
  - Category page scraping (node ID-based)
  - Category information and metadata
  - Products listed on category pages
  - Subcategories and navigation links
  - Output schemas and examples
- **Reviews Scraper** — Documentation for product reviews scraping
  - Implementation in development
- **Sellers Scraper** — Documentation for seller information scraping
  - Implementation in development
## 🔄 Alternative Implementations

These Amazon scrapers are available in multiple Python frameworks. Explore alternative implementations that may better suit your needs:
- BeautifulSoup Framework — Simple HTML parsing, fast and lightweight
- Scrapy Framework — Full-featured crawling framework for large-scale scraping
- Playwright Framework — Modern browser automation with async support
- Selenium (This directory) — Browser automation with Selenium WebDriver
- Node.js Implementations — JavaScript/TypeScript scrapers with Playwright and Puppeteer
- Go Implementations — Go-based HTTP client scrapers
- Amazon Scrapers Main README — Canonical website-level README with overview of all Amazon scrapers and implementations
These scrapers are provided for educational and research purposes.
You are responsible for ensuring your use complies with Amazon's terms of service and applicable laws in your jurisdiction.