
Pinned

  1. vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 66.1k stars · 12.2k forks

  2. llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.5k stars · 334 forks

  3. recipes (Public)

    Common recipes to run vLLM

    Jupyter Notebook · 300 stars · 111 forks

  4. speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 173 stars · 22 forks

  5. semantic-router (Public)

    Intelligent Router for Mixture-of-Models

    Go · 2.6k stars · 358 forks

Repositories

Showing 10 of 31 repositories
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 66,088 stars · Apache-2.0 · 12,160 forks · 1,843 open issues (42 need help) · 1,308 open PRs · Updated Dec 24, 2025
  • vllm-gaudi Public

    Community maintained hardware plugin for vLLM on Intel Gaudi

    Python · 21 stars · Apache-2.0 · 87 forks · 1 open issue · 66 open PRs · Updated Dec 24, 2025
  • ci-infra Public

    Code for vLLM's CI and performance-benchmark infrastructure.

    HCL · 27 stars · Apache-2.0 · 53 forks · 0 open issues · 26 open PRs · Updated Dec 24, 2025
  • vllm-ascend Public

    Community maintained hardware plugin for vLLM on Ascend

    Python · 1,495 stars · Apache-2.0 · 677 forks · 801 open issues (8 need help) · 286 open PRs · Updated Dec 24, 2025
  • vllm-daily Public

    Daily summaries of merged vLLM pull requests

    14 stars · 0 forks · 0 open issues · 0 open PRs · Updated Dec 24, 2025
  • semantic-router Public

    Intelligent Router for Mixture-of-Models

    Go · 2,571 stars · Apache-2.0 · 358 forks · 94 open issues (13 need help) · 36 open PRs · Updated Dec 24, 2025
  • vllm-omni Public

    A framework for efficient model inference with omni-modality models

    Python · 1,497 stars · Apache-2.0 · 197 forks · 99 open issues (32 need help) · 49 open PRs · Updated Dec 24, 2025
  • tpu-inference Public

    TPU inference for vLLM, with unified JAX and PyTorch support.

    Python · 200 stars · Apache-2.0 · 63 forks · 16 open issues (1 needs help) · 80 open PRs · Updated Dec 24, 2025
  • vllm-xpu-kernels Public

    vLLM XPU kernels for Intel GPUs

    C++ · 13 stars · Apache-2.0 · 16 forks · 1 open issue · 4 open PRs · Updated Dec 23, 2025
  • guidellm Public

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 765 stars · Apache-2.0 · 108 forks · 45 open issues (5 need help) · 17 open PRs · Updated Dec 24, 2025