llm-d Logo

Achieve SOTA Inference Performance On Any Accelerator


Latest News πŸ”₯

  • [2025-10] Our v0.3 release delivers Intel XPU and Google TPU support, TCP and RDMA over RoCE tested with disaggregation, new experimental predicted latency balancing for up to 3x better P90 latency on long prefill, DeepSeek Expert Parallel serving reaching 2.7k output tokens/s/GPU on H200, and integration of the Inference Gateway v1.0 GA release.
  • [2025-08] Read more about the intelligent inference scheduler, including a deep dive on how different balancing techniques are composed to improve throughput without overloading replicas.

πŸ“„ About

llm-d is a Kubernetes-native distributed inference serving stack providing well-lit paths for anyone to serve large generative AI models at scale. We help you achieve the fastest "time to state-of-the-art (SOTA) performance" for key OSS models across most hardware accelerators and infrastructure providers.

Our well-lit paths provide tested and benchmarked recipes and Helm charts to start serving quickly with best practices common to production deployments. They are extensible and customizable for particulars of your models and use cases, using popular open source components like Kubernetes, Envoy proxy, NIXL, and vLLM. Our intent is to eliminate the heavy lifting common in tuning and deploying inference at scale.

We currently offer three tested and benchmarked paths to help you deploy large models; a sketch of what a client request looks like against any of them follows the list:

  1. Intelligent Inference Scheduling - Deploy vLLM behind the Inference Gateway (IGW) to decrease serving latency and increase throughput with predicted latency balancing (experimental), precise prefix-cache aware routing, and customizable scheduling policies.
  2. Prefill/Decode Disaggregation - Reduce time to first token (TTFT) and get more predictable time per output token (TPOT) by splitting inference into prefill servers handling prompts and decode servers handling responses, primarily on large models such as Llama-70B and when processing very long prompts.
  3. Wide Expert-Parallelism - Deploy very large Mixture-of-Experts (MoE) models like DeepSeek-R1 and significantly reduce end-to-end latency and increase throughput by scaling up with Data Parallelism and Expert Parallelism over fast accelerator networks.
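
All three paths place vLLM behind the Inference Gateway, so from a client's perspective the result is a single OpenAI-compatible endpoint. The sketch below shows roughly what a request looks like once a path is deployed; the gateway address and model name are placeholders for illustration, not values shipped by llm-d.

```shell
# Minimal request sketch against a deployed well-lit path.
# GATEWAY and the model name are hypothetical placeholders; substitute whatever
# endpoint and model your deployment actually exposes.
GATEWAY=http://llm-d-gateway.example.internal    # hypothetical gateway address

curl -sS "${GATEWAY}/v1/completions" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.1-70B-Instruct",
        "prompt": "Explain prefill/decode disaggregation in one sentence.",
        "max_tokens": 64
      }'
```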

Hardware Support

llm-d directly tests and validates multiple accelerator types, including NVIDIA GPUs, AMD GPUs, Google TPUs, and Intel XPUs, and provides common operational patterns to improve production reliability.

See the accelerator docs for points of contact, more details about the accelerators, networks, and configurations tested, and our roadmap for what is coming next.

What is in scope for llm-d

llm-d currently targets improving the production serving experience around:

  • Online serving and online batch inference of generative models running in PyTorch or JAX
    • Large language models (LLMs) with 1 billion or more parameters
    • Using most or all of the capacity of one or more hardware accelerators
    • Running in throughput, latency, or multiple-objective configurations
  • On recent generation datacenter-class accelerators
    • NVIDIA A100 / L4 or newer
    • AMD MI250 or newer
    • Google TPU v5e or newer
    • Intel Data Center GPU Max (XPU/Ponte Vecchio) series or newer
  • With extremely fast accelerator interconnect and datacenter networking
    • 600-16,000 Gbps per accelerator NVLINK on host or across narrow domains like NVL72
    • 1,600-5,000 Gbps per chip TPU OCS links within TPU pods
    • 100-1,600 Gbps per host datacenter networking across broad (>128 host) domains
  • Kubernetes 1.29+ as the hardware orchestrator
    • in large (100-100k node) reserved cloud capacity or datacenters, often overlapping with AI batch and training
    • in medium (10-1k node) cloud deployments with a mix of reserved, on-demand, or spot capacity
    • in small (1-10 node) test and qualification environments with a static footprint, often time shared

Our upstream projects, particularly vLLM and Kubernetes, support a broader array of models, accelerators, and networks that may also benefit from our work, but we concentrate on optimizing and standardizing the operational and automation challenges of leading-edge inference workloads.

🧱 Architecture

llm-d accelerates distributed inference by integrating industry-standard open technologies: vLLM as the default model server and engine, Inference Gateway as the request scheduler and balancer, and Kubernetes as the infrastructure orchestrator and workload control plane.

llm-d Arch

Key features of llm-d include:

  • vLLM-Optimized Inference Scheduler: llm-d builds on IGW's pattern of leveraging the Envoy proxy and its extensible balancing policies to make customizable "smart" load-balancing decisions specifically for LLMs. Leveraging operational telemetry, the Inference Scheduler implements the filtering and scoring algorithms to make decisions with P/D-, KV-cache-, SLA-, and load-awareness. Advanced users can implement their own scorers to further customize the algorithm while benefiting from IGW features like flow control and latency-aware balancing. See our Northstar design

  • Disaggregated Serving with vLLM: llm-d orchestrates prefill and decode phases onto independent instances - the scheduler decides which instances should receive a given request, and the transaction is coordinated via a sidecar alongside decode instances. The sidecar instructs vLLM to perform point-to-point KV cache transfer over fast interconnects (IB/RoCE RDMA, TPU ICI, and DCN) via NIXL. See our Northstar design

  • Disaggregated Prefix Caching: llm-d uses vLLM's KVConnector abstraction to configure a pluggable KV cache hierarchy, including offloading KVs to host, remote storage, and systems like LMCache. We plan to support two KV caching schemes. See our Northstar design

    • Independent (N/S) caching with offloading to local memory and disk, providing a zero operational cost mechanism for offloading.
    • Shared (E/W) caching with KV transfer between instances and shared storage with global indexing, providing potential for higher performance at the cost of a more operationally complex system.
  • Variant Autoscaling over Hardware, Workload, and Traffic: A traffic- and hardware-aware autoscaler that (a) measures the capacity of each model server instance, (b) derives a load function that takes into account different request shapes and QoS, and (c) assesses the recent traffic mix (QPS, QoS, and shapes) to calculate the optimal mix of instances to handle prefill, decode, and latency-tolerant requests, enabling use of HPA for SLO-level efficiency. See our Northstar design

For more see the project proposal.
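
A rough way to picture the disaggregated serving described above is as two vLLM launches wired together by a KV connector: one instance handles prefill and produces KV cache, the other consumes it and decodes. In an llm-d deployment the Helm charts and the decode-side sidecar render this configuration for you; the flags and field values below follow vLLM's KV-transfer connector interface but should be read as assumptions that may differ across vLLM versions.

```shell
# Sketch only: standalone prefill and decode vLLM instances joined by a NIXL
# KV connector. Flag names and role values are assumptions based on vLLM's
# KV-transfer configuration and may vary by version; llm-d's charts normally
# generate this wiring for you.

# Prefill instance: processes prompts and hands off KV cache.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --port 8100 \
  --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_producer"}'

# Decode instance: receives KV cache over the fast interconnect and streams tokens.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --port 8200 \
  --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_consumer"}'
```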

πŸš€ Getting Started

llm-d can be installed as a full solution with customizable feature selection, or through its individual components for experimentation.

Prerequisites

llm-d requires accelerators capable of running large models. Our well-lit paths are focused on datacenter accelerators and networks, and issues encountered outside these environments may not receive the same level of attention.

See the prerequisites for our guides for more details on supported hardware, networking, Kubernetes cluster configuration, and client tooling.

Deploying llm-d

llm-d provides Helm charts that deploy the inference scheduler and a parameterized deployment of vLLM that demonstrates a number of different production configurations.

We bundle these with guides to our well-lit paths covering key decisions, tradeoffs, benchmarks, and recommended configurations.

We suggest the inference scheduling well-lit path if you need a simple, production-ready deployment of vLLM with optimized load balancing.
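
The install flow for that path looks roughly like a standard Helm workflow. The repository URL, chart name, release name, and values file below are placeholders for illustration; the guide for each well-lit path documents the exact charts and values to use.

```shell
# Hypothetical Helm install of the inference scheduling path.
# Repo URL, chart name, release name, and values file are placeholders;
# consult the well-lit path guides for the real ones.
helm repo add llm-d https://example.com/llm-d-charts   # placeholder repo URL
helm repo update

helm install my-llm-d llm-d/llm-d \
  --namespace llm-d --create-namespace \
  -f my-values.yaml                                    # model + accelerator settings

# Watch the scheduler and model-server pods come up.
kubectl get pods -n llm-d --watch
```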

Tip

For a more in-depth introduction to llm-d, try our step-by-step quickstart.

Experimenting and developing with llm-d

llm-d is composed of multiple component repositories and derives from both vLLM and Inference Gateway upstreams. Please see the individual repositories for more guidance on development.

πŸ“¦ Releases

Our guides are living docs and kept current. For details about the Helm charts and component releases, visit our GitHub Releases page to review release notes.

Check out our roadmap for upcoming releases.

Contribute

  • See our project overview for more details on our development process and governance.
  • Review our contributing guidelines for detailed information on how to contribute to the project.
  • Join one of our Special Interest Groups (SIGs) to contribute to specific areas of the project and collaborate with domain experts.
  • We use Slack to discuss development across organizations. Please join: Slack
  • We host a weekly standup for contributors on Wednesdays at 12:30 PM ET, as well as meetings for various SIGs. You can find them in the shared llm-d calendar.
  • We use Google Groups to share architecture diagrams and other content. Please join: Google Group

License

This project is licensed under Apache License 2.0. See the LICENSE file for details.
