## Background

The **InferencePool** API defines a group of Pods (containers) dedicated to serving AI models. Pods within an InferencePool share the same compute configuration, accelerator type, base language model, and model server. This abstraction simplifies the management of AI model serving resources, providing a centralized point of administrative configuration for Platform Admins.

An InferencePool is expected to be bundled with an [Endpoint Picker](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/pkg/epp) (EPP) extension. The EPP is responsible for tracking key metrics on each model server (e.g. KV-cache utilization, the queue length of pending requests, and active LoRA adapters) and for routing each incoming inference request to the optimal model server replica based on these metrics. An EPP can only be associated with a single InferencePool; the associated InferencePool is specified by the [poolName](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/config/manifests/inferencepool-resources.yaml#L54) and [poolNamespace](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/config/manifests/inferencepool-resources.yaml#L56) flags. An HTTPRoute may have multiple backendRefs that reference the same InferencePool (and therefore route to the same EPP), or backendRefs that reference different InferencePools (and therefore route to different EPPs).
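
For example, a Gateway API `HTTPRoute` can send traffic to an InferencePool by referencing it as a backend using the extension's API group and kind. The following sketch is illustrative; the route, Gateway, and pool names are assumptions, not required values:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route                    # illustrative name
spec:
  parentRefs:
  - name: inference-gateway          # illustrative Gateway name
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: inference.networking.x-k8s.io   # InferencePool API group
      kind: InferencePool
      name: vllm-llama3-8b-instruct          # the InferencePool backing this route
```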

Additionally, any Pod that seeks to join an InferencePool would need to support the [model server protocol](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/docs/proposals/003-model-server-protocol), defined by this project, to ensure the Endpoint Picker has adequate information to intelligently route requests.

## How to Configure an InferencePool

The full spec of the InferencePool is defined [here](/reference/spec/#inferencepool).

In summary, the InferencePoolSpec consists of 3 major parts:

- The `selector` field specifies which Pods belong to this pool. The labels in this selector must exactly match the labels applied to your model server Pods.
- The `targetPortNumber` field defines the port number that the Inference Gateway should route to on model server Pods that belong to this pool.
- The `extensionRef` field references the [endpoint picker extension](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/pkg/epp) (EPP) service that monitors key metrics from model servers within the InferencePool and provides intelligent routing decisions.

### Example Configuration

Here is an example InferencePool configuration:

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b-instruct
spec:
  targetPortNumber: 8000
  selector:
    app: vllm-llama3-8b-instruct
  extensionRef:
    name: vllm-llama3-8b-instruct-epp
    port: 9002
    failureMode: FailClose
```

In this example:

- An InferencePool named `vllm-llama3-8b-instruct` is created in the `default` namespace.
- It will select Pods that have the label `app: vllm-llama3-8b-instruct` (a matching Deployment is sketched below).
- Traffic routed to this InferencePool will call out to the EPP service `vllm-llama3-8b-instruct-epp` on port `9002` for routing decisions. If the EPP fails to pick an endpoint, or is not responsive, the request will be dropped.
- Traffic routed to this InferencePool will be forwarded to port `8000` on the selected Pods.
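
To make the selector and port relationship concrete, here is a minimal sketch of a Deployment whose Pods this example pool would select. The Deployment name, replica count, and image are assumptions for illustration; only the Pod labels and container port need to line up with the InferencePool:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-llama3-8b-instruct          # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vllm-llama3-8b-instruct
  template:
    metadata:
      labels:
        app: vllm-llama3-8b-instruct     # must match the InferencePool selector exactly
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest   # illustrative model server image
        ports:
        - containerPort: 8000            # matches the pool's targetPortNumber
```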

## Overlap with Service

**InferencePool** has some small overlap with **Service**, displayed here:

<!-- Source: https://docs.google.com/presentation/d/11HEYCgFi-aya7FS91JvAfllHiIlvfgcp7qpi_Azjk4E/edit#slide=id.g292839eca6d_1_0 -->
<img src="/images/inferencepool-vs-service.png" alt="Comparing InferencePool with Service" class="center" width="550" />

The InferencePool is not intended to be a mask of the Service object. It provides a specialized abstraction tailored for managing and routing traffic to groups of LLM model servers, allowing Platform Admins to focus on pool-level management rather than low-level networking details.
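
As a rough comparison, a plain Service selecting the same Pods would only provide connection-level load balancing, with no notion of an endpoint picker or model-server metrics. A hypothetical equivalent for the example above might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vllm-llama3-8b-instruct-svc      # hypothetical name
spec:
  selector:
    app: vllm-llama3-8b-instruct         # same Pods as the InferencePool example
  ports:
  - port: 8000
    targetPort: 8000                     # plain load balancing; no extensionRef / EPP
```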

## Replacing an InferencePool

Please refer to the [Replacing an InferencePool](/guides/replacing-inference-pool) guide for details on use cases and how to replace an InferencePool.