README.md (+1, -1)
@@ -7,7 +7,7 @@ This project offers tools for [AI Inference], enabling developers to build [Infe
 
 ## Concepts and Definitions
 
-AI/ML is changing rapidly, and [Inference] goes beyond basic networking to include complex traffic routing and optimizations. Below are key terms for developers:
+AI/ML is changing rapidly, and [Inference] goes beyond basic networking to include complex traffic routing and optimizations. Below are key terms used within this project:
 
 - **Scheduler**: Makes decisions about which endpoint is optimal (best cost / best performance) for an inference request based on `Metrics and Capabilities` from [Model Serving Platforms].
 - **Metrics and Capabilities**: Data provided by model serving platforms about performance, availability and capabilities to optimize routing. Includes things like [Prefix Cache] status or [LoRA Adapters] availability.
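
To make the relationship between the Scheduler and `Metrics and Capabilities` concrete, here is a minimal Go sketch of the scoring idea described in the definitions above. It is an illustration only, not this project's actual API or algorithm: the type and function names (`EndpointMetrics`, `score`, `pickEndpoint`) and the weights are hypothetical. The sketch simply prefers endpoints with shorter queues and lower cost, and rewards a prefix-cache hit or an already-loaded LoRA adapter.

```go
package main

import "fmt"

// EndpointMetrics is a hypothetical view of the "Metrics and Capabilities"
// a model serving platform might report for one endpoint.
type EndpointMetrics struct {
	Name           string
	QueueDepth     int     // pending requests on this endpoint
	PrefixCacheHit bool    // would this request hit the prefix cache?
	HasLoRAAdapter bool    // is the requested LoRA adapter already loaded?
	CostPerToken   float64 // relative cost of serving on this endpoint
}

// score is a toy objective: shorter queues and lower cost are better;
// cache and adapter locality earn a bonus. The weights are illustrative.
func score(m EndpointMetrics) float64 {
	s := -float64(m.QueueDepth) - 10*m.CostPerToken
	if m.PrefixCacheHit {
		s += 5
	}
	if m.HasLoRAAdapter {
		s += 3
	}
	return s
}

// pickEndpoint plays the Scheduler role: choose the endpoint whose
// reported metrics give the best score for this request.
func pickEndpoint(endpoints []EndpointMetrics) EndpointMetrics {
	best := endpoints[0]
	for _, e := range endpoints[1:] {
		if score(e) > score(best) {
			best = e
		}
	}
	return best
}

func main() {
	endpoints := []EndpointMetrics{
		{Name: "pod-a", QueueDepth: 4, PrefixCacheHit: true, CostPerToken: 0.2},
		{Name: "pod-b", QueueDepth: 1, HasLoRAAdapter: true, CostPerToken: 0.3},
	}
	fmt.Println("scheduled to:", pickEndpoint(endpoints).Name)
}
```

A real scheduler would of course weigh many more signals and refresh them continuously; the point here is only that routing decisions are driven by per-endpoint metrics rather than plain round-robin load balancing.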