
Commit

fix docs
Signed-off-by: 逍遥 <[email protected]>
xiaoyao authored and wawa0210 committed Dec 19, 2024
1 parent b5a8b27 commit c499ab5
Showing 16 changed files with 41 additions and 43 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -51,17 +51,17 @@ A task with the following resources:
 ```
 resources:
   limits:
-    nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
-    nvidia.com/gpumem: 3000 # Identifies 3G GPU memory each physical GPU allocates to the pod
+    nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
+    nvidia.com/gpumem: 3000 # identifies 3G GPU memory each physical GPU allocates to the pod
 ```

 will see 3G device memory inside container

 ![img](./imgs/hard_limit.jpg)

 > Note:
-1. **After installing HAMi, the value of `nvidia.com/gpu` registered on the node defaults to the "number of vGPUs".**
-2. **When requesting resources in a pod, `nvidia.com/gpu` refers to the "number of physical GPUs required by the current pod".**
+1. **After installing HAMi, the value of `nvidia.com/gpu` registered on the node defaults to the number of vGPUs.**
+2. **When requesting resources in a pod, `nvidia.com/gpu` refers to the number of physical GPUs required by the current pod.**

 ### Supported devices

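For context, the README snippet above corresponds to a full pod manifest along these lines (an illustrative sketch only; the pod and container names are placeholders, and the image/command are borrowed from the example files later in this commit):

```
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo                  # hypothetical name, for illustration only
spec:
  containers:
    - name: cuda-container        # hypothetical name
      image: ubuntu:18.04
      command: ["bash", "-c", "sleep 86400"]
      resources:
        limits:
          nvidia.com/gpu: 1       # one physical GPU for this pod
          nvidia.com/gpumem: 3000 # the container will see 3G of device memory
```
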
4 changes: 2 additions & 2 deletions README_cn.md
@@ -56,8 +56,8 @@ HAMi supports hard isolation of device resources

 ![img](./imgs/hard_limit.jpg)
 > Note:
-1. **After installing HAMi, the `nvidia.com/gpu` value registered on the node defaults to the "number of vGPUs"**
-2. **When requesting resources in a pod, `nvidia.com/gpu` means the "number of physical GPUs required by the current pod"**
+1. **After installing HAMi, the `nvidia.com/gpu` value registered on the node defaults to the number of vGPUs**
+2. **When requesting resources in a pod, `nvidia.com/gpu` means the number of physical GPUs required by the current pod**

 ### Supported devices

2 changes: 1 addition & 1 deletion docs/develop/tasklist.md
@@ -113,6 +113,6 @@ spec:
       command:["bash","-c","sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
 ```

2 changes: 1 addition & 1 deletion example.yaml
@@ -34,7 +34,7 @@ spec:
       - while true; do /cuda-samples/vectorAdd; done
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
           nvidia.com/gpumem: 3000 # Each vGPU contains 3000M device memory (Optional,Integer)
       terminationMessagePath: /dev/termination-log
       terminationMessagePolicy: File
6 changes: 3 additions & 3 deletions examples/nvidia/default_use.yaml
@@ -9,6 +9,6 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
-          nvidia.com/gpumem: 3000 # Identifies 3000M GPU memory each physical GPU allocates to the pod (Optional,Integer)
-          nvidia.com/gpucores: 30 # Identifies 30% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
+          nvidia.com/gpumem: 3000 # identifies 3000M GPU memory each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpucores: 30 # identifies 30% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
2 changes: 1 addition & 1 deletion examples/nvidia/default_use_legacy.yaml
@@ -9,4 +9,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
16 changes: 8 additions & 8 deletions examples/nvidia/example.yaml
@@ -9,11 +9,11 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
-          #nvidia.com/gpumem: 3000 # Identifies 3000M GPU memory each physical GPU allocates to the pod
-          nvidia.com/gpumem-percentage: 50 # Identifies 50% GPU memory each physical GPU allocates to the pod. Can not be used with nvidia.com/gpumem
-          #nvidia.com/gpucores: 90 # Identifies 90% GPU GPU core each physical GPU allocates to the pod
-          #nvidia.com/priority: 0 # We only have two priority class, 0(high) and 1(low), default: 1
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
+          #nvidia.com/gpumem: 3000 # identifies 3000M GPU memory each physical GPU allocates to the pod
+          nvidia.com/gpumem-percentage: 50 # identifies 50% GPU memory each physical GPU allocates to the pod. Can not be used with nvidia.com/gpumem
+          #nvidia.com/gpucores: 90 # identifies 90% GPU GPU core each physical GPU allocates to the pod
+          #nvidia.com/priority: 0 # we only have two priority class, 0(high) and 1(low), default: 1
           #The utilization of high priority task won't be limited to resourceCores unless sharing GPU node with other high priority tasks.
           #The utilization of low priority task won't be limited to resourceCores if no other tasks sharing its GPU.
     - name: ubuntu-container0
@@ -24,7 +24,7 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
      resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
-          nvidia.com/gpumem: 2000
-          #nvidia.com/gpucores: 90 # Identifies 90% GPU GPU core each physical GPU allocates to the pod
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
+          nvidia.com/gpumem: 2000 # identifies 2000M GPU memory each physical GPU allocates to the pod (Optional,Integer)
+          #nvidia.com/gpucores: 90 # identifies 90% GPU GPU core each physical GPU allocates to the pod

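The priority comments in the diff above describe two classes: 0 (high) and 1 (low), with 1 the default, and note that a high-priority task's core utilization is only capped while it shares a GPU with other high-priority tasks. A minimal sketch of a pod opting into the high-priority class, assuming `nvidia.com/priority` is set under `limits` like the other resources (pod name is hypothetical):

```
apiVersion: v1
kind: Pod
metadata:
  name: high-priority-gpu-pod     # hypothetical name
spec:
  containers:
    - name: ubuntu-container
      image: ubuntu:18.04
      command: ["bash", "-c", "sleep 86400"]
      resources:
        limits:
          nvidia.com/gpu: 1
          nvidia.com/gpucores: 30 # enforced only when sharing with other high-priority tasks
          nvidia.com/priority: 0  # 0 = high, 1 = low (default)
```
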
2 changes: 1 addition & 1 deletion examples/nvidia/specify_card_type_not_use.yaml
@@ -14,4 +14,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
2 changes: 1 addition & 1 deletion examples/nvidia/specify_card_type_to_use.yaml
@@ -14,4 +14,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
2 changes: 1 addition & 1 deletion examples/nvidia/specify_scheduling_policy.yaml
@@ -12,4 +12,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
2 changes: 1 addition & 1 deletion examples/nvidia/specify_uuid_not_use.yaml
@@ -14,4 +14,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
2 changes: 1 addition & 1 deletion examples/nvidia/specify_uuid_to_use.yaml
@@ -14,4 +14,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
6 changes: 3 additions & 3 deletions examples/nvidia/use_as_normal.yaml
@@ -1,3 +1,4 @@
+# Gpu-pod1 and gpu-pod2 will NOT share the same GPU
 apiVersion: v1
 kind: Pod
 metadata:
@@ -9,7 +10,7 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
 ---
 apiVersion: v1
 kind: Pod
@@ -22,5 +23,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
-          # gpu-pod1 and gpu-pod2 will NOT share the same GPU
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
6 changes: 3 additions & 3 deletions examples/nvidia/use_exclusive_card.yaml
@@ -9,6 +9,6 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
-          nvidia.com/gpumem-percentage: 100 # Identifies 100% GPU memory each physical GPU allocates to the pod (Optional,Integer)
-          nvidia.com/gpucores: 100 # Identifies 100% GPU GPU core each physical GPU allocates to the pod(Optional,Integer)
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
+          nvidia.com/gpumem-percentage: 100 # identifies 100% GPU memory each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpucores: 100 # identifies 100% GPU GPU core each physical GPU allocates to the pod(Optional,Integer)
6 changes: 3 additions & 3 deletions examples/nvidia/use_memory_fraction.yaml
@@ -9,6 +9,6 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
-          nvidia.com/gpumem-percentage: 50 # Identifies 50% GPU memory each physical GPU allocates to the pod (Optional,Integer)
-          nvidia.com/gpucores: 30 # Identifies 30% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
+          nvidia.com/gpumem-percentage: 50 # identifies 50% GPU memory each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpucores: 30 # identifies 30% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
16 changes: 7 additions & 9 deletions examples/nvidia/use_sharing_card.yaml
@@ -1,3 +1,4 @@
+# Gpu-pod1 and gpu-pod2 could share the same GPU
 apiVersion: v1
 kind: Pod
 metadata:
@@ -9,10 +10,9 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
-          nvidia.com/gpumem-percentage: 40 # Identifies 40% GPU memory each physical GPU allocates to the pod (Optional,Integer)
-          nvidia.com/gpucores: 60 # Identifies 60% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
-
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
+          nvidia.com/gpumem-percentage: 40 # identifies 40% GPU memory each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpucores: 60 # identifies 60% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
 ---
 apiVersion: v1
 kind: Pod
@@ -25,8 +25,6 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
-          nvidia.com/gpumem-percentage: 60 # Identifies 60% GPU memory each physical GPU allocates to the pod (Optional,Integer)
-          nvidia.com/gpucores: 40 # Identifies 40% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
-
-          # gpu-pod1 and gpu-pod2 could share the same GPU
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
+          nvidia.com/gpumem-percentage: 60 # identifies 60% GPU memory each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpucores: 40 # identifies 40% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)

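The two pods in use_sharing_card.yaml can land on one card because their per-GPU requests fit together: 40% + 60% of memory and 60% + 40% of cores, i.e. exactly 100% each. By the same arithmetic, a third pod like the sketch below (name hypothetical) presumably could not join them on that card, since no device memory remains there and HAMi's memory limit is a hard limit:

```
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod3  # hypothetical name
spec:
  containers:
    - name: ubuntu-container
      image: ubuntu:18.04
      command: ["bash", "-c", "sleep 86400"]
      resources:
        limits:
          nvidia.com/gpu: 1
          nvidia.com/gpumem-percentage: 10 # 40% + 60% already claimed, so the scheduler would have to pick another GPU
```
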