
Commit c499ab5

xiaoyaowawa0210 authored and committed
fix docs
Signed-off-by: 逍遥 <[email protected]>
1 parent b5a8b27 commit c499ab5

16 files changed: +41 −43 lines

README.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -51,17 +51,17 @@ A task with the following resources:
 ```
 resources:
   limits:
-    nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
-    nvidia.com/gpumem: 3000 # Identifies 3G GPU memory each physical GPU allocates to the pod
+    nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
+    nvidia.com/gpumem: 3000 # identifies 3G GPU memory each physical GPU allocates to the pod
 ```
 
 will see 3G device memory inside container
 
 ![img](./imgs/hard_limit.jpg)
 
 > Note:
-1. **After installing HAMi, the value of `nvidia.com/gpu` registered on the node defaults to the "number of vGPUs".**
-2. **When requesting resources in a pod, `nvidia.com/gpu` refers to the "number of physical GPUs required by the current pod".**
+1. **After installing HAMi, the value of `nvidia.com/gpu` registered on the node defaults to the number of vGPUs.**
+2. **When requesting resources in a pod, `nvidia.com/gpu` refers to the number of physical GPUs required by the current pod.**
 
 ### Supported devices
 
````
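The README snippet in the hunk above corresponds to a complete pod spec along the following lines. This is an illustrative sketch: the pod name, container name, and image are assumptions, and only the `resources.limits` keys come from the README.

```yaml
# Illustrative sketch; only the resources.limits keys come from the README above.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod                                  # assumed name
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:11.8.0-base-ubuntu22.04 # assumed image
      command: ["bash", "-c", "sleep 86400"]
      resources:
        limits:
          nvidia.com/gpu: 1       # one physical GPU for this pod
          nvidia.com/gpumem: 3000 # 3G of device memory from the allocated GPU
```

Inside such a container, `nvidia-smi` should report roughly 3G of device memory, which is the hard limit the README's screenshot illustrates.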

README_cn.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -56,8 +56,8 @@ HAMi supports hard isolation of device resources
 ![img](./imgs/hard_limit.jpg)
 
 > Note:
-1. **After installing HAMi, the `nvidia.com/gpu` value registered on the node defaults to the "number of vGPUs"**
-2. **When requesting resources in a pod, `nvidia.com/gpu` refers to the "number of physical GPUs required by the current pod"**
+1. **After installing HAMi, the `nvidia.com/gpu` value registered on the node defaults to the number of vGPUs**
+2. **When requesting resources in a pod, `nvidia.com/gpu` refers to the number of physical GPUs required by the current pod**
 
 ### Supported devices
 
```

docs/develop/tasklist.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -113,6 +113,6 @@ spec:
       command:["bash","-c","sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
 ```
 
````

example.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -34,7 +34,7 @@ spec:
       - while true; do /cuda-samples/vectorAdd; done
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
           nvidia.com/gpumem: 3000 # Each vGPU contains 3000M device memory (Optional,Integer)
       terminationMessagePath: /dev/termination-log
       terminationMessagePolicy: File
```

examples/nvidia/default_use.yaml

Lines changed: 3 additions & 3 deletions
```diff
@@ -9,6 +9,6 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
-          nvidia.com/gpumem: 3000 # Identifies 3000M GPU memory each physical GPU allocates to the pod (Optional,Integer)
-          nvidia.com/gpucores: 30 # Identifies 30% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
+          nvidia.com/gpumem: 3000 # identifies 3000M GPU memory each physical GPU allocates to the pod (Optional,Integer)
+          nvidia.com/gpucores: 30 # identifies 30% GPU GPU core each physical GPU allocates to the pod (Optional,Integer)
```

examples/nvidia/default_use_legacy.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -9,4 +9,4 @@ spec:
      command: ["bash", "-c", "sleep 86400"]
      resources:
        limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
```

examples/nvidia/example.yaml

Lines changed: 8 additions & 8 deletions
```diff
@@ -9,11 +9,11 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
-          #nvidia.com/gpumem: 3000 # Identifies 3000M GPU memory each physical GPU allocates to the pod
-          nvidia.com/gpumem-percentage: 50 # Identifies 50% GPU memory each physical GPU allocates to the pod. Can not be used with nvidia.com/gpumem
-          #nvidia.com/gpucores: 90 # Identifies 90% GPU GPU core each physical GPU allocates to the pod
-          #nvidia.com/priority: 0 # We only have two priority class, 0(high) and 1(low), default: 1
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
+          #nvidia.com/gpumem: 3000 # identifies 3000M GPU memory each physical GPU allocates to the pod
+          nvidia.com/gpumem-percentage: 50 # identifies 50% GPU memory each physical GPU allocates to the pod. Can not be used with nvidia.com/gpumem
+          #nvidia.com/gpucores: 90 # identifies 90% GPU GPU core each physical GPU allocates to the pod
+          #nvidia.com/priority: 0 # we only have two priority class, 0(high) and 1(low), default: 1
           #The utilization of high priority task won't be limited to resourceCores unless sharing GPU node with other high priority tasks.
           #The utilization of low priority task won't be limited to resourceCores if no other tasks sharing its GPU.
       - name: ubuntu-container0
@@ -24,7 +24,7 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
-          nvidia.com/gpumem: 2000
-          #nvidia.com/gpucores: 90 # Identifies 90% GPU GPU core each physical GPU allocates to the pod
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
+          nvidia.com/gpumem: 2000 # identifies 2000M GPU memory each physical GPU allocates to the pod (Optional,Integer)
+          #nvidia.com/gpucores: 90 # identifies 90% GPU GPU core each physical GPU allocates to the pod
 
```
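The commented-out keys in examples/nvidia/example.yaml can be combined along the following lines. This is a hedged sketch: the pod name, container name, and image are assumptions, and per the example's own comments, `nvidia.com/gpumem-percentage` cannot be used together with `nvidia.com/gpumem`.

```yaml
# Illustrative sketch; the resource keys and their semantics come from the example above.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-priority-pod                         # assumed name
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:11.8.0-base-ubuntu22.04 # assumed image
      command: ["bash", "-c", "sleep 86400"]
      resources:
        limits:
          nvidia.com/gpu: 1                # one physical GPU
          nvidia.com/gpumem-percentage: 50 # 50% of the GPU's memory; not combinable with nvidia.com/gpumem
          nvidia.com/gpucores: 90          # 90% of the GPU's cores
          nvidia.com/priority: 0           # 0 = high, 1 = low (default)
```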

examples/nvidia/specify_card_type_not_use.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -14,4 +14,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
```

examples/nvidia/specify_card_type_to_use.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -14,4 +14,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 2 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 2 # declare how many physical GPUs the pod needs
```

examples/nvidia/specify_scheduling_policy.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -12,4 +12,4 @@ spec:
       command: ["bash", "-c", "sleep 86400"]
       resources:
         limits:
-          nvidia.com/gpu: 1 # Declare how many physical GPUs the pod needs
+          nvidia.com/gpu: 1 # declare how many physical GPUs the pod needs
```

0 commit comments