
Commit a0f6769: Checking H100
Parent: 46a7b0c

3 files changed: +8 −8 lines

.github/workflows/float8_test.yml (+2 −2)

@@ -30,9 +30,9 @@ jobs:
           gpu-arch-version: "12.1"
         - name: H100
           runs-on: linux.aws.h100
-          torch-spec: '--pre torch --index-url https://download.pytorch.org/whl/nightly/cu121'
+          torch-spec: '--pre torch --index-url https://download.pytorch.org/whl/nightly/cu124'
           gpu-arch-type: "cuda"
-          gpu-arch-version: "12.1"
+          gpu-arch-version: "12.4"
 
     permissions:
       id-token: write

.github/workflows/regression_test.yml (+5 −5)

@@ -74,6 +74,11 @@ jobs:
           torch-spec: 'torch==2.5.1 --index-url https://download.pytorch.org/whl/cu121'
           gpu-arch-type: "cuda"
           gpu-arch-version: "12.1"
+        - name: H100
+          runs-on: linux.aws.h100
+          torch-spec: '--pre torch --index-url https://download.pytorch.org/whl/nightly/cu121'
+          gpu-arch-type: "cuda"
+          gpu-arch-version: "12.4"
 
         - name: CPU 2.3
           runs-on: linux.4xlarge
@@ -90,11 +95,6 @@ jobs:
           torch-spec: 'torch==2.5.1 --index-url https://download.pytorch.org/whl/cpu'
           gpu-arch-type: "cpu"
           gpu-arch-version: ""
-        - name: H100
-          runs-on: linux.aws.h100
-          torch-spec: '--pre torch --index-url https://download.pytorch.org/whl/nightly/cu121'
-          gpu-arch-type: "cuda"
-          gpu-arch-version: "12.1"
 
     uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
     with:

test/float8/test_compile.py (+1 −1)

@@ -469,7 +469,7 @@ def test_dynamic_scale_numeric_parity(dtype: torch.dtype):
 
 
 @unittest.skipIf(
-    not is_sm_at_least_89() or not is_fbcode(),
+    not is_sm_at_least_89() or is_fbcode(),
     "CUDA with float8 support not available; or not on fbcode (the test needs be run with the latest pytorch package)",
 )
 @pytest.mark.parametrize("dtype", [torch.bfloat16, torch.float16, torch.float32])
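The test_compile.py change flips the `is_fbcode()` term, so the test is now skipped *inside* fbcode rather than outside it, while the sm_89 hardware gate stays the same. As a rough illustration of that hardware gate: CUDA compute capabilities compare as (major, minor) tuples, which is presumably what `is_sm_at_least_89()` does after calling `torch.cuda.get_device_capability()`. The helper below is a hypothetical stand-in for illustration, not the repo's actual implementation.

```python
# Hypothetical sketch of a compute-capability gate like is_sm_at_least_89().
# In the real codebase this presumably wraps torch.cuda.get_device_capability();
# here we model only the (major, minor) tuple comparison.

def sm_at_least(capability: tuple[int, int], major: int, minor: int) -> bool:
    """Return True if a (major, minor) compute capability meets the minimum."""
    # Python compares tuples lexicographically: major first, then minor.
    return capability >= (major, minor)

# Ada (sm_89) and Hopper (sm_90) pass the float8 gate; Ampere (sm_80) does not.
print(sm_at_least((8, 9), 8, 9))  # True
print(sm_at_least((9, 0), 8, 9))  # True
print(sm_at_least((8, 0), 8, 9))  # False
```

This matches why the commit targets an H100 runner: Hopper reports capability (9, 0), clearing the sm_89 threshold that float8 support requires.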
