
Commit d96c6a7

msaroufim and amdfaa authored
Enable ROCM in CI (#999)
* Enable ROCM in CI

---------

Co-authored-by: amdfaa <[email protected]>
1 parent cf45336 · commit d96c6a7

Showing 2 changed files with 11 additions and 4 deletions.

.github/workflows/regression_test.yml

+10 −3
@@ -17,6 +17,10 @@ concurrency:
 env:
   HF_TOKEN: ${{ secrets.HF_TOKEN }}
 
+permissions:
+  id-token: write
+  contents: read
+
 jobs:
   test-nightly:
     strategy:
@@ -33,10 +37,16 @@ jobs:
           torch-spec: '--pre torch --index-url https://download.pytorch.org/whl/nightly/cpu'
           gpu-arch-type: "cpu"
           gpu-arch-version: ""
+        - name: ROCM Nightly
+          runs-on: linux.rocm.gpu.2
+          torch-spec: '--pre torch --index-url https://download.pytorch.org/whl/nightly/rocm6.3'
+          gpu-arch-type: "rocm"
+          gpu-arch-version: "6.3"
 
     uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
     with:
       timeout: 120
+      no-sudo: ${{ matrix.gpu-arch-type == 'rocm' }}
       runner: ${{ matrix.runs-on }}
       gpu-arch-type: ${{ matrix.gpu-arch-type }}
       gpu-arch-version: ${{ matrix.gpu-arch-version }}
@@ -71,7 +81,6 @@ jobs:
           torch-spec: 'torch==2.5.1 --index-url https://download.pytorch.org/whl/cu121'
           gpu-arch-type: "cuda"
          gpu-arch-version: "12.1"
-
        - name: CPU 2.3
          runs-on: linux.4xlarge
          torch-spec: 'torch==2.3.0 --index-url https://download.pytorch.org/whl/cpu'
@@ -99,8 +108,6 @@ jobs:
        conda create -n venv python=3.9 -y
        conda activate venv
        echo "::group::Install newer objcopy that supports --set-section-alignment"
-        yum install -y devtoolset-10-binutils
-        export PATH=/opt/rh/devtoolset-10/root/usr/bin/:$PATH
        python -m pip install --upgrade pip
        pip install ${{ matrix.torch-spec }}
        pip install -r dev-requirements.txt

torchao/utils.py

+1 −1
@@ -607,7 +607,7 @@ def _torch_version_at_least(min_version):
 def is_MI300():
     if torch.cuda.is_available() and torch.version.hip:
         mxArchName = ["gfx940", "gfx941", "gfx942"]
-        archName = torch.cuda.get_device_properties().gcnArchName
+        archName = torch.cuda.get_device_properties(0).gcnArchName
         for arch in mxArchName:
             if arch in archName:
                 return True
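
For reference, a minimal standalone sketch of the patched check (not part of this commit): it assumes a ROCm build of PyTorch where torch.version.hip is set, and it passes an explicit device index to torch.cuda.get_device_properties instead of relying on a default argument, which older torch releases may not accept. The helper name is_mi300_sketch and the use of device index 0 are illustrative assumptions.

import torch

# Same MI300-class architecture names as the is_MI300() helper above.
MI300_ARCHS = ["gfx940", "gfx941", "gfx942"]

def is_mi300_sketch() -> bool:
    # Only meaningful on a ROCm build: torch.version.hip is None on CUDA builds.
    if torch.cuda.is_available() and torch.version.hip:
        # Explicit device index 0 (an assumption here) avoids depending on a
        # default argument for get_device_properties.
        arch_name = torch.cuda.get_device_properties(0).gcnArchName
        return any(arch in arch_name for arch in MI300_ARCHS)
    return False

if __name__ == "__main__":
    print(is_mi300_sketch())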
