Writing generic CI jobs
Eli Uriegas edited this page Nov 2, 2022
Most PyTorch projects build and test on Linux using both CPUs and GPUs. Unfortunately, setting up machines and workflows to run both successfully tends to involve a lot of boilerplate. Fortunately, we've built a generic solution that lets you focus on writing good tests and automation instead of maintaining the infrastructure code that sets up your runners.
For an up-to-date list of the arguments you can use, see linux_job.yml.
```yaml
name: Test build/test linux cpu

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test linux gpu

on:
  pull_request:

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.4xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "11.6"
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
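If you need coverage across several CUDA versions, a job-level matrix can call the reusable workflow once per version. This is a sketch; the `gpu-arch-version` values listed here are assumptions, so check which versions have build images before relying on them:

```yaml
name: Test build/test linux gpu matrix

on:
  pull_request:

jobs:
  build-test:
    strategy:
      matrix:
        # Hypothetical version list -- verify availability first
        cuda: ["11.6", "11.7"]
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.4xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: ${{ matrix.cuda }}
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```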
```yaml
name: Test build/test linux gpu

on:
  pull_request:

jobs:
  build:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.2xlarge
      # Specify cuda here to get a build image that contains the CUDA build toolkit
      gpu-arch-type: cuda
      gpu-arch-version: "11.6"
      upload-artifact: my-build-artifact
      script: |
        pip install -r requirements.txt
        # ${RUNNER_ARTIFACT_DIR} always points to where the workflow expects to find artifacts
        python setup.py bdist_wheel -d "${RUNNER_ARTIFACT_DIR}"
  test:
    # Specify the dependency here so these jobs run sequentially
    needs: build
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.4xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "11.6"
      download-artifact: my-build-artifact
      script: |
        pip install -r requirements.txt
        # Leave the glob outside the quotes so the shell expands it
        pip install "${RUNNER_ARTIFACT_DIR}"/*.whl
        make test
```
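You can exercise the artifact handoff between the two jobs locally with a small shell sketch. The package name here is a stand-in, and `RUNNER_ARTIFACT_DIR` is faked with a temp directory; in CI the workflow sets that variable for you:

```shell
#!/usr/bin/env bash
set -euo pipefail

# In CI the reusable workflow exports RUNNER_ARTIFACT_DIR; fake it locally.
RUNNER_ARTIFACT_DIR="$(mktemp -d)"

# "build" job: place a wheel in the artifact directory.
touch "${RUNNER_ARTIFACT_DIR}/mypkg-0.1.0-py3-none-any.whl"

# "test" job: keep the glob OUTSIDE the quotes so the shell expands it;
# quoting the whole pattern would hand pip a literal '*' and fail.
for whl in "${RUNNER_ARTIFACT_DIR}"/*.whl; do
  echo "installing ${whl}"
done
```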
For an up-to-date list of the arguments you can use, see windows_job.yml.
```yaml
name: Test build/test windows cpu

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test windows gpu

on:
  pull_request:

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      runner: windows.8xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "11.6"
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test windows gpu

on:
  pull_request:

jobs:
  build:
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      runner: windows.4xlarge
      upload-artifact: my-build-artifact
      script: |
        pip install -r requirements.txt
        # ${RUNNER_ARTIFACT_DIR} always points to where the workflow expects to find artifacts
        python setup.py bdist_wheel -d "${RUNNER_ARTIFACT_DIR}"
  test:
    # Specify the dependency here so these jobs run sequentially
    needs: build
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      runner: windows.8xlarge.nvidia.gpu
      download-artifact: my-build-artifact
      script: |
        pip install -r requirements.txt
        # Leave the glob outside the quotes so the shell expands it
        pip install "${RUNNER_ARTIFACT_DIR}"/*.whl
        make test
```
For an up-to-date list of the arguments you can use, see macos_job.yml.
```yaml
name: Test build/test macos cpu

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/macos_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test macOS Apple silicon

on:
  pull_request:

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/macos_job.yml@main
    with:
      runner: macos-m1-12
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
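Because the three reusable workflows share the same interface, a single workflow file can fan the same script out to every platform. A sketch using only the inputs shown above:

```yaml
name: Test build/test all platforms

on:
  pull_request:

jobs:
  linux:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
  windows:
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
  macos:
    uses: pytorch/test-infra/.github/workflows/macos_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```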