
Commit 246e3e0

Authored by tracelogfb and Stephen Chen
fix broken test vllm:test_kernels - test_attention_selector.py::test_flash_attn (#17873)
Co-authored-by: Stephen Chen <[email protected]>
1 parent 7042cc9 commit 246e3e0

File tree

1 file changed: +3 −2 lines changed


tests/kernels/attention/test_attention_selector.py

Lines changed: 3 additions & 2 deletions
@@ -188,8 +188,9 @@ def test_flash_attn(monkeypatch: pytest.MonkeyPatch):
         m.setenv(STR_BACKEND_ENV_VAR, STR_FLASH_ATTN_VAL)

         # Unsupported CUDA arch
-        monkeypatch.setattr(torch.cuda, "get_device_capability", lambda:
-                            (7, 5))
+        monkeypatch.setattr(torch.cuda,
+                            "get_device_capability",
+                            lambda _=None: (7, 5))
         backend = get_attn_backend(16, torch.float16, None, 16, False)
         assert backend.get_name() != STR_FLASH_ATTN_VAL

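The fix above replaces a zero-argument lambda with `lambda _=None: (7, 5)`. The likely reason (a sketch, not taken from the commit itself): `torch.cuda.get_device_capability` accepts an optional device argument, so any code path that passes a device to the patched stub would raise a `TypeError` against the old zero-argument lambda. The standalone example below reproduces that failure mode with plain Python; `call_with_device` is a hypothetical stand-in for such a caller.

```python
def call_with_device(get_capability):
    # Hypothetical caller that passes an explicit device index, the way
    # code querying CUDA compute capability may pass a device argument.
    return get_capability(0)

broken = lambda: (7, 5)        # old stub: rejects any positional argument
fixed = lambda _=None: (7, 5)  # patched stub: works with zero or one argument

try:
    call_with_device(broken)
except TypeError as e:
    print("old stub fails:", e)

print(call_with_device(fixed))  # → (7, 5)
print(fixed())                  # → (7, 5)
```

Making the parameter default to `None` keeps the stub compatible with both call styles, which is why the test now passes regardless of how the backend selector invokes `get_device_capability`.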