In conda-forge/vllm-feedstock#15, I'm running into something that really smells like a bug, but I don't understand how it could happen (especially not without running into this much more often).
After a lot of manual deduplication of the logs and removal of spurious line breaks, I get:
```
Error: × Test failed: failed to setup test environment: Cannot solve the request because of: The following packages are incompatible
│ ├─ __cuda * can be installed with any of the following options:
│ │ └─ __cuda 12.9
│ └─ vllm ==0.10.1 cuda_129py312he88dff9_ cannot be installed because there are no viable options:
│    ├─ vllm 0.10.1 would require
│    │ ├─ pytorch * cuda*, which can be installed with any of the following options:
│    │ │ ├─ pytorch 2.0.0 | [...] | 2.5.1 would require
│    │ │ │ └─ cudatoolkit >=11.8,<12, which can be installed with any of the following options:
│    │ │ │   └─ cudatoolkit 11.8.0 | 11.8.0 | 11.8.0 | 11.8.0
│    │ │ ├─ pytorch 1.8.0 | [...] | 2.1.2 would require
│    │ │ │ └─ cudatoolkit >=11.2,<12.0a0, which can be installed with any of the following options:
│    │ │ │   └─ cudatoolkit 11.2.0 | [...] | 11.7.1
│    │ │ ├─ pytorch 1.8.0 | [...] | 1.12.1 would require
│    │ │ │ └─ cudatoolkit ==11.1|11.1.*, which can be installed with any of the following options:
│    │ │ │   └─ cudatoolkit 11.1.1 | [...] | 11.1.1
│    │ │ ├─ pytorch 1.8.0 | [...] | 1.12.1 would require
│    │ │ │ └─ cudatoolkit ==11.0|11.0.*, which can be installed with any of the following options:
│    │ │ │   └─ cudatoolkit 11.0.3 | [...] | 11.0.3
│    │ │ ├─ pytorch 1.6.0 | [...] | 1.12.1 would require
│    │ │ │ └─ cudatoolkit ==10.2|10.2.*, which can be installed with any of the following options:
│    │ │ │   └─ cudatoolkit 10.2.89 | [...] | 10.2.89
│    │ │ ├─ pytorch 1.6.0 | [...] | 1.7.1 would require
│    │ │ │ └─ cudatoolkit ==10.1|10.1.*, which can be installed with any of the following options:
│    │ │ │   └─ cudatoolkit 10.1.243 | [...] | 10.1.243
│    │ │ ├─ pytorch 1.6.0 | [...] | 1.7.1 would require
│    │ │ │ └─ cudatoolkit ==10.0|10.0.*, which can be installed with any of the following options:
│    │ │ │   └─ cudatoolkit 10.0.130 | [...] | 10.0.130
│    │ │ └─ pytorch 1.6.0 | [...] | 1.7.1 would require
│    │ │   └─ cudatoolkit ==9.2|9.2.*, which can be installed with any of the following options:
│    │ │     └─ cudatoolkit 9.2.148 | [...] | 9.2.148
│    │ ├─ pytorch >=2.7.1,<2.8.0a0, which cannot be installed because there are no viable options:
│    │ │ └─ pytorch 2.7.1, which conflicts with the versions reported above. # WHAT?!
│    │ ├─ torchvision ==0.22.1, which cannot be installed because there are no viable options:
│    │ │ ├─ torchvision 0.22.1 would require
│    │ │ │ ├─ pytorch * cpu*, which cannot be installed because there are no viable options:
│    │ │ │ │ ├─ pytorch 2.9.1, which conflicts with the versions reported above.
│    │ │ │ │ ├─ pytorch [...], which conflicts with the versions reported above.
│    │ │ │ │ └─ pytorch 1.6.0, which conflicts with the versions reported above.
│    │ │ │ └─ pytorch >=2.7.1,<2.8.0a0, which cannot be installed because there are no viable options:
│    │ │ │   └─ pytorch 2.7.1, which conflicts with the versions reported above.
│    │ │ ├─ torchvision 0.22.1 | 0.22.1 | 0.22.1 | 0.22.1 would require
│    │ │ │ └─ cudnn >=9.13.0.50,<10.0a0, which cannot be installed because there are no viable options:
│    │ │ │   └─ cudnn 9.13.0.50 | [...] | 9.17.1.4 would require
│    │ │ │     └─ cuda-version >=13,<14.0a0, which cannot be installed because there are no viable options:
│    │ │ │       └─ cuda-version 13.0 | 13.1 would constrain
│    │ │ │         └─ __cuda >=13, which conflicts with any installable versions previously reported
│    │ │ └─ torchvision 0.22.1 | [...] | 0.22.1 would require
│    │ │   └─ pytorch * cpu*, which cannot be installed because there are no viable options:
│    │ │     ├─ pytorch 2.9.1, which conflicts with the versions reported above.
│    │ │     ├─ pytorch [...], which conflicts with the versions reported above.
│    │ │     └─ pytorch 1.6.0, which conflicts with the versions reported above.
│    │ └─ cuda-version >=12.9,<13, which cannot be installed because there are no viable options:
│    │   └─ cuda-version 12.9 would constrain
│    │     └─ cudatoolkit ==12.9|12.9.*, which conflicts with any installable versions previously reported
│    └─ vllm 0.10.1 is excluded because due to strict channel priority not using this option from: 'file:///home/conda/feedstock_root/build_artifacts/'
```
Look for the `WHAT?!` marker on the right-hand side: that condition is IMO patently wrong. Of the packages that supposedly "conflict with the versions reported above", very few are relevant; the only one that matters for pytorch 2.7.1 is `__cuda 12.9`, which is clearly compatible, because the pytorch build itself only depends on `__cuda` without a version pin.
A bit more hidden (but, I believe, the actual culprit) is that the pytorch versions under
```
│ ├─ vllm 0.10.1 would require
│ │ ├─ pytorch * cuda*, which can be installed with any of the following options:
│ │ │ ├─ pytorch <ver> would require
```
only go up to 2.5.1. This would explain the later "pytorch 2.7.1, which conflicts with the versions reported above", but then it's clearly a bug to omit builds like `pytorch 2.7.1 cuda129_mkl_py313_h1e53aa0_304` from the pool of candidates that are compatible with `pytorch * cuda*`.
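Build-string specs like `cuda*` behave like shell globs, so the omitted build plainly matches the spec; a quick sanity check using Python's `fnmatch` as a stand-in for the matchspec glob:

```python
from fnmatch import fnmatchcase

# Build string from the report above; `pytorch * cuda*` globs on the build.
build = "cuda129_mkl_py313_h1e53aa0_304"
assert fnmatchcase(build, "cuda*")    # should be in the candidate pool

# A cpu build (hypothetical build string) is correctly rejected by the glob:
assert not fnmatchcase("cpu_mkl_py313_habcdef0_100", "cuda*")
```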
Note that the torchvision stuff is a red herring too; given the long runtime of CUDA CI for vllm (~12h...), I had verified that the builds from conda-forge/torchvision-feedstock#137 had made it through the CDN before starting (the build began running at `2026-01-06T05:44:31.5301444Z`, more than 45 minutes after the last relevant upload). I believe the first "torchvision 0.22.1 would require" actually considers the right build, but runs into the same spurious conflict that somehow prevents `pytorch 2.7.1 cuda*` from being chosen.
Other complaints:
- the recipe already contains a specific pytorch pin; simplified, it looks like
  ```yaml
  run:
    - pytorch ==${{ pytorch_version }}
    - if: use_cuda
      then:
        - pytorch * [build=cuda*]
  ```
  In other words, spamming thousands of lines about incompatible pytorch versions is completely beside the point; the resolver (or at least the logs) should discard versions it already knows are impossible through other pins.
- please deduplicate `pkg <ver>, which conflicts with the versions reported above` (for the same version); there's zero benefit to repeating this unless distinguishing information gets added.
- likewise, stuff like `cudatoolkit 11.8.0 | 11.8.0 | 11.8.0 | 11.8.0` should IMO be deduplicated to `cudatoolkit 11.8.0`
- the line breaks are the opposite of helpful (see the raw logs below)
- if there's any other actually relevant conflict, the logs do not mention it at all!
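For what it's worth, the deduplication asked for above looks cheap to do on the log-rendering side; a hypothetical sketch (`dedupe_alternatives` is a name I made up, not rattler API):

```python
def dedupe_alternatives(line: str) -> str:
    """Collapse repeated identical options in an `a | a | a` alternative
    list while preserving the order of first appearance."""
    name, _, options = line.partition(" ")
    unique = dict.fromkeys(o.strip() for o in options.split("|"))
    return f"{name} {' | '.join(unique)}"

print(dedupe_alternatives("cudatoolkit 11.8.0 | 11.8.0 | 11.8.0 | 11.8.0"))
# -> cudatoolkit 11.8.0
```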
CC @baszalmstra