
Implement venv/site-packages based binaries #2156

Open
groodt opened this issue Aug 24, 2024 · 7 comments
Labels
core-rules Issues concerning core bin/test/lib rules

Comments

@groodt
Collaborator

groodt commented Aug 24, 2024

Context

This is a tracking issue to recognise that the lack of a site-packages layout causes friction when making use of third-party distribution packages (wheels and sdists) from indexes such as PyPI.

Outside bazel and rules_python, it is common for distribution packages to assume that they will be installed into a single site-packages folder, either in a "virtual environment" or directly into a python user or global site installation.

Notable examples are the libraries in the AI / ML ecosystem that make use of the NVIDIA CUDA shared libraries. These shared libraries carry relative rpath entries in their ELF/Mach-O/PE binaries, which fail to resolve when the libraries are not installed as siblings in a site-packages layout.

The lack of a single site-packages folder also introduces a complication into the rules themselves. Namespace packages in rules_python are all processed into pkgutil-style namespace packages. This seems to work, but it wouldn't be necessary if a site-packages layout were used.

Another, rarer issue is the failure to load *.pth files. Python provides site-specific configuration hooks that can customize sys.path at startup. rules_python could perhaps work around this, but if a site-packages layout were used and discovered by the interpreter at startup, no workarounds would be necessary.
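For illustration, here is a minimal, self-contained sketch of the standard-library mechanism being referenced (all directory and file names below are invented): any directory registered via site.addsitedir() has its *.pth files processed, and the paths they list are added to sys.path automatically.

```python
import os
import site
import sys
import tempfile

# Invented layout: a "site dir" containing a .pth file that points at a
# sibling directory of vendored libraries.
site_dir = tempfile.mkdtemp()
extra_dir = os.path.join(site_dir, "vendored_libs")
os.makedirs(extra_dir)

# A .pth file lists one path per line; relative paths are resolved against
# the directory containing the .pth file.
with open(os.path.join(site_dir, "example.pth"), "w") as f:
    f.write("vendored_libs\n")

# Registering the directory processes example.pth immediately.
site.addsitedir(site_dir)

print(extra_dir in sys.path)  # → True
```

This is exactly the behavior that is lost when third-party packages are spread across per-distribution directories that the interpreter never treats as site dirs.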

Distribution packages on PyPI known to have issues:

  • torch
  • onnxruntime-gpu
  • rerun-sdk

Known workarounds

  1. Patch the third-party dependencies using rules_python patching support
  2. Use an alternative set of rules such as rules_py
  3. Patch the third-party dependencies outside rules_python and push the patched dependencies to a private index

Related

@guoriyue

guoriyue commented Sep 11, 2024

Hi experts, I'm new to Bazel and having an issue with nvimgcodec. I tried adding it in the preloading function, but preloading doesn't seem to work for me.

import ctypes
import glob
import os
import platform
import sys
from typing import Dict


def _preload_cuda_deps(lib_folder: str, lib_name: str) -> None:
    """Preloads cuda deps if they could not be found otherwise."""
    # Should only be called on Linux, and only after default path resolution has failed.
    assert platform.system() == 'Linux', 'Should only be called on Linux'
    lib_path = None
    for path in sys.path:
        nvidia_path = os.path.join(path, 'nvidia')
        if not os.path.exists(nvidia_path):
            continue
        print(f"Checking nvidia_path {nvidia_path}")
        if lib_folder == "nvimgcodec":
            # nvimgcodec ships its shared library at the package root, not under lib/.
            candidate_lib_paths = glob.glob(os.path.join(nvidia_path, lib_folder, 'libnvimgcodec.so.*[0-9]'))
        else:
            candidate_lib_paths = glob.glob(os.path.join(nvidia_path, lib_folder, 'lib', lib_name))
        print(f"Found candidate_lib_paths {candidate_lib_paths}")
        if candidate_lib_paths and not lib_path:
            lib_path = candidate_lib_paths[0]
        print(f"Found lib_path {lib_path}")
        if lib_path:
            break
    print(f"Preloading {lib_name} from {lib_path}")
    if not lib_path:
        raise ValueError(f"{lib_name} not found in the system path {sys.path}")
    ctypes.CDLL(lib_path)


def preload_cuda_deps() -> None:
    cuda_libs: Dict[str, str] = {
        'cublas': 'libcublas.so.*[0-9]',
        'cudnn': 'libcudnn.so.*[0-9]',
        'cuda_nvrtc': 'libnvrtc.so.*[0-9].*[0-9]',
        'cuda_runtime': 'libcudart.so.*[0-9].*[0-9]',
        'cuda_cupti': 'libcupti.so.*[0-9].*[0-9]',
        'cufft': 'libcufft.so.*[0-9]',
        'curand': 'libcurand.so.*[0-9]',
        'cusolver': 'libcusolver.so.*[0-9]',
        'cusparse': 'libcusparse.so.*[0-9]',
        'nccl': 'libnccl.so.*[0-9]',
        'nvtx': 'libnvToolsExt.so.*[0-9]',
        'nvimgcodec': 'libnvimgcodec.so.*[0-9]',
    }
    for lib_folder, lib_name in cuda_libs.items():
        _preload_cuda_deps(lib_folder, lib_name)

I have several NVIDIA libraries and wanted to use 'from nvidia import nvimgcodec', but multiple libraries each have their own 'nvidia' directory under site-packages (e.g., pip_deps_cublas_cu11/site-packages/nvidia/ and pip_deps_nvimagecodec_cu11/site-packages/nvidia/), and 'from nvidia ...' always resolves to the cublas library.

My workaround is to copy the nvimgcodec library from my local Python environment to the Bazel directory, place it under pip_deps_nvimagecodec_cu11/site-packages/nvidia_img/, and then use 'from nvidia_img import nvimgcodec'.

I also tried just copying the nvimgcodec library from pip_deps_nvimagecodec_cu11/site-packages/nvidia and modifying the linking, but that didn't work, so I copied it from my local environment instead.

I'm not sure if I can add this as a patch because it doesn't really make sense. Do you know if there's a better solution for this? Thanks so much for your help!

@guoriyue

> I also tried just copying the nvimgcodec library from pip_deps_nvimagecodec_cu11/site-packages/nvidia and modifying the linking, but that didn't work, so I copied it from my local environment instead.

By the way, to clarify: I can use 'from nvidia_img import nvimgcodec', but the library doesn't seem to be initialized correctly. When I run the sample code to create a decoder, I just get None. I'm not sure if it's related to the copying and re-linking.

from nvidia import nvimgcodec
decoder = nvimgcodec.Decoder()

@1e100

1e100 commented Dec 7, 2024

Could someone comment on which version of rules_python this is not broken in? PyTorch did work before, at the very minimum. It'd be great to know if there's a rollback path.

@1e100

1e100 commented Dec 9, 2024

rules_py does seem to fix it, with some workarounds.

@keith
Member

keith commented Dec 9, 2024

rules_python has always worked this way, so no, it's not a regression.

@rickeylev rickeylev changed the title Lack of site-packages breaks assumptions of third-party packages causing friction Implement venv/site-packages based binaries Feb 4, 2025
@rickeylev rickeylev added the core-rules Issues concerning core bin/test/lib rules label Feb 4, 2025
@rickeylev
Collaborator

I've renamed this to better reflect what is desired: to use a virtual environment with site-packages.

I've made a bit of progress on that front. --bootstrap_impl=script creates and uses a venv for the interpreter. So half the problem (use a venv) is solved.

The other half is populating the site-packages directory.

My current thinking is to have PyInfo carry info so that py_binary can then create symlinks in the site-packages directory. Probably a depset[tuple[str site_packages_path, str runfiles_actual_path]]. Or maybe depset[File marker]? It might be possible to combine the import strings (from attr.imports) and the marker file to infer the site-packages directories. For .pth files, I think we'd just glob() any (distro root level) .pth files and stick them in site-packages? Those would go into another PyInfo field.
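To make the symlink idea concrete, here is a rough Python sketch of the population step (every name here is invented; the actual implementation would be Starlark logic inside py_binary, with the pairs coming from the proposed PyInfo field, not a Python helper like this):

```python
import os
import tempfile


def populate_site_packages(site_packages, entries):
    """Materialize a site-packages tree of symlinks.

    entries: iterable of (site_packages_path, runfiles_actual_path) pairs,
    mirroring the proposed depset[tuple[str, str]].
    """
    for rel, actual in entries:
        link = os.path.join(site_packages, rel)
        os.makedirs(os.path.dirname(link), exist_ok=True)
        os.symlink(actual, link)


# Invented stand-ins for a runfiles tree and a venv's site-packages.
runfiles = tempfile.mkdtemp()
pkg_dir = os.path.join(runfiles, "pip_torch", "site-packages", "torch")
os.makedirs(pkg_dir)

venv_site = os.path.join(tempfile.mkdtemp(), "venv", "lib", "site-packages")
os.makedirs(venv_site)

populate_site_packages(venv_site, [("torch", pkg_dir)])

print(os.path.islink(os.path.join(venv_site, "torch")))  # → True
```

Under this scheme the interpreter sees one ordinary site-packages directory, so sibling shared libraries, .pth files, and namespace packages all behave as they would after a plain pip install.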

I haven't thought it through extensively or tried anything, yet. I mean, it sounds good at a high level. My optimism is tempered, though, because bootstrap=script has brought up more edge cases than I was hoping. It definitely would have gone better with some more eyes for design, exploration, and verification.

@groodt
Collaborator Author

groodt commented Feb 6, 2025

I was quite careful to avoid the mention of a "venv" in the original issue to be honest. 😂 The new title is probably more appropriate for a PR that addresses some/all of the combined friction described in this issue, but I don't think it matters too much as long as this issue is easily discoverable.

While a venv might be some path towards a solution to the friction I describe, it's likely just an implementation detail. For example, when packaging python applications into docker containers or distroless (without bazel), you don't need to use a venv and you typically just install into the site-packages folder of the interpreter directly. Using a venv in the bootstrap doesn't directly solve any of the friction outlined in the original issue.

I'm not sure about the commentary on .pth files. When using a traditional site-packages, or any folder marked as a "site dir" (site.addsitedir(dir)), .pth files are handled automatically by the interpreter and the standard library.

The same is true for Namespace packages. In a normal "site dir", no special handling is required.
