Implement venv/site-packages based binaries #2156

Open
2 of 10 tasks
groodt opened this issue Aug 24, 2024 · 11 comments
Labels
core-rules Issues concerning core bin/test/lib rules

Comments

@groodt
Collaborator

groodt commented Aug 24, 2024

Context

This is a tracking issue to recognise that the lack of a site-packages layout causes friction when making use of third-party distribution packages (wheels and sdists) from indexes such as PyPI.

Outside bazel and rules_python, it is common for distribution packages to assume that they will be installed into a single site-packages folder, either in a "virtual environment" or directly into a python user or global site installation.

Notable examples are the libraries in the AI/ML ecosystem that make use of the NVIDIA CUDA shared libraries. These shared libraries contain relative rpaths in their ELF/Mach-O/DLL headers, which fail to resolve when the libraries are not installed as siblings in a site-packages layout.

There is also a complication introduced into the rules by the lack of a single site-packages folder: namespace packages in rules_python are all processed into pkgutil-style namespace packages. This seems to work, but it wouldn't be necessary if site-packages were used.

Another, rarer issue is the failure to load *.pth files. Python provides site-specific configuration hooks that can customize sys.path at startup. rules_python could perhaps work around this, but if a site-packages layout were used and discovered by the interpreter at startup, no workarounds would be necessary.
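
Concretely, when a directory is registered as a site dir, the interpreter's standard machinery processes any *.pth files it contains. A minimal sketch (the temporary paths and the `example.pth` name are hypothetical):

```python
import os
import site
import sys
import tempfile

# Create a throwaway "site-packages"-style directory containing a .pth file.
site_dir = tempfile.mkdtemp()
extra_dir = os.path.join(site_dir, "extra")
os.makedirs(extra_dir)

# Each non-comment line of a .pth file naming an existing directory
# is appended to sys.path.
with open(os.path.join(site_dir, "example.pth"), "w") as f:
    f.write(extra_dir + "\n")

# site.addsitedir() adds site_dir to sys.path AND processes its .pth files.
site.addsitedir(site_dir)
print(extra_dir in sys.path)  # True
```

A plain `sys.path.append(site_dir)` would add the directory but skip the .pth processing, which is exactly the behavior gap described above.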

Distribution packages on PyPI known to have issues:

  • torch
  • onnxruntime-gpu
  • rerun-sdk

Known workarounds

  1. Patch the third-party dependencies using rules_python patching support
  2. Use an alternative set of rules such as rules_py
  3. Patch the third-party dependencies outside rules_python and push the patched dependencies to a private index

Related

Proposed design to solve

The basic proposed solution is to create a per-binary virtual env whose site-packages contains symlinks to other locations in runfiles, e.g. `$runfiles/mybin.venv/site-packages/foo` would be a symlink to `$runfiles/_pypi_foo/site-packages/foo`.
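
As a runtime sketch of why this layout works (the name `foo` and the temp paths are hypothetical stand-ins for the runfiles paths above): Python's import machinery follows symlinks, so a per-binary site-packages assembled from links resolves imports to the shared copies.

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()

# Shared location, analogous to $runfiles/_pypi_foo/site-packages/foo.
pkg_src = os.path.join(root, "_pypi_foo", "site-packages", "foo")
os.makedirs(pkg_src)
with open(os.path.join(pkg_src, "__init__.py"), "w") as f:
    f.write("NAME = 'foo'\n")

# Per-binary venv site-packages, analogous to $runfiles/mybin.venv/site-packages.
venv_sp = os.path.join(root, "mybin.venv", "site-packages")
os.makedirs(venv_sp)
os.symlink(pkg_src, os.path.join(venv_sp, "foo"))

# Only the venv's site-packages goes on sys.path; the import resolves
# through the symlink to the shared copy.
sys.path.insert(0, venv_sp)
import foo
print(foo.NAME)  # foo
```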

TODO list

  • Add PyInfo.site_packages_symlinks. A depset of site-packages relative paths and runfiles paths to symlink to.
  • Make pypi-generated targets use this site-packages solution by default
    • Disable pkgutil-style __init__.py generation in pypi repo phase
    • Maybe refactor the pypi generation to use a custom rule instead of plain py_library.
  • Add a flag to allow experimentation and testing
  • Edge cases
    • if two distributions install into the same directory and/or have overlapping files
    • Handling pkgutil-style packages
    • Interaction of bootstrap=script vs bootstrap=system with this new layout
    • Handle platforms/cases where symlinks can't be created at build time (windows, using rules_pkg)
    • Handling if multiple versions of a distribution are in the deps and ensuring only one is used, while still respecting merge/conflict logic.
@guoriyue

guoriyue commented Sep 11, 2024

Hi experts, I'm new to Bazel and having an issue with nvimgcodec. I tried adding it in the preloading function, but preloading doesn't seem to work for me.

import ctypes
import glob
import os
import platform
import sys
from typing import Dict

def _preload_cuda_deps(lib_folder: str, lib_name: str) -> None:
    """Preloads cuda deps if they could not be found otherwise."""
    # Should only be called on Linux if default path resolution has failed
    assert platform.system() == 'Linux', 'Should only be called on Linux'
    lib_path = None
    for path in sys.path:
        nvidia_path = os.path.join(path, 'nvidia')
        if not os.path.exists(nvidia_path):
            continue
        print(f"Checking nvidia_path {nvidia_path}")
        if "nvimgcodec" == lib_folder:
            candidate_lib_paths = glob.glob(os.path.join(nvidia_path, lib_folder, 'libnvimgcodec.so.*[0-9]'))
        else:
            candidate_lib_paths = glob.glob(os.path.join(nvidia_path, lib_folder, 'lib', lib_name))
        print(f"Found candidate_lib_paths {candidate_lib_paths}")
        if candidate_lib_paths and not lib_path:
            lib_path = candidate_lib_paths[0]
        print(f"Found lib_path {lib_path}")
        if lib_path:
            break
    print(f"Preloading {lib_name} from {lib_path}")
    if not lib_path:
        raise ValueError(f"{lib_name} not found in the system path {sys.path}")
    ctypes.CDLL(lib_path)

def preload_cuda_deps() -> None:
    cuda_libs: Dict[str, str] = {
        'cublas': 'libcublas.so.*[0-9]',
        'cudnn': 'libcudnn.so.*[0-9]',
        'cuda_nvrtc': 'libnvrtc.so.*[0-9].*[0-9]',
        'cuda_runtime': 'libcudart.so.*[0-9].*[0-9]',
        'cuda_cupti': 'libcupti.so.*[0-9].*[0-9]',
        'cufft': 'libcufft.so.*[0-9]',
        'curand': 'libcurand.so.*[0-9]',
        'cusolver': 'libcusolver.so.*[0-9]',
        'cusparse': 'libcusparse.so.*[0-9]',
        'nccl': 'libnccl.so.*[0-9]',
        'nvtx': 'libnvToolsExt.so.*[0-9]',
        'nvimgcodec': 'libnvimgcodec.so.*[0-9]',
    }
    for lib_folder, lib_name in cuda_libs.items():
        _preload_cuda_deps(lib_folder, lib_name)

I have several Nvidia libraries and I wanted to use 'from nvidia import nvimgcodec', but multiple libraries have their own 'nvidia' directory under site-packages (e.g., pip_deps_cublas_cu11/site-packages/nvidia/ and pip_deps_nvimagecodec_cu11/site-packages/nvidia/), and 'from nvidia' always directs me to the cublas library.

My workaround is to copy the nvimgcodec library from my local Python environment to the Bazel directory, place it under pip_deps_nvimagecodec_cu11/site-packages/nvidia_img/, and then use 'from nvidia_img import nvimgcodec'.

I also tried just copying the nvimgcodec library from pip_deps_nvimagecodec_cu11/site-packages/nvidia and modifying the linking, but that didn't work, so I copied it from my local environment instead.

I'm not sure if I can add this as a patch because it doesn't really make sense. Do you know if there's a better solution for this? Thanks so much for your help!

@guoriyue

I also tried just copying the nvimgcodec library from pip_deps_nvimagecodec_cu11/site-packages/nvidia and modifying the linking, but that didn't work, so I copied it from my local environment instead.

By the way, for this, I mean I could use 'from nvidia_img import nvimgcodec', but it seems the library is not initialized correctly. When I try to run the sample code to get a decoder, I just get None. I'm not sure if it's related to the copying and re-linking.

from nvidia import nvimgcodec
decoder = nvimgcodec.Decoder()

@1e100

1e100 commented Dec 7, 2024

Could someone comment on which version of rules_python this is not broken in? PyTorch did work before, at the very minimum. It'd be great to know if there's a rollback path.

@1e100

1e100 commented Dec 9, 2024

rules_py does seem to fix it, with some workarounds.

@keith
Member

keith commented Dec 9, 2024

rules_python has always worked this way, so yeah, it's not a regression.

@rickeylev rickeylev changed the title Lack of site-packages breaks assumptions of third-party packages causing friction Implement venv/site-packages based binaries Feb 4, 2025
@rickeylev rickeylev added the core-rules Issues concerning core bin/test/lib rules label Feb 4, 2025
@rickeylev
Collaborator

I've renamed this to better reflect what is desired: to use a virtual environment with site-packages.

I've made a bit of progress on that front. --bootstrap_impl=script creates and uses a venv for the interpreter. So half the problem (use a venv) is solved.

The other half is populating the site-packages directory.

My current thinking is to have PyInfo carry info so that py_binary can then create symlinks in the site-packages directory. Probably a depset[tuple[str site_packages_path, str runfiles_actual_path]]. Or maybe depset[File marker]? It might be possible to combine the import strings (from attr.imports) and the marker file to infer the site-packages directories. For .pth files, I think we'd just glob() any (distro root level) pth files and stick them in site-packages? Which would go into another PyInfo field.

I haven't thought it through extensively or tried anything, yet. I mean, it sounds good at a high level. My optimism is tempered, though, because bootstrap=script has brought up more edge cases than I was hoping. It definitely would have gone better with some more eyes for design, exploration, and verification.

@groodt
Collaborator Author

groodt commented Feb 6, 2025

I was quite careful to avoid the mention of a "venv" in the original issue to be honest. 😂 The new title is probably more appropriate for a PR that addresses some/all of the combined friction described in this issue, but I don't think it matters too much as long as this issue is easily discoverable.

While a venv might be some path towards a solution to the friction I describe, it's likely just an implementation detail. For example, when packaging python applications into docker containers or distroless (without bazel), you don't need to use a venv and you typically just install into the site-packages folder of the interpreter directly. Using a venv in the bootstrap doesn't directly solve any of the friction outlined in the original issue.

I'm not sure about the commentary on .pth files. When using a traditional site-packages directory, or any folder marked as a "site dir" (site.addsitedir(dir)), .pth files are handled automatically by the interpreter and standard library.

The same is true for Namespace packages. In a normal "site dir", no special handling is required.

@rickeylev
Collaborator

re: venv not relevant to solving the site-packages issue: ok, good points, fair enough. A key part of the issue is that, right now, the various libraries aren't collected into a single location ("site-packages"). Collecting them like that is problematic because each binary can have a different set of dependencies. Hence why I've closely associated venvs with the issue: the binary-specific venv, which has its own site-packages directory, is a natural place to collect them.

In any case, I have a PR almost ready for review. It has 3 key parts:

  1. PyInfo.site_packages_symlinks. This allows propagating up information about what should be put into site-packages.
  2. py_library.site_packages_root. This allows interpreting the files in srcs as following a site-packages layout; it feeds into the PyInfo field above.
  3. In py_binary, the PyInfo field is used to populate the site-packages directory in its venv.

Note that the above only applies to bootstrap=script. One of the neat things about this approach is that it's pretty cheap and supports multiple dependency closures in a single build.

Something I ran into as I implemented the above is how the pip repo rules handle namespace packages. Right now, they create a pkgutil style shim -- this confuses the build-phase logic into thinking they aren't namespace packages and causes conflicts trying to figure out what files to symlink. This should be easy to address, though.

I have several Nvidia libraries and I wanted to use 'from nvidia import nvimgcodec'
but multiple libraries have their own 'nvidia' directory under site-packages
pip_deps_cublas_cu11/site-packages/nvidia/ and pip_deps_nvimagecodec_cu11/site-packages/nvidia/

This seems problematic no matter what? Unless nvidia is a namespace package?

github-merge-queue bot pushed a commit that referenced this issue Apr 5, 2025
This implements functionality to allow libraries to populate the site-packages directory of downstream binaries. The basic implementation is:

  • Libraries provide tuples of `(runfile path, site packages path)` in the `PyInfo.site_packages_symlinks` field.
  • Binaries create symlinks (using declare_symlink) in their site-packages directory pointing to the runfiles paths libraries provide.

The design was chosen because of the following properties:

  • The site-packages directory is relocatable.
  • Populating site-packages is cheap (`O(number of third-party dependencies)`).
  • Dependencies are created only once in the runfiles, no matter how many binaries use them. This minimizes disk usage, file counts, inodes, etc.

The `site_packages_symlinks` field is a depset with topological ordering. Topological ordering allows dependencies closer to the binary to take precedence, which gives some basic control over which entries are used.

Additionally, the runfiles path to link to can be None/empty, in which case the directory in site-packages won't be created. This allows binaries to prevent creation of directories that might e.g. conflict.

For now, this functionality is disabled by default. The flag `--venvs_site_packages=yes` can be set to allow using it, which automatically enables it for pypi-generated targets.

When enabled, it does basic detection of implicit namespace directories, which allows multiple distributions to "install" into the same site-packages directory.

Though this functionality is primarily useful for dependencies from pypi (e.g. via pip.parse), it is not yet activated for those targets, for two main reasons:

  1. The wheel extraction code creates pkgutil-style `__init__.py` shims during the repo phase. The build phase can't distinguish these artificial rules_python-generated shims from actual `__init__.py` files, which breaks the implicit namespace detection logic.
  2. A flag guard is needed before changing the behavior. Even though how third-party libraries are added to sys.path is an implementation detail, the behavior has been there for many years, so an escape hatch should be added.

Work towards #2156
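
The first-entry-wins precedence that the topological ordering provides can be modeled in a few lines of plain Python (a simplified illustration with hypothetical entries, not the actual Starlark implementation):

```python
# Simplified model of consuming a topologically ordered list of
# (site_packages_path, runfiles_path) entries: entries closer to the
# binary come first and take precedence; an empty runfiles path
# suppresses creation of that site-packages entry entirely.
entries = [
    ("foo", ""),                               # binary-level: suppress 'foo'
    ("foo", "_pypi_foo/site-packages/foo"),    # library-level, ignored
    ("bar", "_pypi_bar/site-packages/bar"),
]

links = {}
for sp_path, runfiles_path in entries:
    if sp_path in links:
        continue  # first (closest-to-binary) entry wins
    links[sp_path] = runfiles_path

# Entries mapped to an empty runfiles path are dropped, not symlinked.
symlinks_to_create = {k: v for k, v in links.items() if v}
print(symlinks_to_create)  # {'bar': '_pypi_bar/site-packages/bar'}
```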
@rickeylev
Collaborator

With #2617 merged, there's some basic/experimental support for populating a binary's venv's site-packages with stuff. It assumes using --bootstrap_impl=script. There's a bunch of edge and special cases to deal with still (see end of the issue description, where I've listed some), but please report bugs if you try it.

Here's a quick rundown of things.

Setting --@rules_python//python/config_settings:venvs_site_packages=yes will make pypi dependencies populate into the site-packages directory. docs.

PyInfo.site_packages_symlinks docs is what powers things under the hood. This provider field allows creating symlinks in a binary's venv that point back to a runfiles path. Custom rules can provide their own PyInfo object with the value set. Note, however, that the field is still somewhat in flux (right now it's a 2-tuple, but I'm pretty sure it'll turn into a three-tuple or struct). If people want to use it now, then we can change it to a struct sooner rather than later.

features.bzl%py_info_site_packages_symlinks docs can be used to detect the presence of this new API.

Astute readers may find py_library.experimental_venvs_site_package docs -- this is a special attribute set by the pypi-generation code to trigger the venvs-site-packages behavior. I don't think this attribute will stick around (unless people find it useful, I guess?). I just mention it because it's an easy way to experiment with this new behavior without having to go through pypi-generated code.

@kwlzn

kwlzn commented Apr 8, 2025

@rickeylev fwiw, I don't think it's possible for most real world python users to rely on/globally enable --bootstrap_impl=script due to key roadblocks in that implementation. is depending on that the long-term plan or just temporary?

@aignas aignas marked this as a duplicate of #2757 Apr 8, 2025
@rickeylev
Collaborator

The plan is to enable bootstrap=script by default. I've just been waiting for a release that didn't have any bug reports for it, so I can start that process. Last release (or the one before) met that criteria, so I've started to do that (see #2760 -- it'll be default for linux, initially).

If you know of bugs, please report them and tag me! AFAIK (well, remember 😅), there aren't any outstanding issues with it.

I see you edited out some of your comment :) ? PRs to system_python are welcome in the meantime -- how bootstrap=script works could mostly also be applied to bootstrap=system_python, but since it's slated for removal, I haven't put effort into that (for further discussion, let's start a separate thread, though).


6 participants