[Bug]: Running same binary in container build with py_image_layer causes failed imports
#526
Labels: bug
What happened?
We recently migrated from version 0.7.1 to 1.2.1 and switched the way we build our Docker images from a modified version of the old template to `py_image_layer`. Overall it has been great, except for one thing: we deploy our containers to K8s with health/status checks on them. The issue is that the health checks use the same binary that the image runs normally, just in a different mode. Concretely, we are running Dagster, where the container runs `dagster api grpc` and the health checks use `dagster api grpc-health-check`. What we found is that, since the same binary target is run by two separate processes in the same container, the venv backing the Python script was re-created on each health check run. This caused the base process to temporarily lose the packages in its venv, making it unhealthy and thus fail.

It seems like #522 would fix this, since the venv would be stable, but in the meantime, is there a way to work around it temporarily?
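One stopgap we are considering (untested, and resting on the assumption that the generated venv location is keyed off the binary target rather than shared between invocations; target and file names below are illustrative, not our actual build):

```starlark
# Untested workaround sketch: duplicate the binary under a second name so the
# probe process and the server process stop re-creating each other's venv.
# Assumes each py_binary target gets its own generated venv, which I have
# not verified. Names are hypothetical.
load("@aspect_rules_py//py:defs.bzl", "py_binary")

# Runs as the container entrypoint: `... api grpc`.
py_binary(
    name = "dagster_server",
    srcs = ["dagster_main.py"],
    main = "dagster_main.py",
    deps = ["@pypi//dagster"],
)

# Identical binary under a second name, invoked only by the health check:
# `... api grpc-health-check`.
py_binary(
    name = "dagster_healthcheck",
    srcs = ["dagster_main.py"],
    main = "dagster_main.py",
    deps = ["@pypi//dagster"],
)
```

Both targets would then need to be layered into the image, which duplicates the site-packages layers, so this is far from ideal.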
Version
Development (host) and target OS/architectures: aarch64 Darwin -> aarch64 Darwin, Linux x86_64 -> Linux x86_64

Output of `bazel --version`: 8.0.0

Version of the Aspect rules, or other relevant rules from your `WORKSPACE` or `MODULE.bazel` file: 1.2.1

Language(s) and/or frameworks involved: Python 3.11, Docker/rules_oci 1.7.4
How to reproduce
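We have not minimized a reproduction yet, but the shape of it is roughly the sketch below: a single `py_binary` is layered into an image with `py_image_layer` and invoked both as the container command and by the Kubernetes exec probe. Target names, file names, the base image, and paths are illustrative, and the load statements are from the rules_py/rules_oci docs as best I recall:

```starlark
# Illustrative sketch, not our actual build.
load("@aspect_rules_py//py:defs.bzl", "py_binary", "py_image_layer")
load("@rules_oci//oci:defs.bzl", "oci_image")

# Single binary that serves both roles depending on its CLI arguments.
py_binary(
    name = "dagster_bin",
    srcs = ["dagster_main.py"],
    main = "dagster_main.py",
    deps = ["@pypi//dagster"],
)

# Layer the binary and its dependencies into the image.
py_image_layer(
    name = "dagster_layers",
    binary = ":dagster_bin",
    root = "/app",
)

oci_image(
    name = "image",
    base = "@python_base",  # placeholder base image
    # Long-running server process:
    entrypoint = ["/app/dagster_bin", "api", "grpc"],
    tars = [":dagster_layers"],
)

# The Kubernetes liveness probe then execs the same binary inside the running
# container with different arguments:
#   /app/dagster_bin api grpc-health-check
# Each probe run re-creates the venv the server process is using, so the
# server intermittently fails imports.
```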
Any other information?
We were getting import failures for `datasets` despite having it included in the binary. After turning off our health checks, the error went away. I also SSH'd into the pod and inspected the packages in the generated venv, and saw that it would repeatedly have only a subset of the expected packages and then, shortly after, all of them again.
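For anyone else debugging this, a quick poll loop like the hypothetical one below, run inside the pod, makes the churn visible (the site-packages path is a placeholder for wherever the generated venv lives in your container):

```python
# Sketch: watch the generated venv's site-packages churn from inside the pod.
import pathlib
import time

# Placeholder path; substitute the venv that rules_py generates at runtime.
site = pathlib.Path("/path/to/generated/venv/lib/python3.11/site-packages")

while True:
    entries = sorted(p.name for p in site.iterdir())
    print(time.strftime("%H:%M:%S"), f"{len(entries)} entries")
    time.sleep(0.5)
```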