Add Docker deployment capability #562
base: main
Conversation
Add Docker deployment capability to the project.

* **Dockerfile**: Create a `Dockerfile` to define the Docker image for the project using Python 3.12, copy project files, install dependencies, and set the entry point (a hedged sketch follows this description).
* **docker-compose.yml**: Create a `docker-compose.yml` to define the Docker services, including building the project service, exposing necessary ports, and setting environment variables.
* **.circleci/config.yml**: Add a job to build and push the Docker image to a registry, and update the workflow to include the new Docker jobs.
* **README.md**: Add instructions to build and run the Docker image, and use `docker-compose` to run the services.

---

For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/exo-explore/exo?shareId=XXXX-XXXX-XXXX-XXXX).
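The description above names the files but does not show their contents. As a minimal, assumption-laden sketch of what the described `Dockerfile` could look like (the `python:3.12-slim` base image, the `/app` working directory, and the bare `exo` entry point are guesses, not the PR's actual contents; port 52415 is the port exo publishes in the compose file shown later in this thread):

# Sketch only; the PR's real Dockerfile may differ.
FROM python:3.12-slim

WORKDIR /app

# Copy project files and install dependencies from the project's own metadata
COPY . /app
RUN pip install --no-cache-dir -e .

# Port exo listens on (matches the compose examples in this thread)
EXPOSE 52415

# Set the entry point to the exo CLI installed by pip
ENTRYPOINT ["exo"]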
Did you test this on Apple Silicon?
@jincdream if you need any help testing on Apple Silicon, let me know - would be glad to assist if you do not have access to it.
That's great! I haven't tested it yet and I'm so glad you're willing to lend a hand. Looking forward to working with you on this. Thanks a bunch!
Hello! I have also created a Docker version, though GPU acceleration is a bit more complicated: it results in a large disk image, and the host machine's CUDA version must match the CUDA version inside the container, otherwise it won't work. Currently I am only using this for testing:

Dockerfile

FROM nvidia/cuda:12.4.0-devel-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get -yqq update \
&& apt-get -yqq dist-upgrade \
&& apt-get -yqq upgrade \
&& apt-get -y --no-install-recommends install \
    python3 \
    python3-pip \
    python3-venv \
    python3-clang \
    clang \
    libgl1 \
    libglib2.0-0 \
    git \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -m -d /home/container container \
&& usermod -aG adm,audio,video container
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
USER container
RUN git clone https://github.com/exo-explore/exo.git ~/exo \
&& python3 -m venv ~/exo/.venv \
&& cd ~/exo \
&& . ~/exo/.venv/bin/activate \
&& pip3 install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 \
&& pip3 install --no-cache-dir -e . \
&& cd ~ \
&& deactivate
EXPOSE 52415/tcp
ENTRYPOINT ["/entrypoint.sh"]

entrypoint.sh

#!/usr/bin/env bash
cd ~/exo
source ~/exo/.venv/bin/activate
exo ${EXO_ARGS}

docker-compose.yml

services:
  docker_exo:
    network_mode: bridge
    build: .
    container_name: docker_exo_container
    restart: unless-stopped
    ports:
      - "52415:52415"
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=gpu,utility,video,compute
      - EXO_ARGS=--discovery-module=udp --data=/opt/exo-data --models-seed-dir=/opt/exo-seed
    volumes:
      - ./opt/exo-data:/opt/exo-data
      - ./opt/exo-seed:/opt/exo-seed
    #devices:
    #  - "/dev/dri:/dev/dri" # GPU
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu, utility, video, compute]

With this approach, exo recognizes the GPU.
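A usage sketch, assuming the three files above sit in the same directory, Compose v2 is available, and the NVIDIA Container Toolkit is installed on the host; the container and service names come from the compose file above:

# Build the image and start the service in the background
docker compose up --build -d

# Confirm the container can see the host GPU
docker exec docker_exo_container nvidia-smi

# Follow exo's logs; the node listens on the published port 52415
docker compose logs -f docker_exo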