
Commit a240d45

s4ayub authored and pytorchmergebot committed
[torch deploy] Update deploy.rst with working simple example (pytorch#76538)
Summary:
Pull Request resolved: pytorch#76538

When running the example from the docs, I found that these steps were not working. These are the updates necessary to get the example working.

Test Plan: n/a

Reviewed By: PaliC

Differential Revision: D35998155

fbshipit-source-id: d78bb2886f94889abae5a3af5239fcd306cd5e09
(cherry picked from commit 6893812)
1 parent 598e7e5 commit a240d45

File tree

2 files changed: +75 −16 lines changed

docs/source/deploy.rst (+70 −16)
@@ -29,8 +29,7 @@ When running ``setup.py``, you will need to specify ``USE_DEPLOY=1``, like:

     export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
     export USE_DEPLOY=1
-    python setup.py bdist_wheel
-    python -mpip install dist/*.whl
+    python setup.py develop


 Creating a model package in Python
@@ -53,28 +52,39 @@ For now, let's create a simple model that we can load and run in ``torch::deploy``.

     # Package and export it.
     with PackageExporter("my_package.pt") as e:
         e.intern("torchvision.**")
+        e.extern("numpy.**")
         e.extern("sys")
+        e.extern("PIL.*")
         e.save_pickle("model", "model.pkl", model)

+Note that since "numpy", "sys" and "PIL" were marked as "extern", ``torch.package`` will
+look for these dependencies on the system that loads this package. They will not be packaged
+with the model.
+
 Now, there should be a file named ``my_package.pt`` in your working directory.

-.. note::
-
-    Currently, ``torch::deploy`` supports only the Python standard library and
-    ``torch`` as ``extern`` modules in ``torch.package``. In the future we plan
-    to transparently support any Conda environment you point us to.
+Loading and running the model in C++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set an environment variable (e.g. ``$PATH_TO_EXTERN_PYTHON_PACKAGES``) to indicate to the interpreters
+where the external Python dependencies can be found. In the example below, the path to the
+site-packages of a conda environment is provided.

-Loading and running the model in C++
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. code-block:: bash
+
+    export PATH_TO_EXTERN_PYTHON_PACKAGES= \
+        "~/anaconda/envs/deploy-example-env/lib/python3.8/site-packages"

 Let's create a minimal C++ program that loads the model.

 .. code-block:: cpp

-    #include <torch/deploy.h>
+    #include <torch/csrc/deploy/deploy.h>
+    #include <torch/csrc/deploy/path_environment.h>
     #include <torch/script.h>
+    #include <torch/torch.h>

     #include <iostream>
     #include <memory>
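The ``intern``/``extern`` patterns added in the hunk above use ``torch.package``'s glob-style module matching, where ``**`` crosses submodule boundaries and ``*`` stays within a single name segment. As a rough illustration of that rule (a sketch, not ``torch.package``'s actual ``GlobGroup`` implementation, which handles additional edge cases):

```python
import re

def module_glob_to_regex(pattern):
    # Translate a torch.package-style module pattern into a regex:
    #   '**' matches any run of characters, dots included (whole subtrees),
    #   '*'  matches within a single dotted name segment (no dots).
    parts = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            parts.append(".*")
            i += 2
        elif pattern[i] == "*":
            parts.append("[^.]*")
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile("".join(parts) + r"\Z")

def matches(pattern, module_name):
    """True if module_name falls under the given pattern."""
    return module_glob_to_regex(pattern).match(module_name) is not None
```

Under this sketch, ``matches("numpy.**", "numpy.random.mtrand")`` and ``matches("PIL.*", "PIL.Image")`` hold, while ``matches("PIL.*", "PIL.Image.Image")`` does not, since ``*`` cannot cross a dot.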
@@ -86,14 +96,19 @@ Let's create a minimal C++ program that loads the model.
     }

     // Start an interpreter manager governing 4 embedded interpreters.
-    torch::deploy::InterpreterManager manager(4);
+    std::shared_ptr<torch::deploy::Environment> env =
+        std::make_shared<torch::deploy::PathEnvironment>(
+            std::getenv("PATH_TO_EXTERN_PYTHON_PACKAGES")
+        );
+    torch::deploy::InterpreterManager manager(4, env);

     try {
         // Load the model from the torch.package.
         torch::deploy::Package package = manager.loadPackage(argv[1]);
         torch::deploy::ReplicatedObj model = package.loadPickle("model", "model.pkl");
     } catch (const c10::Error& e) {
         std::cerr << "error loading the model\n";
+        std::cerr << e.msg();
         return -1;
     }
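One pitfall with the pattern above: ``std::getenv`` returns a null pointer when ``PATH_TO_EXTERN_PYTHON_PACKAGES`` is unset, which ``PathEnvironment`` cannot turn into a usable search path. A hypothetical pre-flight check (an illustrative helper, not part of the example) that validates the variable before launching the app might look like:

```python
import os

def check_extern_path(var="PATH_TO_EXTERN_PYTHON_PACKAGES"):
    """Fail fast if the extern-packages variable is unset or invalid.

    Validating up front gives a clearer error than whatever the
    interpreters report when they cannot import the extern modules.
    """
    path = os.environ.get(var)
    if path is None:
        raise RuntimeError(f"{var} is not set")
    path = os.path.expanduser(path)  # the docs example uses a ~ path
    if not os.path.isdir(path):
        raise RuntimeError(f"{var}={path} is not a directory")
    return path
```

Running such a check in a launcher script before invoking the C++ binary surfaces configuration mistakes early.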
@@ -105,6 +120,9 @@ This small program introduces many of the core concepts of ``torch::deploy``.
 An ``InterpreterManager`` abstracts over a collection of independent Python
 interpreters, allowing you to load balance across them when running your code.

+``PathEnvironment`` enables you to specify the location of Python
+packages on your system which are external, but necessary, for your model.
+
 Using the ``InterpreterManager::loadPackage`` method, you can load a
 ``torch.package`` from disk and make it available to all interpreters.

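The load balancing mentioned above can be pictured as a pool of interpreters handed out to whichever caller arrives while one is idle. A toy model of that behavior (a conceptual sketch, not ``torch::deploy``'s implementation):

```python
import threading

class ToyInterpreterPool:
    """Rough model of an interpreter manager dispatching work to
    whichever embedded interpreter is currently free."""

    def __init__(self, n):
        self.free = list(range(n))        # indices of idle interpreters
        self.cond = threading.Condition()

    def acquire(self):
        with self.cond:
            while not self.free:
                self.cond.wait()          # block until one frees up
            return self.free.pop()

    def release(self, idx):
        with self.cond:
            self.free.append(idx)
            self.cond.notify()

    def run(self, fn, *args):
        # Borrow a free interpreter, run the work, then return it.
        idx = self.acquire()
        try:
            return fn(idx, *args)
        finally:
            self.release(idx)
```

For example, ``ToyInterpreterPool(4).run(lambda i, x: x * 2, 21)`` borrows one of four slots, computes on it, and returns the slot to the pool.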
@@ -120,20 +138,55 @@ a free interpreter to execute that interaction.

 Building and running the application
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+Locate `libtorch_deployinterpreter.o` on your system. This should have been
+built when PyTorch was built from source. In the same PyTorch directory, locate
+the deploy source files. Set these locations to environment variables for the build.
+An example of where these can be found on a system is shown below.
+
+.. code-block:: bash
+
+    export DEPLOY_INTERPRETER_PATH="/pytorch/build/torch/csrc/deploy/"
+    export DEPLOY_DIR="/pytorch/torch/csrc/deploy/"
+
+As ``torch::deploy`` is in active development, these manual steps will be removed
+soon.
+
 Assuming the above C++ program was stored in a file called `example-app.cpp`, a
 minimal CMakeLists.txt file would look like:

 .. code-block:: cmake

-    cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
+    cmake_minimum_required(VERSION 3.19 FATAL_ERROR)
     project(deploy_tutorial)

+    find_package(fmt REQUIRED)
     find_package(Torch REQUIRED)

-    add_executable(example-app example-app.cpp)
-    target_link_libraries(example-app "${TORCH_LIBRARIES}")
-    set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
+    add_library(torch_deploy_internal STATIC
+        ${DEPLOY_INTERPRETER_PATH}/libtorch_deployinterpreter.o
+        ${DEPLOY_DIR}/deploy.cpp
+        ${DEPLOY_DIR}/loader.cpp
+        ${DEPLOY_DIR}/path_environment.cpp
+        ${DEPLOY_DIR}/elf_file.cpp)
+
+    # for python builtins
+    target_link_libraries(torch_deploy_internal PRIVATE
+        crypt pthread dl util m z ffi lzma readline nsl ncursesw panelw)
+    target_link_libraries(torch_deploy_internal PUBLIC
+        shm torch fmt::fmt-header-only)
+    caffe2_interface_library(torch_deploy_internal torch_deploy)
+
+    add_executable(example-app example-app.cpp)
+    target_link_libraries(example-app PUBLIC
+        "-Wl,--no-as-needed -rdynamic" dl torch_deploy "${TORCH_LIBRARIES}")
+
+Currently, it is necessary to build ``torch::deploy`` as a static library.
+In order to correctly link to a static library, the utility ``caffe2_interface_library``
+is used to appropriately set and unset the ``--whole-archive`` flag.
+
+Furthermore, the ``-rdynamic`` flag is needed when linking the executable
+to ensure that symbols are exported to the dynamic table, making them accessible
+to the deploy interpreters (which are dynamically loaded).

 The last step is configuring and building the project. Assuming that our code
 directory is laid out like this:
@@ -152,8 +205,9 @@ We can now run the following commands to build the application from within the

     mkdir build
     cd build
     # Point CMake at the built version of PyTorch we just installed.
-    SITE_PACKAGES="$(python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())')"
-    cmake -DCMAKE_PREFIX_PATH="$SITE_PACKAGES/torch" ..
+    cmake -DCMAKE_PREFIX_PATH="$(python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')" .. \
+        -DDEPLOY_INTERPRETER_PATH="$DEPLOY_INTERPRETER_PATH" \
+        -DDEPLOY_DIR="$DEPLOY_DIR"
     cmake --build . --config Release

 Now we can run our app:

torch/csrc/deploy/README.md (+5)

@@ -20,3 +20,8 @@ Because CPython builds successfully when optional dependencies are missing, the

 To be safe, install the [complete list of dependencies for CPython](https://devguide.python.org/setup/#install-dependencies) for your platform, before trying to build torch with USE_DEPLOY=1.

 If you already built CPython without all the dependencies and want to fix it, just blow away the CPython folder under torch/csrc/deploy/third_party, install the missing system dependencies, and re-attempt the pytorch build command.
+
+# Example
+
+Read the [getting started guide](https://github.com/pytorch/pytorch/blob/master/docs/source/deploy.rst) for an
+example on how to use `torch::deploy`.
