@@ -29,8 +29,7 @@ When running ``setup.py``, you will need to specify ``USE_DEPLOY=1``, like:

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
export USE_DEPLOY=1
- python setup.py bdist_wheel
- python -mpip install dist/*.whl
+ python setup.py develop

Creating a model package in Python
@@ -53,28 +52,39 @@ For now, let's create a simple model that we can load and run in ``torch::deploy
# Package and export it.
with PackageExporter("my_package.pt") as e:
    e.intern("torchvision.**")
+   e.extern("numpy.**")
    e.extern("sys")
+   e.extern("PIL.*")
    e.save_pickle("model", "model.pkl", model)

+ Note that since "numpy", "sys" and "PIL" were marked as "extern", ``torch.package`` will
+ look for these dependencies on the system that loads this package. They will not be packaged
+ with the model.
+
Now, there should be a file named ``my_package.pt`` in your working directory.

- .. note::
-
-     Currently, ``torch::deploy`` supports only the Python standard library and
-     ``torch`` as ``extern`` modules in ``torch.package``. In the future we plan
-     to transparently support any Conda environment you point us to.

+ Loading and running the model in C++
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Set an environment variable (e.g. ``$PATH_TO_EXTERN_PYTHON_PACKAGES``) to indicate to the interpreters
+ where the external Python dependencies can be found. In the example below, the path to the
+ ``site-packages`` directory of a conda environment is provided.
+
+ .. code-block:: bash
+
+     export PATH_TO_EXTERN_PYTHON_PACKAGES=\
+         "~/anaconda/envs/deploy-example-env/lib/python3.8/site-packages"

- Loading and running the model in C++
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Let's create a minimal C++ program that loads the model.

.. code-block:: cpp

-     #include <torch/deploy.h>
+     #include <torch/csrc/deploy/deploy.h>
+     #include <torch/csrc/deploy/path_environment.h>
      #include <torch/script.h>
+     #include <torch/torch.h>

      #include <iostream>
      #include <memory>

@@ -86,14 +96,19 @@ Let's create a minimal C++ program that loads the model.
      }

      // Start an interpreter manager governing 4 embedded interpreters.
-     torch::deploy::InterpreterManager manager(4);
+     std::shared_ptr<torch::deploy::Environment> env =
+         std::make_shared<torch::deploy::PathEnvironment>(
+             std::getenv("PATH_TO_EXTERN_PYTHON_PACKAGES")
+         );
+     torch::deploy::InterpreterManager manager(4, env);

      try {
        // Load the model from the torch.package.
        torch::deploy::Package package = manager.loadPackage(argv[1]);
        torch::deploy::ReplicatedObj model = package.loadPickle("model", "model.pkl");
      } catch (const c10::Error& e) {
        std::cerr << "error loading the model\n";
+       std::cerr << e.msg();
        return -1;
      }

@@ -105,6 +120,9 @@ This small program introduces many of the core concepts of ``torch::deploy``.
An ``InterpreterManager`` abstracts over a collection of independent Python
interpreters, allowing you to load balance across them when running your code.
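
As a hedged illustration (not part of this diff), a single interpreter can also be
borrowed from the pool directly. The ``acquireOne`` and ``global`` helpers below follow
``torch/csrc/deploy/deploy.h``, but exact names and signatures may differ between
PyTorch versions, so treat this as a sketch rather than canonical API usage.

.. code-block:: cpp

      // Illustrative sketch only: borrow one interpreter from the manager and
      // call Python's built-in print on it. Verify acquireOne()/global()
      // against your checkout of torch/csrc/deploy/deploy.h.
      {
        auto I = manager.acquireOne();
        I.global("builtins", "print")({at::IValue(std::string("hello from torch::deploy"))});
      }
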

+ ``PathEnvironment`` enables you to specify the location of Python
+ packages on your system which are external, but necessary, for your model.
+
Using the ``InterpreterManager::loadPackage`` method, you can load a
``torch.package`` from disk and make it available to all interpreters.
@@ -120,20 +138,55 @@ a free interpreter to execute that interaction.
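
Running the packaged model is not shown in this diff. As a rough, hedged sketch, the
lines below could sit inside the ``try`` block right after ``loadPickle``; the
``{1, 3, 224, 224}`` input shape is an assumption for a torchvision image classifier,
and the ``ReplicatedObj`` call operator is taken from ``torch/csrc/deploy/deploy.h``,
so verify both against your PyTorch version.

.. code-block:: cpp

      // Hedged sketch: call the replicated model on a dummy image batch.
      // Each call picks a free interpreter from the manager to run the model.
      std::vector<torch::jit::IValue> inputs;
      inputs.push_back(torch::ones({1, 3, 224, 224}));  // assumed input shape

      at::Tensor output = model(inputs).toTensor();
      std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
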
Building and running the application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ Locate `libtorch_deployinterpreter.o` on your system. This should have been
+ built when PyTorch was built from source. In the same PyTorch directory, locate
+ the deploy source files. Set these locations in environment variables for the build.
+ An example of where these can be found on a system is shown below.
+
+ .. code-block:: bash
+
+     export DEPLOY_INTERPRETER_PATH="/pytorch/build/torch/csrc/deploy/"
+     export DEPLOY_DIR="/pytorch/torch/csrc/deploy/"
+
+ As ``torch::deploy`` is in active development, these manual steps will be removed
+ soon.
+
Assuming the above C++ program was stored in a file called `example-app.cpp`, a
minimal CMakeLists.txt file would look like:

.. code-block:: cmake

-     cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
+     cmake_minimum_required(VERSION 3.19 FATAL_ERROR)
      project(deploy_tutorial)

+     find_package(fmt REQUIRED)
      find_package(Torch REQUIRED)

-     add_executable(example-app example-app.cpp)
-     target_link_libraries(example-app "${TORCH_LIBRARIES}")
-     set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
+     add_library(torch_deploy_internal STATIC
+         ${DEPLOY_INTERPRETER_PATH}/libtorch_deployinterpreter.o
+         ${DEPLOY_DIR}/deploy.cpp
+         ${DEPLOY_DIR}/loader.cpp
+         ${DEPLOY_DIR}/path_environment.cpp
+         ${DEPLOY_DIR}/elf_file.cpp)
+
+     # for python builtins
+     target_link_libraries(torch_deploy_internal PRIVATE
+         crypt pthread dl util m z ffi lzma readline nsl ncursesw panelw)
+     target_link_libraries(torch_deploy_internal PUBLIC
+         shm torch fmt::fmt-header-only)
+     caffe2_interface_library(torch_deploy_internal torch_deploy)
+
+     add_executable(example-app example-app.cpp)
+     target_link_libraries(example-app PUBLIC
+         "-Wl,--no-as-needed -rdynamic" dl torch_deploy "${TORCH_LIBRARIES}")

+ Currently, it is necessary to build ``torch::deploy`` as a static library.
+ In order to correctly link to a static library, the utility ``caffe2_interface_library``
+ is used to appropriately set and unset the ``--whole-archive`` flag.
+
+ Furthermore, the ``-rdynamic`` flag is needed when linking to the executable
+ to ensure that symbols are exported to the dynamic table, making them accessible
+ to the deploy interpreters (which are dynamically loaded).

The last step is configuring and building the project. Assuming that our code
directory is laid out like this:
@@ -152,8 +205,9 @@ We can now run the following commands to build the application from within the
mkdir build
cd build
# Point CMake at the built version of PyTorch we just installed.
- SITE_PACKAGES="$(python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())')"
- cmake -DCMAKE_PREFIX_PATH="$SITE_PACKAGES/torch" ..
+ cmake -DCMAKE_PREFIX_PATH="$(python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')" .. \
+     -DDEPLOY_INTERPRETER_PATH="$DEPLOY_INTERPRETER_PATH" \
+     -DDEPLOY_DIR="$DEPLOY_DIR"
cmake --build . --config Release

Now we can run our app: