@@ -6,8 +6,8 @@ For a list of devices, see below, under *SUPPORTED SYSTEMS*
A goal of this repo, and of the design of the PT2 components, was to offer seamless integration and consistent workflows.
Both mobile and server/desktop paths start with torch.export() receiving the same model description. Similarly,
- integration into runners for Python (for initial testing) and Python-free environments (for deployment, in runner-posix
- and runner-mobile, respectively) offer very consistent experiences across backends and offer developers consistent interfaces
+ integration into runners for Python (for initial testing) and Python-free environments (for deployment, in runner-aoti
+ and runner-et, respectively) offers a consistent experience across backends and offers developers consistent interfaces
and user experience whether they target server, desktop or mobile & edge use cases, and/or all of them.
@@ -85,12 +85,14 @@ The environment variable MODEL_REPO should point to a directory with the `model.
The command below will add the file "llama-fast.pte" to your MODEL_REPO directory.

```
- python et_export.py --checkpoint_path $MODEL_REPO/model.pth -d fp32 --xnnpack --out-path ${MODEL_REPO}
+ python et_export.py --checkpoint_path $MODEL_REPO/model.pth -d fp32 --out-path ${MODEL_REPO}
```
- How do run is problematic -- I would love to run it with
+ TODO(fix this): the export command works with the "--xnnpack" flag, but the next generate.py command will not run it, so we do not set it right now.
+
+ To run the .pte file, run the following. Note that this is very slow at the moment.
```
- python generate.py --pte ./${MODEL_REPO}.pte --prompt "Hello my name is" --device cpu
+ python generate.py --checkpoint_path $MODEL_REPO/model.pth --pte $MODEL_REPO/llama-fast.pte --prompt "Hello my name is" --device cpu
```
but *that requires xnnpack to work in python!*
@@ -233,6 +235,11 @@ List dependencies for these backends
### ExecuTorch
Set up executorch by following the instructions [here](https://pytorch.org/executorch/stable/getting-started-setup.html#setting-up-executorch).
+ Make sure that when you run the installation script in the executorch repo, you enable pybind.
+ ```
+ ./install_requirements.sh --pybind
+ ```
+
# Acknowledgements