CONTRIBUTING.md (+1 −1)

```diff
@@ -58,7 +58,7 @@ executorch
 │ ├── <a href="exir/verification">verification</a> - IR verification.
 ├── <a href="extension">extension</a> - Extensions built on top of the runtime.
 │ ├── <a href="extension/android">android</a> - ExecuTorch wrappers for Android apps. Please refer to the <a href="docs/source/using-executorch-android.md">Android documentation</a> and <a href="https://pytorch.org/executorch/main/javadoc/">Javadoc</a> for more information.
-│ ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/stable/apple-runtime.html">how to integrate into Apple platform</a> for more information.
+│ ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/main/using-executorch-ios.html">how to integrate into Apple platform</a> for more information.
 │ ├── <a href="extension/aten_util">aten_util</a> - Converts to and from PyTorch ATen types.
 │ ├── <a href="extension/data_loader">data_loader</a> - 1st party data loader implementations.
 │ ├── <a href="extension/evalue_util">evalue_util</a> - Helpers for working with EValue objects.
```
docs/source/build-run-openvino.md (+1 −1)

```diff
@@ -61,7 +61,7 @@ For more information about OpenVINO build, refer to the [OpenVINO Build Instruct
 Follow the steps below to setup your build environment:
 
-1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](https://pytorch.org/executorch/stable/getting-started-setup#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
+1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](getting-started-setup.md#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
 
 2. **Setup OpenVINO Backend Environment**
    - Install the dependent libs. Ensure that you are inside `executorch/backends/openvino/` directory
```
docs/source (memory profiler doc; file name not captured) 

````diff
-After the [Memory Planning](https://pytorch.org/executorch/main/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/main/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
+After the [Memory Planning](concepts.md#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](concepts.md#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
 
 ## Usage
-User should add this code after they call [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
+User should add this code after they call [to_executorch()](export-to-executorch-api-reference.rst#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
 
 ```python
 from executorch.util.activation_memory_profiler import generate_memory_trace
@@ -13,18 +13,18 @@ generate_memory_trace(
     enable_memory_offsets=True,
 )
 ```
-* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
+* `prog` is an instance of [`ExecuTorchProgramManager`](export-to-executorch-api-reference.rst#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](export-to-executorch-api-reference.rst#executorch.exir.EdgeProgramManager.to_executorch).
 * Set `enable_memory_offsets` to `True` to show the location of each tensor on the memory space.
 
 ## Chrome Trace
 Open a Chrome browser tab and navigate to <chrome://tracing/>. Upload the generated `.json` to view.
 Example of a [MobileNet V2](https://pytorch.org/vision/main/models/mobilenetv2.html) model:
 Note that, since we are repurposing the Chrome trace tool, the axes in this context may have different meanings compared to other Chrome trace graphs you may have encountered previously:
 * The horizontal axis, despite being labeled in seconds (s), actually represents megabytes (MBs).
 * The vertical axis has a 2-level hierarchy. The first level, "pid", represents memory space. For CPU, everything is allocated on one "space"; other backends may have multiple. In the second level, each row represents one time step. Since nodes will be executed sequentially, each node represents one time step, thus you will have as many nodes as there are rows.
````
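Since the output reuses the Chrome trace format, `memory_profile.json` can also be post-processed in plain Python rather than only viewed in the trace viewer. The sketch below is hypothetical: it fabricates a tiny trace out of standard Chrome trace "complete" events (`ph == "X"`), assumes `ts`/`dur` carry a tensor's byte offset and size (mirroring the repurposed axes described above), and computes the highest offset any tensor touches. The event names and numbers are made up for illustration.

```python
import json

# Hypothetical miniature trace in the standard Chrome trace event format.
# Assumption: "ts" holds a tensor's byte offset and "dur" its size in bytes,
# matching how the horizontal axis is repurposed in the real profiler output.
sample = {
    "traceEvents": [
        {"name": "aten.convolution.out", "ph": "X", "pid": 0, "tid": 0, "ts": 0, "dur": 1024},
        {"name": "aten.relu.out", "ph": "X", "pid": 0, "tid": 1, "ts": 1024, "dur": 512},
    ]
}
with open("memory_profile.json", "w") as f:
    json.dump(sample, f)

def peak_offset(path):
    """Highest byte offset touched by any tensor interval in the trace."""
    with open(path) as f:
        events = json.load(f)["traceEvents"]
    return max(e["ts"] + e["dur"] for e in events if e.get("ph") == "X")

print(peak_offset("memory_profile.json"))  # 1536
```

Under these assumptions, the peak offset is a rough proxy for the planned arena size of that memory space; the real tool's field layout may differ, so treat this as a starting point for your own analysis.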
docs/source/new-contributor-guide.md (+1 −1)

````diff
@@ -129,7 +129,7 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
    git push # push updated local main to your GitHub fork
    ```
 
-6. [Build the project](https://pytorch.org/executorch/main/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
+6. [Build the project](using-executorch-building-from-source.md) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
 
    Unfortunately, this step is too long to detail here. If you get stuck at any point, please feel free to ask for help on our [Discord server](https://discord.com/invite/Dh43CKSAdc) — we're always eager to help newcomers get onboarded.
````
docs/source/using-executorch-android.md (+10 −10)

```diff
@@ -2,7 +2,7 @@
 To use from Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file.
 
-Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/main/using-executorch-building-from-source.html#cross-compilation).
+Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](using-executorch-building-from-source.md#cross-compilation).
 
 ## Installation
@@ -41,8 +41,8 @@ dependencies {
 Note: If you want to use release v0.5.0, please use dependency `org.pytorch:executorch-android:0.5.1`.
 
 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model with Android Studio.
```
examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md (+1 −1)

````diff
@@ -65,7 +65,7 @@ export ANDROID_ABIS=arm64-v8a
 MTK currently supports Llama 3 exporting.
 
 ### Set up Environment
-1. Follow the ExecuTorch set-up environment instructions found on the [Getting Started](https://pytorch.org/executorch/stable/getting-started-setup.html) page
+1. Follow the ExecuTorch set-up environment instructions found on the [Getting Started](https://pytorch.org/executorch/main/getting-started-setup.html) page
 2. Set-up MTK AoT environment
 ```
 // Ensure that you are inside executorch/examples/mediatek directory
````
examples/demo-apps/apple_ios/LLaMA/README.md (+1 −1)

```diff
@@ -56,7 +56,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by
 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
 
 ### XCode
 * Open XCode and select "Open an existing project" to open `examples/demo-apps/apple_ios/LLama`.
```
examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md (+2 −2)

```diff
@@ -9,7 +9,7 @@ More specifically, it covers:
 ## Prerequisites
 * [Xcode 15](https://developer.apple.com/xcode)
 * [iOS 18 SDK](https://developer.apple.com/ios)
-* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment:
+* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/using-executorch-building-from-source) to set up the repo and dev environment:
 
 ## Setup ExecuTorch
 In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS).
@@ -85,7 +85,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by
 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/using-executorch-ios.html).
 
 <p align="center">
 <img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" style="width:600px">
```
examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md (+2 −2)

```diff
@@ -163,7 +163,7 @@ If you cannot add the package into your app target (it's greyed out), it might h
 
-More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/apple-runtime.html#local-build).
+More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/using-executorch-ios.html#local-build).
 
 ### 3. Configure Build Schemes
@@ -175,7 +175,7 @@ Navigate to `Product --> Scheme --> Edit Scheme --> Info --> Build Configuration
 
 We recommend that you only use the Debug build scheme during development, where you might need to access additional logs. Debug build has logging overhead and will impact inferencing performance, while release build has compiler optimizations enabled and all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
```