
Commit 39274d8

Update doc links to relative markdown files (#10164)
1 parent d614700

10 files changed: +27 -27 lines changed
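
Most hunks in this commit apply one mechanical rewrite: an absolute docs-site URL such as `https://pytorch.org/executorch/main/concepts.html#memory-planning` becomes a relative source link such as `concepts.md#memory-planning`. As a rough, hypothetical sketch of that transformation (not the actual script behind this commit; the `relativize_links` helper, the regex, and the in-place loop are all assumptions), the rewrite could look like this, with odd targets such as `.rst` anchors and the handful of `stable/` links left for manual review:

```python
import re
from pathlib import Path

# Matches absolute ExecuTorch docs URLs, e.g.
#   https://pytorch.org/executorch/main/concepts.html#memory-planning
# capturing the page name and an optional #fragment.
DOC_URL = re.compile(
    r"https://pytorch\.org/executorch/(?:main|stable)/([\w\-]+)\.html(#[\w.\-]+)?"
)

def relativize_links(text: str) -> str:
    """Rewrite absolute doc-site links to relative .md links."""
    return DOC_URL.sub(lambda m: f"{m.group(1)}.md{m.group(2) or ''}", text)

# Hypothetical usage: apply to every markdown source under docs/source.
for path in Path("docs/source").glob("*.md"):
    path.write_text(relativize_links(path.read_text()))
```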

CONTRIBUTING.md (+1 -1)

@@ -58,7 +58,7 @@ executorch
 │ ├── <a href="exir/verification">verification</a> - IR verification.
 ├── <a href="extension">extension</a> - Extensions built on top of the runtime.
 │ ├── <a href="extension/android">android</a> - ExecuTorch wrappers for Android apps. Please refer to the <a href="docs/source/using-executorch-android.md">Android documentation</a> and <a href="https://pytorch.org/executorch/main/javadoc/">Javadoc</a> for more information.
-│ ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/stable/apple-runtime.html">how to integrate into Apple platform</a> for more information.
+│ ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/main/using-executorch-ios.html">how to integrate into Apple platform</a> for more information.
 │ ├── <a href="extension/aten_util">aten_util</a> - Converts to and from PyTorch ATen types.
 │ ├── <a href="extension/data_loader">data_loader</a> - 1st party data loader implementations.
 │ ├── <a href="extension/evalue_util">evalue_util</a> - Helpers for working with EValue objects.

docs/source/build-run-openvino.md (+1 -1)

@@ -61,7 +61,7 @@ For more information about OpenVINO build, refer to the [OpenVINO Build Instruct
 
 Follow the steps below to setup your build environment:
 
-1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](https://pytorch.org/executorch/stable/getting-started-setup#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
+1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](getting-started-setup.md#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
 
 2. **Setup OpenVINO Backend Environment**
 - Install the dependent libs. Ensure that you are inside `executorch/backends/openvino/` directory

docs/source/memory-planning-inspection.md (+5 -5)

@@ -1,9 +1,9 @@
 # Memory Planning Inspection in ExecuTorch
 
-After the [Memory Planning](https://pytorch.org/executorch/main/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/main/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
+After the [Memory Planning](concepts.md#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](concepts.md#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
 
 ## Usage
-User should add this code after they call [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
+User should add this code after they call [to_executorch()](export-to-executorch-api-reference.rst#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
 
 ```python
 from executorch.util.activation_memory_profiler import generate_memory_trace

@@ -13,18 +13,18 @@ generate_memory_trace(
     enable_memory_offsets=True,
 )
 ```
-* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
+* `prog` is an instance of [`ExecuTorchProgramManager`](export-to-executorch-api-reference.rst#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](export-to-executorch-api-reference.rst#executorch.exir.EdgeProgramManager.to_executorch).
 * Set `enable_memory_offsets` to `True` to show the location of each tensor on the memory space.
 
 ## Chrome Trace
 Open a Chrome browser tab and navigate to <chrome://tracing/>. Upload the generated `.json` to view.
 Example of a [MobileNet V2](https://pytorch.org/vision/main/models/mobilenetv2.html) model:
 
-![Memory planning Chrome trace visualization](/_static/img/memory_planning_inspection.png)
+![Memory planning Chrome trace visualization](_static/img/memory_planning_inspection.png)
 
 Note that, since we are repurposing the Chrome trace tool, the axes in this context may have different meanings compared to other Chrome trace graphs you may have encountered previously:
 * The horizontal axis, despite being labeled in seconds (s), actually represents megabytes (MBs).
 * The vertical axis has a 2-level hierarchy. The first level, "pid", represents memory space. For CPU, everything is allocated on one "space"; other backends may have multiple. In the second level, each row represents one time step. Since nodes will be executed sequentially, each node represents one time step, thus you will have as many nodes as there are rows.
 
 ## Further Reading
-* [Memory Planning](https://pytorch.org/executorch/main/compiler-memory-planning.html)
+* [Memory Planning](compiler-memory-planning.md)
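
For context on the snippet being diffed above, a minimal end-to-end sketch of the profiling flow might look like the following. This is illustrative only: `MyModel`, the example inputs, and the `export`/`to_edge` lowering steps are assumptions, while the `generate_memory_trace` keyword arguments follow the documented example on the page being patched.

```python
import torch
from executorch.exir import to_edge
from executorch.util.activation_memory_profiler import generate_memory_trace
from torch.export import export

# Hypothetical model; any export-able nn.Module would do.
class MyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.relu(x * 2.0)

example_inputs = (torch.randn(4, 8),)

# Lower to ExecuTorch; the memory planning pass runs as part of to_executorch().
prog = to_edge(export(MyModel(), example_inputs)).to_executorch()

# Dump the planned memory layout for inspection in chrome://tracing.
generate_memory_trace(
    executorch_program_manager=prog,
    chrome_trace_filename="memory_profile.json",
    enable_memory_offsets=True,
)
```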

docs/source/new-contributor-guide.md (+1 -1)

@@ -129,7 +129,7 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
    git push # push updated local main to your GitHub fork
    ```
 
-6. [Build the project](https://pytorch.org/executorch/main/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
+6. [Build the project](using-executorch-building-from-source.md) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
 
 Unfortunately, this step is too long to detail here. If you get stuck at any point, please feel free to ask for help on our [Discord server](https://discord.com/invite/Dh43CKSAdc) — we're always eager to help newcomers get onboarded.

docs/source/using-executorch-android.md (+10 -10)

@@ -2,7 +2,7 @@
 
 To use from Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file.
 
-Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/main/using-executorch-building-from-source.html#cross-compilation).
+Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](using-executorch-building-from-source.md#cross-compilation).
 
 ## Installation
 

@@ -41,8 +41,8 @@ dependencies {
 Note: If you want to use release v0.5.0, please use dependency `org.pytorch:executorch-android:0.5.1`.
 
 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model with Android Studio.
-<a href="https://pytorch.org/executorch/main/_static/img/android_studio.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
+<a href="_static/img/android_studio.mp4">
+<img src="_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
 </a>
 
 ## Using AAR file directly

@@ -130,17 +130,17 @@ Set environment variable `EXECUTORCH_CMAKE_BUILD_TYPE` to `Release` or `Debug` b
 
 #### Using MediaTek backend
 
-To use [MediaTek backend](https://pytorch.org/executorch/main/backends-mediatek.html),
+To use [MediaTek backend](backends-mediatek.md),
 after installing and setting up the SDK, set `NEURON_BUFFER_ALLOCATOR_LIB` and `NEURON_USDK_ADAPTER_LIB` to the corresponding path.
 
 #### Using Qualcomm AI Engine Backend
 
-To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/main/backends-qualcomm.html#qualcomm-ai-engine-backend),
+To use [Qualcomm AI Engine Backend](backends-qualcomm.md#qualcomm-ai-engine-backend),
 after installing and setting up the SDK, set `QNN_SDK_ROOT` to the corresponding path.
 
 #### Using Vulkan Backend
 
-To use [Vulkan Backend](https://pytorch.org/executorch/main/backends-vulkan.html#vulkan-backend),
+To use [Vulkan Backend](backends-vulkan.md#vulkan-backend),
 set `EXECUTORCH_BUILD_VULKAN` to `ON`.
 
 ## Android Backends

@@ -149,10 +149,10 @@ The following backends are available for Android:
 
 | Backend | Type | Doc |
 | ------- | -------- | --- |
-| [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](./backends-xnnpack.md) |
-| [MediaTek NeuroPilot](https://neuropilot.mediatek.com/) | NPU | [Doc](./backends-mediatek.md) |
-| [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](./backends-qualcomm.md) |
-| [Vulkan](https://www.vulkan.org/) | GPU | [Doc](./backends-vulkan.md) |
+| [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](backends-xnnpack.md) |
+| [MediaTek NeuroPilot](https://neuropilot.mediatek.com/) | NPU | [Doc](backends-mediatek.md) |
+| [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](backends-qualcomm.md) |
+| [Vulkan](https://www.vulkan.org/) | GPU | [Doc](backends-vulkan.md) |
 
 
 ## Runtime Integration

docs/source/using-executorch-ios.md (+3 -3)

@@ -35,8 +35,8 @@ Then select which ExecuTorch framework should link against which target.
 
 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model on iOS.
 
-<a href="https://pytorch.org/executorch/main/_static/img/swiftpm_xcode.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/swiftpm_xcode.png" width="800" alt="Integrating and Running ExecuTorch on Apple Platforms">
+<a href="_static/img/swiftpm_xcode.mp4">
+<img src="_static/img/swiftpm_xcode.png" width="800" alt="Integrating and Running ExecuTorch on Apple Platforms">
 </a>
 
 #### CLI

@@ -293,7 +293,7 @@ From existing memory buffers:
 
 From `NSData` / `Data`:
 - `init(data:shape:dataType:...)`: Creates a tensor using an `NSData` object, referencing its bytes without copying.
-
+
 From scalar arrays:
 - `init(_:shape:dataType:...)`: Creates a tensor from an array of `NSNumber` scalars. Convenience initializers exist to infer shape or data type.
 
examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md (+1 -1)

@@ -65,7 +65,7 @@ export ANDROID_ABIS=arm64-v8a
 MTK currently supports Llama 3 exporting.
 
 ### Set up Environment
-1. Follow the ExecuTorch set-up environment instructions found on the [Getting Started](https://pytorch.org/executorch/stable/getting-started-setup.html) page
+1. Follow the ExecuTorch set-up environment instructions found on the [Getting Started](https://pytorch.org/executorch/main/getting-started-setup.html) page
 2. Set-up MTK AoT environment
 ```
 // Ensure that you are inside executorch/examples/mediatek directory

examples/demo-apps/apple_ios/LLaMA/README.md (+1 -1)

@@ -56,7 +56,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by
 
 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
 
 ### XCode
 * Open XCode and select "Open an existing project" to open `examples/demo-apps/apple_ios/LLama`.

examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md (+2 -2)

@@ -9,7 +9,7 @@ More specifically, it covers:
 ## Prerequisites
 * [Xcode 15](https://developer.apple.com/xcode)
 * [iOS 18 SDK](https://developer.apple.com/ios)
-* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment:
+* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/using-executorch-building-from-source) to set up the repo and dev environment:
 
 ## Setup ExecuTorch
 In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS).

@@ -85,7 +85,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by
 
 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/using-executorch-ios.html).
 
 <p align="center">
 <img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" style="width:600px">

examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md (+2 -2)

@@ -163,7 +163,7 @@ If you cannot add the package into your app target (it's greyed out), it might h
 
 
 
-More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/apple-runtime.html#local-build).
+More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/using-executorch-ios.html#local-build).
 
 ### 3. Configure Build Schemes
 

@@ -175,7 +175,7 @@ Navigate to `Product --> Scheme --> Edit Scheme --> Info --> Build Configuration
 
 We recommend that you only use the Debug build scheme during development, where you might need to access additional logs. Debug build has logging overhead and will impact inferencing performance, while release build has compiler optimizations enabled and all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
 
 ### 4. Build and Run the project
 