3. The training repository, which is used for deep learning *model development and training*:
   [ai8x-training](https://github.com/analogdevicesinc/ai8x-training/tree/pytorch-2) **(described in this document)**
4. The synthesis repository, which is used to *convert a trained model into C code* using the “izer” tool:
   [ai8x-synthesis](https://github.com/analogdevicesinc/ai8x-synthesis/tree/pytorch-2) **(described in this document)**
5. The reference design repository, which contains host applications and sample applications for reference designs such as [MAXREFDES178 (Cube Camera)](https://www.analog.com/en/design-center/reference-designs/maxrefdes178.html):
*Note: Examples for EVkits and Feather boards are part of the MSDK*
**The only officially supported platforms for model training** are Ubuntu Linux 20.04 LTS and 22.04 LTS on amd64/x86_64, either the desktop or the [server version](https://ubuntu.com/download/server).
*Note that hardware acceleration using CUDA is <u>not available</u> in PyTorch for Raspberry Pi 4 and other <u>aarch64/arm64</u> devices, even those running Ubuntu Linux 20.04/22.04. See also [Development on Raspberry Pi 4 and 400](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/docs/RaspberryPi.md) (unsupported).*
This document also provides instructions for installing on RedHat Enterprise Linux / CentOS 8 with limited support.
##### Windows
On Windows 10 version 21H2 or newer, and on Windows 11, Ubuntu Linux 20.04 or 22.04 can be used inside Windows with full CUDA acceleration after installing the Windows Subsystem for Linux (WSL2); please see *[Windows Subsystem for Linux](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/docs/WSL2.md)*. For the remainder of this document, follow the steps for Ubuntu Linux.
If WSL2 is not available, it is also possible (but not recommended due to inherent compatibility issues and slightly degraded performance) to run this software natively on Windows. Please see *[Native Windows Installation](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/docs/Windows.md)*.
##### macOS
To create the virtual environment and install basic wheels:
```
$ cd ai8x-training
```
Following the instructions above checks out the `pytorch-2` branch, which supports PyTorch 2.3. For PyTorch 1.8 support, use the `develop` or `main` branches. To switch, use `git checkout`, for example `git checkout main`.
If using pyenv, set the local directory to use Python 3.11.8.
##### Repository Branches
When following these instructions, the `pytorch-2` branch is checked out. For PyTorch 1.8 support, use either the `develop` branch (the most frequently updated branch, containing the latest improvements to the project) or the `main` branch (updated less frequently, but possibly more stable). To change branches, use `git checkout`, for example `git checkout main`.
###### TensorFlow / Keras
Support for TensorFlow / Keras is deprecated.
#### Updating to the Latest Version
```
$ pacman -S --needed base filesystem msys2-runtime make
```
5. Install packages for OpenOCD. OpenOCD binaries are available in the “openocd” sub-folder of the ai8x-synthesis repository. However, some additional dependencies are required on most systems. See [openocd/README.md](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/openocd/README.md) for a list of packages to install, then return here to continue.
6. Add the location of the toolchain binaries to the system path.
The MAX78000 hardware does not support arbitrary network parameters. Specifically:
* The *final* streaming layer must use padding.
* Layers that use 1×1 kernels without padding are automatically replaced with equivalent layers that use 3×3 kernels with padding.
* The weight memory supports up to 768 × 64 3×3 Q7 kernels (see [Number Format](#number-format)), for a total of [432 KiB of kernel memory](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/docs/AHBAddresses.md).
When using 1-, 2- or 4-bit weights, the capacity increases accordingly.
When using more than 64 input or output channels, weight memory is shared, and effective capacity decreases proportionally (for example, 128 input channels require twice as much space as 64 input channels, and a layer with <u>both</u> 128 input and 128 output channels requires <u>four</u> times as much space as a layer with only 64 input channels and 64 output channels).
Weights must be arranged according to specific rules detailed in [Layers and Weight Memory](#layers-and-weight-memory).
* There are 16 instances of 32 KiB data memory ([for a total of 512 KiB](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/docs/AHBAddresses.md)). When not using streaming mode, any data channel (input, intermediate, or output) must completely fit into one memory instance. This limits the first-layer input to 32,768 pixels per channel in the CHW format (181×181 when width = height). However, when using more than one input channel, the HWC format may be preferred, and all layer outputs are in HWC format as well. In those cases, it is required that four channels fit into a single memory instance — or 8192 pixels per channel (approximately 90×90 when width = height).
Note that the first layer commonly creates a wide expansion (i.e., a large number of output channels) that needs to fit into data memory, so the input size limit is mostly theoretical. In many cases, [Data Folding](#data-folding) (distributing the input data across multiple channels) can effectively increase both the input dimensions as well as improve model performance.
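The MAX78000 capacity figures above can be sanity-checked with simple arithmetic. The sketch below assumes one byte per Q7 weight (so 9 bytes per 3×3 kernel); all other constants are taken from the text:

```python
# Sanity checks for the MAX78000 memory figures quoted above.
# Assumption: one byte per Q7 weight, i.e. 9 bytes per 3x3 kernel.
KERNEL_BYTES = 3 * 3

weight_kib = 768 * 64 * KERNEL_BYTES / 1024
print(weight_kib)  # 432.0 -> 432 KiB of kernel memory

data_kib = 16 * 32  # 16 instances of 32 KiB each
print(data_kib)  # 512 KiB of data memory

# Per-channel input limits derived from a single 32 KiB instance:
chw_pixels = 32 * 1024        # CHW: 32,768 pixels per channel
hwc_pixels = chw_pixels // 4  # HWC: four channels share one instance
print(int(chw_pixels ** 0.5), int(hwc_pixels ** 0.5))  # 181 90 (largest square side)
```

Consistent with the sharing rule above, a layer with 128 input and 128 output channels would need four times the weight space of a 64-in/64-out layer.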
* The hardware supports 1D and 2D convolution layers, 2D transposed convolution layers (upsampling), element-wise addition, subtraction, binary OR, binary XOR as well as fully connected layers (`Linear`), which are implemented using 1×1 convolutions on 1×1 data:
The MAX78002 hardware does not support arbitrary network parameters. Specifically:
* Layers that use 1×1 kernels without padding are automatically replaced with equivalent layers that use 3×3 kernels with padding.
* Streaming layers must use convolution (i.e., the `Conv1d`, `Conv2d`, or `ConvTranspose2d` [operators](#operation)).
* The weight memory of processors 0, 16, 32, and 48 supports up to 5,120 3×3 Q7 kernels (see [Number Format](#number-format)); all other processors support up to 4,096 3×3 Q7 kernels, for a total of [2,340 KiB of kernel memory](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/docs/AHBAddresses.md).
When using 1-, 2- or 4-bit weights, the capacity increases accordingly. The hardware supports two different flavors of 1-bit weights, either 0/–1 or +1/–1.
When using more than 64 input or output channels, weight memory is shared, and effective capacity decreases.
Weights must be arranged according to specific rules detailed in [Layers and Weight Memory](#layers-and-weight-memory).
* The total of [1,280 KiB of data memory](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/docs/AHBAddresses.md) is split into 16 sections of 80 KiB each. When not using streaming mode, any data channel (input, intermediate, or output) must completely fit into one memory instance. This limits the first-layer input to 81,920 pixels per channel in CHW format (286×286 when height = width). However, when using more than one input channel, the HWC format may be preferred, and all layer outputs are in HWC format as well. In those cases, it is required that four channels fit into a single memory section — or 20,480 pixels per channel (143×143 when height = width).
Note that the first layer commonly creates a wide expansion (i.e., a large number of output channels) that needs to fit into data memory, so the input size limit is mostly theoretical. In many cases, [Data Folding](#data-folding) (distributing the input data across multiple channels) can effectively increase both the input dimensions as well as improve model performance.
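The MAX78002 figures check out the same way. The sketch below again assumes 9 bytes per 3×3 Q7 kernel and 64 processors in total (4 with the larger memory plus 60 others, as implied by the 2,340 KiB total):

```python
# Sanity checks for the MAX78002 memory figures quoted above.
# Assumptions: 9 bytes per 3x3 Q7 kernel; 64 processors in total.
KERNEL_BYTES = 3 * 3

kernels = 4 * 5120 + 60 * 4096  # processors 0/16/32/48, plus the remaining 60
print(kernels * KERNEL_BYTES / 1024)  # 2340.0 -> 2,340 KiB of kernel memory

data_kib = 16 * 80  # 16 sections of 80 KiB each
print(data_kib)  # 1280 KiB of data memory

# Per-channel input limits derived from a single 80 KiB section:
chw_pixels = 80 * 1024        # CHW: 81,920 pixels per channel
hwc_pixels = chw_pixels // 4  # HWC: four channels share one section
print(int(chw_pixels ** 0.5), int(hwc_pixels ** 0.5))  # 286 143 (largest square side)
```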
* The hardware supports 1D and 2D convolution layers, 2D transposed convolution layers (upsampling), element-wise addition, subtraction, binary OR, binary XOR as well as fully connected layers (`Linear`), which are implemented using 1×1 convolutions on 1×1 data:
All required elastic search strategies are implemented in this [model file](https://github.com/analogdevicesinc/ai8x-training/blob/pytorch-2/models/ai85nasnet-sequential.py).
A new model architecture can be added by implementing the `OnceForAllModel` interface. The new model class must implement the following:
Additional information about the evaluation kits and the software development kit (MSDK) is available on the web at <https://github.com/analogdevicesinc/MaximAI_Documentation>.
[AHB Addresses for MAX78000 and MAX78002](https://github.com/analogdevicesinc/ai8x-synthesis/blob/pytorch-2/docs/AHBAddresses.md)