# ADI MAX78000/MAX78002 Model Training and Synthesis

-April 19, 2024
+May 20, 2024
+
+**Note: This branch is compatible with PyTorch 1.8. Please go to the “pytorch-2” branch for PyTorch 2.3 compatibility.**

ADI’s MAX78000/MAX78002 project comprises five repositories:

@@ -63,7 +65,7 @@ Full support and documentation are provided for the following platform:

* CPU: 64-bit amd64/x86_64 “PC” with [Ubuntu Linux 20.04 LTS or 22.04 LTS](https://ubuntu.com/download/server)
* GPU for hardware acceleration (optional but highly recommended): Nvidia with [CUDA 11](https://developer.nvidia.com/cuda-toolkit-archive)
-* [PyTorch 1.8.1 (LTS)](https://pytorch.org/get-started/locally/) on Python 3.8.x
+* [PyTorch 1.8.1 (LTS)](https://pytorch.org/get-started/locally/) on Python 3.8.x. *Please use the “pytorch-2” branch for PyTorch 2.3 compatibility.*

Limited support and advice for using other hardware and software combinations are available as follows.

@@ -1073,12 +1075,12 @@ The MAX78000 hardware does not support arbitrary network parameters. Specifically

* The *final* streaming layer must use padding.
* Layers that use 1×1 kernels without padding are automatically replaced with equivalent layers that use 3×3 kernels with padding.

-* The weight memory supports up to 768 * 64 3×3 Q7 kernels (see [Number Format](#number-format)), for a total of [432 KiB of kernel memory](docs/AHBAddresses.md).
+* The weight memory supports up to 768 * 64 3×3 Q7 kernels (see [Number Format](#number-format)), for a total of [432 KiB of kernel memory](https://github.com/analogdevicesinc/ai8x-synthesis/blob/develop/docs/AHBAddresses.md).
  When using 1-, 2- or 4-bit weights, the capacity increases accordingly.
  When using more than 64 input or output channels, weight memory is shared, and effective capacity decreases proportionally (for example, 128 input channels require twice as much space as 64 input channels, and a layer with <u>both</u> 128 input and 128 output channels requires <u>four</u> times as much space as a layer with only 64 input channels and 64 output channels).
  Weights must be arranged according to specific rules detailed in [Layers and Weight Memory](#layers-and-weight-memory).

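As a quick sanity check on the 432 KiB figure in the bullet above, the arithmetic can be reproduced in a few lines of Python. This is an illustrative sketch only: it assumes 9 bytes per 3×3 Q7 kernel and that the capacity increase for 4-, 2-, and 1-bit weights is strictly proportional, as implied by “increases accordingly.”

```python
# Illustrative check of the MAX78000 kernel memory capacity stated above.
KERNELS_Q7 = 768 * 64        # maximum number of 3×3 8-bit (Q7) kernels
BYTES_PER_KERNEL = 3 * 3     # a 3×3 Q7 kernel occupies 9 bytes

total_kib = KERNELS_Q7 * BYTES_PER_KERNEL / 1024
print(f"{total_kib:.0f} KiB of kernel memory")   # -> 432 KiB

# Assuming proportional scaling for smaller weight widths:
for bits in (8, 4, 2, 1):
    print(f"{bits}-bit weights: up to {KERNELS_Q7 * 8 // bits} kernels")
```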
-* There are 16 instances of 32 KiB data memory ([for a total of 512 KiB](docs/AHBAddresses.md)). When not using streaming mode, any data channel (input, intermediate, or output) must completely fit into one memory instance. This limits the first-layer input to 32,768 pixels per channel in the CHW format (181×181 when width = height). However, when using more than one input channel, the HWC format may be preferred, and all layer outputs are in HWC format as well. In those cases, it is required that four channels fit into a single memory instance — or 8192 pixels per channel (approximately 90×90 when width = height).
+* There are 16 instances of 32 KiB data memory ([for a total of 512 KiB](https://github.com/analogdevicesinc/ai8x-synthesis/blob/develop/docs/AHBAddresses.md)). When not using streaming mode, any data channel (input, intermediate, or output) must completely fit into one memory instance. This limits the first-layer input to 32,768 pixels per channel in the CHW format (181×181 when width = height). However, when using more than one input channel, the HWC format may be preferred, and all layer outputs are in HWC format as well. In those cases, it is required that four channels fit into a single memory instance — or 8192 pixels per channel (approximately 90×90 when width = height).
  Note that the first layer commonly creates a wide expansion (i.e., a large number of output channels) that needs to fit into data memory, so the input size limit is mostly theoretical. In many cases, [Data Folding](#data-folding) (distributing the input data across multiple channels) can effectively increase the input dimensions as well as improve model performance.

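The per-instance limits quoted in the bullet above follow directly from the 32 KiB instance size. A minimal, illustrative check, assuming one byte per pixel per channel (8-bit data), as the 32,768-pixel figure implies:

```python
from math import isqrt

# Illustrative check of the MAX78000 per-instance input limits stated above.
INSTANCE_BYTES = 32 * 1024            # one of the 16 data memory instances

chw_pixels = INSTANCE_BYTES           # CHW: one channel per instance, 1 byte/pixel
hwc_pixels = INSTANCE_BYTES // 4      # HWC: four channels share one instance

print(chw_pixels, "pixels/channel, max square input:", isqrt(chw_pixels))  # 32768 -> 181
print(hwc_pixels, "pixels/channel, max square input:", isqrt(hwc_pixels))  # 8192  -> 90
```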
* The hardware supports 1D and 2D convolution layers, 2D transposed convolution layers (upsampling), element-wise addition, subtraction, binary OR, binary XOR as well as fully connected layers (`Linear`), which are implemented using 1×1 convolutions on 1×1 data:

@@ -1169,12 +1171,12 @@ The MAX78002 hardware does not support arbitrary network parameters. Specifically

* Layers that use 1×1 kernels without padding are automatically replaced with equivalent layers that use 3×3 kernels with padding.
* Streaming layers must use convolution (i.e., the `Conv1d`, `Conv2d`, or `ConvTranspose2d` [operators](#operation)).

-* The weight memory of processors 0, 16, 32, and 48 supports up to 5,120 3×3 Q7 kernels (see [Number Format](#number-format)), all other processors support up to 4,096 3×3 Q7 kernels, for a total of [2,340 KiB of kernel memory](docs/AHBAddresses.md).
+* The weight memory of processors 0, 16, 32, and 48 supports up to 5,120 3×3 Q7 kernels (see [Number Format](#number-format)), all other processors support up to 4,096 3×3 Q7 kernels, for a total of [2,340 KiB of kernel memory](https://github.com/analogdevicesinc/ai8x-synthesis/blob/develop/docs/AHBAddresses.md).
  When using 1-, 2- or 4-bit weights, the capacity increases accordingly. The hardware supports two different flavors of 1-bit weights, either 0/–1 or +1/–1.
  When using more than 64 input or output channels, weight memory is shared, and effective capacity decreases.
  Weights must be arranged according to specific rules detailed in [Layers and Weight Memory](#layers-and-weight-memory).

-* The total of [1,280 KiB of data memory](docs/AHBAddresses.md) is split into 16 sections of 80 KiB each. When not using streaming mode, any data channel (input, intermediate, or output) must completely fit into one memory instance. This limits the first-layer input to 81,920 pixels per channel in CHW format (286×286 when height = width). However, when using more than one input channel, the HWC format may be preferred, and all layer outputs are in HWC format as well. In those cases, it is required that four channels fit into a single memory section — or 20,480 pixels per channel (143×143 when height = width).
+* The total of [1,280 KiB of data memory](https://github.com/analogdevicesinc/ai8x-synthesis/blob/develop/docs/AHBAddresses.md) is split into 16 sections of 80 KiB each. When not using streaming mode, any data channel (input, intermediate, or output) must completely fit into one memory instance. This limits the first-layer input to 81,920 pixels per channel in CHW format (286×286 when height = width). However, when using more than one input channel, the HWC format may be preferred, and all layer outputs are in HWC format as well. In those cases, it is required that four channels fit into a single memory section — or 20,480 pixels per channel (143×143 when height = width).
  Note that the first layer commonly creates a wide expansion (i.e., a large number of output channels) that needs to fit into data memory, so the input size limit is mostly theoretical. In many cases, [Data Folding](#data-folding) (distributing the input data across multiple channels) can effectively increase the input dimensions as well as improve model performance.

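The same kind of sanity check works for the larger MAX78002 memories described in the two bullets above. The sketch below is illustrative only; it assumes a 64-processor array (the 4 processors listed above with 5,120 kernels of storage plus 60 more with 4,096 each), which is consistent with the stated 2,340 KiB total.

```python
from math import isqrt

# Illustrative check of the MAX78002 memory figures stated above.

# Kernel memory: processors 0, 16, 32, 48 hold up to 5,120 3×3 Q7 kernels each;
# the remaining 60 processors hold up to 4,096 each (9 bytes per kernel).
kernels = 4 * 5120 + 60 * 4096
print(kernels * 9 // 1024, "KiB of kernel memory")          # -> 2340 KiB

# Data memory: 16 sections of 80 KiB each.
SECTION_BYTES = 80 * 1024
print(16 * SECTION_BYTES // 1024, "KiB of data memory")     # -> 1280 KiB
print("CHW max square input:", isqrt(SECTION_BYTES))        # -> 286
print("HWC max square input:", isqrt(SECTION_BYTES // 4))   # -> 143
```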
* The hardware supports 1D and 2D convolution layers, 2D transposed convolution layers (upsampling), element-wise addition, subtraction, binary OR, binary XOR as well as fully connected layers (`Linear`), which are implemented using 1×1 convolutions on 1×1 data:

@@ -3272,7 +3274,7 @@ See the [benchmarking guide](https://github.com/analogdevicesinc/MaximAI_Documen

Additional information about the evaluation kits and the software development kit (MSDK) is available on the web at <https://github.com/analogdevicesinc/MaximAI_Documentation>.

-[AHB Addresses for MAX78000 and MAX78002](docs/AHBAddresses.md)
+[AHB Addresses for MAX78000 and MAX78002](https://github.com/analogdevicesinc/ai8x-synthesis/blob/develop/docs/AHBAddresses.md)

[Facial Recognition System](https://github.com/analogdevicesinc/ai8x-training/blob/develop/docs/FacialRecognitionSystem.md)
