Commit 3cd2797

Author: Ilia Karmanov
Revert "Mxnetdev"
1 parent b446855 · commit 3cd2797

11 files changed: +157 −463 lines

Diff for: .gitignore (+1 −2)

````diff
@@ -5,5 +5,4 @@
 cifar-10-batches-py/
 __pycache__
 .DS_Store
-*.params
-*-symbol.json
+
````
Diff for: README.md (+10 −25)

````diff
@@ -8,8 +8,6 @@
 
 **For more details check out our [blog-post](https://blogs.technet.microsoft.com/machinelearning/2018/03/14/comparing-deep-learning-frameworks-a-rosetta-stone-approach/)**
 
-We want to extend our gratitude to the CNTK, Pytorch, Chainer, Caffe2, MXNet and Knet teams, and everyone else from the open-source community who contributed to the repo over the past few months.
-
 ## Goal
 
 1. Create a Rosetta Stone of deep-learning frameworks to allow data-scientists to easily leverage their expertise from one framework to another
@@ -21,8 +19,6 @@ We want to extend our gratitude to the CNTK, Pytorch, Chainer, Caffe2, MXNet and
 
 The notebooks are executed on an Azure [Deep Learning Virtual Machine](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft-ads.dsvm-deep-learning) using both the K80 and the newer P100.
 
-Please see these notebooks as examples rather than any formal benchmark (only the multi-GPU examples have potential to be that at some point) - for further details check this [post](https://www.reddit.com/r/MachineLearning/comments/7v3ibo/discussion_stop_benchmark_stupidity_and_improve_it/dtp9hng/).
-
 *Accuracies are reported in notebooks, they should match to ensure we have common mode/code*
 
 ## Results
@@ -33,20 +29,20 @@ Please see these notebooks as examples rather than any formal benchmark (only th
 | ----------------------------------------------------- | :----------------: | :-----------------: |
 | [Caffe2](notebooks/Caffe2_CNN.ipynb) | 148 | 54 |
 | [Chainer](notebooks/Chainer_CNN.ipynb) | 162 | 69 |
-| [CNTK](notebooks/CNTK_CNN.ipynb) ([HighAPI](notebooks/CNTK_CNN_highAPI.ipynb)) | 163 | 53 |
-| [MXNet(Gluon)](notebooks/Gluon_CNN.ipynb) | 152 | 57 |
+| [CNTK](notebooks/CNTK_CNN.ipynb) | 163 | 53 |
+| [Gluon](notebooks/Gluon_CNN.ipynb) | 152 | 62 |
 | [Keras(CNTK)](notebooks/Keras_CNTK_CNN.ipynb) | 194 | 76 |
 | [Keras(TF)](notebooks/Keras_TF_CNN.ipynb) | 241 | 76 |
 | [Keras(Theano)](notebooks/Keras_Theano_CNN.ipynb) | 269 | 93 |
-| [Tensorflow](notebooks/Tensorflow_CNN.ipynb) ([HighAPI](notebooks/Tensorflow_CNN_highAPI.ipynb)) | 173 | 57 |
+| [Tensorflow](notebooks/Tensorflow_CNN.ipynb) | 173 | 57 |
 | [Lasagne(Theano)](notebooks/Theano_Lasagne_CNN.ipynb) | 253 | 65 |
-| [MXNet(Module API)](notebooks/MXNet_CNN.ipynb) ([HighAPI](notebooks/MXNet_CNN_highAPI.ipynb)) | 145 | 52 |
+| [MXNet](notebooks/MXNet_CNN.ipynb) | 145 | 51 |
 | [PyTorch](notebooks/PyTorch_CNN.ipynb) | 169 | 51 |
 | [Julia - Knet](notebooks/Knet_CNN.ipynb) | 159 | ?? |
 | [R - MXNet](notebooks/.ipynb) | ??? | ?? |
 
 
-*Note: It is recommended to use higher level APIs where possible; see these notebooks for examples with [Tensorflow](notebooks/Tensorflow_CNN_highAPI.ipynb), [MXNet](notebooks/MXNet_CNN_highAPI.ipynb) and [CNTK](notebooks/CNTK_CNN_highAPI.ipynb). They are not linked in the table to keep the common-structure-for-all approach*
+*Note: It is recommended to use higher level APIs where possible; see these notebooks for examples with [Tensorflow](support/Tensorflow_CNN_highAPI.ipynb), [MXNet](support/MXNet_CNN_highAPI.ipynb) and [CNTK](support/CNTK_CNN_highAPI.ipynb). They are not linked in the table to keep the common-structure-for-all approach*
 
 Input for this model is the standard [CIFAR-10 dataset](http://www.cs.toronto.edu/~kriz/cifar.html) containing 50k training images and 10k test images, uniformly split across 10 classes. Each 32 by 32 image is supplied as a tensor of shape (3, 32, 32) with pixel intensity re-scaled from 0-255 to 0-1.
 
@@ -62,7 +58,7 @@ Input for this model is the standard [CIFAR-10 dataset](http://www.cs.toronto.ed
 | [Keras(TF)](notebooks/Keras_TF_MultiGPU.ipynb) | 51min27s | 32min1s | 22min49s | 18min30s |
 | [Tensorflow](notebooks/Tensorflow_MultiGPU.ipynb) | 62min8s | 44min13s | 31min4s | 17min10s |
 | [Chainer]() | ? | ? | ? | ? |
-| [MXNet(Module API)]() | ? | ? | ? | ? |
+| [MXNet]() | ? | ? | ? | ? |
 
 
 Input for this model is 112,120 PNGs of chest X-rays. **Note for the notebook to automatically download the data you must install [Azcopy](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-linux#download-and-install-azcopy) and increase the size of your OS-Disk in Azure Portal so that you have at-least 45GB of free-space (the Chest X-ray data is large!). The notebooks may take more than 10 minutes to first download the data.** These notebooks train DenseNet-121 and use native data-loaders to pre-process the data and perform data-augmentation.
@@ -76,11 +72,10 @@ Comparing synthetic data to actual PNG files we can estimate the IO lag for **Py
 | [Caffe2](notebooks/Caffe2_Inference.ipynb) | 14.1 | 7.9 |
 | [Chainer](notebooks/Chainer_Inference.ipynb) | 9.3 | 2.7 |
 | [CNTK](notebooks/CNTK_Inference.ipynb) | 8.5 | 1.6 |
-| [MXNet(Gluon)](notebooks/Gluon_Inference.ipynb) | | 1.7 |
 | [Keras(CNTK)](notebooks/Keras_CNTK_Inference.ipynb) | 21.7 | 5.9 |
 | [Keras(TF)](notebooks/Keras_TF_Inference.ipynb) | 10.2 | 2.9 |
 | [Tensorflow](notebooks/Tensorflow_Inference.ipynb) | 6.5 | 1.8 |
-| [MXNet(Module API)](notebooks/MXNet_Inference.ipynb)| 7.7 | 1.6 |
+| [MXNet](notebooks/MXNet_Inference.ipynb) | 7.7 | 2.0 |
 | [PyTorch](notebooks/PyTorch_Inference.ipynb) | 7.7 | 1.9 |
 | [Julia - Knet](notebooks/Knet_Inference.ipynb) | 6.3 | ??? |
 | [R - MXNet](notebooks/.ipynb) | ??? | ??? |
@@ -95,7 +90,7 @@ A pre-trained ResNet50 model is loaded and chopped just after the avg_pooling at
 | [CNTK](notebooks/CNTK_RNN.ipynb) | 32 | 15 | Yes |
 | [Keras(CNTK)](notebooks/Keras_CNTK_RNN.ipynb) | 86 | 53 | No |
 | [Keras(TF)](notebooks/Keras_TF_RNN.ipynb) | 35 | 26 | Yes |
-| [MXNet(Module API)](notebooks/MXNet_RNN.ipynb) | 29 | 24 | Yes |
+| [MXNet](notebooks/MXNet_RNN.ipynb) | 29 | 24 | Yes |
 | [Pytorch](notebooks/PyTorch_RNN.ipynb) | 31 | 16 | Yes |
 | [Tensorflow](notebooks/Tensorflow_RNN.ipynb) | 30 | 22 | Yes |
 | [Julia - Knet](notebooks/Knet_RNN.ipynb) | 29 | ?? | Yes |
@@ -112,14 +107,10 @@ The classification model creates an embedding matrix of size (150x125) and then
 
 ## Lessons Learned
 
-The below offer some insights we gained after trying to match test-accuracy across frameworks and from all the GitHub issues/PRs raised.
-
-#### Multi-GPU DenseNet
-
-1. Data loading and augmentation has the potential to flip the results around and curiously from all of the frameworks it seems by default Keras is most efficient at this. We will try to create openCV-based common data-loading and data-augmentation functions to help standardise the results and let forward+backward training take centre stage
-
 #### CNN
 
+The below offers some insights I gained after trying to match test-accuracy across frameworks and from all the GitHub issues/PRs raised.
+
 1. The above examples (except for Keras), for ease of comparison, try to use the same level of API and so all use the same generator-function. For [MXNet](support/MXNet_CNN_highAPI.ipynb), [Tensorflow](support/Tensorflow_CNN_highAPI.ipynb), and [CNTK](support/CNTK_CNN_highAPI.ipynb) I have experimented with a higher-level API, where I use the framework's training generator function. The speed improvement is negligible in this example because the whole dataset is loaded as NumPy array in RAM and the only processing done each epoch is a shuffle. I suspect the framework's generators perform the shuffle asynchronously. Curiously, it seems that the frameworks shuffle on a batch-level, rather than on an observation level, and thus ever so slightly decreases the test-accuracy (at least after 10 epochs). For scenarios where we have IO activity and perhaps pre-processing and data-augmentation on the fly, custom generators would have a much bigger impact on performance.
 
 2. Running on CuDNN we want to use [NCHW] instead of channels-last. Keras finally supports this for Tensorflow (previously it had NHWC hard-coded and would auto-reshape after every batch)
````
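Lesson 1 in the hunk above distinguishes observation-level from batch-level shuffling. A toy NumPy illustration of the difference (not code from the repo; `yield_mb` is the repo's shared generator, everything else here is made up for illustration):

```python
import numpy as np

x = np.arange(12)

# Observation-level shuffle (what a generator like yield_mb does):
# any sample can land in any batch each epoch.
obs_batches = np.random.permutation(x).reshape(-1, 4)

# Batch-level shuffle (what some framework generators appear to do):
# batch membership is fixed; only the order of batches changes.
fixed_batches = x.reshape(-1, 4)
np.random.shuffle(fixed_batches)  # shuffles along the first axis only

print(obs_batches)
print(fixed_batches)
```

With batch-level shuffling, samples that start out together are seen together every epoch, which is the plausible cause of the slight test-accuracy drop the lesson mentions.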
````diff
@@ -170,12 +161,6 @@ The below offer some insights we gained after trying to match test-accuracy acro
 make install
 ```
 
-13. When using MXNet, you should avoid assigning outputs or data to numpy np.array in your training loop. This causes the data to be copied from the GPU to the CPU. You should use mx.nd.array instead, allocated in the right context at the beginning. This can dramatically increase performance.
-
-14. When using MXNet, operations are allocated on the queue of the back-end engine and parallelized, try to avoid any blocking operations in your training loop. You can add a nd.waitall(), which will force waiting for all operations to complete at the end of each epoch to avoid filling up your memory.
-
-15. With MXNet/Gluon, calling `.hybridize()` on your network will cache the computation graph and you will get performance gains. However that means that you won't be able to step through every calculations anymore. Use it once you are done debugging your network.
-
 #### RNN
 
 1. There are multiple RNN implementations/kernels available for most frameworks (for example [Tensorflow](http://returnn.readthedocs.io/en/latest/tf_lstm_benchmark.html)); once reduced down to the cudnnLSTM/GRU level the execution is the fastest, however this implementation is less flexible (e.g. maybe you want layer normalisation) and may become problematic if inference is run on the CPU at a later stage. At the cudDNN level most of the frameworks' runtimes are very similar. [This](https://devblogs.nvidia.com/parallelforall/optimizing-recurrent-neural-networks-cudnn-5/) Nvidia blog-post goes through several interesting cuDNN optimisations for recurrent neural nets e.g. fusing - "combining the computation of many small matrices into that of larger ones and streaming the computation whenever possible, the ratio of computation to memory I/O can be increased, which results in better performance on GPU".
````
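The three MXNet tips removed in the hunk above (items 13–15) combine naturally in one training loop. A minimal runnable sketch of all three, assuming MXNet 1.x and the Gluon API; the network, data, and hyper-parameters below are synthetic stand-ins, not the repo's CIFAR-10 code:

```python
import numpy as np
import mxnet as mx
from mxnet import nd, gluon, autograd

ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()

# Tiny synthetic dataset standing in for CIFAR-10
x = nd.array(np.random.rand(256, 3, 32, 32), ctx=ctx)
y = nd.array(np.random.randint(0, 10, 256), ctx=ctx)

net = gluon.nn.HybridSequential()
with net.name_scope():
    net.add(gluon.nn.Conv2D(channels=50, kernel_size=3, padding=1, activation='relu'))
    net.add(gluon.nn.Flatten())
    net.add(gluon.nn.Dense(10))
net.initialize(mx.init.Xavier(), ctx=ctx)
net.hybridize()  # tip 15: cache the graph once debugging is done

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

for epoch in range(2):
    # tip 13: keep the accumulator as an NDArray allocated in the right
    # context, so no GPU->CPU copy happens inside the loop
    train_loss = nd.zeros(1, ctx=ctx)
    for i in range(0, len(x), 64):
        data, label = x[i:i + 64], y[i:i + 64]
        with autograd.record():
            loss = loss_fn(net(data), label)
        loss.backward()
        trainer.step(data.shape[0])
        train_loss += nd.sum(loss)  # queued on the async engine, non-blocking
    nd.waitall()  # tip 14: block once per epoch, not per batch
    print('Epoch %d: loss %.4f' % (epoch, train_loss.asscalar() / len(x)))
```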

Diff for: notebooks/Gluon_CNN.ipynb (+31 −51)

````diff
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# MXNet/Gluon CNN example"
+    "# High-level Gluon Example"
    ]
   },
   {
@@ -71,7 +71,7 @@
    "outputs": [],
    "source": [
     "def SymbolModule(n_classes=N_CLASSES):\n",
-    "    sym = gluon.nn.HybridSequential()\n",
+    "    sym = gluon.nn.Sequential()\n",
     "    with sym.name_scope():\n",
     "        sym.add(gluon.nn.Conv2D(channels=50, kernel_size=3, padding=1, activation='relu'))\n",
     "        sym.add(gluon.nn.Conv2D(channels=50, kernel_size=3, padding=1))\n",
@@ -121,8 +121,8 @@
      "Preparing test set...\n",
      "(50000, 3, 32, 32) (10000, 3, 32, 32) (50000,) (10000,)\n",
      "float32 float32 int32 int32\n",
-     "CPU times: user 708 ms, sys: 589 ms, total: 1.3 s\n",
-     "Wall time: 1.29 s\n"
+     "CPU times: user 630 ms, sys: 588 ms, total: 1.22 s\n",
+     "Wall time: 1.22 s\n"
     ]
    }
   ],
@@ -143,8 +143,8 @@
    "name": "stdout",
    "output_type": "stream",
    "text": [
-    "CPU times: user 345 ms, sys: 421 ms, total: 766 ms\n",
-    "Wall time: 768 ms\n"
+    "CPU times: user 321 ms, sys: 392 ms, total: 713 ms\n",
+    "Wall time: 876 ms\n"
    ]
   }
  ],
@@ -164,8 +164,8 @@
    "name": "stdout",
    "output_type": "stream",
    "text": [
-    "CPU times: user 683 µs, sys: 444 µs, total: 1.13 ms\n",
-    "Wall time: 406 µs\n"
+    "CPU times: user 203 µs, sys: 128 µs, total: 331 µs\n",
+    "Wall time: 337 µs\n"
    ]
   }
  ],
@@ -178,49 +178,31 @@
    "cell_type": "code",
    "execution_count": 9,
    "metadata": {},
-   "outputs": [],
-   "source": [
-    "train_loss = nd.zeros(1, ctx=ctx)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 10,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "train_loss += nd.ones(1, ctx=ctx)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 11,
-   "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "Epoch 0: loss: 1.8314\n",
-      "Epoch 1: loss: 1.3397\n",
-      "Epoch 2: loss: 1.1221\n",
-      "Epoch 3: loss: 0.9576\n",
-      "Epoch 4: loss: 0.8261\n",
-      "Epoch 5: loss: 0.7215\n",
-      "Epoch 6: loss: 0.6226\n",
-      "Epoch 7: loss: 0.5389\n",
-      "Epoch 8: loss: 0.4729\n",
-      "Epoch 9: loss: 0.4072\n",
-      "CPU times: user 1min 5s, sys: 18 s, total: 1min 23s\n",
-      "Wall time: 56.6 s\n"
+      "Epoch 0: loss: 1.8405\n",
+      "Epoch 1: loss: 1.3773\n",
+      "Epoch 2: loss: 1.1577\n",
+      "Epoch 3: loss: 0.9811\n",
+      "Epoch 4: loss: 0.8450\n",
+      "Epoch 5: loss: 0.7354\n",
+      "Epoch 6: loss: 0.6391\n",
+      "Epoch 7: loss: 0.5559\n",
+      "Epoch 8: loss: 0.4810\n",
+      "Epoch 9: loss: 0.4157\n",
+      "CPU times: user 1min 18s, sys: 15.3 s, total: 1min 34s\n",
+      "Wall time: 1min 2s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
-   "sym.hybridize()\n",
+   "# Main training loop: 62s\n",
    "for j in range(EPOCHS):\n",
-   "    train_loss = nd.zeros(1, ctx=ctx)\n",
+   "    train_loss = 0.0\n",
    "    for data, target in yield_mb(x_train, y_train, BATCHSIZE, shuffle=True):\n",
    "        # Get samples\n",
    "        data = nd.array(data).as_in_context(ctx)\n",
@@ -233,24 +215,22 @@
    "        # Back-prop\n",
    "        loss.backward()\n",
    "        trainer.step(data.shape[0])\n",
-   "        train_loss += nd.sum(loss)\n",
-   "    # Log \n",
-   "    # Waiting for the operations on the \n",
-   "    nd.waitall()\n",
-   "    print('Epoch %3d: loss: %5.4f'%(j, train_loss.asscalar()/len(x_train)))"
+   "        train_loss += nd.sum(loss).asscalar()\n",
+   "    # Log\n",
+   "    print('Epoch %3d: loss: %5.4f'%(j, train_loss/len(x_train)))"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 12,
+   "execution_count": 10,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "CPU times: user 382 ms, sys: 115 ms, total: 496 ms\n",
-      "Wall time: 429 ms\n"
+      "CPU times: user 627 ms, sys: 73.1 ms, total: 700 ms\n",
+      "Wall time: 453 ms\n"
     ]
    }
   ],
@@ -274,14 +254,14 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 13,
+   "execution_count": 11,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "Accuracy: 0.7675280448717948\n"
+      "Accuracy: 0.7661258012820513\n"
     ]
    }
   ],
@@ -293,7 +273,7 @@
  "metadata": {
   "anaconda-cloud": {},
   "kernelspec": {
-   "display_name": "Python [default]",
+   "display_name": "Python 3",
    "language": "python",
    "name": "python3"
   },
````
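The substantive change in this notebook's training loop is how the per-batch loss is accumulated. A self-contained sketch of the two patterns, assuming MXNet 1.x (the `loss` tensor below is a stand-in for the notebook's per-batch loss, not its actual variables):

```python
import mxnet as mx
from mxnet import nd

ctx = mx.cpu()
loss = nd.ones(64, ctx=ctx)  # stand-in for one batch's loss vector

# Reverted-to pattern (this commit): accumulate as a Python float.
# .asscalar() copies to host and blocks, synchronising every batch.
train_loss = 0.0
train_loss += nd.sum(loss).asscalar()

# Reverted-from pattern: accumulate on-device; the adds stay queued on
# the async engine, and one nd.waitall() per epoch forces completion.
train_loss_nd = nd.zeros(1, ctx=ctx)
train_loss_nd += nd.sum(loss)
nd.waitall()
print(train_loss, train_loss_nd.asscalar())
```

The reverted-to version is simpler and easier to debug at the cost of one device-to-host synchronisation per batch; on this workload the diff's timings show roughly a 56.6s vs 62s wall-time difference per 10 epochs.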
