Commit cbe6477

updated the numbers on K80
1 parent e35e9a2 commit cbe6477

File tree

1 file changed: +3 −3 lines changed


README.md

+3 −3
@@ -40,7 +40,7 @@ The notebooks are executed on an Azure [Deep Learning Virtual Machine](https://a
 | [PyTorch](notebooks/PyTorch_CNN.ipynb) | 169 | 51 |
 | [Julia - Knet](notebooks/Knet_CNN.ipynb) | 159 | ?? |
 | [R - MXNet](notebooks/.ipynb) | ??? | ?? |
-| [R - Keras(TF)](notebooks/KerasR_TF_CNN.ipynb) | 211 | 72 |
+| [R - Keras(TF)](notebooks/KerasR_TF_CNN.ipynb) | 205 | 72 |
 
 
 *Note: It is recommended to use higher level APIs where possible; see these notebooks for examples with [Tensorflow](notebooks/Tensorflow_CNN_highAPI.ipynb), [MXNet](notebooks/MXNet_CNN_highAPI.ipynb) and [CNTK](notebooks/CNTK_CNN_highAPI.ipynb). They are not linked in the table to keep the common-structure-for-all approach*
@@ -94,7 +94,7 @@ Input for this model is 112,120 PNGs of chest X-rays resized to (264, 264). **No
 | [PyTorch](notebooks/PyTorch_Inference.ipynb) | 7.7 | 1.9 |
 | [Julia - Knet](notebooks/Knet_Inference.ipynb) | 6.3 | ??? |
 | [R - MXNet](notebooks/.ipynb) | ??? | ??? |
-| [R - Keras(TF)](notebooks/KerasR_TF_Inference.ipynb) | 16 | 7.4 |
+| [R - Keras(TF)](notebooks/KerasR_TF_Inference.ipynb) | 17 | 7.4 |
 
 
 A pre-trained ResNet50 model is loaded and chopped just after the avg_pooling at the end (7, 7), which outputs a 2048-dimensional vector. This can be plugged into a softmax layer or another classifier, such as a boosted tree, to perform transfer learning with a warm start; this forward-only pass to the avg_pool layer is timed. *Note: batch size remains constant; however, filling the RAM on a GPU would produce further performance boosts (greater for GPUs with more RAM).*
@@ -111,7 +111,7 @@ A pre-trained ResNet50 model is loaded and chopped just after the avg_pooling at
 | [Tensorflow](notebooks/Tensorflow_RNN.ipynb) | 30 | 22 | Yes |
 | [Julia - Knet](notebooks/Knet_RNN.ipynb) | 29 | ?? | Yes |
 | [R - MXNet](notebooks/.ipynb) | ?? | ?? | ??? |
-| [R - Keras(TF)](notebooks/KerasR_TF_RNN.ipynb) | ?? | 25 | Yes |
+| [R - Keras(TF)](notebooks/KerasR_TF_RNN.ipynb) | 35 | 25 | Yes |
 
 
 Input for this model is the standard [IMDB movie review dataset](http://ai.stanford.edu/~amaas/data/sentiment/) containing 25k training reviews and 25k test reviews, uniformly split across 2 classes (positive/negative). Processing follows the [Keras](https://github.com/fchollet/keras/blob/master/keras/datasets/imdb.py) approach, where the start character is set to 1, out-of-vocabulary words (a vocab size of 30k is used) are represented as 2, and thus word indices start from 3. Reviews are zero-padded / truncated to a fixed length of 150 words.
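A minimal sketch of that encoding scheme; the helper name is illustrative and the left-padding / tail-keeping truncation mirrors the Keras `pad_sequences` defaults, not the notebooks' actual code:

```python
# Sketch of the IMDB encoding described above: start-character = 1,
# out-of-vocab = 2, real word indices shifted to start from 3, then
# zero-padded / truncated to a fixed length of 150 tokens.
START, OOV, INDEX_FROM, VOCAB, MAXLEN = 1, 2, 3, 30000, 150

def encode(word_ids):
    seq = [START] + [w + INDEX_FROM if w + INDEX_FROM < VOCAB else OOV
                     for w in word_ids]
    if len(seq) > MAXLEN:                    # truncate, keeping the tail
        return seq[-MAXLEN:]
    return [0] * (MAXLEN - len(seq)) + seq   # zero-pad on the left

print(len(encode([5, 6, 40000])), encode([5, 6, 40000])[-4:])
# → 150 [1, 8, 9, 2]
```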
