Classifies an image as containing either a dog or a cat (using Kaggle's Dogs vs. Cats dataset).

To run these scripts/notebooks, you must have keras, numpy, scipy, and h5py installed, and enabling GPU acceleration is highly recommended if that's an option.

## img_clf.py
After playing around with hyperparameters a bit, this reaches around 96-98% accuracy on the validation data, and when tested on Kaggle's hidden test data it achieved a log loss of around 0.18.

Most of the code / strategy here was based on <a href="https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html">this</a> Keras tutorial.
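
For a flavor of that approach, the tutorial leans on Keras's `ImageDataGenerator` to augment the training images on the fly; a minimal sketch (the specific settings below are illustrative, not necessarily what img_clf.py uses):

```python
from keras.preprocessing.image import ImageDataGenerator

# Randomly perturb each training image every epoch so the network rarely
# sees the exact same pixels twice -- the tutorial's main defense against
# overfitting on a small dataset.
train_datagen = ImageDataGenerator(
    rescale=1. / 255,      # scale pixel values into [0, 1]
    shear_range=0.2,       # random shear transformations
    zoom_range=0.2,        # random zoom in/out
    horizontal_flip=True)  # mirror images left/right

# Validation images are only rescaled -- never augmented.
val_datagen = ImageDataGenerator(rescale=1. / 255)
```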

Pre-trained VGG16 model weights can be downloaded <a href="https://gist.github.com/baraldilorenzo/07d7802847aaad0a35d3">here</a>.
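
As an aside, newer Keras versions can fetch ImageNet weights themselves, which avoids the manual download; this is an alternative to the gist above, not what the script itself does:

```python
from keras.applications.vgg16 import VGG16

# include_top=False drops the 1000-class ImageNet classifier so a small
# cats-vs-dogs head can be stacked on top of the convolutional layers.
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(150, 150, 3))
```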
The data directory structure I used was:

* project
  * ...
    * cats
  * test
    * test
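
With that layout, Keras can infer labels directly from the folder names via `flow_from_directory`; a rough sketch (the path, image size, and batch size here are illustrative):

```python
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)

# Each subfolder of train/ (dogs/, cats/) becomes one class automatically.
train_generator = datagen.flow_from_directory(
    'project/train',         # hypothetical path following the tree above
    target_size=(150, 150),  # every image is resized on the fly
    batch_size=32,
    class_mode='binary')     # two classes -> a single sigmoid output
```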
## cats_n_dogs.ipynb:
This produced a slightly better score (0.161 log loss on the Kaggle test set). The better score most likely comes from using larger images and ensembling a few models, despite the fact that there's no image augmentation in the notebook.
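
A minimal sketch of the ensembling step, assuming `models` is a list of the trained Keras models and `x_test` is the preprocessed test-image array (names and clip bounds are illustrative, not the notebook's exact code):

```python
import numpy as np

# Average the per-image P(dog) predictions across the ensemble; the mean
# is usually better calibrated than any single model.
all_preds = [m.predict(x_test).ravel() for m in models]  # assumed inputs
ensemble = np.mean(all_preds, axis=0)

# Log loss punishes confident mistakes brutally, so clipping predictions
# away from 0 and 1 caps the worst-case penalty on any one image.
ensemble = np.clip(ensemble, 0.02, 0.98)
```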

Might run into memory errors because of the large image dimensions -- if so reduce...

* ...
  * test
    * test

## cats_n_dogs_BN.ipynb:
This produced the best score (0.069 log loss without any ensembling). The notebook incorporates some of the techniques from Jeremy Howard's <a href="http://course.fast.ai/">deep learning class</a>, with the inclusion of batch normalization being the biggest factor. I also added extra layers of augmentation to the prediction script, which greatly improved performance.
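
Two sketches of those ideas, assuming `model` is the (already built) network and `x_test` the preprocessed test images; layer sizes and augmentation settings are illustrative, not taken from the notebook:

```python
import numpy as np
from keras.layers import Dense, BatchNormalization, Activation
from keras.preprocessing.image import ImageDataGenerator

# Batch normalization slots in between a layer and its activation,
# normalizing pre-activations so deep nets train faster and more stably.
model.add(Dense(4096))            # `model` assumed built up to this point
model.add(BatchNormalization())
model.add(Activation('relu'))

# Test-time augmentation: predict on several randomly transformed copies
# of each test image and average the results.
tta_gen = ImageDataGenerator(horizontal_flip=True, zoom_range=0.1)
n_passes = 5
preds = np.zeros(len(x_test))
for _ in range(n_passes):
    batch = np.stack([tta_gen.random_transform(img) for img in x_test])
    preds += model.predict(batch).ravel()
preds /= n_passes
```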

Pre-trained model weights for VGG16 w/ batch normalization can be downloaded <a href="http://www.platform.ai/models/">here</a>.

The VGG16BN class is defined in <em>vgg_bn.py</em>, and the data directory structure used was: