
Commit dcaf783

Author: Hiroya Chiba
Commit message: typos
1 parent: 7e819b6

2 files changed: +2 −2 lines

6.1-one-hot-encoding-of-words-or-characters.ipynb (+1 −1)

@@ -155,7 +155,7 @@
 "samples = ['The cat sat on the mat.', 'The dog ate my homework.']\n",
 "\n",
 "# We create a tokenizer, configured to only take\n",
-"# into account the top-1000 most common on words\n",
+"# into account the top-1000 most common words\n",
 "tokenizer = Tokenizer(num_words=1000)\n",
 "# This builds the word index\n",
 "tokenizer.fit_on_texts(samples)\n",
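The cell being corrected builds a word index with the Keras `Tokenizer`. As a rough stdlib illustration of what the fitting step amounts to (this is a sketch, not the actual Keras implementation; the helper name `fit_word_index` and its regex tokenization are assumptions for this example):

```python
from collections import Counter
import re

def fit_word_index(texts, num_words=1000):
    """Sketch of a Tokenizer-style word index: lowercase the texts,
    split into words, and assign the lowest indices to the most
    frequent words (index 0 is conventionally reserved)."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    ranked = [w for w, _ in counts.most_common()]
    # Keep only the top (num_words - 1) words, mirroring the num_words cap
    return {w: i + 1 for i, w in enumerate(ranked[: num_words - 1])}

samples = ['The cat sat on the mat.', 'The dog ate my homework.']
word_index = fit_word_index(samples, num_words=1000)
print(word_index['the'])  # 'the' is the most frequent word, so index 1
```

In the real Keras API, `tokenizer.word_index` plays the role of the dictionary returned here, and the `num_words` cap is applied when texts are later converted to sequences or matrices.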

6.1-using-word-embeddings.ipynb (+1 −1)

@@ -589,7 +589,7 @@
 "Additionally, we freeze the embedding layer (we set its `trainable` attribute to `False`), following the same rationale as what you are \n",
 "already familiar with in the context of pre-trained convnet features: when parts of a model are pre-trained (like our `Embedding` layer), \n",
 "and parts are randomly initialized (like our classifier), the pre-trained parts should not be updated during training to avoid forgetting \n",
-"what they already know. The large gradient updated triggered by the randomly initialized layers would be very disruptive to the already \n",
+"what they already know. The large gradient update triggered by the randomly initialized layers would be very disruptive to the already \n",
 "learned features."
 ]
 },
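The corrected passage explains why frozen (`trainable = False`) layers are skipped during weight updates. A minimal plain-Python sketch of that mechanism (not Keras itself; the `Layer` class and `apply_gradients` helper are assumptions for this illustration):

```python
# Sketch: an update step that honors a per-layer trainable flag,
# so pre-trained weights are protected from large early gradients.

class Layer:
    def __init__(self, weights, trainable=True):
        self.weights = list(weights)
        self.trainable = trainable

def apply_gradients(layers, grads, lr=0.1):
    """Gradient-descent step that updates only trainable layers."""
    for layer, g in zip(layers, grads):
        if not layer.trainable:
            continue  # frozen pre-trained weights stay fixed
        layer.weights = [w - lr * gi for w, gi in zip(layer.weights, g)]

embedding = Layer([1.0, 2.0], trainable=False)  # pre-trained, frozen
classifier = Layer([0.5, 0.5])                  # randomly initialized
# Early training produces large gradients from the random classifier
apply_gradients([embedding, classifier], [[10.0, 10.0], [10.0, 10.0]])
print(embedding.weights)   # [1.0, 2.0]: unchanged
print(classifier.weights)  # [-0.5, -0.5]: updated
```

In Keras the same effect comes from setting `layer.trainable = False` before compiling the model, which excludes that layer's weights from the optimizer's update step.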
