
Commit 3a05570

Merge pull request #4728 from MarkDaoust/typos
Typos
2 parents: 1635e56 + e5b88ad


3 files changed: 4 additions & 4 deletions


samples/core/tutorials/eager/custom_training_walkthrough.ipynb

Lines changed: 1 addition & 1 deletion
@@ -345,7 +345,7 @@
 "TensorFlow's [Dataset API](https://www.tensorflow.org/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.\n",
 "\n",
 "\n",
-"Since the dataset is a CSV-formatted text file, use the the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/#batch_size) parameter."
+"Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/#batch_size) parameter."
 ]
 },
 {
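For readers skimming the diff, here is a rough sketch of how the defaults described in that sentence play out in code. The file name, column names, and batch size are hypothetical, and the final line assumes a TF 1.x version where a `Dataset` is directly iterable under eager execution:

```python
import tensorflow as tf

tf.enable_eager_execution()

# Hypothetical CSV file and schema, for illustration only.
train_csv = "iris_training.csv"
column_names = ["sepal_length", "sepal_width",
                "petal_length", "petal_width", "species"]

# The defaults noted above (shuffle=True, shuffle_buffer_size=10000,
# num_epochs=None, i.e. repeat forever) apply automatically; only the
# file pattern and batch_size must be supplied.
train_dataset = tf.contrib.data.make_csv_dataset(
    train_csv,
    batch_size=32,
    column_names=column_names,
    label_name=column_names[-1])

# Under eager execution, pull a single (features, labels) batch.
features, labels = next(iter(train_dataset))
```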

samples/core/tutorials/keras/basic_regression.ipynb

Lines changed: 2 additions & 2 deletions
@@ -115,7 +115,7 @@
 },
 "cell_type": "markdown",
 "source": [
-"In a *regression* problem, we aim to predict the output of a continous value, like a price or a probability. Contrast this with a *classification* problem, where we aim to predict a discrete label (for example, whether a picture contains an apple or an orange). \n",
+"In a *regression* problem, we aim to predict the output of a continuous value, like a price or a probability. Contrast this with a *classification* problem, where we aim to predict a discrete label (for example, whether a picture contains an apple or an orange). \n",
 "\n",
 "This notebook builds a model to predict the median price of homes in a Boston suburb during the mid-1970s. To do this, we'll provide the model with some data points about the suburb, such as the crime rate and the local property tax rate.\n",
 "\n",
@@ -342,7 +342,7 @@
 "source": [
 "## Create the model\n",
 "\n",
-"Let's build our model. Here, we'll use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single, continous value. The model building steps are wrapped in a function, `build_model`, since we'll create a second model, later on."
+"Let's build our model. Here, we'll use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model building steps are wrapped in a function, `build_model`, since we'll create a second model, later on."
 ]
 },
 {
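As context for that sentence, a minimal sketch of what such a `build_model` function could look like. The layer widths, activation, optimizer, and loss below are illustrative assumptions, not necessarily the notebook's exact choices:

```python
from tensorflow import keras

def build_model(num_features):
    """Two densely connected hidden layers plus a single-unit output layer."""
    model = keras.Sequential([
        # 64 units and relu are assumed values, not taken from the notebook.
        keras.layers.Dense(64, activation="relu",
                           input_shape=(num_features,)),
        keras.layers.Dense(64, activation="relu"),
        # No activation on the output, so the model can emit any
        # continuous value (e.g. a price).
        keras.layers.Dense(1),
    ])
    # MSE is the usual regression loss; RMSprop is an assumed optimizer.
    model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
    return model

# The Boston housing data has 13 features per example.
model = build_model(num_features=13)
```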

samples/core/tutorials/keras/basic_text_classification.ipynb

Lines changed: 1 addition & 1 deletion
@@ -512,7 +512,7 @@
 },
 "cell_type": "markdown",
 "source": [
-"The layers are stacked sequentially to build the classifer:\n",
+"The layers are stacked sequentially to build the classifier:\n",
 "\n",
 "1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.\n",
 "2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\n",
