Commit de705fd

warshallrho authored and zsdonghao committed

weights -> all_weights, trainable_weights, nontrainable_weights (#966)

* (non)trainable weights, layer all_layers
* weights -> all_weights
* weights -> all_weights, trainable weights, nontrainable_weights
* fix bugs, yapf
* fix bugs
* fix bugs
* fix bugs
1 parent 3b53142 commit de705fd


47 files changed: +315 additions, -246 deletions

CHANGELOG.md

Lines changed: 2 additions & 0 deletions
@@ -75,6 +75,8 @@ To release a new version, please update the changelog as followed:
 
 ### Changed
 - remove `tl.layers.initialize_global_variables(sess)` (PR #931)
+- change `tl.layers.core`, `tl.models.core` (PR #966)
+- change `weights` into `all_weights`, `trainable_weights`, `nontrainable_weights`
 
 ### Dependencies Update
 - nltk>=3.3,<3.4 => nltk>=3.3,<3.5 (PR #892)
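The rename recorded above splits the old single `weights` list into three views. As a rough sketch in plain Python (a toy stand-in, not the actual TensorLayer implementation), the relationship between the three attributes is:

```python
class Layer:
    """Toy stand-in for a layer, illustrating the attribute split in #966."""

    def __init__(self, trainable_weights, nontrainable_weights):
        self.trainable_weights = list(trainable_weights)        # updated by the optimizer
        self.nontrainable_weights = list(nontrainable_weights)  # e.g. batch-norm moving statistics

    @property
    def all_weights(self):
        # The old `weights` attribute corresponds to this union.
        return self.trainable_weights + self.nontrainable_weights


layer = Layer(trainable_weights=["W", "b"], nontrainable_weights=["moving_mean"])
print(layer.all_weights)  # ['W', 'b', 'moving_mean']
```

This is why the docs below switch saving/loading to `all_weights` (every parameter must round-trip to disk) but switch training code to `trainable_weights` (only optimizable parameters should receive gradients).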

docs/modules/files.rst

Lines changed: 4 additions & 4 deletions
@@ -142,14 +142,14 @@ sake of cross-platform. Other file formats such as ``.npz`` are also available.
 .. code-block:: python
 
     ## save model as .h5
-    tl.files.save_weights_to_hdf5('model.h5', network.weights)
+    tl.files.save_weights_to_hdf5('model.h5', network.all_weights)
     # restore model from .h5 (in order)
-    tl.files.load_hdf5_to_weights_in_order('model.h5', network.weights)
+    tl.files.load_hdf5_to_weights_in_order('model.h5', network.all_weights)
     # restore model from .h5 (by name)
-    tl.files.load_hdf5_to_weights('model.h5', network.weights)
+    tl.files.load_hdf5_to_weights('model.h5', network.all_weights)
 
     ## save model as .npz
-    tl.files.save_npz(network.weights, name='model.npz')
+    tl.files.save_npz(network.all_weights, name='model.npz')
     # restore model from .npz (method 1)
     load_params = tl.files.load_npz(name='model.npz')
     tl.files.assign_weights(sess, load_params, network)
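The distinction between restoring "in order" and "by name" in the files.rst hunk above can be sketched with plain Python lists and dicts (hypothetical helper names; this is not the TensorLayer implementation):

```python
def save_weights(weights):
    """Store weights as (name, value) pairs, preserving order."""
    return [(name, value) for name, value in weights]


def restore_in_order(saved, target_names):
    # Positional restore: the i-th saved value goes to the i-th target,
    # regardless of names -- the order must match exactly.
    return {name: value for name, (_, value) in zip(target_names, saved)}


def restore_by_name(saved, target_names):
    # Name-based restore: values are matched by name, so order may differ
    # and targets absent from the file are simply skipped.
    lookup = dict(saved)
    return {name: lookup[name] for name in target_names if name in lookup}


saved = save_weights([("dense1/W", 1.0), ("dense1/b", 2.0)])
print(restore_in_order(saved, ["dense1/W", "dense1/b"]))  # {'dense1/W': 1.0, 'dense1/b': 2.0}
print(restore_by_name(saved, ["dense1/b", "dense1/W"]))   # {'dense1/b': 2.0, 'dense1/W': 1.0}
```

In-order restore is fragile if the architecture changes; by-name restore tolerates reordered or partially matching layers, which is why both variants exist.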

docs/user/faq.rst

Lines changed: 3 additions & 3 deletions
@@ -46,19 +46,19 @@ To choose which variables to update, you can do as below.
 
 .. code-block:: python
 
-    train_params = network.weights[3:]
+    train_params = network.trainable_weights[3:]
 
 The second way is to get the variables by a given name. For example, if you want to get all variables which the layer name contain ``dense``, you can do as below.
 
 .. code-block:: python
 
-    train_params = network.get_layer('dense').weights
+    train_params = network.get_layer('dense').trainable_weights
 
 After you get the variable list, you can define your optimizer like that so as to update only a part of the variables.
 
 .. code-block:: python
 
-    train_weights = network.weights
+    train_weights = network.trainable_weights
     optimizer.apply_gradients(zip(grad, train_weights))
 
 Logging
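The pattern in the faq.rst hunk above, collecting a subset of trainable weights and then applying gradients only to that subset, can be sketched without TensorFlow as follows (a toy SGD step, not the real optimizer):

```python
def sgd_step(weights, grads, lr=0.1):
    """Apply one SGD update in place to the selected weights only."""
    for i in range(len(weights)):
        weights[i] -= lr * grads[i]
    return weights


# All trainable weights of a toy network; we choose to update only the last two,
# analogous to `train_params = network.trainable_weights[3:]` in the FAQ.
train_weights = [3.0, 4.0]
grads = [10.0, 10.0]  # pretend gradients for the selected weights

sgd_step(train_weights, grads)
print(train_weights)  # [2.0, 3.0]
```

In real TensorLayer/TensorFlow code the list holds variable references rather than floats, so updating the sliced list updates the network itself; the slicing here only illustrates the selection step.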

docs/user/get_start_advance.rst

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@ Get a part of CNN
     nn = tl.layers.Dense(n_units=100, name='out')(nn)
     model = tl.models.Model(inputs=ni, outputs=nn)
     # train your own classifier (only update the last layer)
-    train_params = model.get_layer('out').weights
+    train_params = model.get_layer('out').all_weights
 
 Reuse CNN
 ------------------

docs/user/get_start_model.rst

Lines changed: 3 additions & 3 deletions
@@ -149,11 +149,11 @@ We can get the specific weights by indexing or naming.
 .. code-block:: python
 
     # indexing
-    all_weights = MLP.weights
-    some_weights = MLP.weights[1:3]
+    all_weights = MLP.all_weights
+    some_weights = MLP.all_weights[1:3]
 
     # naming
-    some_weights = MLP.get_layer('dense1').weights
+    some_weights = MLP.get_layer('dense1').all_weights
 
 
 Save and restore model

examples/basic_tutorials/tutorial_cifar10_cnn_static.py

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ def get_model_batchnorm(inputs_shape):
 # learning_rate_decay_factor = 0.1
 # num_epoch_decay = 350
 
-train_weights = net.weights
+train_weights = net.trainable_weights
 # learning_rate = tf.Variable(init_learning_rate)
 optimizer = tf.optimizers.Adam(learning_rate)
 

examples/basic_tutorials/tutorial_mnist_mlp_dynamic.py

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ def forward(self, x, foo=None):
 n_epoch = 500
 batch_size = 500
 print_freq = 5
-train_weights = MLP.weights
+train_weights = MLP.trainable_weights
 optimizer = tf.optimizers.Adam(learning_rate=0.0001)
 
 ## the following code can help you understand SGD deeply

examples/basic_tutorials/tutorial_mnist_mlp_dynamic_2.py

Lines changed: 1 addition & 1 deletion
@@ -65,7 +65,7 @@ def forward(self, x, foo=None):
 n_epoch = 500
 batch_size = 500
 print_freq = 5
-train_weights = MLP1.weights + MLP2.weights
+train_weights = MLP1.trainable_weights + MLP2.trainable_weights
 optimizer = tf.optimizers.Adam(learning_rate=0.0001)
 
 ## the following code can help you understand SGD deeply
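Training two models jointly, as in the tutorial_mnist_mlp_dynamic_2.py hunk above, works because the weight attributes are plain Python lists, so `+` concatenates them into one list for a single optimizer. A toy sketch (not the TensorLayer classes):

```python
class ToyModel:
    """Minimal stand-in whose trainable_weights is a plain list, as in the tutorial."""

    def __init__(self, trainable_weights):
        self.trainable_weights = list(trainable_weights)


mlp1 = ToyModel(["w1", "b1"])
mlp2 = ToyModel(["w2", "b2"])

# One optimizer can then update both models in a single apply_gradients call.
train_weights = mlp1.trainable_weights + mlp2.trainable_weights
print(train_weights)  # ['w1', 'b1', 'w2', 'b2']
```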

examples/basic_tutorials/tutorial_mnist_mlp_static.py

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ def get_model(inputs_shape):
 n_epoch = 500
 batch_size = 500
 print_freq = 5
-train_weights = MLP.weights
+train_weights = MLP.trainable_weights
 optimizer = tf.optimizers.Adam(lr=0.0001)
 
 ## the following code can help you understand SGD deeply

examples/basic_tutorials/tutorial_mnist_mlp_static_2.py

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ def get_model(inputs_shape, hmodel):
 n_epoch = 500
 batch_size = 500
 print_freq = 5
-train_weights = MLP.weights
+train_weights = MLP.trainable_weights
 optimizer = tf.optimizers.Adam(lr=0.0001)
 
 ## the following code can help you understand SGD deeply
