Commit e258397

Update README.md
1 parent 260789a commit e258397

README.md (+3 −3)
````diff
@@ -472,19 +472,19 @@ __Can I use pretrained word embeddings (GloVe, CBOW, skipgram, etc.) instead of
 
 Yes, you could, with the `load_pretrained_embeddings()` method in the `Decoder` class. You could also choose to fine-tune (or not) with the `fine_tune_embeddings()` method.
 
-After creating the Decoder in [`train.py`](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning/blob/master/train.py), you should provide the pretrained vectors to `load_pretrained_embeddings()` stacked in the same order as in the `word_map`. For words that you don't have pretrained vectors for, like <start>, you can initialize embeddings randomly like we did in `init_weights()`. I recommend fine-tuning to learn more meaningful vectors for these randomly initialized vectors.
+After creating the Decoder in [`train.py`](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning/blob/master/train.py), you should provide the pretrained vectors to `load_pretrained_embeddings()` stacked in the same order as in the `word_map`. For words that you don't have pretrained vectors for, like `<start>`, you can initialize embeddings randomly like we did in `init_weights()`. I recommend fine-tuning to learn more meaningful vectors for these randomly initialized vectors.
 
 ```python
 decoder = DecoderWithAttention(attention_dim=attention_dim,
                                embed_dim=emb_dim,
                                decoder_dim=decoder_dim,
                                vocab_size=len(word_map),
                                dropout=dropout)
-decoder.load_pretrained_embeddings(pretrained_embeddings)
+decoder.load_pretrained_embeddings(pretrained_embeddings)  # pretrained_embeddings should be of dimensions (len(word_map), emb_dim)
 decoder.fine_tune_embeddings(True)  # or False
 ```
 
-Also make sure to change the `embed_dim` parameter to the size of your pre-trained embeddings. This should automatically adjust the input size of the decoder LSTM to accommodate them.
+Also make sure to change the `emb_dim` parameter to the size of your pre-trained embeddings. This should automatically adjust the input size of the decoder LSTM to accommodate them.
 
 ---
 
````
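The diff above assumes a `pretrained_embeddings` tensor already exists. As a minimal sketch (not part of the commit) of building it in `word_map` order, as the amended paragraph describes — assuming a hypothetical `glove` dict mapping words to `emb_dim`-length vectors, and mirroring the uniform init used in `init_weights()`:

```python
import torch

# Hypothetical setup: `glove` maps words to emb_dim-length float vectors
# (e.g. parsed from a GloVe text file); `word_map`, `emb_dim`, and
# `decoder` are as in train.py.
pretrained_embeddings = torch.FloatTensor(len(word_map), emb_dim)
torch.nn.init.uniform_(pretrained_embeddings, -0.1, 0.1)  # random init, as in init_weights()

# Stack the vectors in word_map order; tokens without a pretrained vector,
# like <start>, keep their randomly initialized rows.
for word, idx in word_map.items():
    if word in glove:
        pretrained_embeddings[idx] = torch.FloatTensor(glove[word])

decoder.load_pretrained_embeddings(pretrained_embeddings)  # (len(word_map), emb_dim)
decoder.fine_tune_embeddings(True)  # fine-tuning lets the random rows become meaningful
```

Here `emb_dim` must equal the dimensionality of the pretrained vectors, which is the point of the `emb_dim` note at the end of the diff.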