Commit 055ca9c

hadh93 authored and c-p-i-o committed

Fix typos in dynamic_quantization_bert_tutorial.rst (#3019)

1 parent 7211f92 commit 055ca9c

1 file changed (+3 −3 lines)

intermediate_source/dynamic_quantization_bert_tutorial.rst

+3 −3
@@ -79,7 +79,7 @@ Mac:
 
 .. code:: shell
 
-    yes y | pip uninstall torch tochvision
+    yes y | pip uninstall torch torchvision
     yes y | pip install --pre torch -f https://download.pytorch.org/whl/nightly/cu101/torch_nightly.html
 
@@ -206,7 +206,7 @@ in `examples <https://github.com/huggingface/transformers/tree/master/examples#m
     --save_steps 100000 \
     --output_dir $OUT_DIR
 
-We provide the fined-tuned BERT model for MRPC task `here <https://download.pytorch.org/tutorial/MRPC.zip>`_.
+We provide the fine-tuned BERT model for MRPC task `here <https://download.pytorch.org/tutorial/MRPC.zip>`_.
 To save time, you can download the model file (~400 MB) directly into your local folder ``$OUT_DIR``.
 
 2.1 Set global configurations
@@ -273,7 +273,7 @@ We load the tokenizer and fine-tuned BERT sequence classifier model
 2.3 Define the tokenize and evaluation function
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-We reuse the tokenize and evaluation function from `Huggingface <https://github.com/huggingface/transformers/blob/master/examples/run_glue.py>`_.
+We reuse the tokenize and evaluation function from `HuggingFace <https://github.com/huggingface/transformers/blob/master/examples/run_glue.py>`_.
 
 .. code:: python
