Commit 9ccee7e

apaniukov and eaidova authored
[128] OpenVINO Tokenizers Notebook Small Fixes (#1729)
Add onnx install for optimum-intel. Fix link to openvino.genai.

Co-authored-by: Ekaterina Aidova <[email protected]>
1 parent 414128d commit 9ccee7e

File tree

1 file changed

+11
-13
lines changed


notebooks/128-openvino-tokenizers/128-opnevino-tokenizers.ipynb

Lines changed: 11 additions & 13 deletions
@@ -70,10 +70,8 @@
 "Uninstalling openvino-2023.3.0:\n",
 " Successfully uninstalled openvino-2023.3.0\n",
 "\u001B[33mWARNING: Skipping openvino-nightly as it is not installed.\u001B[0m\u001B[33m\n",
-"\u001B[0mFound existing installation: openvino-dev 2023.3.0\n",
-"Uninstalling openvino-dev-2023.3.0:\n",
-" Successfully uninstalled openvino-dev-2023.3.0\n",
-"Note: you may need to restart the kernel to use updated packages.\n",
+"\u001B[0m\u001B[33mWARNING: Skipping openvino-dev as it is not installed.\u001B[0m\u001B[33m\n",
+"\u001B[0mNote: you may need to restart the kernel to use updated packages.\n",
 "Note: you may need to restart the kernel to use updated packages.\n"
 ]
 }
@@ -271,12 +269,12 @@
 "name": "stderr",
 "output_type": "stream",
 "text": [
-"2024-02-16 17:13:59.306808: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
-"2024-02-16 17:13:59.308436: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
-"2024-02-16 17:13:59.341678: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
-"2024-02-16 17:13:59.342886: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
+"2024-02-20 14:05:25.856016: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
+"2024-02-20 14:05:25.857714: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
+"2024-02-20 14:05:25.892124: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
+"2024-02-20 14:05:25.893158: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
 "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
-"2024-02-16 17:14:00.000099: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n"
+"2024-02-20 14:05:26.599940: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n"
 ]
 },
 {
@@ -319,7 +317,7 @@
 "\n",
 "if not model_dir.exists():\n",
 " # converting the original model\n",
-" # %pip install -U \"git+https://github.com/huggingface/optimum-intel.git\" \"nncf>=2.8.0\"a\n",
+" # %pip install -U \"git+https://github.com/huggingface/optimum-intel.git\" \"nncf>=2.8.0\" onnx\n",
 " # %optimum-cli export openvino -m $model_id --task text-generation-with-past $model_dir\n",
 " \n",
 " # load already converted model\n",
@@ -372,7 +370,7 @@
 {
 "data": {
 "application/vnd.jupyter.widget-view+json": {
-"model_id": "c20e21457ddd4de4bbcd95f8bc5c7619",
+"model_id": "61e8811e77f34e7aaa277847730dcd50",
 "version_major": 2,
 "version_minor": 0
 },
@@ -467,7 +465,7 @@
 "model_dir = Path(Path(model_id).name)\n",
 "\n",
 "if not model_dir.exists():\n",
-" %pip install -qU git+https://github.com/huggingface/optimum-intel.git\n",
+" %pip install -qU git+https://github.com/huggingface/optimum-intel.git onnx\n",
 " !optimum-cli export openvino --model $model_id --task text-classification $model_dir\n",
 " !convert_tokenizer $model_id -o $model_dir"
 ]
@@ -529,7 +527,7 @@
 "\n",
 "- [Installation instructions for different environments](https://github.com/openvinotoolkit/openvino_tokenizers?tab=readme-ov-file#installation)\n",
 "- [Supported Tokenizer Types](https://github.com/openvinotoolkit/openvino_tokenizers?tab=readme-ov-file#supported-tokenizer-types)\n",
-"- [OpenVINO.genAI repository with the C++ example of OpenVINO Tokenizers usage]((https://github.com/openvinotoolkit/openvino.genai/tree/master/text_generation/causal_lm/cpp))\n",
+"- [OpenVINO.GenAI repository with the C++ example of OpenVINO Tokenizers usage](https://github.com/openvinotoolkit/openvino.genai/tree/master/text_generation/causal_lm/cpp)\n",
 "- [HuggingFace Tokenizers Comparison Table](https://github.com/openvinotoolkit/openvino_tokenizers?tab=readme-ov-file#output-match-by-model)"
 ]
 }
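Taken together, the corrected notebook cells install the export dependencies (with onnx now added alongside optimum-intel) and then convert a model and its tokenizer. A minimal shell sketch of that sequence, run outside the notebook; the model id and output directory here are illustrative placeholders, not values taken from this commit:

```shell
# Install export dependencies; the commit adds onnx to both install cells
pip install -U "git+https://github.com/huggingface/optimum-intel.git" "nncf>=2.8.0" onnx

# Export the Hugging Face model to OpenVINO IR (model id is a placeholder)
optimum-cli export openvino --model "some-org/some-model" --task text-classification model_dir

# Convert the matching tokenizer into an OpenVINO tokenizer model
convert_tokenizer "some-org/some-model" -o model_dir
```

The notebook guards these steps with `if not model_dir.exists():` so the export only runs once per model directory.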
