Commit

fix broken links
eaidova committed Feb 5, 2025
1 parent be2dbf5 commit 085b9d9
Showing 5 changed files with 21 additions and 6 deletions.
2 changes: 1 addition & 1 deletion notebooks/deepseek-r1/README.md
@@ -11,7 +11,7 @@ The tutorial supports different models, you can select one from the provided opt
* **DeepSeek-R1-Distill-Llama-8B** is a distilled model based on [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B), that prioritizes high performance and advanced reasoning capabilities, particularly excelling in tasks requiring mathematical and factual precision. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) for more info.
* **DeepSeek-R1-Distill-Qwen-1.5B** is the smallest DeepSeek-R1 distilled model based on [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B). Despite its compact size, the model demonstrates strong capabilities in solving basic mathematical tasks, at the same time its programming capabilities are limited. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more info.
* **DeepSeek-R1-Distill-Qwen-7B** is a distilled model based on [Qwen-2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B). The model demonstrates a good balance between mathematical and factual reasoning and can be less suited for complex coding tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for more info.
- * **DeepSeek-R1-Distil-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-15B) for more info.
+ * **DeepSeek-R1-Distil-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) for more info.

## Notebook Contents

2 changes: 1 addition & 1 deletion notebooks/deepseek-r1/deepseek-r1.ipynb
@@ -109,7 +109,7 @@
"* **DeepSeek-R1-Distill-Llama-8B** is a distilled model based on [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B), that prioritizes high performance and advanced reasoning capabilities, particularly excelling in tasks requiring mathematical and factual precision. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) for more info.\n",
"* **DeepSeek-R1-Distill-Qwen-1.5B** is the smallest DeepSeek-R1 distilled model based on [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B). Despite its compact size, the model demonstrates strong capabilities in solving basic mathematical tasks, at the same time its programming capabilities are limited. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more info.\n",
"* **DeepSeek-R1-Distill-Qwen-7B** is a distilled model based on [Qwen-2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B). The model demonstrates a good balance between mathematical and factual reasoning and can be less suited for complex coding tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for more info.\n",
"* **DeepSeek-R1-Distil-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-15B) for more info.\n",
"* **DeepSeek-R1-Distil-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) for more info.\n",
"\n",
"[Weight compression](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html) is a technique for enhancing the efficiency of models, especially those with large memory requirements. This method reduces the model’s memory footprint, a crucial factor for Large Language Models (LLMs). We provide several options for model weight compression:\n",
"\n",
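Note: the context above mentions several weight-compression options for LLMs. As a rough illustration of what those options look like in practice, here is a minimal sketch using `optimum-intel`; the model ID and INT4 settings are illustrative assumptions, not the notebook's exact configuration.

```python
# A minimal sketch of INT4 weight compression with optimum-intel.
# Model ID and compression parameters are illustrative assumptions.
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

# Compress ~80% of linear-layer weights to INT4; the rest stays INT8
quant_config = OVWeightQuantizationConfig(bits=4, ratio=0.8)

model = OVModelForCausalLM.from_pretrained(
    model_id,
    export=True,                       # convert the PyTorch checkpoint to OpenVINO IR
    quantization_config=quant_config,  # apply weight compression during export
)
model.save_pretrained("deepseek-r1-distill-qwen-1.5b-int4")
```
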
2 changes: 1 addition & 1 deletion notebooks/hugging-face-hub/README.md
@@ -5,7 +5,7 @@
The Hugging Face (HF) Model Hub is a central repository for pre-trained deep learning models. It allows exploration and provides access to thousands of models for a wide range of tasks, including text classification, question answering, and image classification.
Hugging Face provides Python packages that serve as APIs and tools to easily download and fine tune state-of-the-art pretrained models, namely [transformers] and [diffusers] packages.

- ![](https://github.com/huggingface/optimum-intel/raw/main/readme_logo.png)
+ ![](https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/logo/hf_intel_logo.png)

## Contents:
Throughout this notebook we will learn:
2 changes: 1 addition & 1 deletion notebooks/hugging-face-hub/hugging-face-hub.ipynb
@@ -10,7 +10,7 @@
"The Hugging Face (HF) [Model Hub](https://huggingface.co/models) is a central repository for pre-trained deep learning models. It allows exploration and provides access to thousands of models for a wide range of tasks, including text classification, question answering, and image classification.\n",
"Hugging Face provides Python packages that serve as APIs and tools to easily download and fine tune state-of-the-art pretrained models, namely [transformers](https://github.com/huggingface/transformers) and [diffusers](https://github.com/huggingface/diffusers) packages.\n",
"\n",
"![](https://github.com/huggingface/optimum-intel/raw/main/readme_logo.png)\n",
"![](https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/logo/hf_intel_logo.png)\n",
"\n",
"Throughout this notebook we will learn:\n",
"1. How to load a HF pipeline using the `transformers` package and then convert it to OpenVINO.\n",
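Note: the notebook outline above covers loading a HF pipeline with `transformers` and converting it to OpenVINO. A minimal sketch of that flow, assuming a text-classification model; the model ID and example sentence are illustrative placeholders, not necessarily what the notebook uses.

```python
# Sketch: load a pretrained transformers model, convert it to OpenVINO, run it.
import openvino as ov
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_id)
pt_model = AutoModelForSequenceClassification.from_pretrained(model_id)

# convert_model traces the PyTorch module using an example input
example = dict(tokenizer("OpenVINO runs this model natively", return_tensors="pt"))
ov_model = ov.convert_model(pt_model, example_input=example)

compiled = ov.compile_model(ov_model, device_name="CPU")
logits = compiled(example)[0]  # first (and only) model output
print(logits)
```
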
19 changes: 17 additions & 2 deletions notebooks/outetts-text-to-speech/outetts-text-to-speech.ipynb
@@ -66,8 +66,23 @@
"\n",
"from pip_helper import pip_install\n",
"\n",
"pip_install(\"torch>=2.1\", \"torchaudio\", \"einops\", \"transformers>=4.46.1\", \"loguru\", \"inflect\", \"pesq\", \"torchcrepe\",\n",
" \"natsort\" \"polars\" \"uroman\", \"mecab-python3\" \"unidic-lite\", \"--extra-index-url\", \"https://download.pytorch.org/whl/cpu\")\n",
"pip_install(\n",
" \"torch>=2.1\",\n",
" \"torchaudio\",\n",
" \"einops\",\n",
" \"transformers>=4.46.1\",\n",
" \"loguru\",\n",
" \"inflect\",\n",
" \"pesq\",\n",
" \"torchcrepe\",\n",
" \"natsort\",\n",
" \"polars\",\n",
" \"uroman\",\n",
" \"mecab-python3\",\n",
" \"unidic-lite\",\n",
" \"--extra-index-url\",\n",
" \"https://download.pytorch.org/whl/cpu\",\n",
")\n",
"pip_install(\"gradio>=4.19\", \"openvino>=2024.4.0\", \"tqdm\", \"pyyaml\", \"librosa\", \"soundfile\")\n",
"pip_install(\"git+https://github.com/huggingface/optimum-intel.git\", \"--extra-index-url\", \"https://download.pytorch.org/whl/cpu\")\n",
"\n",
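Note: beyond reflowing the call, this hunk also adds commas that were missing between several package names. Python implicitly concatenates adjacent string literals, so the old call would have asked pip for merged, nonexistent packages:

```python
# Adjacent string literals concatenate, so the un-commaed arguments in the
# old pip_install call became single merged package names:
print("natsort" "polars" "uroman")    # -> natsortpolarsuroman
print("mecab-python3" "unidic-lite")  # -> mecab-python3unidic-lite
```
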
