Release/2025.1 #2638

Merged (2 commits, Mar 24, 2025)
2 changes: 2 additions & 0 deletions .gitignore
@@ -4,3 +4,5 @@ build/
# emacs save files
*~

+# Jupyter Notebook checkpoint directories
+.ipynb_checkpoints/
12 changes: 6 additions & 6 deletions AI-and-Analytics/End-to-end-Workloads/Census/README.md
@@ -19,14 +19,14 @@ Intel® Distribution of Modin* uses HDK to speed up your Pandas notebooks, scrip
| :--- | :---
| OS | 64-bit Ubuntu* 18.04 or higher
| Hardware | Intel Atom® processors <br> Intel® Core™ processor family <br> Intel® Xeon® processor family <br> Intel® Xeon® Scalable processor family
-| Software | Intel® AI Analytics Toolkit (AI Kit) (Python version 3.8 or newer, Intel® Distribution of Modin*) <br> Intel® Extension for Scikit-learn* <br> NumPy
+| Software | AI Tools (Python version 3.8 or newer, Intel® Distribution of Modin*) <br> Intel® Extension for Scikit-learn* <br> NumPy

-The Intel® Distribution of Modin* and Intel® Extension for Scikit-learn* libraries are available together in [Intel® AI Analytics Toolkit (AI Kit)](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+The Intel® Distribution of Modin* and Intel® Extension for Scikit-learn* libraries are available together in [AI Tools](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/frameworks-tools.html).


## Key Implementation Details

-This end-to-end workload sample code is implemented for CPU using the Python language. Once you have installed AI Kit, the Conda environment is prepared with Python version 3.8 (or newer), Intel Distribution of Modin*, Intel® Extension for Scikit-Learn, and NumPy.
+This end-to-end workload sample code is implemented for CPU using the Python language. Once you have installed AI Tools, the Conda environment is prepared with Python version 3.8 (or newer), Intel Distribution of Modin*, Intel® Extension for Scikit-Learn, and NumPy.

In this sample, you will use Intel® Distribution of Modin* to ingest and process U.S. census data from 1970 to 2010 in order to build a ridge regression-based model to find the relation between education and total income earned in the US.
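The modeling step described above can be sketched without any dependencies. The `(education, income)` pairs and the `ridge_1d` helper below are hypothetical stand-ins for illustration only; the actual sample works on Modin dataframes with Intel® Extension for Scikit-learn*:

```python
# Hypothetical stand-in data: (years of education, yearly income) pairs.
data = [(8, 30_000.0), (12, 42_000.0), (16, 65_000.0), (20, 88_000.0)]

def ridge_1d(pairs, alpha=1.0):
    """Closed-form ridge regression for a single mean-centered feature:
    w = sum(x*y) / (sum(x^2) + alpha)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    w = sxy / (sxx + alpha)   # alpha > 0 shrinks the slope slightly
    b = my - w * mx           # intercept restores the original scale
    return w, b

w, b = ridge_1d(data)
print(f"income ~= {w:.1f} * years_of_education + {b:.1f}")
```

The `alpha` penalty is what distinguishes ridge regression from ordinary least squares: it trades a little bias for stability on correlated features like those in the census data.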

@@ -36,11 +36,11 @@ The data transformation stage normalizes the income to yearly inflation, balance


## Configure the Development Environment
-If you do not already have the AI Kit installed, then download an online or offline installer for the [Intel® AI Analytics Toolkit (AI Kit)](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html) or install the AI Kit using Conda.
+If you do not already have the AI Tools installed, then download an online or offline installer for the [AI Tools](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/frameworks-tools.html) or install the AI Tools using Conda.

->**Note**: See [Install Intel® AI Analytics Toolkit via Conda*](https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/installation/install-using-package-managers/conda/install-intel-ai-analytics-toolkit-via-conda.html) in the *Intel® oneAPI Toolkits Installation Guide for Linux* OS* for information on Conda installation and configuration.
+>**Note**: See [Install AI Tools via Conda*](https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/installation/install-using-package-managers/conda/install-intel-ai-analytics-toolkit-via-conda.html) in the *Intel® oneAPI Toolkits Installation Guide for Linux* OS* for information on Conda installation and configuration.

-The Intel® Distribution of Modin* and the Intel® Extension for Scikit-learn* are ready to use after AI Kit installation with the Conda Package Manager.
+The Intel® Distribution of Modin* and the Intel® Extension for Scikit-learn* are ready to use after AI Tools installation with the Conda Package Manager.

## Set Environment Variables

@@ -291,6 +291,15 @@
"# release resources\n",
"%reset -f"
]
},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(\"[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]\")"
+]
+}
],
"metadata": {
@@ -42,7 +42,7 @@ You will need to download and install the following toolkits, tools, and compone

Required AI Tools: <Intel® Extension for TensorFlow* - GPU><!-- List specific AI Tools that needs to be installed before running this sample -->

-If you have not already, select and install these Tools via [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on AI Tools Offline Installer. It is recommended to select Offline Installer option in AI Tools Selector.
+If you have not already, select and install these Tools via [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/frameworks-tools-selector.html). AI and Analytics samples are validated on AI Tools Offline Installer. It is recommended to select Offline Installer option in AI Tools Selector.

>**Note**: If Docker option is chosen in AI Tools Selector, refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the docker and samples.

@@ -85,7 +85,7 @@ For Jupyter Notebook, refer to [Installing Jupyter](https://jupyter.org/install)
## Run the Sample
>**Note**: Before running the sample, make sure [Environment Setup](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Getting-Started-Samples/INC-Quantization-Sample-for-PyTorch#environment-setup) is completed.

-Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
+Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/frameworks-tools-selector.html) to see relevant instructions:
* [AI Tools Offline Installer (Validated)](#ai-tools-offline-installer-validated)
* [Conda/PIP](#condapip)
* [Docker](#docker)
@@ -1,10 +1,10 @@
ipykernel
matplotlib
-sentence_transformers
-transformers
-datasets
-accelerate
-wordcloud
-spacy
+sentence-transformers
+transformers
+datasets
+accelerate
+wordcloud
+spacy
jinja2
nltk
@@ -182,6 +182,15 @@
"If the model appears to be giving the same output regardless of input, try running clean.sh to remove the RIR_NOISES and speechbrain \n",
"folders so they can be re-pulled. "
]
},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(\"[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]\")"
+]
+}
],
"metadata": {
@@ -11,7 +11,7 @@ Languages are selected from the CommonVoice dataset for training, validation, an

## Purpose

-Spoken audio comes in different languages and this sample uses a model to identify what that language is. The user will use an Intel® AI Analytics Toolkit container environment to train a model and perform inference leveraging Intel-optimized libraries for PyTorch*. There is also an option to quantize the trained model with Intel® Neural Compressor (INC) to speed up inference.
+Spoken audio comes in different languages and this sample uses a model to identify what that language is. The user will use an AI Tools container environment to train a model and perform inference leveraging Intel-optimized libraries for PyTorch*. There is also an option to quantize the trained model with Intel® Neural Compressor (INC) to speed up inference.

## Prerequisites

@@ -39,7 +39,7 @@ For both training and inference, you can run the sample and scripts in Jupyter N

### Create and Set Up Environment

-1. Create your conda environment by following the instructions on the Intel [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). You can follow these settings:
+1. Create your conda environment by following the instructions on the Intel [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/frameworks-tools-selector.html). You can follow these settings:

* Tool: AI Tools
* Preset or customize: Customize
@@ -200,6 +200,15 @@
"\n",
">**Note**: If the folder name containing the model is changed from `lang_id_commonvoice_model`, you will need to modify the `pretrained_path` in `train_ecapa.yaml`, and the `source_model_path` variable in both the `inference_commonVoice.py` and `inference_custom.py` files in the `speechbrain_inference` class. "
]
},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(\"[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]\")"
+]
+}
],
"metadata": {
@@ -65,7 +65,7 @@ class DeviceManager {
GetDevices();
return false;
} else {
-if (current_device_.is_host()) {
+if (current_device_.is_cpu()) {
std::cout << "Using Host device (single-threaded CPU)\n";
} else {
std::cout << "Using " << current_device_.get_info<sycl::info::device::name>() << "\n";
2 changes: 1 addition & 1 deletion AI-and-Analytics/End-to-end-Workloads/README.md
@@ -11,7 +11,7 @@ through machine learning, and provide interoperability for efficient model
development.

You can find more information at
-[Intel AI Tools](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+[Intel AI Tools](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/frameworks-tools.html).


# End-to-end Samples
@@ -392,7 +392,7 @@
"metadata": {},
"outputs": [],
"source": [
-"print(\"[CODE_SAMPLE_COMPLETED_SUCCESFULLY]\")"
+"print(\"[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]\")"
]
}
],
@@ -260,4 +260,4 @@ def compute_metrics(eval_pred):
# In[ ]:


-print("[CODE_SAMPLE_COMPLETED_SUCCESFULLY]")
+print("[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]")
@@ -8,7 +8,7 @@ The `Fine-tuning Text Classification Model with Intel® Neural Compressor (INC)`
| Time to complete | 10 minutes
| Category | Concepts and Functionality

-Intel® Neural Compressor (INC) simplifies the process of converting the FP32 model to INT8/BF16. At the same time, Intel® Neural Compressor (INC) tunes the quantization method to reduce the accuracy loss, which is a big blocker for low-precision inference as part of Intel® AI Analytics Toolkit (AI Kit).
+Intel® Neural Compressor (INC) simplifies the process of converting the FP32 model to INT8/BF16. At the same time, Intel® Neural Compressor (INC) tunes the quantization method to reduce the accuracy loss, which is a big blocker for low-precision inference as part of AI Tools.
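As intuition for the FP32-to-INT8 conversion that INC automates (and tunes to limit accuracy loss), here is a minimal, dependency-free sketch of affine quantization. The helper names and the toy weights are hypothetical and do not reflect INC's actual API:

```python
def quantize_int8(values):
    """Affine (asymmetric) FP32 -> INT8: map [min, max] onto [-128, 127]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0       # guard against a constant tensor
    zero_point = round(-lo / scale) - 128  # the INT8 value that represents 0.0
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate FP32 values from the INT8 codes."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.42, 0.0, 0.17, 0.91]         # toy FP32 weights
q, scale, zp = quantize_int8(weights)
restored = dequantize_int8(q, scale, zp)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max round-trip error: {err:.4f}")
```

Real INT8 inference carries a scale and zero point per tensor (or per channel); INC's value is in choosing which ops to quantize and how, so that accuracy stays within a target tolerance.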

## Purpose

@@ -26,9 +26,9 @@ This sample shows how to fine-tune text model for emotion classification on pre-

You will need to download and install the following toolkits, tools, and components to use the sample.

-- **Intel® AI Analytics Toolkit (AI Kit)**
+- **AI Tools**

-You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux**](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts.
+You can get the AI Tools from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the AI Tools for Linux**](https://www.intel.com/content/www/us/en/docs/oneapi-ai-analytics-toolkit/get-started-guide-linux/current/before-you-begin.html) for AI Tools installation information and post-installation steps and scripts.

- **Jupyter Notebook**

@@ -90,7 +90,7 @@ When working with the command-line interface (CLI), you should configure the one
```
2. Activate Conda environment without Root access (Optional).

-By default, the AI Kit is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it.
+By default, the AI Tools is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it.

You can choose to activate Conda environment without root access. To bypass root access to manage your Conda environment, clone and activate your desired Conda environment using the following commands similar to the following.

@@ -612,7 +612,7 @@
"metadata": {},
"outputs": [],
"source": [
-"print('[CODE_SAMPLE_COMPLETED_SUCCESFULLY]')"
+"print('[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]')"
]
}
],
@@ -295,6 +295,6 @@ def Eval_model(cp_file = 'checkpoint_model.pth', dataType = "fp32" , device="gpu
print(f'Accuracy drop with AMP BF16 is: {acc_fp32-acc_bf16}')
plt.savefig('./accuracy.png')

-print('[CODE_SAMPLE_COMPLETED_SUCCESFULLY]')
+print('[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]')
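The hunk above fixes a typo in a sample that reports the accuracy drop from AMP BF16 inference. As a rough, stdlib-only illustration of where that drop comes from, a hypothetical `to_bf16` helper can emulate bfloat16 by keeping only the top 16 bits of a float32 (full 8-bit exponent, mantissa cut to 7 bits):

```python
import struct

def to_bf16(x: float) -> float:
    """Emulate bfloat16: round x to float32, then zero the low 16 mantissa bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

pi = 3.141592653589793
print(to_bf16(pi))  # 3.140625 -- only about 3 significant decimal digits survive
```

This sketch truncates, whereas real BF16 hardware typically rounds to nearest; either way BF16 keeps FP32's dynamic range while giving up precision, which is the source of the small accuracy gap the sample measures.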


@@ -20,15 +20,15 @@ The Intel® Extension for PyTorch (IPEX) gives users the ability to perform PyTo
|:--- |:---
| OS | Ubuntu* 22.04 or newer
| Hardware | Intel® Data Center GPU Flex Series, Intel® Data Center GPU Max Series, and Intel® ARC™ A-Series GPUs(Experimental Support)
-| Software | Intel® oneAPI AI Analytics Toolkit 2023.1 or later
+| Software | AI Tools 2023.1 or later

### For Local Development Environments

You will need to download and install the following toolkits, tools, and components to use the sample.

-- **Intel® AI Analytics Toolkit (AI Kit) 2023.1 or later**
+- **AI Tools 2023.1 or later**

-You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux**](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts.
+You can get the AI Tools from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the AI Tools for Linux**](https://www.intel.com/content/www/us/en/docs/oneapi-ai-analytics-toolkit/get-started-guide-linux/current/before-you-begin.html) for AI Tools installation information and post-installation steps and scripts.

- **Jupyter Notebook**

@@ -88,7 +88,7 @@ When working with the command-line interface (CLI), you should configure the one
```
2. Activate Conda environment without Root access (Optional).

-By default, the AI Kit is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it.
+By default, the AI Tools is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it.

You can choose to activate Conda environment without root access. To bypass root access to manage your Conda environment, clone and activate your desired Conda environment and create a jupyter kernal using the following commands similar to the following.

@@ -110,7 +110,7 @@ When working with the command-line interface (CLI), you should configure the one
```
IntelPyTorch_GPU_InferenceOptimization_with_AMP.ipynb
```
-5. Change your Jupyter Notebook kernel to **PyTorch (AI kit)**.
+5. Change your Jupyter Notebook kernel to **PyTorch (AI Tools)**.
6. Run every cell in the Notebook in sequence.

#### Running on the Command Line (Optional)
@@ -385,7 +385,7 @@
"metadata": {},
"outputs": [],
"source": [
-"print('[CODE_SAMPLE_COMPLETED_SUCCESFULLY]')"
+"print('[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]')"
]
}
],
@@ -37,7 +37,7 @@ You will need to download and install the following toolkits, tools, and compone

Required AI Tools: Intel® Extension for PyTorch* (CPU)

-If you have not already, select and install these Tools via [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on AI Tools Offline Installer. It is recommended to select Offline Installer option in AI Tools Selector.
+If you have not already, select and install these Tools via [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/frameworks-tools-selector.html). AI and Analytics samples are validated on AI Tools Offline Installer. It is recommended to select Offline Installer option in AI Tools Selector.

>**Note**: If Docker option is chosen in AI Tools Selector, refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the docker and samples.

@@ -74,7 +74,7 @@ For Jupyter Notebook, refer to [Installing Jupyter](https://jupyter.org/install)
## Run the Sample
>**Note**: Before running the sample, make sure [Environment Setup](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/IntelPyTorch_TrainingOptimizations_AMX_BF16#environment-setup) is completed.

-Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
+Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/frameworks-tools-selector.html) to see relevant instructions:
* [AI Tools Offline Installer (Validated)](#ai-tools-offline-installer-validated)
* [Conda/PIP](#condapip)
* [Docker](#docker)
@@ -153,4 +153,4 @@ def main():

if __name__ == '__main__':
main()
-print('[CODE_SAMPLE_COMPLETED_SUCCESFULLY]')
+print('[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]')
@@ -110,4 +110,4 @@ def main():

if __name__ == '__main__':
main()
-print('[CODE_SAMPLE_COMPLETED_SUCCESFULLY]')
+print('[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]')
@@ -747,7 +747,7 @@
"metadata": {},
"outputs": [],
"source": [
-"print(\"[CODE_SAMPLE_COMPLETED_SUCCESFULLY]\")"
+"print(\"[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]\")"
]
}
],
@@ -557,5 +557,5 @@ def next_generation_TSP(chromosomes, fitnesses):
# In[ ]:


-print("[CODE_SAMPLE_COMPLETED_SUCCESFULLY]")
+print("[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]")

@@ -391,7 +391,7 @@
"metadata": {},
"outputs": [],
"source": [
-"print(\"[CODE_SAMPLE_COMPLETED_SUCCESFULLY]\")"
+"print(\"[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]\")"
]
}
],
@@ -278,5 +278,5 @@ def knn_dpnp(train, train_labels, test, k):
# In[ ]:


-print("[CODE_SAMPLE_COMPLETED_SUCCESFULLY]")
+print("[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]")

@@ -106,7 +106,7 @@ Numba accuracy: 0.7222222222222222

Numba_dpex accuracy 0.7222222222222222

-[CODE_SAMPLE_COMPLETED_SUCCESFULLY]
+[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]
```

## Related Samples