Commit d9ea61a

Update README.md (#2152)
* Update README.md
* Update requirements.txt
1 parent d217052 commit d9ea61a

File tree

2 files changed: +9 -12 lines

AI-and-Analytics/Getting-Started-Samples/Intel_Extension_For_TensorFlow_GettingStarted/README.md

+8 -12
@@ -1,36 +1,33 @@
 # Intel Extension for TensorFlow Getting Started Sample
-This code sample will guide users how to run a tensorflow inference workload on both GPU and CPU by using oneAPI AI Analytics Toolkit and also analyze the GPU and CPU usage via oneDNN verbose logs
+This code sample will guide users on how to run a TensorFlow inference workload on both GPU and CPU by using Intel® AI Tools, and also analyze GPU and CPU usage via oneDNN verbose logs

 ## Purpose
-- Guide users how to use different conda environments in oneAPI AI Analytics Toolkit to run TensorFlow workloads on both CPU and GPU
+- Guide users on how to use different conda environments in Intel® AI Tools to run TensorFlow workloads on both CPU and GPU
 - Guide users on how to validate GPU or CPU usage for TensorFlow workloads on Intel CPU or GPU


 ## Key implementation details
-1. leverage the [resnet50 inference sample] (https://github.com/intel/intel-extension-for-tensorflow/tree/main/examples/infer_resnet50) from intel-extension-for-tensorflow
+1. leverage the [resnet50 inference sample](https://github.com/intel/intel-extension-for-tensorflow/tree/main/examples/infer_resnet50) from intel-extension-for-tensorflow
 2. use the resnet50v1.5 pretrained model from TensorFlow Hub
 3. inference with images from the intel caffe github
 4. guide users on how to use different conda environments to run on Intel CPU and GPU
 5. analyze oneDNN verbose logs to validate GPU or CPU usage

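Key detail 5 checks oneDNN verbose logs (enabled by setting `ONEDNN_VERBOSE=1`) to see which engine executed each primitive. A minimal sketch of that check, assuming the comma-separated `onednn_verbose` line format in which one field names the engine kind (the exact field layout varies between oneDNN versions):

```python
from collections import Counter

def count_onednn_engines(log_lines):
    """Tally primitive executions per engine kind (cpu/gpu) in oneDNN
    verbose output; lines not produced by oneDNN are ignored."""
    counts = Counter()
    for line in log_lines:
        if not line.startswith("onednn_verbose"):
            continue
        # The verbose line is comma-separated; scan for the engine field.
        for field in line.strip().split(","):
            if field in ("cpu", "gpu"):
                counts[field] += 1
                break
    return counts

sample = [
    "onednn_verbose,exec,gpu,convolution,jit:ir,forward_training,...",
    "onednn_verbose,exec,gpu,reorder,jit:ir,undef,...",
    "onednn_verbose,exec,cpu,reorder,simple:any,undef,...",
    "unrelated log line",
]
print(count_onednn_engines(sample))  # Counter({'gpu': 2, 'cpu': 1})
```

A workload that genuinely ran on the GPU should show most execution entries on the `gpu` engine; a log dominated by `cpu` entries suggests the GPU device was not picked up.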
-## Running Samples on the Intel® DevCloud
-If you are running this sample on the DevCloud, skip the Pre-requirements and go to the [Activate Conda Environment](#activate-conda) section.
-
 ## Pre-requirements (Local or Remote Host Installation)

-TensorFlow* is ready for use once you finish the Intel® AI Analytics Toolkit (AI Kit) installation and have run the post installation script.
+TensorFlow* is ready for use once you finish the Intel® AI Tools installation and have run the post-installation script.

-You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation and the Toolkit [Intel® AI Analytics Toolkit Get Started Guide for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) for post-installation steps and scripts.
+You can refer to the oneAPI [product page](https://software.intel.com/en-us/oneapi) for tools installation and the [Get Started with the Intel® AI Tools for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) guide for post-installation steps and scripts.

 ## Environment Setup
 This sample requires two additional pip packages: tensorflow_hub and ipykernel.
 Therefore, users need to clone the tensorflow conda environment into their home folder and install those additional packages accordingly.
 Please follow the steps below to set up the GPU environment.

-1. Source oneAPI environment variables: ```$source /opt/intel/oneapi/setvars.sh```
+1. Source oneAPI environment variables: ```$source $HOME/intel/oneapi/intelpython/bin/activate```
 2. Create conda env: ```$conda create --name user-tensorflow-gpu --clone tensorflow-gpu```
 3. Activate the created conda env: ```$source activate user-tensorflow-gpu```
-4. Install the required packages: ```(user-tensorflow-gpu) $pip install tensorflow_hub ipykernel```
+4. Install the required packages: ```(user-tensorflow-gpu) $pip install -r requirements.txt```
 5. Deactivate conda env: ```(user-tensorflow-gpu) $conda deactivate```
 6. Register the kernel to Jupyter NB: ```$~/.conda/envs/user-tensorflow-gpu/bin/python -m ipykernel install --user --name=user-tensorflow-gpu```

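The six steps above can be collected into one script. This is a sketch, not part of the sample: the activate path follows step 1 and assumes the default AI Tools install under `$HOME/intel/oneapi`, and by default it only prints each command (set `DRY_RUN=0` to execute them).

```shell
#!/bin/sh
# Sketch of setup steps 1-6 (GPU environment). Prints each command;
# set DRY_RUN=0 to actually run them. Note that steps 1 and 3 modify
# the shell and must be sourced in an interactive session when run
# for real.
DRY_RUN="${DRY_RUN:-1}"
ENV_NAME=user-tensorflow-gpu

run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

run . "$HOME/intel/oneapi/intelpython/bin/activate"         # 1. source oneAPI env
run conda create --name "$ENV_NAME" --clone tensorflow-gpu  # 2. clone the tensorflow-gpu env
run source activate "$ENV_NAME"                             # 3. activate the clone
run pip install -r requirements.txt                         # 4. install extra packages
run conda deactivate                                        # 5. deactivate
run "$HOME/.conda/envs/$ENV_NAME/bin/python" -m ipykernel \
    install --user --name="$ENV_NAME"                       # 6. register Jupyter kernel
```

For the CPU environment the same sequence applies with `tensorflow` in place of `tensorflow-gpu` in the clone and environment names.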
@@ -40,9 +37,8 @@ In the end, you will have two new conda environments which are user-tensorflow-g
 ## How to Build and Run

 You can run the Jupyter notebook with the sample code on your local
-server or download the sample code from the notebook as a Python file and run it locally or on the Intel DevCloud.
+server or download the sample code from the notebook as a Python file and run it locally.

-**Note:** You can run this sample on the Intel DevCloud using the Dask and OmniSci engine backends for Modin. To learn how to set the engine backend for Intel Distribution of Modin, visit the [Intel® Distribution of Modin Getting Started Guide](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-distribution-of-modin-getting-started-guide.html). The Ray backend cannot be used on Intel DevCloud at this time. Thank you for your patience.

 ### Run the Sample in Jupyter Notebook<a name="run-as-jupyter-notebook"></a>

requirements.txt

+1 -0
@@ -1,2 +1,3 @@
 tensorflow_hub
 ipykernel
+matplotlib
