torch_compile_backend_ipex.rst translation #901
Merged: hyoyoung merged 7 commits into PyTorchKorea:master from jh941213:translator_torch_compile_backend_ipex on Oct 15, 2024.
Changes from 5 commits
Commits:
- 984c173 Update torch_compile_backend_ipex.rst (jh941213)
- 35139ca Update torch_compile_backend_ipex.rst (jh941213)
- bf0ad21 Update torch_compile_backend_ipex.rst (jh941213)
- f295259 Update torch_compile_backend_ipex.rst (jh941213)
- b991960 Update torch_compile_backend_ipex.rst (jh941213)
- 5080b0b Update torch_compile_backend_ipex.rst (jh941213)
- d5f6f39 Update torch_compile_backend_ipex.rst (jh941213)
@@ -1,19 +1,17 @@
Intel® Extension for PyTorch* Backend
Intel® Extension for PyTorch* 백엔드
=====================================

To work better with `torch.compile`, Intel® Extension for PyTorch* implements a backend ``ipex``.
It targets to improve hardware resource usage efficiency on Intel platforms for better performance.
The `ipex` backend is implemented with further customizations designed in Intel® Extension for
PyTorch* for the model compilation.
`torch.compile` 과 더 잘 작동하도록, Intel® Extension for PyTorch는 ``ipex`` 라는 백엔드를 구현했습니다.
이 백엔드는 Intel 플랫폼에서 하드웨어 자원 사용 효율성을 개선하여 성능을 향상시키는 것을 목표로 합니다.
모델 컴파일을 위한 Intel® Extension for PyTorch에 설계된 추가 커스터마이징을 통해, `ipex` 백엔드가 구현되었습니다.

Usage Example
사용 예시
~~~~~~~~~~~~~

Train FP32
FP32 학습
----------

Check the example below to learn how to utilize the `ipex` backend with `torch.compile` for model training with FP32 data type.

아래 예제를 통해, 여러분은 FP32 데이터 타입으로 모델을 학습할 때 `torch.compile` 과 함께 `ipex` 백엔드를 사용하는 방법을 배울 수 있습니다.

.. code:: python

    import torch

@@ -44,10 +42,10 @@ Check the example below to learn how to utilize the `ipex` backend with `torch.c
    optimizer = torch.optim.SGD(model.parameters(), lr = LR, momentum=0.9)
    model.train()

    #################### code changes ####################
    #################### 코드 변경 부분 ####################
    import intel_extension_for_pytorch as ipex

    # Invoke the following API optionally, to apply frontend optimizations
    # 선택적으로 다음 API를 호출하여, 프론트엔드 최적화를 적용합니다.
    model, optimizer = ipex.optimize(model, optimizer=optimizer)

    compile_model = torch.compile(model, backend="ipex")
@@ -61,10 +59,10 @@ Check the example below to learn how to utilize the `ipex` backend with `torch.c
    optimizer.step()


Train BF16
BF16 학습
----------

Check the example below to learn how to utilize the `ipex` backend with `torch.compile` for model training with BFloat16 data type.
아래 예시를 통해 BFloat16 데이터 타입으로 모델 학습을 위해 `torch.compile` 와 함께 `ipex` 백엔드를 활용하는 방법을 알아보세요.

.. code:: python

@@ -96,10 +94,10 @@ Check the example below to learn how to utilize the `ipex` backend with `torch.c
    optimizer = torch.optim.SGD(model.parameters(), lr = LR, momentum=0.9)
    model.train()

    #################### code changes ####################
    #################### 코드 변경 부분 ####################
    import intel_extension_for_pytorch as ipex

    # Invoke the following API optionally, to apply frontend optimizations
    # 선택적으로 다음 API를 호출하여, 프론트엔드 최적화를 적용합니다.
    model, optimizer = ipex.optimize(model, dtype=torch.bfloat16, optimizer=optimizer)

    compile_model = torch.compile(model, backend="ipex")

@@ -114,10 +112,10 @@ Check the example below to learn how to utilize the `ipex` backend with `torch.c
    optimizer.step()


Inference FP32
FP32 추론
--------------

Check the example below to learn how to utilize the `ipex` backend with `torch.compile` for model inference with FP32 data type.
아래 예시를 통해 `ipex` 백엔드를 `torch.compile` 와 함께 활용하여 FP32 데이터 타입으로 모델을 추론하는 방법을 알아보세요.

.. code:: python

@@ -128,10 +126,10 @@ Check the example below to learn how to utilize the `ipex` backend with `torch.c
    model.eval()
    data = torch.rand(1, 3, 224, 224)

    #################### code changes ####################
    #################### 코드 변경 부분 ####################
    import intel_extension_for_pytorch as ipex

    # Invoke the following API optionally, to apply frontend optimizations

Review comment: There is a blank space at the start of the line.
Review comment: This blank line appears to have a leading space; that space needs to be removed.

    # 선택적으로 다음 API를 호출하여, 프론트엔드 최적화를 적용합니다.
    model = ipex.optimize(model, weights_prepack=False)

    compile_model = torch.compile(model, backend="ipex")
@@ -141,10 +139,10 @@ Check the example below to learn how to utilize the `ipex` backend with `torch.c
    compile_model(data)


Inference BF16
BF16 추론
--------------

Check the example below to learn how to utilize the `ipex` backend with `torch.compile` for model inference with BFloat16 data type.
아래 예시를 통해 `ipex` 백엔드를 `torch.compile`와 함께 활용하여 BFloat16 데이터 타입으로 모델을 추론하는 방법을 알아보세요.

.. code:: python

@@ -155,10 +153,10 @@ Check the example below to learn how to utilize the `ipex` backend with `torch.c
    model.eval()
    data = torch.rand(1, 3, 224, 224)

    #################### code changes ####################
    #################### 코드 변경 부분 ####################
    import intel_extension_for_pytorch as ipex

    # Invoke the following API optionally, to apply frontend optimizations
    # 선택적으로 다음 API를 호출하여, 프론트엔드 최적화를 적용합니다.
    model = ipex.optimize(model, dtype=torch.bfloat16, weights_prepack=False)

    compile_model = torch.compile(model, backend="ipex")
Review comment: Please add translator information.