
Commit 267bf65

🌐 [i18n-KO] Translated docs to Korean (added 7 docs and etc) (huggingface#8804)
* remove unused docs
* add ko-18n docs
* docs typo, edit etc
* reorder list, add `in translation` in toctree
* fix minor translation
* fix docs minor tone, etc
1 parent 1a8b3c2 commit 267bf65

21 files changed, +2170 -1024 lines

Diff for: docs/source/en/using-diffusers/shap-e.md (+1 -1)

@@ -52,7 +52,7 @@ images = pipe(
 ).images
 ```
 
-Now use the [`~utils.export_to_gif`] function to turn the list of image frames into a gif of the 3D object.
+이제 [`~utils.export_to_gif`] 함수를 사용해 이미지 프레임 리스트를 3D 오브젝트의 gif로 변환합니다.
 
 ```py
 from diffusers.utils import export_to_gif
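The line changed above documents the [`~utils.export_to_gif`] helper. For context, here is a minimal sketch of what such a helper does, written with Pillow only; the function name `export_frames_to_gif` and the synthetic frames are made up for illustration and are not the diffusers API itself:

```python
from PIL import Image

def export_frames_to_gif(frames, path, fps=10):
    # Write the first frame and append the rest as an animated gif,
    # a simplified version of what diffusers.utils.export_to_gif does.
    frames[0].save(
        path,
        save_all=True,
        append_images=frames[1:],
        duration=1000 // fps,
        loop=0,
    )
    return path

# Synthetic stand-in for the list of image frames a pipeline would return
frames = [Image.new("RGB", (64, 64), (i * 30, 0, 0)) for i in range(8)]
export_frames_to_gif(frames, "object.gif")
```

In the actual docs, the frames come from the Shap-E pipeline output and are passed straight to `export_to_gif`.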

Diff for: docs/source/en/using-diffusers/svd.md (+1)

@@ -21,6 +21,7 @@ This guide will show you how to use SVD to generate short videos from images.
 Before you begin, make sure you have the following libraries installed:
 
 ```py
+# Colab에서 필요한 라이브러리를 설치하기 위해 주석을 제외하세요
 !pip install -q -U diffusers transformers accelerate
 ```
 
Diff for: docs/source/ko/_toctree.yml (+150 -88)

@@ -1,121 +1,183 @@
 - sections:
   - local: index
-    title: "🧨 Diffusers"
+    title: 🧨 Diffusers
   - local: quicktour
     title: "훑어보기"
   - local: stable_diffusion
     title: Stable Diffusion
   - local: installation
-    title: "설치"
-  title: "시작하기"
+    title: 설치
+  title: 시작하기
 - sections:
   - local: tutorials/tutorial_overview
     title: 개요
   - local: using-diffusers/write_own_pipeline
     title: 모델과 스케줄러 이해하기
-  - local: in_translation
-    title: AutoPipeline
+  - local: in_translation # tutorials/autopipeline
+    title: (번역중) AutoPipeline
   - local: tutorials/basic_training
     title: Diffusion 모델 학습하기
-  title: Tutorials
+  - local: in_translation # tutorials/using_peft_for_inference
+    title: (번역중) 추론을 위한 LoRAs 불러오기
+  - local: in_translation # tutorials/fast_diffusion
+    title: (번역중) Text-to-image diffusion 모델 추론 가속화하기
+  - local: in_translation # tutorials/inference_with_big_models
+    title: (번역중) 큰 모델로 작업하기
+  title: 튜토리얼
 - sections:
-  - sections:
-    - local: using-diffusers/loading_overview
-      title: 개요
-    - local: using-diffusers/loading
-      title: 파이프라인, 모델, 스케줄러 불러오기
-    - local: using-diffusers/schedulers
-      title: 다른 스케줄러들을 가져오고 비교하기
-    - local: using-diffusers/custom_pipeline_overview
-      title: 커뮤니티 파이프라인 불러오기
-    - local: using-diffusers/using_safetensors
-      title: 세이프텐서 불러오기
-    - local: using-diffusers/other-formats
-      title: 다른 형식의 Stable Diffusion 불러오기
-    - local: in_translation
-      title: Hub에 파일 push하기
-    title: 불러오기 & 허브
-  - sections:
-    - local: using-diffusers/pipeline_overview
-      title: 개요
-    - local: using-diffusers/unconditional_image_generation
-      title: Unconditional 이미지 생성
-    - local: using-diffusers/conditional_image_generation
-      title: Text-to-image 생성
-    - local: using-diffusers/img2img
-      title: Text-guided image-to-image
-    - local: using-diffusers/inpaint
-      title: Text-guided 이미지 인페인팅
-    - local: using-diffusers/depth2img
-      title: Text-guided depth-to-image
-    - local: using-diffusers/textual_inversion_inference
-      title: Textual inversion
-    - local: training/distributed_inference
-      title: 여러 GPU를 사용한 분산 추론
-    - local: in_translation
-      title: Distilled Stable Diffusion 추론
-    - local: using-diffusers/reusing_seeds
-      title: Deterministic 생성으로 이미지 퀄리티 높이기
-    - local: using-diffusers/control_brightness
-      title: 이미지 밝기 조정하기
-    - local: using-diffusers/reproducibility
-      title: 재현 가능한 파이프라인 생성하기
-    - local: using-diffusers/custom_pipeline_examples
-      title: 커뮤니티 파이프라인들
-    - local: using-diffusers/contribute_pipeline
-      title: 커뮤티니 파이프라인에 기여하는 방법
-    - local: using-diffusers/stable_diffusion_jax_how_to
-      title: JAX/Flax에서의 Stable Diffusion
-    - local: using-diffusers/weighted_prompts
-      title: Weighting Prompts
-    title: 추론을 위한 파이프라인
-  - sections:
-    - local: training/overview
-      title: 개요
-    - local: training/create_dataset
-      title: 학습을 위한 데이터셋 생성하기
-    - local: training/adapt_a_model
-      title: 새로운 태스크에 모델 적용하기
+  - local: using-diffusers/loading
+    title: 파이프라인 불러오기
+  - local: using-diffusers/custom_pipeline_overview
+    title: 커뮤니티 파이프라인과 컴포넌트 불러오기
+  - local: using-diffusers/schedulers
+    title: 스케줄러와 모델 불러오기
+  - local: using-diffusers/other-formats
+    title: 모델 파일과 레이아웃
+  - local: using-diffusers/loading_adapters
+    title: 어댑터 불러오기
+  - local: using-diffusers/push_to_hub
+    title: 파일들을 Hub로 푸시하기
+  title: 파이프라인과 어댑터 불러오기
+- sections:
+  - local: using-diffusers/unconditional_image_generation
+    title: Unconditional 이미지 생성
+  - local: using-diffusers/conditional_image_generation
+    title: Text-to-image
+  - local: using-diffusers/img2img
+    title: Image-to-image
+  - local: using-diffusers/inpaint
+    title: 인페인팅
+  - local: in_translation # using-diffusers/text-img2vid
+    title: (번역중) Text 또는 image-to-video
+  - local: using-diffusers/depth2img
+    title: Depth-to-image
+  title: 생성 태스크
+- sections:
+  - local: in_translation # using-diffusers/overview_techniques
+    title: (번역중) 개요
+  - local: training/distributed_inference
+    title: 여러 GPU를 사용한 분산 추론
+  - local: in_translation # using-diffusers/merge_loras
+    title: (번역중) LoRA 병합
+  - local: in_translation # using-diffusers/scheduler_features
+    title: (번역중) 스케줄러 기능
+  - local: in_translation # using-diffusers/callback
+    title: (번역중) 파이프라인 콜백
+  - local: in_translation # using-diffusers/reusing_seeds
+    title: (번역중) 재현 가능한 파이프라인
+  - local: in_translation # using-diffusers/image_quality
+    title: (번역중) 이미지 퀄리티 조절하기
+  - local: using-diffusers/weighted_prompts
+    title: 프롬프트 기술
+  title: 추론 테크닉
+- sections:
+  - local: in_translation # advanced_inference/outpaint
+    title: (번역중) Outpainting
+  title: 추론 심화
+- sections:
+  - local: in_translation # using-diffusers/sdxl
+    title: (번역중) Stable Diffusion XL
+  - local: using-diffusers/sdxl_turbo
+    title: SDXL Turbo
+  - local: using-diffusers/kandinsky
+    title: Kandinsky
+  - local: in_translation # using-diffusers/ip_adapter
+    title: (번역중) IP-Adapter
+  - local: in_translation # using-diffusers/pag
+    title: (번역중) PAG
+  - local: in_translation # using-diffusers/controlnet
+    title: (번역중) ControlNet
+  - local: in_translation # using-diffusers/t2i_adapter
+    title: (번역중) T2I-Adapter
+  - local: in_translation # using-diffusers/inference_with_lcm
+    title: (번역중) Latent Consistency Model
+  - local: using-diffusers/textual_inversion_inference
+    title: Textual inversion
+  - local: using-diffusers/shap-e
+    title: Shap-E
+  - local: using-diffusers/diffedit
+    title: DiffEdit
+  - local: in_translation # using-diffusers/inference_with_tcd_lora
+    title: (번역중) Trajectory Consistency Distillation-LoRA
+  - local: using-diffusers/svd
+    title: Stable Video Diffusion
+  - local: in_translation # using-diffusers/marigold_usage
+    title: (번역중) Marigold 컴퓨터 비전
+  title: 특정 파이프라인 예시
+- sections:
+  - local: training/overview
+    title: 개요
+  - local: training/create_dataset
+    title: 학습을 위한 데이터셋 생성하기
+  - local: training/adapt_a_model
+    title: 새로운 태스크에 모델 적용하기
+  - isExpanded: false
+    sections:
     - local: training/unconditional_training
       title: Unconditional 이미지 생성
-    - local: training/text_inversion
-      title: Textual Inversion
-    - local: training/dreambooth
-      title: DreamBooth
     - local: training/text2image
       title: Text-to-image
-    - local: training/lora
-      title: Low-Rank Adaptation of Large Language Models (LoRA)
+    - local: in_translation # training/sdxl
+      title: (번역중) Stable Diffusion XL
+    - local: in_translation # training/kandinsky
+      title: (번역중) Kandinsky 2.2
+    - local: in_translation # training/wuerstchen
+      title: (번역중) Wuerstchen
     - local: training/controlnet
       title: ControlNet
+    - local: in_translation # training/t2i_adapters
+      title: (번역중) T2I-Adapters
     - local: training/instructpix2pix
-      title: InstructPix2Pix 학습
+      title: InstructPix2Pix
+    title: 모델
+  - isExpanded: false
+    sections:
+    - local: training/text_inversion
+      title: Textual Inversion
+    - local: training/dreambooth
+      title: DreamBooth
+    - local: training/lora
+      title: LoRA
     - local: training/custom_diffusion
       title: Custom Diffusion
-    title: Training
-  title: Diffusers 사용하기
+    - local: in_translation # training/lcm_distill
+      title: (번역중) Latent Consistency Distillation
+    - local: in_translation # training/ddpo
+      title: (번역중) DDPO 강화학습 훈련
+    title: 메서드
+  title: 학습
 - sections:
-  - local: optimization/opt_overview
-    title: 개요
   - local: optimization/fp16
-    title: 메모리와 속도
+    title: 추론 스피드업
+  - local: in_translation # optimization/memory
+    title: (번역중) 메모리 사용량 줄이기
   - local: optimization/torch2.0
-    title: Torch2.0 지원
+    title: PyTorch 2.0
   - local: optimization/xformers
     title: xFormers
-  - local: optimization/onnx
-    title: ONNX
-  - local: optimization/open_vino
-    title: OpenVINO
-  - local: optimization/coreml
-    title: Core ML
-  - local: optimization/mps
-    title: MPS
-  - local: optimization/habana
-    title: Habana Gaudi
   - local: optimization/tome
-    title: Token Merging
-  title: 최적화/특수 하드웨어
+    title: Token merging
+  - local: in_translation # optimization/deepcache
+    title: (번역중) DeepCache
+  - local: in_translation # optimization/tgate
+    title: (번역중) TGATE
+  - sections:
+    - local: using-diffusers/stable_diffusion_jax_how_to
+      title: JAX/Flax
+    - local: optimization/onnx
+      title: ONNX
+    - local: optimization/open_vino
+      title: OpenVINO
+    - local: optimization/coreml
+      title: Core ML
+    title: 최적화된 모델 형식
+  - sections:
+    - local: optimization/mps
+      title: Metal Performance Shaders (MPS)
+    - local: optimization/habana
+      title: Habana Gaudi
+    title: 최적화된 하드웨어
+  title: 추론 가속화와 메모리 줄이기
 - sections:
   - local: conceptual/philosophy
     title: 철학
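The `in_translation` entries added above are placeholders for pages still awaiting translation (the trailing `# …` comment records the intended target file). A small sketch of how such placeholders could be counted mechanically, assuming PyYAML is available; the `TOCTREE` string is an abridged excerpt, not the full file, and `count_pending` is a hypothetical helper, not part of the doc-builder:

```python
import yaml  # PyYAML

# Abridged excerpt of the toctree structure shown in the diff above
TOCTREE = """
- sections:
  - local: tutorials/tutorial_overview
    title: 개요
  - local: in_translation # tutorials/autopipeline
    title: (번역중) AutoPipeline
  - local: tutorials/basic_training
    title: Diffusion 모델 학습하기
  title: 튜토리얼
"""

def count_pending(entries):
    # Recursively count entries that still point at the in_translation stub
    pending = 0
    for entry in entries:
        if entry.get("local") == "in_translation":
            pending += 1
        pending += count_pending(entry.get("sections", []))
    return pending

print(count_pending(yaml.safe_load(TOCTREE)))  # prints 1
```

The recursion mirrors the toctree's own nesting: each section is a mapping whose optional `sections` key holds child entries of the same shape.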

Diff for: docs/source/ko/index.md (+1 -49)

@@ -46,52 +46,4 @@ specific language governing permissions and limitations under the License.
 <p class="text-gray-700">🤗 Diffusers 클래스 및 메서드의 작동 방식에 대한 기술 설명.</p>
 </a>
 </div>
-</div>
-
-## Supported pipelines
-
-| Pipeline | Paper/Repository | Tasks |
-|---|---|:---:|
-| [alt_diffusion](./api/pipelines/alt_diffusion) | [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation |
-| [audio_diffusion](./api/pipelines/audio_diffusion) | [Audio Diffusion](https://github.com/teticio/audio-diffusion.git) | Unconditional Audio Generation |
-| [controlnet](./api/pipelines/stable_diffusion/controlnet) | [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) | Image-to-Image Text-Guided Generation |
-| [cycle_diffusion](./api/pipelines/cycle_diffusion) | [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
-| [dance_diffusion](./api/pipelines/dance_diffusion) | [Dance Diffusion](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation |
-| [ddpm](./api/pipelines/ddpm) | [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
-| [ddim](./api/pipelines/ddim) | [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
-| [if](./if) | [**IF**](./api/pipelines/if) | Image Generation |
-| [if_img2img](./if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation |
-| [if_inpainting](./if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation |
-| [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation |
-| [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image |
-| [latent_diffusion_uncond](./api/pipelines/latent_diffusion_uncond) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
-| [paint_by_example](./api/pipelines/paint_by_example) | [Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting |
-| [pndm](./api/pipelines/pndm) | [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
-| [score_sde_ve](./api/pipelines/score_sde_ve) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
-| [score_sde_vp](./api/pipelines/score_sde_vp) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
-| [semantic_stable_diffusion](./api/pipelines/semantic_stable_diffusion) | [Semantic Guidance](https://arxiv.org/abs/2301.12247) | Text-Guided Generation |
-| [stable_diffusion_text2img](./api/pipelines/stable_diffusion/text2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation |
-| [stable_diffusion_img2img](./api/pipelines/stable_diffusion/img2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation |
-| [stable_diffusion_inpaint](./api/pipelines/stable_diffusion/inpaint) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting |
-| [stable_diffusion_panorama](./api/pipelines/stable_diffusion/panorama) | [MultiDiffusion](https://multidiffusion.github.io/) | Text-to-Panorama Generation |
-| [stable_diffusion_pix2pix](./api/pipelines/stable_diffusion/pix2pix) | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) | Text-Guided Image Editing|
-| [stable_diffusion_pix2pix_zero](./api/pipelines/stable_diffusion/pix2pix_zero) | [Zero-shot Image-to-Image Translation](https://pix2pixzero.github.io/) | Text-Guided Image Editing |
-| [stable_diffusion_attend_and_excite](./api/pipelines/stable_diffusion/attend_and_excite) | [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://arxiv.org/abs/2301.13826) | Text-to-Image Generation |
-| [stable_diffusion_self_attention_guidance](./api/pipelines/stable_diffusion/self_attention_guidance) | [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://arxiv.org/abs/2210.00939) | Text-to-Image Generation Unconditional Image Generation |
-| [stable_diffusion_image_variation](./stable_diffusion/image_variation) | [Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation |
-| [stable_diffusion_latent_upscale](./stable_diffusion/latent_upscale) | [Stable Diffusion Latent Upscaler](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image |
-| [stable_diffusion_model_editing](./api/pipelines/stable_diffusion/model_editing) | [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://time-diffusion.github.io/) | Text-to-Image Model Editing |
-| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
-| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
-| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Depth-Conditional Stable Diffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion) | Depth-to-Image Generation |
-| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
-| [stable_diffusion_safe](./api/pipelines/stable_diffusion_safe) | [Safe Stable Diffusion](https://arxiv.org/abs/2211.05105) | Text-Guided Generation |
-| [stable_unclip](./stable_unclip) | Stable unCLIP | Text-to-Image Generation |
-| [stable_unclip](./stable_unclip) | Stable unCLIP | Image-to-Image Text-Guided Generation |
-| [stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
-| [text_to_video_sd](./api/pipelines/text_to_video) | [Modelscope's Text-to-video-synthesis Model in Open Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) | Text-to-Video Generation |
-| [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125)(implementation by [kakaobrain](https://github.com/kakaobrain/karlo)) | Text-to-Image Generation |
-| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
-| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
-| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
-| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
+</div>
