Commit 4748a91

📝 fix typo ktransformer->ktransformers
1 parent: 19f058e

File tree

5 files changed: +7 −7 lines


README.md (+2 −2)

@@ -163,9 +163,9 @@ If you are interested in our design principles and the implementation of the inj
 
 <h2 id="ack">Acknowledgment and Contributors</h2>
 
-The development of KTransformer is based on the flexible and versatile framework provided by Transformers. We also benefit from advanced kernels such as GGUF/GGML, Llamafile, Marlin, sglang and flashinfer. We are planning to contribute back to the community by upstreaming our modifications.
+The development of KTransformers is based on the flexible and versatile framework provided by Transformers. We also benefit from advanced kernels such as GGUF/GGML, Llamafile, Marlin, sglang and flashinfer. We are planning to contribute back to the community by upstreaming our modifications.
 
-KTransformer is actively maintained and developed by contributors from the <a href="https://madsys.cs.tsinghua.edu.cn/">MADSys group</a> at Tsinghua University and members from <a href="http://approaching.ai/">Approaching.AI</a>. We welcome new contributors to join us in making KTransformer faster and easier to use.
+KTransformers is actively maintained and developed by contributors from the <a href="https://madsys.cs.tsinghua.edu.cn/">MADSys group</a> at Tsinghua University and members from <a href="http://approaching.ai/">Approaching.AI</a>. We welcome new contributors to join us in making KTransformers faster and easier to use.
 
 
 <h2 id="ack">Discussion</h2>

README_ZH.md (+2 −2)

@@ -152,9 +152,9 @@ Each rule in the YAML file has two parts: `match` and `replace`. `match`
 
 <h2 id="ack">Acknowledgment and Contributors</h2>
 
-The development of KTransformer is based on the flexible and versatile framework provided by Transformers. We also benefit from advanced kernels such as GGUF/GGML, Llamafile, Marlin, sglang, and flashinfer. We plan to give back to the community by upstreaming our modifications.
+The development of KTransformers is based on the flexible and versatile framework provided by Transformers. We also benefit from advanced kernels such as GGUF/GGML, Llamafile, Marlin, sglang, and flashinfer. We plan to give back to the community by upstreaming our modifications.
 
-KTransformer is actively maintained and developed by members of the <a href="https://madsys.cs.tsinghua.edu.cn/">MADSys group</a> at Tsinghua University and members from <a href="http://approaching.ai/">Approaching.AI</a>. We welcome new contributors to join us in making KTransformer faster and easier to use.
+KTransformers is actively maintained and developed by members of the <a href="https://madsys.cs.tsinghua.edu.cn/">MADSys group</a> at Tsinghua University and members from <a href="http://approaching.ai/">Approaching.AI</a>. We welcome new contributors to join us in making KTransformers faster and easier to use.
 
 
 <h2 id="ack">Discussion</h2>

doc/SUMMARY.md (+1 −1)

@@ -1,4 +1,4 @@
-# Ktransformer
+# Ktransformers
 
 [Introduction](./README.md)
 # Install

doc/en/Docker.md (+1 −1)

@@ -9,7 +9,7 @@ There is a Docker image available for our project, you can pull the docker image
 ```
 docker pull approachingai/ktransformers:0.2.1
 ```
-**Notice**: In this image, we compile the ktransformers in AVX512 instuction CPUs, if your cpu not support AVX512, it is suggested to recompile and install ktransformer in the /workspace/ktransformers directory within the container.
+**Notice**: In this image, we compile the ktransformers in AVX512 instuction CPUs, if your cpu not support AVX512, it is suggested to recompile and install ktransformers in the /workspace/ktransformers directory within the container.
 
 ## Building docker image locally
 - Download Dockerfile in [there](../../Dockerfile)

doc/en/FAQ.md (+1 −1)

@@ -118,7 +118,7 @@ From: https://github.com/kvcache-ai/ktransformers/issues/374
 
 1. First, download the latest source code using git.
 2. Then, modify the DeepSeek-V3-Chat-multi-gpu-4.yaml in the source code and all related yaml files, replacing all instances of KLinearMarlin with KLinearTorch.
-3. Next, you need to compile from the ktransformer source code until it successfully compiles on your local machine.
+3. Next, you need to compile from the ktransformers source code until it successfully compiles on your local machine.
 4. Then, install flash-attn. It won't be used, but not installing it will cause an error.
 5. Then, modify local_chat.py, replacing all instances of flash_attention_2 with eager.
 6. Then, run local_chat.py. Be sure to follow the official tutorial's commands and adjust according to your local machine's parameters.
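The replacements in steps 2 and 5 of the FAQ hunk above are plain string substitutions, so they can be scripted. A minimal sketch with GNU sed; it creates throwaway demo files so the commands run standalone, whereas in practice you would point sed at the real files in your ktransformers checkout (paths vary by version):

```shell
# Demo files standing in for the real ones in the ktransformers source tree.
printf 'linear:\n  class: KLinearMarlin\n' > DeepSeek-V3-Chat-multi-gpu-4.yaml
printf 'attn_implementation = "flash_attention_2"\n' > local_chat.py

# Step 2: replace KLinearMarlin with KLinearTorch in the yaml rule file(s).
sed -i 's/KLinearMarlin/KLinearTorch/g' DeepSeek-V3-Chat-multi-gpu-4.yaml
# Step 5: replace flash_attention_2 with eager in local_chat.py.
sed -i 's/flash_attention_2/eager/g' local_chat.py
```

Note that `sed -i` with no backup suffix is GNU sed syntax; on BSD/macOS sed, use `sed -i ''` instead.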
