
Commit 84fab83

fix typo in Chinese translation
1 parent 45dc127 commit 84fab83

File tree

1 file changed: +1 -1 lines changed


translate_cache/transformers/__init__.zh.json (+1 -1)
@@ -38,7 +38,7 @@
 "<p>This is an implementation of the paper <a href=\"https://papers.labml.ai/paper/2105.14103\">An Attention Free Transformer</a>.</p>\n": "<p>\u8fd9\u662f\u8bba\u6587\u300a<a href=\"https://papers.labml.ai/paper/2105.14103\">\u65e0\u6ce8\u610f\u529b\u53d8\u538b\u5668\u300b\u7684</a>\u5b9e\u73b0\u3002</p>\n",
 "<p>This is an implementation of the paper <a href=\"https://papers.labml.ai/paper/2109.08668\">Primer: Searching for Efficient Transformers for Language Modeling</a>.</p>\n": "<p>\u8fd9\u662f\u8bba\u6587\u300a\u5165<a href=\"https://papers.labml.ai/paper/2109.08668\">\u95e8\uff1a\u4e3a\u8bed\u8a00\u5efa\u6a21\u5bfb\u627e\u9ad8\u6548\u7684\u53d8\u6362\u5668\u300b\u7684</a>\u5b9e\u73b0\u3002</p>\n",
 "<p>This is an implementation of the paper <a href=\"https://papers.labml.ai/paper/2110.13711\">Hierarchical Transformers Are More Efficient Language Models</a></p>\n": "<p>\u8fd9\u662f\u8bba\u6587\u300a<a href=\"https://papers.labml.ai/paper/2110.13711\">\u5206\u5c42\u53d8\u6362\u5668\u662f\u66f4\u6709\u6548\u7684\u8bed\u8a00\u6a21\u578b</a>\u300b\u7684\u5b9e\u73b0</p>\n",
-"<p>This module contains <a href=\"https://pytorch.org/\">PyTorch</a> implementations and explanations of original transformer from paper <a href=\"https://papers.labml.ai/paper/1706.03762\">Attention Is All You Need</a>, and derivatives and enhancements of it.</p>\n": "</a><p>\u672c\u6a21\u5757\u5305\u542b <a href=\"https://pytorch.org/\">PyTorch \u5b9e\u73b0\u548c\u8bba\u6587 Attronger Is <a href=\"https://papers.labml.ai/paper/1706.03762\">All You Need</a> \u4e2d\u5bf9\u539f\u521b\u53d8\u538b\u5668\u7684\u89e3\u91ca\uff0c\u4ee5\u53ca\u5b83\u7684\u884d\u751f\u54c1\u548c\u589e\u5f3a\u529f\u80fd\u3002</p>\n",
+"<p>This module contains <a href=\"https://pytorch.org/\">PyTorch</a> implementations and explanations of original transformer from paper <a href=\"https://papers.labml.ai/paper/1706.03762\">Attention Is All You Need</a>, and derivatives and enhancements of it.</p>\n": "</a><p>\u672c\u6a21\u5757\u5305\u542b <a href=\"https://pytorch.org/\">PyTorch \u5b9e\u73b0\u548c\u8bba\u6587 Attention Is <a href=\"https://papers.labml.ai/paper/1706.03762\">All You Need</a> \u4e2d\u5bf9\u539f\u521b\u53d8\u538b\u5668\u7684\u89e3\u91ca\uff0c\u4ee5\u53ca\u5b83\u7684\u884d\u751f\u54c1\u548c\u589e\u5f3a\u529f\u80fd\u3002</p>\n",
 "<ul><li><a href=\"mha.html\">Multi-head attention</a> </li>\n<li><a href=\"models.html\">Transformer Encoder and Decoder Models</a> </li>\n<li><a href=\"feed_forward.html\">Position-wise Feed Forward Network (FFN)</a> </li>\n<li><a href=\"positional_encoding.html\">Fixed positional encoding</a></li></ul>\n": "<ul><li><a href=\"mha.html\">\u591a\u5934\u5173\u6ce8</a></li>\n<li><a href=\"models.html\">\u53d8\u538b\u5668\u7f16\u7801\u5668\u548c\u89e3\u7801\u5668\u578b\u53f7</a></li>\n<li><a href=\"feed_forward.html\">\u4f4d\u7f6e\u524d\u9988\u7f51\u7edc (FFN)</a></li>\n<li><a href=\"positional_encoding.html\">\u56fa\u5b9a\u4f4d\u7f6e\u7f16\u7801</a></li></ul>\n",
 "This is a collection of PyTorch implementations/tutorials of transformers and related techniques.": "\u8fd9\u662f\u53d8\u538b\u5668\u548c\u76f8\u5173\u6280\u672f\u7684 PyTorch \u5b9e\u73b0/\u6559\u7a0b\u7684\u96c6\u5408\u3002",
 "Transformers": "\u53d8\u538b\u5668"
