docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/langchain_api.md (3 additions, 3 deletions)
````diff
@@ -31,7 +31,7 @@ You may also convert Hugging Face *Transformers* models into native INT4 format,
 ```eval_rst
 .. note::
 
-   * Currently only llama/bloom/gptneox/starcoder/chatglm model families are supported; for other models, you may use the Hugging Face ``transformers`` INT4 format as described `above <./langchain_api.html#using-hugging-face-transformers-int4-format>`_.
+   * Currently only llama/bloom/gptneox/starcoder model families are supported; for other models, you may use the Hugging Face ``transformers`` INT4 format as described `above <./langchain_api.html#using-hugging-face-transformers-int4-format>`_.
 
    * You may choose the corresponding API developed for specific native models to load the converted model.
 ```
````
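The note above says each supported native model family has its own loading API. As a minimal sketch of what that looks like for the llama family (the checkpoint path is a placeholder, and the wrapper names for the other families are inferred from the embeddings classes shown in the next hunk):

```python
from ipex_llm.langchain.llms import LlamaLLM

# Placeholder path: point this at your own natively converted INT4 checkpoint.
# For the other supported families, the analogous wrappers would be
# GptneoxLLM/BloomLLM/StarcoderLLM.
llm = LlamaLLM(model_path="/path/to/ggml-llama-7b-q4_0.bin")

print(llm("What is IPEX-LLM?"))
```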
````diff
@@ -41,9 +41,9 @@ from ipex_llm.langchain.llms import LlamaLLM
 from ipex_llm.langchain.embeddings import LlamaEmbeddings
 from langchain.chains.question_answering import load_qa_chain
 
-# switch to ChatGLMEmbeddings/GptneoxEmbeddings/BloomEmbeddings/StarcoderEmbeddings to load other models
+# switch to GptneoxEmbeddings/BloomEmbeddings/StarcoderEmbeddings to load other models
````
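Putting the imports from this hunk together, a question-answering chain over the native wrappers might look like the following sketch. The model paths are placeholders; `Document` and the `stuff` chain type are standard LangChain pieces, not IPEX-LLM specifics:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document

from ipex_llm.langchain.embeddings import LlamaEmbeddings
from ipex_llm.langchain.llms import LlamaLLM

# Both wrappers load a natively converted INT4 checkpoint (placeholder paths).
embeddings = LlamaEmbeddings(model_path="/path/to/ggml-llama-7b-q4_0.bin")
llm = LlamaLLM(model_path="/path/to/ggml-llama-7b-q4_0.bin")

# The embeddings wrapper would normally back a vector store for retrieval;
# here we just show a single query embedding.
vec = embeddings.embed_query("What does IPEX-LLM accelerate?")

# "stuff" simply concatenates the supplied documents into the prompt.
chain = load_qa_chain(llm, chain_type="stuff")
docs = [Document(page_content="IPEX-LLM accelerates LLM inference on Intel CPUs and GPUs.")]
print(chain.run(input_documents=docs, question="What does IPEX-LLM accelerate?"))
```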
docs/readthedocs/source/doc/PythonAPI/LLM/langchain.rst (1 addition, 13 deletions)
````diff
@@ -31,7 +31,7 @@ IPEX-LLM provides ``TransformersLLM`` and ``TransformersPipelineLLM``, which imp
 Native Model
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-For ``llama``/``chatglm``/``bloom``/``gptneox``/``starcoder`` model families, you could also use the following LLM wrappers with the native (cpp) implementation for maximum performance.
+For ``llama``/``bloom``/``gptneox``/``starcoder`` model families, you could also use the following LLM wrappers with the native (cpp) implementation for maximum performance.
 
 .. tabs::
 
@@ -47,18 +47,6 @@ For ``llama``/``chatglm``/``bloom``/``gptneox``/``starcoder`` model families, yo
````
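The hunk header above references `TransformersLLM` and `TransformersPipelineLLM`, the `transformers`-based wrappers that cover model families without a native (cpp) implementation. A rough sketch of that path, assuming the `from_model_id` constructor used in the Transformers INT4 examples elsewhere in these docs (the model id and kwargs are illustrative only):

```python
from ipex_llm.langchain.llms import TransformersLLM

# Illustrative model id and generation kwargs; any Hugging Face causal LM id
# can go here, since this path does not require a native (cpp) wrapper.
llm = TransformersLLM.from_model_id(
    model_id="lmsys/vicuna-7b-v1.5",
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True},
)

print(llm("What is IPEX-LLM?"))
```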