
Commit aafd6d5

Add initial python api doc in mddoc (1/2) (#11389)
* Add initial python api mddoc
* Fix based on comments
1 parent a027121 commit aafd6d5

File tree: 1 file changed, +73 -0 lines changed

docs/mddocs/PythonAPI/transformers.md

Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@
# IPEX-LLM `transformers`-style API

## Hugging Face `transformers` AutoModel

You can apply IPEX-LLM optimizations to any Hugging Face Transformers model by using the standard AutoModel APIs.

> [!NOTE]
> Here we take `ipex_llm.transformers.AutoModelForCausalLM` as an example. The API documentation for the following classes, including `ipex_llm.transformers.AutoModel` / `AutoModelForSpeechSeq2Seq` / `AutoModelForSeq2SeqLM` / `AutoModelForSequenceClassification` / `AutoModelForMaskedLM` / `AutoModelForQuestionAnswering` / `AutoModelForNextSentencePrediction` / `AutoModelForMultipleChoice` / `AutoModelForTokenClassification`, is the same.

### _`class`_ **`ipex_llm.transformers.AutoModelForCausalLM`**

#### _`classmethod`_ **`from_pretrained`**_`(*args, **kwargs)`_

Load a model from a directory or the HF Hub. With the `load_in_4bit` or `load_in_low_bit` parameter, the weights of the model's linear layers can be loaded in a low-bit format, such as INT4, INT5 or INT8.

Several new arguments are added to extend Hugging Face's `from_pretrained` method, as follows (see the usage sketch after the parameter list):

- **Parameters**:
  - **load_in_4bit**: `boolean` value, `True` means loading the linear layers' weights to symmetric int 4 if the model is a regular fp16/bf16/fp32 model, and to asymmetric int 4 if the model is a GPTQ model. Defaults to `False`.

  - **load_in_low_bit**: `str` value, options are `'sym_int4'`, `'asym_int4'`, `'sym_int5'`, `'asym_int5'`, `'sym_int8'`, `'nf3'`, `'nf4'`, `'fp4'`, `'fp8'`, `'fp8_e4m3'`, `'fp8_e5m2'`, `'fp6'`, `'gguf_iq2_xxs'`, `'gguf_iq2_xs'`, `'gguf_iq1_s'`, `'gguf_q4k_m'`, `'gguf_q4k_s'`, `'fp16'`, `'bf16'` and `'fp6_k'`. `'sym_int4'` means symmetric int 4, `'asym_int4'` means asymmetric int 4, `'nf4'` means 4-bit NormalFloat, etc. The relevant low-bit optimizations will be applied to the model.

  - **optimize_model**: `boolean` value, whether to further optimize the low-bit LLM model. Defaults to `True`.

  - **modules_to_not_convert**: list of `str` value, modules (`nn.Module`) that are skipped when conducting model optimizations. Defaults to `None`.

  - **speculative**: `boolean` value, whether to use speculative decoding. Defaults to `False`.

  - **cpu_embedding**: whether to replace the `Embedding` layer; may need to be set to `True` when running IPEX-LLM on GPU. Defaults to `False`.

  - **lightweight_bmm**: whether to replace the `torch.bmm` ops; may need to be set to `True` when running IPEX-LLM on GPU on Windows. Defaults to `False`.

  - **imatrix**: `str` value, the filename of an importance matrix pretrained on specific datasets, for use with the improved quantization methods recently added to llama.cpp.

  - **model_hub**: `str` value, options are `'huggingface'` and `'modelscope'`, specifying the model hub. Defaults to `'huggingface'`.

  - **embedding_qtype**: `str` value, options are `'q2_k'` and `'q4_k'` for now. Defaults to `None`. The relevant low-bit optimizations will be applied to the `nn.Embedding` layer.

  - **mixed_precision**: `boolean` value, whether to use mixed-precision quantization. Defaults to `False`. If set to `True`, `sym_int8` will be used for the lm_head when `load_in_low_bit` is `'sym_int4'` or `'asym_int4'`.

  - **pipeline_parallel_stages**: `int` value, the number of GPUs allocated for pipeline parallelism. Defaults to `1`. Set `pipeline_parallel_stages > 1` to run pipeline-parallel inference on multiple GPUs.

- **Returns**: A model instance
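Below is a minimal usage sketch of `from_pretrained` with low-bit loading. The model id, prompt and generation settings are illustrative placeholders, not part of this API reference:

```python
import torch
from transformers import AutoTokenizer

from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

# Quantize the linear layers' weights to symmetric int 4 while loading;
# equivalent to passing load_in_low_bit="sym_int4".
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    optimize_model=True,   # apply further IPEX-LLM optimizations (default)
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

To pick a specific format instead, replace `load_in_4bit=True` with, for example, `load_in_low_bit='nf4'`.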
#### _`classmethod`_ **`from_gguf`**_`(fpath, optimize_model=True, cpu_embedding=False, low_bit="sym_int4")`_
Load a GGUF model and tokenizer and convert them to an IPEX-LLM model and a Hugging Face tokenizer (see the sketch after the parameter list).

- **Parameters**:

  - **fpath**: Path to the GGUF model file

  - **optimize_model**: Whether to further optimize the LLM model. Defaults to `True`

  - **cpu_embedding**: Whether to replace the `Embedding` layer; may need to be set to `True` when running IPEX-LLM on GPU. Defaults to `False`

- **Returns**: An optimized IPEX-LLM model and a Hugging Face tokenizer
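A minimal sketch of `from_gguf`, using the documented defaults; the GGUF file path below is a placeholder:

```python
from ipex_llm.transformers import AutoModelForCausalLM

# Convert a GGUF checkpoint into an optimized IPEX-LLM model plus a
# Hugging Face tokenizer.
model, tokenizer = AutoModelForCausalLM.from_gguf(
    "llama-2-7b-chat.Q4_0.gguf",  # placeholder path to a local GGUF file
    optimize_model=True,
    cpu_embedding=False,
    low_bit="sym_int4",
)
```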
#### _`classmethod`_ **`load_convert`**_`(q_k, optimize_model, *args, **kwargs)`_

#### _`classmethod`_ **`load_low_bit`**_`(pretrained_model_name_or_path, *model_args, **kwargs)`_

Load a low-bit optimized model (including INT4, INT5 and INT8) from a saved checkpoint (a usage sketch follows the parameter list).

- **Parameters**:

  - **pretrained_model_name_or_path**: `str` value, path to load the optimized model checkpoint.

  - **optimize_model**: `boolean` value, whether to further optimize the low-bit LLM model. Defaults to `True`.

- **Returns**: A model instance
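A minimal sketch of the save-then-reload workflow. The `save_low_bit` helper is assumed from the wider IPEX-LLM API and is not documented on this page; the model id and directory are placeholders:

```python
from ipex_llm.transformers import AutoModelForCausalLM

saved_dir = "./llama-2-7b-sym-int4"  # placeholder checkpoint directory

# One-off: quantize on load, then persist the low-bit weights
# (save_low_bit is an assumed helper, not covered by this page).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # placeholder model id
    load_in_low_bit="sym_int4",
)
model.save_low_bit(saved_dir)

# Later runs: reload the already-quantized checkpoint directly,
# skipping the original quantization step.
model = AutoModelForCausalLM.load_low_bit(saved_dir, optimize_model=True)
```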
