Commit ed70d27

Fix 'Block pattern could not be match. Pass block_name_to_quantize argument in quantize_model' while loading Qwen VL GPTQ model (#2295)

Updated BLOCK_PATTERNS

1 parent 824e368 commit ed70d27

File tree

1 file changed: +1 −0 lines changed

optimum/gptq/constants.py (1 addition, 0 deletions)

@@ -18,6 +18,7 @@
     "model.decoder.layers",
     "gpt_neox.layers",
     "model.layers",
+    "model.language_model.layers",
     # modules loaded by AutoModel vs AutoModelForCausalLM have different prefixes
     "h",
     "decoder.layers",
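For context, the patterns in this list are used to locate the stack of transformer blocks to quantize: if no pattern matches a module path in the model, the loader raises the error quoted in the commit title. Qwen VL models nest their decoder under `model.language_model.layers`, which the old list did not cover. The sketch below illustrates that lookup logic with a hypothetical `find_block_pattern` helper (the function name and matching details are assumptions, not the exact optimum internals):

```python
# Illustrative BLOCK_PATTERNS-style list, including the entry added
# by this commit. Items mirror the diff above; ordering is illustrative.
BLOCK_PATTERNS = [
    "model.decoder.layers",
    "gpt_neox.layers",
    "model.layers",
    "model.language_model.layers",  # added for Qwen VL (#2295)
    "h",
    "decoder.layers",
]

def find_block_pattern(module_names):
    """Return the first pattern that prefixes some module path.

    Hypothetical helper sketching the lookup; raises the error message
    from the commit title when nothing matches.
    """
    for pattern in BLOCK_PATTERNS:
        if any(name.startswith(pattern) for name in module_names):
            return pattern
    raise ValueError(
        "Block pattern could not be match. "
        "Pass `block_name_to_quantize` argument in `quantize_model`"
    )

# Qwen VL-style module paths: the decoder blocks live under
# model.language_model.layers, which the pre-commit list missed
# ("model.layers" does not prefix "model.language_model.layers").
qwen_vl_modules = [
    "model.visual.blocks.0.attn",
    "model.language_model.layers.0.self_attn.q_proj",
]
print(find_block_pattern(qwen_vl_modules))
# → model.language_model.layers
```

Without the new entry, the loop above falls through for Qwen VL checkpoints and raises the `ValueError`, which is exactly the failure this commit fixes; users previously had to work around it by passing `block_name_to_quantize` explicitly.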
