
When would you show how to fine-tune the Qwen2.5-VL model? #707

MengHao666 opened this issue Feb 1, 2025 · 8 comments

@MengHao666

No description provided.

hiyouga (Contributor) commented Feb 2, 2025

It is the same as the Qwen2-VL model: https://github.com/QwenLM/Qwen2.5-VL/tree/35ba6e18636510de4bf8d4a7caaca3f4f5163a84?tab=readme-ov-file#training
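
For reference, a minimal sketch of what that workflow can look like in code, assuming a transformers version that ships Qwen2_5_VLForConditionalGeneration (4.49.0+, or an install from source) and using peft for LoRA. The checkpoint id and target modules here are illustrative, not the official recipe:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Illustrative checkpoint; swap in the model size you actually fine-tune.
model_id = "Qwen/Qwen2.5-VL-7B-Instruct"

processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach LoRA adapters to the attention projections. The target module
# names are an assumption; check them against model.named_modules().
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train with your usual Trainer/TRL loop on processor-encoded batches.
```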

2U1 commented Feb 2, 2025

I've written code for fine-tuning Qwen2.5-VL; you can use it here:

https://github.com/2U1/Qwen2-VL-Finetune

SFTJBD commented Feb 3, 2025

> It is the same as the Qwen2-VL model: https://github.com/QwenLM/Qwen2.5-VL/tree/35ba6e18636510de4bf8d4a7caaca3f4f5163a84?tab=readme-ov-file#training

Thank you so much for the suggestion. However, I'm wondering: does the data template in the qwen2vl_full_sft.yaml file not need any changes? And is there no specific data format requirement for fine-tuning Qwen2.5-VL on grounding tasks?
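
While waiting for an authoritative answer: LLaMA-Factory's multimodal datasets use a messages-plus-images record layout (see data/mllm_demo.json in its repo), so a grounding sample would presumably keep that layout and put the box in the assistant turn. A sketch of one such record follows; the bbox_2d answer format with absolute pixel coordinates follows Qwen2.5-VL's grounding convention as an assumption to verify, and the file paths are hypothetical:

```python
import json

# Hypothetical grounding sample in LLaMA-Factory's multimodal layout: a
# "messages" turn list plus an "images" list; the <image> placeholder marks
# where the image is injected into the prompt.
sample = {
    "messages": [
        {
            "role": "user",
            "content": "<image>Locate the red car in the image and output its bounding box.",
        },
        {
            # bbox_2d with absolute pixel coordinates is an assumption based
            # on Qwen2.5-VL's grounding output convention; verify for your task.
            "role": "assistant",
            "content": '{"bbox_2d": [124, 58, 396, 290], "label": "red car"}',
        },
    ],
    "images": ["grounding_data/0001.jpg"],  # illustrative path
}

# LLaMA-Factory expects a JSON list of such records, registered in
# data/dataset_info.json with the matching formatting and column names.
with open("grounding_demo.json", "w", encoding="utf-8") as f:
    json.dump([sample], f, ensure_ascii=False, indent=2)
```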

echoht commented Feb 11, 2025

Has anyone fine-tuned it successfully with llama_factory? How do you resolve the transformers version incompatibility?

MengHao666 (Author) commented

> Has anyone fine-tuned it successfully with llama_factory? How do you resolve the transformers version incompatibility?

Mine works. The requirements are below; the rest of my environment: deepspeed==0.8.2, xformers==0.0.28, flash-attn==2.6.3, pytorch 2.4.0;
CUDA version: 12.4; Python 3.10.13.

transformers==4.48.2
datasets==3.2.0
accelerate==1.2.1
peft==0.12.0
trl==0.9.6
openlm-hub
tokenizers>=0.19.0,<=0.21.0
gradio>=4.38.0,<=5.12.0
pandas>=2.0.0
scipy
einops
sentencepiece
tiktoken
protobuf
uvicorn
pydantic
fastapi
sse-starlette
matplotlib>=3.7.0
fire
packaging
pyyaml
numpy<2.0.0
av
librosa
tyro<0.9.0
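
As an illustrative sanity check (not part of the original requirements), you can confirm the key pins above are what is actually installed before launching a run:

```python
# Illustrative check: print the installed versions of the key pinned
# packages so version mismatches surface before a training run.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("transformers", "datasets", "accelerate", "peft", "trl", "deepspeed"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```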

echoht commented Feb 11, 2025

> Mine works. The requirements are below; the rest of my environment: deepspeed==0.8.2, xformers==0.0.28, flash-attn==2.6.3, pytorch 2.4.0; CUDA version: 12.4; Python 3.10.13. […]

This is Qwen2.5-VL, though; isn't that transformers version wrong?

MengHao666 (Author) commented

> This is Qwen2.5-VL, though; isn't that transformers version wrong?

My environment works with it, and I'm using the latest llamafactory as well.
