
How to full-parameter fine-tune the ViT while using LoRA to train the LLM #3021

Open
zengxingchen opened this issue Feb 5, 2025 · 1 comment
Labels
enhancement (New feature or request)

Comments

zengxingchen commented Feb 5, 2025

Thanks for the amazing repo!
For MLLM fine-tuning, how can we full-parameter tune the ViT while using LoRA to train the LLM?

Jintao-Huang (Collaborator) commented:

You might need to modify some code. You can try using the `modules_to_save` parameter.
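
A minimal sketch of that approach, using Hugging Face PEFT directly rather than ms-swift's own CLI. The model name, the `"visual"` module prefix, and the projection module names are assumptions; they depend on the actual MLLM architecture being tuned:

```python
from transformers import AutoModelForVision2Seq
from peft import LoraConfig, get_peft_model

# Hypothetical example model; swap in the MLLM you are actually tuning.
model = AutoModelForVision2Seq.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    # Apply LoRA only to the LLM's attention projections. If the ViT
    # reuses these module names, narrow the match (e.g. with a regex)
    # so the adapters do not land in the vision tower.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Modules listed here are left un-adapted but fully trainable, and
    # their full weights are saved alongside the adapter. "visual" is a
    # hypothetical name for the ViT submodule; check model.named_modules().
    modules_to_save=["visual"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Note that `modules_to_save` keeps a complete trainable copy of each listed module, so memory use and checkpoint size grow with the size of the ViT.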
