
[Feature] Unsloth fine-tuning support #1322

Open
brragorn opened this issue Jan 31, 2025 · 5 comments
Labels: Feature, help wanted (Extra attention is needed)

Comments

@brragorn (Contributor)

Feature request

Requested by Reddit users AdOdd4004 and Amazing Q.

Motivation / references

Toward universal support for fine-tuning.

Your contribution

linguistic

@taenin (Collaborator) commented Jan 31, 2025

Thanks for the feature request! Here's the Reddit post referenced in the request, for posterity: https://www.reddit.com/r/LocalLLaMA/comments/1id3ak8/comment/m9yq4v4/

@taenin taenin added the Feature label Jan 31, 2025
@brragorn (Contributor, Author)

Thanks @taenin
Let's post an update back to that thread if this gets taken up and completed.

@nikg4 nikg4 added the triage This issue needs review by the core team. label Feb 1, 2025
@oelachqar oelachqar changed the title [Feature]: Unsloth fine-tuning support [Feature] Unsloth fine-tuning support Feb 4, 2025
@oelachqar oelachqar added the help wanted Extra attention is needed label Feb 4, 2025
@oelachqar oelachqar removed their assignment Feb 4, 2025
@oelachqar oelachqar removed the triage This issue needs review by the core team. label Feb 4, 2025
@georgedouzas
Hey everyone,

I’d love to help out with this! Just wondering:

  • Any key things to keep in mind for integration?
  • Is there a preferred approach, or is it open-ended?
  • Any blockers or decisions that need sorting first?

@taenin (Collaborator) commented Mar 21, 2025

@georgedouzas I would recommend starting by understanding how both Oumi and Unsloth ingest datasets and models.

Ultimately, to add Unsloth support, you'll need to update this method: https://github.com/oumi-ai/oumi/blob/main/src/oumi/builders/training.py#L49

This will involve adding a TrainerType.UNSLOTH. You'll also need to ensure that we can support Unsloth with our https://github.com/oumi-ai/oumi/blob/main/src/oumi/core/trainers/base_trainer.py#L21 (or update this class as needed).

It would also be good to understand how our standard training configurations can be ported to inputs for Unsloth: https://oumi.ai/docs/en/latest/api/oumi.core.configs.html#oumi.core.configs.TrainingConfig
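The steps above could be sketched roughly as follows. This is a minimal illustration only: the enum values, method signatures, and class names here are stand-ins for the real `TrainerType`, `BaseTrainer`, and builder in the Oumi codebase (see the links above), and the Unsloth loading call is left as a comment since the actual integration details are exactly what this issue needs to work out.

```python
# Hypothetical sketch of an UNSLOTH trainer type. All interfaces here are
# illustrative stand-ins for Oumi's real TrainerType enum, BaseTrainer ABC
# (src/oumi/core/trainers/base_trainer.py), and trainer builder
# (src/oumi/builders/training.py) -- not the actual Oumi API.
from abc import ABC, abstractmethod
from enum import Enum
from typing import Optional


class TrainerType(Enum):
    # Existing values are placeholders; UNSLOTH is the proposed addition.
    TRL_SFT = "trl_sft"
    UNSLOTH = "unsloth"


class BaseTrainer(ABC):
    """Stand-in for oumi.core.trainers.base_trainer.BaseTrainer."""

    @abstractmethod
    def train(self, resume_from_checkpoint: Optional[str] = None) -> None: ...

    @abstractmethod
    def save_model(self, output_dir: str) -> None: ...


class UnslothTrainer(BaseTrainer):
    """Wraps an Unsloth-patched model behind the BaseTrainer interface.

    The unsloth import would be deferred to __init__/train so the
    dependency stays optional for users who never select this trainer.
    """

    def __init__(self, model_name: str):
        self.model_name = model_name

    def train(self, resume_from_checkpoint: Optional[str] = None) -> None:
        # Real code would do something like:
        #   from unsloth import FastLanguageModel
        #   model, tokenizer = FastLanguageModel.from_pretrained(self.model_name)
        # then hand the patched model to a TRL SFTTrainer, mapping fields
        # from Oumi's TrainingConfig onto Unsloth/TRL arguments.
        raise NotImplementedError("sketch only")

    def save_model(self, output_dir: str) -> None:
        raise NotImplementedError("sketch only")


def build_trainer(trainer_type: TrainerType, model_name: str) -> BaseTrainer:
    """Mirrors the kind of dispatch done in src/oumi/builders/training.py."""
    if trainer_type == TrainerType.UNSLOTH:
        return UnslothTrainer(model_name)
    raise ValueError(f"No trainer registered for {trainer_type}")
```

The key design question this leaves open, and which the TrainingConfig link below speaks to, is how Oumi's standard training configuration fields get translated into Unsloth's arguments.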

@georgedouzas
I've started working on this; please assign it to me. Thanks!

5 participants · No branches or pull requests