[Feature] Unsloth fine-tuning support #1322
Comments
Thanks for the feature request! Here's the Reddit post referenced in the request, for posterity: https://www.reddit.com/r/LocalLLaMA/comments/1id3ak8/comment/m9yq4v4/
Thanks @taenin!
Hey everyone, I’d love to help out with this! Just wondering:
@georgedouzas I would recommend starting by understanding how both Oumi and Unsloth ingest datasets and models. Ultimately, to add Unsloth support, you'll need to update this method: https://github.com/oumi-ai/oumi/blob/main/src/oumi/builders/training.py#L49. This will involve adding a […]. It would also be good to understand how our standard training configurations can be ported to inputs for Unsloth: https://oumi.ai/docs/en/latest/api/oumi.core.configs.html#oumi.core.configs.TrainingConfig
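For context, here is a minimal sketch of what an Unsloth-backed trainer builder could look like. The `build_unsloth_trainer` name and the `config.model.model_name` / `config.training.*` attribute accesses are illustrative assumptions about the Oumi config, not its confirmed API; the Unsloth and TRL calls follow their standard LoRA fine-tuning flow.

```python
# Hypothetical sketch of an Unsloth-backed trainer builder.
# NOTE: `config.model.model_name`, `config.training.output_dir`, etc. are
# assumed field names for illustration, not Oumi's actual TrainingConfig API.
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel


def build_unsloth_trainer(config, train_dataset):
    # Load the base model and tokenizer through Unsloth's fast loader.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=config.model.model_name,  # assumed config field
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Attach LoRA adapters; Unsloth patches the model for faster fine-tuning.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )

    # Port a few standard training settings into HF TrainingArguments.
    args = TrainingArguments(
        output_dir=config.training.output_dir,            # assumed config field
        learning_rate=config.training.learning_rate,      # assumed config field
        num_train_epochs=config.training.num_train_epochs,  # assumed config field
        per_device_train_batch_size=2,
        logging_steps=10,
    )

    # Assumes `train_dataset` is already formatted for supervised fine-tuning.
    return SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,
        args=args,
    )
```

In practice, the real integration would need to map the full set of fields documented in the TrainingConfig reference linked above onto Unsloth/TRL arguments, rather than the handful shown here.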
Started working on this; please assign it to me. Thanks!
Feature request
Requested on Reddit by AdOdd4004 and Amazing Q.
Motivation / references
Toward universal support for fine-tuning.
Your contribution
linguistic