This repository contains code and resources for the "Finetuning Large Language Models" course. The course covers:
- Distinctions between Finetuning and Other Methods: Learn the differences between finetuning, prompt engineering, and retrieval-augmented generation (RAG).
- Integrating Finetuning into the Training Process: Where finetuning fits in the overall LLM training pipeline and how to apply it effectively.
- Instruction Finetuning: Training an LLM to follow instructions, in the style popularized by ChatGPT (see the data-format sketch after this list).
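
To make the instruction-finetuning idea concrete, here is a minimal sketch of how a single instruction-following training example is typically assembled. The field names and prompt template below are illustrative assumptions, not necessarily the exact format used in the course notebooks.

```python
# One instruction-finetuning example: the model is trained to map an
# instruction (plus optional input) to the desired response.
# Field names and template wording are illustrative, not the course's exact format.
example = {
    "instruction": "Summarize the following sentence in five words.",
    "input": "Finetuning adapts a pretrained language model to a specific task or style.",
    "output": "Finetuning specializes pretrained language models.",
}

prompt_template = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

# The text actually fed to the model during training is the filled-in prompt
# followed by the target response.
training_text = prompt_template.format(**example) + example["output"]
print(training_text)
```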
This repository demonstrates how to use Lamini to finetune large language models with just 3 lines of code.
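
The snippet below is a minimal sketch of that three-line pattern, assuming the Lamini client used in the course notebooks. The class and method names, the base model, and the data file name are assumptions and may differ across library versions; refer to the notebooks and Lamini's documentation for the exact calls.

```python
# Sketch of the Lamini finetuning pattern (names are assumptions; verify
# against the notebooks and Lamini's docs for your installed version).
from llama import BasicModelRunner

model = BasicModelRunner("EleutherAI/pythia-410m")        # pick a base model to finetune
model.load_data_from_jsonlines("finetuning_data.jsonl")   # hypothetical JSONL training file
model.train()                                             # launch the finetuning job
```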
To get started, clone the repository and follow the instructions provided in the Jupyter notebooks. Ensure you have the necessary dependencies installed.
Feel free to open issues or submit pull requests with improvements or questions.
This project is licensed under the MIT License.