📘 Finetuning Large Language Models

By DeepLearning.AI

Welcome! 👋 This repository is for students and practitioners taking the Finetuning Large Language Models course offered by DeepLearning.AI.

The course explores how to adapt powerful foundation models to specialized tasks efficiently and responsibly, with practical hands-on exercises.


🚀 What You'll Learn

  • Introduction to LLMs – why and when finetuning is needed
  • Parameter-Efficient Finetuning (PEFT) – techniques like LoRA and adapters (a minimal sketch follows this list)
  • Instruction Finetuning – aligning models for task-specific performance
  • Evaluation & Safety – measuring effectiveness and mitigating risks
  • Practical Walkthroughs – coding labs with Hugging Face & modern tools
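
To make the PEFT idea concrete, here is a minimal LoRA sketch using Hugging Face's transformers, peft, and datasets packages (they must be installed, e.g. via pip). It is an illustration only, not code from the course labs: the base model name gpt2, the toy two-example dataset, and every hyperparameter are assumptions chosen for brevity.

# A minimal LoRA (PEFT) sketch with Hugging Face transformers + peft.
# Illustration only: the base model ("gpt2"), the toy dataset, and the
# hyperparameters are placeholder assumptions, not the course's lab code.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # small placeholder; the labs may use a different model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the frozen base model with small trainable low-rank adapters (LoRA).
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights is trainable

# Toy instruction-style examples standing in for a real finetuning corpus.
dataset = Dataset.from_dict({"text": [
    "### Instruction: Say hello.\n### Response: Hello!",
    "### Instruction: Add 2 and 3.\n### Response: 5",
]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2,
                           num_train_epochs=1, report_to="none"),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves just the adapter weights, not the full model

Because only the low-rank adapter matrices are updated, the saved output is a small adapter checkpoint rather than a full copy of the base model.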

๐Ÿ› ๏ธ Getting Started

1. Clone the Repository

git clone https://github.com/sdivyanshu90/Finetuning-Large-Language-Models.git
cd Finetuning-Large-Language-Models

2. Run the Labs

Launch Jupyter and open the lab notebooks:

jupyter notebook

📚 Prerequisites

  • Basic Python programming 🐍
  • Familiarity with PyTorch or TensorFlow
  • Understanding of transformers and Hugging Face libraries (helpful, but not mandatory!)

๐Ÿค Contributing

This repo is for learning purposes. If you'd like to fix a typo, improve instructions, or add resources, feel free to open a pull request. 🙌


๐Ÿ™ Acknowledgment

This course is created and provided by DeepLearning.AI. We are grateful for their mission to make cutting-edge AI education accessible worldwide.


✨ Happy finetuning!
