In this short workshop, you'll fine-tune a language model on a custom dataset. We'll cover the main challenges and building blocks of the fine-tuning procedure: model quantization, parameter-efficient fine-tuning (PEFT) with low-rank adaptation (LoRA), chat templates and dataset formatting, and training arguments such as gradient checkpointing, gradient accumulation, sequence length, and the choice of optimizer. We'll work in Google Colab with bitsandbytes and several Hugging Face libraries (peft, datasets, and transformers).
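
To preview how these pieces fit together, here is a minimal sketch of the four building blocks, not the exact workshop code: the model name, hyperparameters, and example conversation below are illustrative placeholders, and the chat-template step assumes the chosen tokenizer ships with a chat template.

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)

model_name = "your-base-model"  # hypothetical placeholder; any causal LM works

# 1. Quantization: load the base model in 4-bit (via bitsandbytes)
#    so it fits in Colab GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# 2. PEFT/LoRA: train small low-rank adapters instead of the full weights.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# 3. Chat templates / dataset formatting: render a conversation
#    into the training-text format the model expects.
tokenizer = AutoTokenizer.from_pretrained(model_name)
messages = [
    {"role": "user", "content": "What is LoRA?"},
    {"role": "assistant", "content": "A parameter-efficient fine-tuning method."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)

# 4. Training arguments: trade compute for memory with gradient
#    checkpointing, and simulate a larger batch via gradient accumulation.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    optim="paged_adamw_8bit",
    learning_rate=2e-4,
    max_steps=100,
)
```

We'll unpack each of these four steps, and the trade-offs behind the illustrative values above, as we go.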