Chapter 7: Finetuning to Follow Instructions

Main Chapter Code

  • 01_main-chapter-code contains the main chapter code and exercise solutions

Bonus Materials

  • 02_dataset-utilities contains utility code for preparing instruction datasets
  • 03_model-evaluation contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
  • 04_preference-tuning-with-dpo implements code for preference finetuning with Direct Preference Optimization (DPO)
  • 05_dataset-generation contains code to generate and improve synthetic datasets for instruction finetuning
  • 06_user_interface implements an interactive user interface for chatting with the instruction-finetuned LLM
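Since 04_preference-tuning-with-dpo centers on the Direct Preference Optimization loss, the sketch below illustrates that loss for a single preference pair. It assumes the log-probabilities of each response have already been summed per token; the function name, `beta` default, and example values are illustrative, not taken from the chapter code:

```python
import math

def dpo_loss(pi_logp_chosen, pi_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # DPO compares how much the policy (pi) prefers the chosen over the
    # rejected response, relative to a frozen reference model (ref).
    logits = beta * ((pi_logp_chosen - ref_logp_chosen)
                     - (pi_logp_rejected - ref_logp_rejected))
    # Numerically stable -log(sigmoid(logits))
    if logits >= 0:
        return math.log1p(math.exp(-logits))
    return -logits + math.log1p(math.exp(logits))

# Example: the policy favors the chosen response more strongly than the
# reference model does, so the loss falls below log(2) (the value at
# logits == 0, i.e., no preference shift).
loss = dpo_loss(pi_logp_chosen=-12.0, pi_logp_rejected=-20.0,
                ref_logp_chosen=-14.0, ref_logp_rejected=-18.0)
print(round(loss, 4))
```

Note that the reference model only anchors the optimization: if the policy and reference assign identical log-probabilities, the loss sits exactly at log(2) and no preference signal is applied.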