Chapter 7: Finetuning to Follow Instructions

Main Chapter Code

  • ch07.ipynb contains all the code as it appears in the chapter
  • previous_chapters.py is a Python module that contains the GPT model we coded and trained in previous chapters, alongside many utility functions, which we reuse in this chapter
  • gpt_download.py contains the utility functions for downloading the pretrained GPT model weights
  • exercise-solutions.ipynb contains the exercise solutions for this chapter
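
The chapter's finetuning code formats each instruction-data entry into an Alpaca-style prompt before tokenization. A minimal sketch of that formatting step (the function name and exact template wording are taken from the chapter code, but treat this as an illustrative excerpt, not the full pipeline):

```python
def format_input(entry):
    # Alpaca-style prompt: a fixed preamble, the instruction,
    # and an optional input section (omitted when "input" is empty)
    instruction_text = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )
    input_text = f"\n\n### Input:\n{entry['input']}" if entry["input"] else ""
    return instruction_text + input_text


# Example entry in the shape used by instruction-data.json
entry = {
    "instruction": "Convert the active sentence to passive.",
    "input": "The chef cooks the meal every day.",
    "output": "The meal is cooked by the chef every day.",
}
prompt = format_input(entry)
print(prompt)
```

During training, the model's target response (the `"output"` field, prefixed with a `### Response:` header) is appended to this prompt.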

Optional Code

  • load-finetuned-model.ipynb is a standalone Jupyter notebook to load the instruction-finetuned model we created in this chapter

  • gpt_instruction_finetuning.py is a standalone Python script to instruction finetune the model as described in the main chapter (think of it as a chapter summary focused on the finetuning parts)

Usage:

python gpt_instruction_finetuning.py
matplotlib version: 3.9.0
tiktoken version: 0.7.0
torch version: 2.3.1
tqdm version: 4.66.4
tensorflow version: 2.16.1
--------------------------------------------------
Training set length: 935
Validation set length: 55
Test set length: 110
--------------------------------------------------
Device: cpu
--------------------------------------------------
File already exists and is up-to-date: gpt2/355M/checkpoint
File already exists and is up-to-date: gpt2/355M/encoder.json
File already exists and is up-to-date: gpt2/355M/hparams.json
File already exists and is up-to-date: gpt2/355M/model.ckpt.data-00000-of-00001
File already exists and is up-to-date: gpt2/355M/model.ckpt.index
File already exists and is up-to-date: gpt2/355M/model.ckpt.meta
File already exists and is up-to-date: gpt2/355M/vocab.bpe
Loaded model: gpt2-medium (355M)
--------------------------------------------------
Initial losses
   Training loss: 3.839039182662964
   Validation loss: 3.7619192123413088
Ep 1 (Step 000000): Train loss 2.611, Val loss 2.668
Ep 1 (Step 000005): Train loss 1.161, Val loss 1.131
Ep 1 (Step 000010): Train loss 0.939, Val loss 0.973
...
Training completed in 15.66 minutes.
Plot saved as loss-plot-standalone.pdf
--------------------------------------------------
Generating responses
100%|█████████████████████████████████████████████████████████| 110/110 [06:57<00:00,  3.80s/it]
Responses saved as instruction-data-with-response-standalone.json
Model saved as gpt2-medium355M-sft-standalone.pth

  • ollama_evaluate.py is a standalone Python script to evaluate the responses of the finetuned model as described in the main chapter (think of it as a chapter summary focused on the evaluation parts)

Usage:

python ollama_evaluate.py --file_path instruction-data-with-response-standalone.json
Ollama running: True
Scoring entries: 100%|███████████████████████████████████████| 110/110 [01:08<00:00,  1.62it/s]
Number of scores: 110 of 110
Average score: 51.75
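
The "Average score" above is the mean over all entries for which a numeric 0-100 score could be parsed from the Llama 3 judge's reply (hence the "Number of scores: 110 of 110" line). A minimal sketch of that final aggregation step, with hypothetical judge replies standing in for real Ollama responses:

```python
import re


def extract_score(judge_reply):
    """Pull the first integer in the 0-100 range from a judge's reply;
    return None if no usable number is found (such entries are skipped)."""
    match = re.search(r"\d+", judge_reply)
    if match:
        score = int(match.group())
        if 0 <= score <= 100:
            return score
    return None


# Illustrative replies, not actual model output; the real script asks
# the judge model to respond with the number only
replies = ["Score: 75", "I would rate this 40 out of 100.", "No number here."]
scores = [s for s in (extract_score(r) for r in replies) if s is not None]
print(f"Number of scores: {len(scores)} of {len(replies)}")
print(f"Average score: {sum(scores) / len(scores):.2f}")
```

The robust regex-based extraction here is an assumption for illustration; the chapter's script relies on the judge prompt constraining the reply to a bare number.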