
Chapter 5: Pretraining on Unlabeled Data


Main Chapter Code


Bonus Materials

  • 02_alternative_weight_loading contains code to load the GPT model weights from alternative places in case the model weights become unavailable from OpenAI
  • 03_bonus_pretraining_on_gutenberg contains code to pretrain the LLM longer on the whole corpus of books from Project Gutenberg
  • 04_learning_rate_schedulers contains code implementing a more sophisticated training function including learning rate schedulers and gradient clipping
  • 05_bonus_hparam_tuning contains an optional hyperparameter tuning script
  • 06_user_interface implements an interactive user interface for chatting with the pretrained LLM
  • 07_gpt_to_llama contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2 and loading the pretrained weights from Meta AI
  • 08_memory_efficient_weight_loading contains a bonus notebook showing how to load model weights more efficiently via PyTorch's load_state_dict method
  • 09_extending-tokenizers contains a from-scratch implementation of the GPT-2 BPE tokenizer
  • 10_llm-training-speed shows PyTorch performance tips to improve the LLM training speed
  • 11_qwen3 contains a from-scratch implementation of Qwen3 0.6B and Qwen3 30B-A3B (Mixture-of-Experts), including code to load the pretrained weights of the base, reasoning, and coding model variants
  • 12_gemma3 contains a from-scratch implementation of Gemma 3 270M, along with a KV-cache variant, including code to load the pretrained weights
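
The learning-rate schedulers covered in 04_learning_rate_schedulers typically combine a linear warmup phase with a cosine decay. A minimal pure-Python sketch of such a schedule is shown below; the function name and parameter values are illustrative, not taken from the notebook:

```python
import math

def lr_at_step(step, *, peak_lr=5e-4, min_lr=1e-5, warmup_steps=20, total_steps=200):
    """Linear warmup followed by cosine decay (illustrative hyperparameters)."""
    if step < warmup_steps:
        # Warmup: ramp linearly from peak_lr / warmup_steps up to peak_lr
        return peak_lr * (step + 1) / warmup_steps
    # Cosine decay: anneal from peak_lr down to min_lr over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

In a training loop, the returned value would be assigned to each parameter group's `"lr"` before the optimizer step; the notebook additionally applies gradient clipping to stabilize training.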
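
The BPE tokenizer built in 09_extending-tokenizers revolves around one core idea: repeatedly find the most frequent adjacent token pair and merge it into a new token. A minimal sketch of that merge step (function names are hypothetical, not the notebook's API):

```python
from collections import Counter

def most_frequent_pair(ids):
    """Count adjacent token-id pairs and return the most frequent one."""
    return Counter(zip(ids, ids[1:])).most_common(1)[0][0]

def merge_pair(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out
```

Training a BPE vocabulary repeats these two steps until the target vocabulary size is reached, recording each merge so it can be replayed at encoding time.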


Link to the video