
Chapter 5: Pretraining on Unlabeled Data

 

Main Chapter Code

 

Bonus Materials

 

LLM Architectures From Scratch

 

  • 07_gpt_to_llama contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2, including code to load the pretrained weights from Meta AI
  • 11_qwen3 contains a from-scratch implementation of Qwen3 0.6B and Qwen3 30B-A3B (Mixture-of-Experts), including code to load the pretrained weights of the base, reasoning, and coding model variants
  • 12_gemma3 contains a from-scratch implementation of Gemma 3 270M and an alternative version with KV cache, including code to load the pretrained weights
  • 13_olmo3 contains a from-scratch implementation of Olmo 3 7B and 32B (Base, Instruct, and Think variants) and an alternative version with KV cache, including code to load the pretrained weights
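All of the architectures above are trained with the same next-token cross-entropy objective covered in the main chapter: for each position, the model's logits are scored against the token that actually comes next. As a rough, framework-free sketch of that objective (the chapter's actual code uses PyTorch tensors and `torch.nn.functional.cross_entropy`; the toy logits and vocabulary below are made up for illustration):

```python
import math

def cross_entropy_next_token(logits, targets):
    """Average next-token cross-entropy over a sequence.

    logits:  one list of vocabulary scores per position
    targets: the token id that actually follows at each position
    Plain-Python sketch of the pretraining loss; not the chapter's code.
    """
    total = 0.0
    for scores, target in zip(logits, targets):
        # log-sum-exp with the max subtracted for numerical stability
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        # negative log-probability of the correct next token
        total += log_z - scores[target]
    return total / len(targets)

# toy example: vocabulary of 3 tokens, sequence of 2 positions
logits = [[2.0, 0.5, -1.0], [0.1, 3.0, 0.2]]
targets = [0, 1]  # correct next-token ids
print(cross_entropy_next_token(logits, targets))
```

Minimizing this quantity over large amounts of unlabeled text is what "pretraining" means throughout this chapter; the bonus folders only swap in different architectures around the same loss.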

 

Code-Along Video for This Chapter



Link to the video