Code with the Author of Build an LLM (From Scratch)

Name: [ WebToolTip.com ] Code with the Author of Build an LLM (From Scratch)


Total Size: 3.4 GB
Total Files: 50
Last Seen: 2025-07-19 00:24
Hash: 1B0592C4D2DBAA7DC770E95FE0C0A773F8A659DD

/
Get Bonus Downloads Here.url  (0.2 KB)

/~Get Your Files Here !/
001. Chapter 1. Python Environment Setup.mp4  (94.6 MB)
002. Chapter 2. Tokenizing text.mp4  (103.9 MB)
003. Chapter 2. Converting tokens into token IDs.mp4  (41.9 MB)
004. Chapter 2. Adding special context tokens.mp4  (36.5 MB)
005. Chapter 2. Byte pair encoding.mp4  (73.1 MB)
006. Chapter 2. Data sampling with a sliding window.mp4  (96.3 MB)
007. Chapter 2. Creating token embeddings.mp4  (34.4 MB)
008. Chapter 2. Encoding word positions.mp4  (51.6 MB)
009. Chapter 3. A simple self-attention mechanism without trainable weights Part 1.mp4  (182.3 MB)
010. Chapter 3. A simple self-attention mechanism without trainable weights Part 2.mp4  (57.6 MB)
011. Chapter 3. Computing the attention weights step by step.mp4  (66.6 MB)
012. Chapter 3. Implementing a compact self-attention Python class.mp4  (35.3 MB)
013. Chapter 3. Applying a causal attention mask.mp4  (59.1 MB)
014. Chapter 3. Masking additional attention weights with dropout.mp4  (17.6 MB)
015. Chapter 3. Implementing a compact causal self-attention class.mp4  (43.5 MB)
016. Chapter 3. Stacking multiple single-head attention layers.mp4  (47.8 MB)
017. Chapter 3. Implementing multi-head attention with weight splits.mp4  (133.2 MB)
018. Chapter 4. Coding an LLM architecture.mp4  (65.1 MB)
019. Chapter 4. Normalizing activations with layer normalization.mp4  (88.1 MB)
020. Chapter 4. Implementing a feed forward network with GELU activations.mp4  (107.0 MB)
021. Chapter 4. Adding shortcut connections.mp4  (46.4 MB)
022. Chapter 4. Connecting attention and linear layers in a transformer block.mp4  (67.2 MB)
023. Chapter 4. Coding the GPT model.mp4  (70.2 MB)
024. Chapter 4. Generating text.mp4  (68.9 MB)
025. Chapter 5. Using GPT to generate text.mp4  (75.1 MB)
026. Chapter 5. Calculating the text generation loss cross entropy and perplexity.mp4  (102.3 MB)
027. Chapter 5. Calculating the training and validation set losses.mp4  (99.3 MB)
028. Chapter 5. Training an LLM.mp4  (145.6 MB)
029. Chapter 5. Decoding strategies to control randomness.mp4  (21.0 MB)
030. Chapter 5. Temperature scaling.mp4  (44.2 MB)
031. Chapter 5. Top-k sampling.mp4  (27.5 MB)
032. Chapter 5. Modifying the text generation function.mp4  (35.1 MB)
033. Chapter 5. Loading and saving model weights in PyTorch.mp4  (23.1 MB)
034. Chapter 5. Loading pretrained weights from OpenAI.mp4  (111.8 MB)
035. Chapter 6. Preparing the dataset.mp4  (108.9 MB)
036. Chapter 6. Creating data loaders.mp4  (57.0 MB)
037. Chapter 6. Initializing a model with pretrained weights.mp4  (44.3 MB)
038. Chapter 6. Adding a classification head.mp4  (77.2 MB)
039. Chapter 6. Calculating the classification loss and accuracy.mp4  (67.6 MB)
040. Chapter 6. Fine-tuning the model on supervised data.mp4  (170.6 MB)
041. Chapter 6. Using the LLM as a spam classifier.mp4  (37.6 MB)
042. Chapter 7. Preparing a dataset for supervised instruction fine-tuning.mp4  (49.5 MB)
043. Chapter 7. Organizing data into training batches.mp4  (83.7 MB)
044. Chapter 7. Creating data loaders for an instruction dataset.mp4  (33.9 MB)
045. Chapter 7. Loading a pretrained LLM.mp4  (25.9 MB)
046. Chapter 7. Fine-tuning the LLM on instruction data.mp4  (102.9 MB)
047. Chapter 7. Extracting and saving responses.mp4  (44.4 MB)
048. Chapter 7. Evaluating the fine-tuned LLM.mp4  (107.1 MB)
Bonus Resources.txt  (0.1 KB)

 


