Glossary Entry

Pre-Training

The initial large-scale training phase that teaches a model general patterns before narrower task-specific adaptation.

Training LLMs

Also called: pretraining

Seed source: Google ML Glossary

Pre-training is the stage in which a model absorbs broad statistical structure from a large, general-purpose dataset before being adapted to a more specific task. Fine-tuning and other post-training methods build on top of this stage rather than replacing it.
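The relationship can be sketched with a toy model. This is a hypothetical illustration, not how LLMs are actually implemented: a bigram counter stands in for the model, a broad corpus stands in for pre-training data, and a small corpus stands in for fine-tuning data. The point is that fine-tuning updates the same learned state rather than starting over.

```python
from collections import defaultdict

def train(counts, text):
    """Update bigram counts in place from a whitespace-tokenized text."""
    tokens = text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, prev):
    """Most frequent next token after `prev`, or None if unseen."""
    followers = counts[prev]
    return max(followers, key=followers.get) if followers else None

counts = defaultdict(lambda: defaultdict(int))

# "Pre-training": broad, general text establishes the base behavior.
train(counts, "the cat sat on the mat the dog sat on the rug")

# "Fine-tuning": a small task-specific corpus adjusts, but does not
# erase, the counts accumulated during pre-training.
train(counts, "the model sat on the gpu")

print(predict(counts, "sat"))  # prints "on" -- learned in pre-training
```

Even this toy version shows the glossary's point: the fine-tuning corpus alone is too small to predict much, so what the model can do afterward is shaped mostly by what the base stage already learned.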

That distinction matters in posts about ChatGPT, transfer learning, and LLM fine-tuning. What the model learned during pre-training strongly shapes what later adaptation can improve, preserve, or fail to change.