Large language models learn the statistical structure of token sequences from very large training corpora. That broad training gives them general-purpose capabilities across tasks such as generation, summarization, classification, retrieval, and chat.
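The idea of "statistical structure over token sequences" can be made concrete with a toy sketch: a bigram model that estimates the probability of the next token from co-occurrence counts. This is a deliberately tiny illustration, not how modern LLMs are built (they use subword tokenizers, neural networks, and vastly larger corpora), but the underlying objective, predicting the next token from context, is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on billions of subword tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigram transitions: how often each token follows each other token.
transitions = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    transitions[cur][nxt] += 1

def next_token_distribution(token):
    """Empirical probability of each next token given the current one."""
    counts = transitions[token]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

# After "the", four different tokens each appear once in the toy corpus,
# so each gets probability 0.25.
print(next_token_distribution("the"))
```

A neural language model replaces these raw counts with a learned function that conditions on much longer contexts, which is what lets the same model generalize across generation, summarization, and the other tasks listed above.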
Many later posts on this blog build on this term, including those on prompting, fine-tuning, retrieval pipelines, and chatbot design. Defining the base model class clearly up front makes those follow-on discussions easier to connect.
