Understanding LLMs: A Visual Guide

You don't need to be a computer scientist to understand how large language models work. But having a basic mental model helps you use them more effectively, spot their limitations, and make better decisions about when to trust their outputs.

I created this illustrated guide to explain the key concepts in plain language. Narrated by an anthropomorphized LLM character, it walks you through how these systems are built, how they think, and why they behave the way they do.

What's Inside

The guide covers 20 illustrated concepts, including:

  • How LLMs gather and learn from massive text datasets
  • What tokens are and why they matter
  • The neural network "brain" made of math
  • Why LLMs predict rather than remember
  • The difference between probabilistic and deterministic systems
  • When creativity becomes a bug (and why math is hard for LLMs)
  • Hallucinations: the confident nonsense problem
  • Jagged intelligence: brilliant at some things, stumbling at others
  • How fine-tuning and RLHF make models helpful
  • Why "thinking out loud" improves accuracy
  • Tool use: calculators, web search, and the orchestra of helpers
  • Knowing when to use AI and when not to
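To make the "probabilistic vs. deterministic" idea concrete, here is a toy sketch (not from the guide, and not a real model) of how an LLM picks its next token. The vocabulary and probabilities are invented for illustration: a deterministic system would always pick the most likely token, while an LLM-style sampler draws tokens in proportion to their probability, so the same prompt can yield different continuations.

```python
import random

# Hypothetical next-token distribution after the prompt "The sky is".
# These tokens and probabilities are made up for illustration only.
next_token_probs = {"blue": 0.6, "clear": 0.2, "falling": 0.1, "7": 0.1}

def greedy_pick(probs):
    """Deterministic: always return the single most likely token."""
    return max(probs, key=probs.get)

def sample_pick(probs, rng):
    """Probabilistic: draw a token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(greedy_pick(next_token_probs))  # always "blue"
print({sample_pick(next_token_probs, rng) for _ in range(100)})
```

Run the sampler a hundred times and you will usually see several different tokens, which is exactly why the same question can get different answers from an LLM.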

Download the Guide

Download PDF →

Who This Is For

I originally created this for research development professionals, but it's useful for anyone who wants to understand how these systems work without wading through technical papers. Share it with your team, your faculty, or anyone who's curious but intimidated by the technology.

Credits and Inspiration

This guide draws on the work of researchers like Andrej Karpathy and the teams at Anthropic and OpenAI who have made understanding AI more accessible. The illustrations were generated with AI assistance. Any errors in simplification are my own.