What’s Actually Behind ChatGPT?
LLMs for Engineers — Part 1
When you first use ChatGPT, it feels a bit strange.
You ask a question, and it replies almost instantly. Not just with random text, but with something that feels structured, confident, and often surprisingly accurate. It explains things clearly, writes code, fixes errors, and even sounds conversational.
At some point, a natural question comes up.
👉 What is actually going on behind this?
Is it thinking?
Does it understand what I’m saying?
Is there some kind of intelligence inside?
The surprising answer is… not really.
Or at least, not in the way we usually think about intelligence.
What ChatGPT is doing at its core is much simpler than it appears. It is not searching the internet in real time. It is not looking up answers in a database. And it is not reasoning like a human sitting across from you.
Instead, it has learned patterns from a massive amount of text and uses those patterns to generate responses. In fact, one of the simplest ways to describe what it does is this:
👉 It tries to predict what should come next in a piece of text.
That’s it.
Now this might sound almost too simple.
How can something that just “predicts the next word” write essays, explain concepts, or answer questions?
The answer lies in scale.
The model has seen an enormous amount of text: articles, documentation, conversations, code, and more. Over time, it has learned how language usually flows. So when you give it a prompt, it doesn’t “think” about the answer. It continues the pattern in a way that looks correct.
You can imagine it like this.
If you had read millions of examples of how people explain something, you would also get very good at continuing those explanations. You might not know the exact source of every fact, but you would still be able to produce something that sounds right.
That’s roughly what’s happening here.
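You can see the core idea in a toy sketch. This is nothing like how ChatGPT is actually built (real models use neural networks over tokens, not word-pair counts, and the corpus here is invented), but it shows the mechanism in its simplest form: count which word tends to follow which, then “continue” text by picking the most common next word.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration). A real model learns from
# vastly more text, and from patterns far richer than word pairs.
corpus = (
    "the model predicts the next word . "
    "the model learns patterns . "
    "the model continues text ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # "model" — the most frequent follower of "the"
print(predict_next("next"))  # "word"
```

Nothing in this table “knows” anything. It just reproduces whatever patterns the text contained — which is the point.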
This also explains something important.
Sometimes ChatGPT gives answers that sound very confident… but are not fully correct. That’s because it is not verifying facts. It is generating what is most likely to come next based on patterns it has learned.
So in a way, you are not talking to a system that “knows” things.
You are interacting with a system that is very good at producing language that looks like knowledge.
At this point, things start to look different.
It stops feeling like magic and starts feeling like a system with a very specific behavior.
And once you see it this way, a lot of things begin to make sense:
why small changes in prompts change the output
why it sometimes makes mistakes
why it can sound intelligent without actually understanding
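The first of those points — prompt sensitivity — also falls out of the same toy setup. A hedged sketch (again with an invented mini-corpus, and pairs of words instead of real tokens): because the prediction is conditioned on the preceding words, changing even one word in the prompt changes which learned patterns match, and so changes the continuation.

```python
from collections import Counter, defaultdict

# Invented corpus, purely illustrative.
corpus = (
    "a cold drink is refreshing . "
    "a cold day is miserable . "
    "a cold day is miserable ."
).split()

# Count which word was seen after each pair of preceding words.
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

def predict_next(prev, current):
    """Most common word seen after the context (prev, current)."""
    return following[(prev, current)].most_common(1)[0][0]

# One changed word in the "prompt" flips the prediction:
print(predict_next("drink", "is"))  # "refreshing"
print(predict_next("day", "is"))    # "miserable"
```

Same mechanism, different context, different output — no reasoning required.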
But this is just the surface.
Because now the real questions begin:
👉 Where did it learn these patterns from?
👉 How does it even read text?
👉 What is happening inside when it generates a response?
That’s exactly what we’ll explore in this series.
We’ll break this down step by step, starting from the data, then tokens, then prediction, and finally what’s inside the model itself.
By the end, you won’t just use ChatGPT.
👉 You’ll understand how it works.
Next in this series
👉 Where ChatGPT actually learns from
Smiles :)
Anurudh
