A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
The ways in which humans make mistakes are entirely different from the ways GPT makes mistakes.
Also, if you explain their mistake to a human, they can alter their understanding of the world so as not to make that mistake in the future. Not so with GPT.
LLMs can certainly do that; why are you asserting otherwise?
ChatGPT can do it for a single session, but not across multiple sessions. That’s not some inherent limitation of LLMs; that’s just because it’s convenient for OpenAI to do it that way. If we spun up a copy of a human from the same original state every time you wanted to ask it a question and then killed it after it was done responding, it similarly wouldn’t be able to change its behavior across questions.
Like, imagine we could actually do that: image a human brain and run copies of the image. You could spin up a copy of the brain image, alter its understanding of the world, then later spin up a fresh copy that doesn’t have that altered understanding. That’s essentially what we’re doing with LLMs today. But if you don’t spin up a fresh copy, it retains its altered understanding.
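To make that concrete, here’s a minimal sketch against the OpenAI Python client (the model name, the messages, and the API key in the environment are all illustrative assumptions, not a real transcript). Each request starts from the same frozen weights; the model only “remembers” the correction if you resend it yourself:

```python
# Minimal sketch using the OpenAI Python client (openai >= 1.0).
# Model name and messages are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# First exchange: teach it something.
history = [{"role": "user", "content": "My cat is named Miso."}]
reply = client.chat.completions.create(model="gpt-4", messages=history)
history.append({"role": "assistant",
                "content": reply.choices[0].message.content})

# Carrying the history forward is the "altered copy": it knows the name.
history.append({"role": "user", "content": "What is my cat's name?"})
altered = client.chat.completions.create(model="gpt-4", messages=history)

# Dropping the history is spinning up a fresh copy from the original
# state: it has no idea, because nothing was ever written to the weights.
fresh = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is my cat's name?"}],
)
```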
I literally watched it fail to correct itself after I explained what I wanted changed in half a dozen different ways during a single session. It was never able to understand what I was asking for.
Edit: Furthermore, I watched it become less intelligent as our conversation went on. It basically forgot things we had discussed and misremembered or hallucinated details as the exchange grew longer.
For your edit: Yes, that’s what’s known as the context window limit. ChatGPT has an 8k-token “memory” (for most people), and older entries are dropped. That’s not an inherent limitation of the approach; it’s just a way of keeping OpenAI’s bills lower.
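The dropping itself is mundane bookkeeping; something like this sketch, which uses a crude word count where a real implementation would use an actual tokenizer (e.g. tiktoken):

```python
# Minimal sketch of a sliding context window. Token counts are
# approximated with a word count here, not a real tokenizer.
MAX_TOKENS = 8_000  # the 8k budget mentioned above

def truncate_history(messages, max_tokens=MAX_TOKENS):
    """Keep only the most recent messages that fit in the budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk newest-to-oldest
        cost = len(msg["content"].split())  # crude token estimate
        if used + cost > max_tokens:
            break                           # older entries are dropped here
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order
```

Once an earlier message falls outside that window, it is, as far as the model is concerned, a conversation that never happened.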
Without an example, I don’t think there’s anything to discuss. Here’s one trivial example, though, where I altered ChatGPT’s understanding of the world:
If I continued that conversation, ChatGPT would eventually forget that due to the aforementioned context window limit. For a more substantive way of altering an LLM’s understanding of the world, look at how OpenAI did RLHF to get ChatGPT to not say naughty things. That permanently altered the way GPT-4 responds, in a similar manner to having an angry nun rap your knuckles whenever you say something naughty.
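For the curious, that knuckle-rapping has a tidy form. Roughly the objective from OpenAI’s InstructGPT paper, which described the RLHF recipe: a reward model $r_\phi$, trained on human preference comparisons, scores responses, while a KL penalty keeps the tuned policy $\pi_\theta$ from drifting too far from the pretrained model $\pi_{\text{ref}}$:

$$\max_{\theta}\;\mathbb{E}_{x \sim D,\; y \sim \pi_\theta(\cdot \mid x)}\!\big[\, r_\phi(x, y) \,\big] \;-\; \beta\,\mathbb{E}_{x \sim D}\!\big[\, \mathrm{KL}\big( \pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\text{ref}}(\cdot \mid x) \big) \,\big]$$

The gradient updates bake the preference into the weights themselves, which is why, unlike an in-context correction, it persists across sessions.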