When we think about language understanding, we often assume that humans have a unique ability to grasp the meaning behind words. But what if I told you that this assumption might be an illusion?
Take a simple example: when an AI sees a chair and says “chair”, does it truly understand what a chair is any more than we do? Or is it just recognizing a pattern?
This made me think of a classroom scenario. A teacher points at a red object 100 times and says “this is red.” The kid learns to associate the color with the word. But is that understanding or just pattern recognition?
What if there’s no difference between the two? What if our brains are just wired to recognize patterns, and language understanding is simply a byproduct of that?
Large Language Models (LLMs) consume vast corpora of text and learn to map words to meanings through statistical patterns. We do something similar, just slower and with far less data. So what makes human understanding special?
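To make the pattern-matching claim concrete, here's a toy sketch. It's nothing like a real transformer, just a bigram counter over a four-sentence corpus I invented, but it shows how next-word "prediction" can fall out of frequency statistics alone, with no meaning anywhere in the system:

```python
from collections import Counter, defaultdict

# Toy corpus: the vast training data of an LLM, shrunk to four lines.
corpus = [
    "the chair is red",
    "the chair is wooden",
    "the apple is red",
    "the sky is blue",
]

# Count which word follows which: pure pattern statistics, no "meaning".
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

print(predict_next("chair"))  # -> "is"
print(predict_next("is"))     # -> "red" (the most frequent pattern)
```

The counter never learns what a chair *is*, yet it answers correctly. That's the uncomfortable parallel with the classroom scenario above.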
Maybe we’ve overestimated the complexity of language. Perhaps the vast majority of it, 90-95% by my rough guess, is predictable patterns that LLMs can master. The rest? Probably patterns too.
So, what’s the real question here? Is it consciousness that sets us apart? Do we need consciousness to truly understand language?
I don’t have the answer, but I’ve noticed something interesting. When kids are stuck, they say “I don’t know.” LLMs, by contrast, hallucinate confident-sounding answers rather than admit uncertainty. Maybe that’s the key. Maybe we need to give AIs real memory, curiosity, and a desire for truth to make them more than answer-generating assistants.
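As a thought experiment, the “I don’t know” behavior can be mocked up with a simple confidence threshold. This is an illustrative sketch, not how production LLMs decide whether to answer; the labels and the 0.7 cutoff are invented for the example:

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def answer_or_abstain(logits, labels, threshold=0.7):
    """Answer only when the top probability clears the threshold;
    otherwise do what the kid does and say "I don't know"."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "I don't know"
    return labels[best]

labels = ["red", "blue", "green"]
print(answer_or_abstain([4.0, 0.5, 0.2], labels))  # confident -> "red"
print(answer_or_abstain([1.1, 1.0, 0.9], labels))  # uncertain -> "I don't know"
```

A threshold is a crude stand-in for curiosity or a desire for truth, of course. But it hints at the design gap: today's assistants are built to always produce an answer, and abstention has to be engineered in.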