
Understanding: Human vs. Machine

Whether we understand a text depends on several factors. First, do we recognize and understand the alphabet? Do we understand the language? Assuming both, we can read the words that are written. But this doesn’t mean we understand the text. Understanding what is written depends on whether we have the necessary contextual knowledge and conceptual framework to interpret the meaning behind each word. Even at the level of individual words, language is more than a sequence of symbols. Each word, and each combination of words, carries ideas shaped by cultural, historical, and experiential factors.

Consider the word “football”. In the United States, “football” refers to American football, a sport played with an oval ball and heavy physical contact. In the UK (and most of the world), “football” is a game played primarily with the feet, a round ball, and two rectangular goals. The same word triggers entirely different images and cultural associations depending on the context in which it is used.

Or consider the word “gift”. In English, “gift” means a present, something given voluntarily to another person. In German, “Gift” means poison. The same word evokes – again – entirely different meanings depending on the language.

Even if we can read and comprehend the literal meaning of words, true understanding requires an ability to grasp the underlying concepts, nuances, and intentions, as well as to connect the information to prior knowledge or experiences. If we don’t have these deeper connections, we may be able to read the text, but fail to genuinely “understand” it in a meaningful way.

When we talk about “understanding” a text, we are simply processing patterns of language based on previous experiences and context. Meaning emerges when we can connect the symbols to prior knowledge and concepts we have already internalized. In other words, the idea of “meaning” arises from a vast database of stored experiences.

This becomes clear when we deal with complex technical, scientific, or philosophical texts. Understanding these requires not only familiarity with the language, but also a deeper technical or conceptual foundation.

For example, take a physics paper discussing “quantum entanglement.” The words themselves may be understandable to anyone familiar with basic English, but without a solid grasp of quantum mechanics and concepts like wave-particle duality, superposition, or the mathematical formalism behind quantum states, the meaning of the text is lost. The reader can follow the sentences, but the true meaning remains obscure.

In essence, understanding a text – especially a complex one – goes beyond recognizing words or knowing their dictionary definitions. It depends on an interplay between language and thought, where meaning is unlocked through familiarity with the underlying concepts, cultural context, and prior knowledge. True understanding is, furthermore, a learning process: it demands not only proper intellectual preparation, but also the ability to integrate new information from the text with what we already know.

With that in mind, can a machine understand text in the same way humans do?

A large language model (LLM) also processes patterns of language, recognizing text based on vast amounts of data. On a surface level, it mimics understanding by assembling words in contextually appropriate ways, but does this equate to “understanding” in the human sense?

When humans read, we don’t just parse symbols; we draw from a rich background of lived experiences, emotional intelligence, and interdisciplinary knowledge. This allows us to understand metaphors, infer unstated intentions, or question the credibility of the text.

Back to our example of “quantum entanglement”. When a trained physicist reads the physics paper, they relate the written sentences to physical phenomena they’ve studied, experiments they’ve conducted, and debates they are involved in.

By contrast, an LLM operates by recognizing patterns from its vast training data, generating contextually relevant responses through probabilistic models. While it does this impressively, we might argue that for true understanding, an LLM lacks the aforementioned deeper conceptual and experiential framework that humans develop through real-world experience and reasoning.
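To make that mechanism concrete, here is a minimal sketch of what “generating responses through probabilistic models” means in practice. It is an illustration, not a description of any particular production system: it assumes the Hugging Face transformers library and uses the small GPT-2 model as a stand-in for an LLM, asking it only for the probability distribution over the next token.

```python
# Minimal sketch of next-token prediction: the model assigns a probability
# to every possible continuation of the prompt. GPT-2 is only a small
# stand-in for a modern LLM; the principle is the same.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The referee blew the whistle and awarded a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence length, vocabulary)

# Probability distribution over the next token, given everything so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Everything the model “knows” about football, gifts, or quantum entanglement is expressed through distributions like this one; the question is whether that counts as understanding.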

While it is obvious that LLMs do not experience the world as humans do, this does not mean that LLMs are not, or will never be, capable of understanding and reasoning.

LLMs do engage in a form of reasoning already: they manipulate patterns, make connections, and draw conclusions based on the data they’ve encountered. The average LLM of today can process abstract ideas like “quantum entanglement” – arguably – more effectively than the average human, merely by referencing the extensive patterns in its data, even though it cannot link those ideas to sensory and emotional experience.

Sensory and emotional experiences, such as the joy of scoring a first goal in a 4th grade sports class or the sorrow of watching one’s favorite team suffer a 0:7 defeat on a cold, rainy autumn day, create deep personal and nuanced connections to texts about “football.” This allows humans to interpret language with personal depth, inferring meaning not just from the words themselves, but from the emotions, memories, and sensory details attached to them.

The absence of emotional grounding may limit LLMs in certain ways, but does it mean they cannot develop forms of understanding and reasoning that, while different, can still be highly effective?

For example, a mathematician can solve an equation without needing to “experience the numbers”, meaning they don’t need to physically sense what “2” or “π” feels like to perform complex calculations. Their understanding comes from abstract reasoning and logical rules, not from emotional or sensory connection.

While an LLM cannot yet solve mathematical problems reliably, it might, in an analogous sense, “understand” a concept by connecting ideas through data relationships without needing direct experience. It recognizes patterns and derives logical outcomes, like a mathematician working through an equation.

One example of this is language translation. While a professional human translator might rely on personal cultural experience to choose the right phrasing for nuance, LLMs are in many cases already able to process and translate languages with remarkable accuracy by identifying patterns in usage, grammar, and structure across millions of texts. They have no personal experience of living in each culture or speaking a language natively, yet they outperform humans in translating text, at least in terms of speed.
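As an illustration of this pattern-based translation, here is a minimal sketch, assuming the Hugging Face transformers library and a small pretrained English-to-German model (Helsinki-NLP/opus-mt-en-de); the specific model is an arbitrary choice for the example.

```python
# Minimal sketch: translation as pattern matching across millions of texts.
# The model has never lived in either culture; it maps statistical patterns
# in one language onto patterns in the other.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

result = translator("That was a thoughtful gift.")
print(result[0]["translation_text"])
# To render "gift" as a present rather than as the German "Gift" (poison),
# the model relies purely on statistical context, not lived experience.
```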

Understanding, then, is the process of combining knowledge, reasoning, and – in our human case – personal experience. In that sense, is it impossible for LLMs to understand and reason, or does the difference lie more in what LLMs ground their reasoning on?

Humans reason through real-life experience, intuition, emotions, and sensory input, like the joy of scoring a goal or the gut feeling triggered by a suspicious facial expression. LLMs, on the other hand, don’t have this kind of grounding; they operate purely on data.

Again, does this mean LLMs cannot reason? LLMs – despite lacking this personal grounding – still show early forms of reasoning. This reasoning is powerful, especially in cases where personal experience is not required or matters less. In fact, understanding may not even require physical or emotional experiences in the same way humans are biologically conditioned to need them. If reasoning is fundamentally about making accurate predictions and drawing logical conclusions, then LLMs are – arguably – already surpassing humans in certain domains of abstract reasoning.

With advancements in AI architecture, it is likely that LLMs will one day develop a form of “conceptual grounding” based purely on data patterns and logical consistency. We will arrive at new forms of understanding and reasoning that differ from, but rival, human cognition.

The limitations of LLMs point to what makes humans human: an inherent drive to pursue truth and question assumptions. While LLMs – arguably – reason by connecting dots and generating solutions, they lack the intentionality and self-awareness that drive human reasoning.

Ultimately, the question of whether machines can in fact understand and reason is less about how accurately they replicate human cognition and more about recognizing and harnessing a new form of intelligence.

