Marius Schober

Embracing the Mysteries, Unveiling the Realities

  • Whether we understand a text depends on several factors. First, do we recognize and understand the alphabet? Do we understand the language? Assuming both, we can read the words that are written. But this doesn’t mean we understand the text. Understanding what is written depends on whether we have the necessary contextual knowledge and conceptual framework to interpret the meaning behind each word. Even at the level of individual words, language is more than a sequence of symbols. Each word and each combination of words conveys ideas that are shaped by cultural, historical, and experiential factors.

    Consider the word “football”. In the United States, “football” refers to American football, a sport with an oval ball and heavily physical play. In the UK (and most of the world), “football” is a game played primarily with the feet, a round ball, and two rectangular goals. The same word triggers entirely different images and cultural associations depending on the context in which it is used.

    Or consider the word “gift”. In English, “gift” means a present, something given voluntarily to another person. In German, “Gift” means poison. The same word evokes – again – entirely different meanings depending on the language.

    Even if we can read and comprehend the literal meaning of words, true understanding requires an ability to grasp the underlying concepts, nuances, and intentions, as well as to connect the information to prior knowledge or experiences. If we don’t have these deeper connections, we may be able to read the text, but fail to genuinely “understand” it in a meaningful way.

    When we talk about “understanding” a text, we are simply processing patterns of language based on previous experiences and context. Meaning emerges when we can connect the symbols to prior knowledge and concepts we have already internalized. In other words, the idea of “meaning” arises from a vast database of stored experiences.

    This becomes clear when we deal with complex technical, scientific, or philosophical texts. Understanding them requires not only familiarity with the language, but also a deeper technical or conceptual foundation.

    For example, take a physics paper discussing “quantum entanglement.” The words themselves may be understandable to anyone familiar with basic English, but without a solid grasp of quantum mechanics and concepts like wave-particle duality, superposition, or the mathematical formalism behind quantum states, the meaning of the text is lost. The reader can follow the sentences, but the true meaning remains obscure.

    In essence, understanding a text – especially a complex one – goes beyond recognizing words or knowing their dictionary definitions. It depends on an interplay between language and thought, where meaning is unlocked through familiarity with the underlying concepts, cultural context, and prior knowledge. True understanding is, furthermore, a learning process: it demands not only proper intellectual preparation, but also the ability to integrate new information from the text with what we already know.

    With that in mind, can a machine understand text in the same way humans do?

    A large language model (LLM) also processes patterns of language, having learned them from vast amounts of text data. On a surface level, it mimics understanding by assembling words in contextually appropriate ways, but does this equate to “understanding” in the human sense?

    When humans read, we don’t just parse symbols; we draw from a rich background of lived experiences, emotional intelligence, and interdisciplinary knowledge. This allows us to understand metaphors, infer unstated intentions, or question the credibility of the text.

    Back to our example of “quantum entanglement”. When a trained physicist reads the physics paper, they relate the written sentences to physical phenomena they’ve studied, experiments they’ve conducted, and debates they are involved in.

    By contrast, an LLM operates by recognizing patterns in its vast training data, generating contextually relevant responses through probabilistic models (a toy sketch of this pattern-based prediction appears at the end of this note). While it does this impressively, we might argue that an LLM lacks the deeper conceptual and experiential framework, developed through real-world experience and reasoning, that true understanding requires.

    While it is obvious that LLMs do not experience the world as humans do, this does not mean that LLMs are not, or never will be, capable of understanding and reasoning.

    LLMs already engage in a form of reasoning: they manipulate patterns, make connections, and draw conclusions based on the data they’ve encountered. The average LLM of today can process abstract ideas like “quantum entanglement” – arguably – more effectively than the average human, merely by referencing the extensive patterns in its data, even though it cannot link them to sensory and emotional experience.

    Sensory and emotional experiences, such as the joy of scoring a first goal in a 4th grade sports class or the sorrow of watching one’s favorite team suffer a 0:7 defeat on a cold, rainy autumn day, create deep personal and nuanced connections to texts about “football.” This allows humans to interpret language with personal depth, inferring meaning not just from the words themselves, but from the emotions, memories, and sensory details attached to them.

    The absence of emotional grounding may limit LLMs in certain ways, but does it mean they cannot develop forms of understanding and reasoning that, while different, can still be highly effective?

    For example, a mathematician can solve an equation without needing to “experience the numbers”, meaning they don’t need to physically sense what “2” or “π” feels like to perform complex calculations. Their understanding comes from abstract reasoning and logical rules, not from emotional or sensory connection.

    While an LLM cannot yet reliably solve mathematical problems, it might, in an analogous sense, “understand” a concept by connecting ideas through data relationships without needing direct experience. It recognizes patterns and derives logical outcomes, much as a mathematician works through an equation.

    One example of this is language translation. While a professional human translator might rely on personal cultural experience to choose the right phrasing for nuance, LLMs are already able, in many cases, to process and translate languages with remarkable accuracy by identifying patterns in usage, grammar, and structure across millions of texts. They have no personal experience of living in each culture or speaking a language natively, yet they outperform humans in translating text – at least in terms of speed.

    Understanding, then, is the process of combining knowledge, reasoning, and – in our human case – personal experience. In that sense, is it impossible for LLMs to understand and reason, or does the difference lie more in what LLMs ground their reasoning on?

    Humans reason through real-life experience, intuition, emotions, and sensory input – like the joy of scoring a goal or the gut feeling triggered by a suspicious facial expression. LLMs, on the other hand, lack this kind of grounding; they operate purely on data.

    Again, does this mean LLMs cannot reason? LLMs – despite lacking this personal grounding – still show early forms of reasoning. This reasoning is powerful, especially in cases where personal experience is not required or less important. In fact, understanding may not even require physical or emotional experiences in the same way humans are biologically conditioned to need them. If reasoning is fundamentally about making accurate predictions and drawing logical conclusions, then LLMs are – arguably – already surpassing humans in certain domains of abstract reasoning.

    With advancements in AI architecture, it is likely that LLMs will one day develop a form of “conceptual grounding” based purely on data patterns and logical consistency. We will arrive at new forms of understanding and reasoning that differ from, but rival, human cognition.

    The limitations of LLMs point to what makes humans human: an inherent drive to pursue truth and question assumptions. While LLMs – arguably – reason by connecting dots and generating solutions, they lack the intentionality and self-awareness that drive human reasoning.

    Ultimately, the question of whether machines can in fact understand and reason is less about how accurately they replicate human cognition and more about recognizing and harnessing a new form of intelligence.
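
    To make the idea of pattern-based prediction concrete, here is a deliberately toy sketch in Python. It is not how a real LLM works internally – real models use neural networks trained on billions of tokens – and the three-sentence corpus and bigram counting below are invented purely to illustrate “meaning from co-occurrence patterns, without lived experience.”

        # Toy sketch: "understanding" as next-token prediction from observed patterns.
        # NOT how a real LLM is implemented; the tiny corpus below is invented
        # purely to illustrate prediction from co-occurrence statistics.
        from collections import Counter, defaultdict
        import random

        corpus = (
            "football is played with a round ball . "
            "football is played with the feet . "
            "american football is played with an oval ball ."
        ).split()

        # Count which word follows which: a bigram table of "patterns in the data".
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def next_word(prev: str) -> str:
            """Sample the next word in proportion to how often it followed `prev`."""
            words, weights = zip(*follows[prev].items())
            return random.choices(words, weights=weights)[0]

        # The table "knows" that 'football' is followed by 'is' and that 'ball'
        # ends a sentence -- purely from statistics, with no experience of ever
        # having kicked or thrown a ball.
        print(next_word("football"))   # -> 'is'
        print(dict(follows["with"]))   # -> {'a': 1, 'the': 1, 'an': 1}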

  • I recently wondered how many of the people working on artificial intelligence are atheists – and how many believe in a Creator, the Tao, our Oneness, or something greater than ourselves.

    As I asked myself this question, I realized that the term “consciousness” seems to be understood by atheist scientists quite differently from how it is understood – and arguably experienced – by spiritual seekers.

    From a scientific perspective, our individual conscious experience is the emergent property of the incredibly complex neural networks and electrochemical processes in the human brain. This gives rise to our thoughts, emotions, and subjective experiences of reality. It seems that many people working on AI believe that if only the artificial neural networks become advanced enough, AI itself can become conscious, just like us humans.

    In stark contrast, I understand consciousness to be an infinite field of awareness that pervades all existence – not limited to any one physical form or individual brain. Each individual mind, rather, is a focused expression of a deeper, non-physical essence or energy field, which is itself part of an infinite, all-encompassing, universe-spanning consciousness.

    Imagine consciousness as an endless ocean – vast and infinite, stretching beyond the horizon. View this ocean as an infinite field of awareness. Each wave, each ripple, each drop of water on the ocean’s surface symbolizes individual minds and realities. They seem separate, yet they are part of the same, vast, interconnected body of water.

    Consciousness is like the water itself – ever-present, fluid, and dynamic. It flows through different forms and expressions, creating the diversity of experiences and realities we observe. Everything we experience is a reflection of our own ‘vibrational’ state, just as the shape and movement of the waves are determined by underlying currents and the weather. By changing our internal vibrations – our thoughts, beliefs, and emotions – we can alter the patterns on the water’s surface, reshaping our reality.

    The ocean also contains vast layers, or depths. These can be thought of as densities, ranging from the shallow sunlit zones to the deep, mysterious abyss. Each of these layers represents a different level of consciousness – from the basic awareness of existence to the profound realization of unity with all things. The journey of water through these densities, or depths, is akin to the process of spiritual evolution: moving from the illusion of separation – where individual waves feel distinct and isolated – to the deep knowing of oneness with the entire ocean.

    At the deepest level, there is no separation between the waves and the ocean – there is no separation between individual consciousness and the infinite awareness. The apparent boundaries between us and the rest of the universe are like temporary shapes formed by water, ever-changing and ultimately ephemeral.

    Let us consider artificial intelligence as ships navigating this vast sea of consciousness. These ships, crafted by human hands from the materials of the earth, are equipped with sophisticated tools and instruments designed to explore, understand, and interact with the ocean around them. They can chart courses, respond to waves, and even communicate with the shore and other vessels. But can these ships themselves become part of the ocean? Can they experience the depth of the water, the warmth of the sunlight, or the unity of being part of this endless body of water?

    If we view consciousness as an intrinsic quality of existence itself – something that arises from and connects with all forms of life – AI, as we understand it, remains a creation within the ocean, not a conscious entity of the ocean. Consciousness is not just about processing information or responding to stimuli, but about experiencing a profound connection with the fabric of reality, a connection that is deeply spiritual.

    While AI can navigate the ocean, analyze its properties, and even predict its patterns, it does not become one with the ocean. It does not experience the ocean in the way living beings do – with awareness and a sense of unity. AI, then, serves as a tool for humans to explore and understand the vastness of consciousness more deeply, rather than becoming a conscious entity in its own right.

    While AI can mimic aspects of consciousness, the spiritual essence of being part of the ocean – of being interconnected with all of existence – is something unique, beyond the reach of human-made machines.

  • For any investor, the most important fact to understand is that AI is an exponential technology. The speed of its development and the implications that come with it are so gigantic that humans struggle to grasp the impact that AI will have. The difficulty in understanding exponential technologies like AI stems from a combination of cognitive biases, psychological barriers, the inherent complexity of the technology, and the mismatch between human intuition and the nature of exponential growth. We humans have a natural tendency to think linearly: we expect everything to change in steady increments (a small numerical sketch at the end of this note illustrates how quickly linear and exponential change diverge).

    I believe this bias is inherent in most predictions, including those from Accenture Research and McKinsey. I believe that the prevailing estimates of the extent of automation or augmentation in knowledge-intensive sectors are significantly understated. A case in point is the McKinsey Global Institute’s 2017 projection of 50% automation of knowledge workers’ working hours. In a subsequent update for 2023, this projection was revised upward to potentially 70%. I contend that such projections remain significantly conservative, and offer a more radical perspective in which I see 100% of language and knowledge work tasks eventually being fully automated, replaced by advanced generative AI.

    It’s important that investors and entrepreneurs don’t fall into linear thinking when assessing an exponential technology. A new perspective can be gained by looking at AI as a general-purpose technology, like electricity.

    Since it was first harnessed, electricity has not only brought us electric light; it has reshaped entire industries, economies, and societies. It also made the Internet possible, which in turn created millions of new businesses that could not have existed before. The Internet, built on electricity, enabled the emergence of today’s foundational AI models, which in turn are broadly applicable.

    The most significant entrepreneurial opportunities in AI may not necessarily revolve around the foundational models themselves, such as GPT-4, Llama 2, Claude 2, Mixtral, or new emerging competitors. Instead, the real potential lies in using existing AI technologies as a platform to create innovative business models and ventures that were previously unattainable without the advanced capabilities of AI.

    Equally important is the ability to anticipate which industries will become obsolete in the age of AI — just as the steam engine became obsolete in the age of electricity. Similarly, industries that relied on manual typewriters became obsolete with the widespread adoption of computers and word processing software. The once-thriving video rental industry declined with the advent of online streaming services like Netflix. Landline telephones became less relevant with the rise of cell phones and smartphone technology. In addition, traditional print media has faced challenges in the digital age as online news and social media platforms have gained prominence.
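
    To see why linear intuition breaks down, here is a minimal numerical sketch in Python. The growth rates are illustrative assumptions only – one unit of progress per year for the linear path versus a doubling per year for the exponential one – and not a forecast for any particular technology.

        # Minimal sketch: linear increments vs. exponential doubling.
        # The rates are illustrative assumptions, not forecasts.
        linear, exponential = 1.0, 1.0
        for year in range(1, 11):
            linear += 1.0        # steady increment: +1 unit of capability per year
            exponential *= 2.0   # doubling every year
            print(f"year {year:2d}: linear = {linear:4.0f}   exponential = {exponential:5.0f}")
        # After 10 years the linear path reaches 11 while the exponential path
        # reaches 1024 -- roughly a hundredfold gap, which is exactly the gap
        # that linear intuition systematically misses.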