• There is a book by Cal Newport called “So Good They Can’t Ignore You.” It basically says: focus on the value you can offer and on the rare and valuable skills you can acquire.

    I cautiously predict that in order to be “so good they can’t ignore you” in the future, all you need to do is ignore ChatGPT and every other LLM and just do intellectually challenging work every day. In 3–6 years you’ll be one of the sought-after, employable, and highly paid individuals simply because you are cognitively sovereign.

    Yet, in order to survive the next 3–6 years without falling behind competitively against those who do use ChatGPT et al., you must step away from computer-based knowledge work altogether. In other words: have a career that allows you to provide value (and make money) without using a computer.

    Besides the obvious blue-collar jobs (which are themselves antifragile if specialized), this points particularly to sales- and deal-related work: sales, M&A, brokerage, and leader/founder/owner roles.

    By pursuing such AI-resilient jobs, while spending early mornings, late evenings, or weekends writing essays, doing manual research, solving difficult math or physics problems, or simply reading real books, you’ll be so good they can’t ignore you.

    It will require much less effort than ever before – not because less effort is required for mastery, but merely because you are now part of the control group against a massive population that will experience cognitive atrophy, become dependent on AI, and thereby grow unoriginal, homogeneous, and uncreative.

  • I always thought that the world is big. That there are many countries to choose from if you ask yourself “Where should I live?” The truth is: the world is actually very small. Once you have certain criteria, the number of countries that qualify collapses from hundreds to only a handful of options. For example: if you believe in home-schooling and in no legally required childhood vaccination, and you like sunshine, only 7 countries match these specific criteria. Of these, some are shitholes, which leaves you with 5 options. If you don’t want to live completely detached from society (in the jungle or on a remote island), 3 options are left. If you want to own property, there are 2 left: Portugal and Panama. If you don’t like the EU, Panama is the only option, with its own downsides – unless you can get a medical exemption, which might make the USA the best option after all. (A sketch of this filtering funnel follows below.)

    Have you ever defined which principles are truly important to you – and then checked where in the world these criteria are fulfilled? You will be surprised.
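    To make the funnel concrete, here is a minimal sketch in Python – with hypothetical, illustrative data (the criteria values per country are assumptions for the example, not researched facts):

    ```python
    # Each criterion is a predicate; applying them one by one collapses
    # the list of options, just like in the example above.
    countries = {
        "Portugal": {"homeschooling": True, "no_vaccine_mandate": True, "sunny": True,
                     "connected": True, "property_ownership": True},
        "Panama":   {"homeschooling": True, "no_vaccine_mandate": True, "sunny": True,
                     "connected": True, "property_ownership": True},
        "Germany":  {"homeschooling": False, "no_vaccine_mandate": False, "sunny": False,
                     "connected": True, "property_ownership": True},
        # ... imagine ~190 more entries here
    }

    criteria = ["homeschooling", "no_vaccine_mandate", "sunny",
                "connected", "property_ownership"]

    options = list(countries)
    for criterion in criteria:
        options = [c for c in options if countries[c][criterion]]
        print(f"after '{criterion}': {len(options)} left -> {options}")
    ```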

  • Stagnation

    Most seem to overlook that this economic stagnation is global, not specific to their country, and only getting started. Without the ambitious valuations and debt-funded capex for AI, the US economy would be just as stagnant as the EU (or even more so).

    Seven AI-focused companies now make up >30% of the entire US stock market. Banks are becoming increasingly exposed to AI companies through credit lines.

    Globally, growth rates are decelerating. Not only in Germany and the EU more broadly, but in basically all non-US developed economies, we are seeing growth below 1–2% and basically zero job creation. Globally, honest inflation is higher than honest GDP growth. China faces massive underreported demographic challenges, and the impact of tariffs is brutal. Debt dynamics in the US, in the relevant EU countries, and in emerging economies are vicious. Mexico stalls, Brazil slows. If AI valuations and investments falter for whatever reason (unmet productivity gains or overinvestment), the US will visibly be in a recession, which would trigger a global one.

    Ergo: AI must work and create real productivity growth in traditional economies. That is honestly a lot of pressure on the big AI labs and researchers.

  • Warren Buffett ends his Thanksgiving letter with timeless advice:

    • Don’t beat yourself up over past mistakes.
    • Get the right heroes and copy them.
    • Decide what you would like your obituary to say and live the life to deserve it.
    • When you help someone in any of thousands of ways, you help the world.
    • The cleaning lady is as much a human being as the Chairman.

    His letter reads as though he feels his death is near. I learned a lot from everything Warren Buffett has written over his lifetime. I am extremely grateful for that. If there is one thing he inspired me to do, it is to write publicly. While it feels as if nobody is reading what I write today, I know that there will be one single person 41 years from now who will be just as grateful that I published my thinking instead of burying it.

  • After spreadsheets became standard in M&A, deals closed significantly slower. Typical timelines went from 2–4 months LOI to close (1970s to 1980s) to 6–12+ months LOI to close (1980s to 1990s). And deals didn’t improve: 50–70% of acquisitions destroyed shareholder value, the same as pre-spreadsheet. Basically, deals got slower, but not any better.

    Why did that happen? Analysis paralysis, an illusion of precision, the replacement of judgment with calculation, and a shift of accountability from CEO and CFO to the analysts building the spreadsheets. Almost nobody except a few dealmakers like Warren Buffett or Peter Lynch realized it at the time (“I’ve never seen a deal that didn’t look good in a spreadsheet”).

    Perhaps you’ve recognized some parallels: with AI, we are repeating the pattern, only faster and deeper. If human nature stays the same, it will result in an efficiency paradox: everything will be analyzed and created even faster, but with more output will come slower completion. It will lead to false confidence and zero responsibility (“The AI models said so”). And authority is shifting from human to AI much faster than it ever shifted from human to spreadsheet.

    What will happen can perhaps be called a quality collapse. The average quality will increase, but the top-end quality will decrease. Everything will be crowded with AI-generated “pretty good,” and what will be missing is excellence. At the same time, the AI wave is hitting a succession/retirement wave: senior experts with real experiential intuition and judgment are retiring, and juniors completely dependent on AI will have to take over.

    While it was previously a recognized truth that 30 years of experience >>>> 5 years of experience, we now live in the illusion that 5 years of experience + AI = 30 years of experience. We won’t realize the difference until totally novel problems arise that AI can’t handle because they are not in the training data – and by that stage, humans will already be cognitively crippled.

    We think we can just go back and “do it without AI if needed,” but it will be too late, because neural pathways are atrophying right now. Organizations are reshaping all their processes around AI. Skills are no longer being taught to the next generation but to AI. We are basically already in a state of dependency that looks like empowerment, and we won’t see it until the tool is removed.

    Try doing a 1970s M&A deal with just pen, paper, and calculators. How many people globally could do it? The same will happen with AI, only faster. The result – I fear – is that innovation in many organizations will slow down and they will commoditize.

    AI-driven productivity gains are a dangerous illusion. Not because of AI (an extremely great and powerful tool) but because of how we work with it. Spreadsheets optimized for what was modelable, not for what was innovative and couldn’t be seen in numbers. AI will do the same thing, not exclusively in finance but in all domains.

    What makes AI perhaps more “dangerous” is that it has no barrier to entry. It will enable a select (rare) few individuals to truly master what they do (driving real innovation), but the majority (if they are not very careful and intentional) will destroy their own personal economic value.

    With spreadsheets, you had to learn formulas, understand logic, and debug errors – which was a protection against overuse. AI has none of that: nothing to learn (if you are really honest, dear AI coaches), no debugging, no logic, no barrier = the instant universal adoption we are observing.

    So, back to the original observation: with spreadsheets, everyone got more productive, but deals took longer and outcomes didn’t improve. Now with AI, everyone is getting more productive, but: are projects finishing faster? Is quality improving? Is innovation in the median corporation accelerating?

    I think: with spreadsheets, people began optimizing for “the model says yes” instead of “the deal is actually good.” Are we now optimizing AI use for “the AI approves” instead of “this is actually valuable”?

    We know that when a measure becomes a target, it ceases to be a good measure (Goodhart’s law).

    This is by no means an anti-AI or anti-spreadsheet stance. But I hope to provoke some careful thought about the relationship we have with AI and how to avoid the analysis paralysis of the spreadsheet era.

  • Deep Work 2.0

    Deep work is a term Cal Newport uses to describe work performed in a state of absolute, distraction-free concentration that pushes our cognitive capabilities to their limit. I never read the book because the idea is just so simple: schedule a time when you do real work – no social media, no notifications, just you and the work in front of you.

    When I first heard about Deep Work, it was not a new concept to me. I had already practiced deep focus sessions regularly – usually early in the morning. But it definitely made me more serious about them. No matter how disciplined I tried to be, the infinite dopamine from social media and the constant notifications from my phone ever more often crushed my flow state. Years ago, I tried and then purchased the blocking software Cold Turkey (for Mac and PC) and an Android app called Digital Detox. Both apps are absolutely great (yes: one-time purchases!). They allow you to block anything you want (for example, social media and YouTube) and at the same time make it extremely difficult (if not impossible) to circumvent them.

    Recently, I felt quite unhappy about the lack of progress towards the goals I had set for myself. One part of the equation certainly was the birth of our daughter. Yet I still managed to schedule at least one Deep Work session each day. So what was the missing link? I realized that it is not only social media, YouTube, and news websites anymore – LLMs are now just as distracting as social media.

    Today I created a new blacklist filter in Cold Turkey that blocks all LLM apps and URLs. I may be one of the first people in the world to do so, but I realized that – for my ADHD-type brain – having AI accessible non-stop is as much a distraction as social media feeds: a source of noise and cheap dopamine.
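    If you want to build a similar blocklist, here is a hypothetical starting point – expressed as a Python list purely for illustration (in Cold Turkey you would add such domains to a block list by hand, and the exact set is an assumption that depends on which services you actually use):

    ```python
    # Illustrative LLM blocklist: the major chat frontends as of this writing.
    # Desktop apps would be blocked separately via application blocking.
    llm_blocklist = [
        "chatgpt.com",
        "chat.openai.com",
        "claude.ai",
        "gemini.google.com",
        "copilot.microsoft.com",
        "perplexity.ai",
        "grok.com",
    ]
    ```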

    I realized that using LLMs blindly leads to procrastination, analysis paralysis, decision fatigue, unoriginal thought, loss of free will, decline of deep-thinking capacity, atrophy of overall cognitive function, and declining writing skills.

    To be totally honest: Instead of working, I prompted. Instead of writing, I prompted. Instead of thinking, I prompted.

    My personal insight is that I must be just as intentional and selective about using AI as I must be with social media. Instead of using it all the time, I now limit it to very specific tasks where it adds exponential value to the work.

    Let’s be clear: I’m not avoiding AI. I’m also not badmouthing it. I believe AI is one of the greatest technologies humans have invented. What I can tell from my personal experience and observations: AI can be a powerful lever or a heavy burden. Therefore, I believe it is time for Deep Work 2.0: deep focus sessions in which you intentionally do not use AI at all – at least not actively (i.e., you only consult pre-prompted conversations or Deep Research reports that you saved as a PDF or Markdown file before the session).

    What if not only distractions and social algorithms, but also (pretended) AI efficiency, is a deadly enemy of our flow state?

  • More than 50% of recently published website texts are now written by AI. This means that from today forward, the majority of all newly published text is already synthetic. The same will hold true for every other form of content: images, video, and audio. In and of itself, AI-written text shouldn’t be such a large issue. The problem is not text written by AI, but that we have simultaneously crossed the point where you can no longer reliably distinguish AI-generated content from human content. I have a strong opinion that AI should sound like AI. I also think that AI chatbots should be apparent as such, and that AI-generated images and videos should have deeply embedded watermarks. This is also why I believe parts of the EU AI Act and the California AI Transparency Act are net-positive for humanity. But why do I believe so?

    The most pressing issue with AI-generated content is much less the capability or alignment of AI models than the collapse of the epistemic commons before we even arrive at generally intelligent or superintelligent AI models. Here is what I mean:

    Most text is now AI-generated, and within months the same will be true for video, images, and audio. When the cost and effort of creation collapse to zero, two things vanish simultaneously: trust and meaning.

    We can no longer casually trust what we see. Every text, every video, every expert opinion becomes suspect. As social primates who evolved to trust patterns and authorities, we are losing the ability to distinguish signal from noise at the exact moment we need it most.

    Perhaps the deeper crisis isn’t skepticism but meaning collapse. Scarcity and effort have always been core to how humans assign value and significance. When infinite content can be generated instantly and automated for any purpose, these anchors disappear.

    Most look at this as primarily economic disruption, but perhaps it is much more psychological and civilizational because we are eroding the foundations of shared reality before we have built alternatives.

    Then there is this slippery slope: from now on, humans will increasingly interact with and read texts written by AI systems trained on AI-generated texts. Soon the same will be true for photos, videos, and audio. This training loop has (at least) the potential to create cultural drift in directions not yet predictable. One thing we can be quite certain about is that our human values are already being reshaped by AI systems in ways we cannot track. This in turn makes the question of “alignment” both more important and, at the same time, secondary.

    The most pressing risk to human civilization is therefore not a hypothetical “misaligned” superintelligence, but rather the risk of arriving there divided – socially and epistemically.

    What must be done is certainly harder than the alignment of AI systems:

    • Rebuilding trusted information infrastructure
    • Creating new forms of verifiable authenticity (a toy sketch follows below)
    • Developing cultural “antibodies” to synthetic manipulation
    • Building meaning-making structures that aren’t dependent on scarcity or effort
    • Preserving and strengthening human coordination capacity
    • Etc.

    This is harder than “alignment,” because the more we look at these to-dos from a federal or global perspective, the more impossible they become.
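    To make “verifiable authenticity” slightly less abstract, here is a toy sketch of its lowest technical layer: an author signs content so that anyone can verify it was not altered. This is an assumption about one possible building block (digital signatures via the Python cryptography package), not a proposal for the full infrastructure:

    ```python
    # Toy provenance sketch: sign a text with a private key; anyone with
    # the matching public key can verify integrity and authorship.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # stays with the author
    public_key = private_key.public_key()        # published openly

    text = "I published this and did not bury my thinking.".encode()
    signature = private_key.sign(text)           # attached alongside the text

    try:
        public_key.verify(signature, text)       # raises if text or signature changed
        print("authentic")
    except InvalidSignature:
        print("tampered or forged")
    ```

    The hard part, of course, is not the cryptography but everything around it – key distribution, identity, and social adoption – which is exactly why the list above is harder than model alignment.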

    Now, to move from the theoretical to the practical: Who are the 5 to 150 people you can still genuinely trust and coordinate with? Because everything else either emerges from functional groups, or it won’t emerge at all.

  • When looking at AI, people are fixated on surface-level effects: economic disruption (jobs disappearing), alignment risks (AI going rogue), or ethical dilemmas (bias of LLMs). While those are all real, they also seem to be distractions from the real shift. The current conversations are not about whether we achieve AGI anymore but about when – some say 10 years, I say it’s basically already here (it all depends on the definition of the term really). By definition, AGI will match and then surpass human intelligence in every single domain: strategic, creative, you name it. Once that threshold is crossed (and it’s closer than many admit), a feedback loop kicks in. AI designs better AI, which designs even better AI, ad infinitum.

    Because we are not yet there, we debate AI as a tool. But as soon as we cross that threshold, AI will predict, simulate, and optimize anything logic-based with such absolute precision that human input becomes unnecessary or perhaps counterproductive. Humans – and that means governments, corporations, and individuals – will outsource everything, from policy to life choices, because AI will present the best logical, data-backed option. And because it is so much better at logic, you stop questioning it. The “alignment” problem is therefore ultimately less about making AI safe for humans than about preparing humans to accept their irrelevance in logical intelligence and – in my opinion – transitioning (or better: re-connecting) them to their intuitive intelligence. If we fail at this, the majority of humans will experience free will only as an illusion.

    We humans derive meaning from struggle, achievement, and social bonds. Within the next 10 to 20 years, we won’t need to struggle to achieve anymore. Achievement will be handed out (or withheld) by systems we cannot understand. What is left are social bonds. But will even those remain? We already see AI-mediated interactions replacing genuine connections (whether emails, eulogies, or virtual AI companions). If we do not pay attention and re-connect with other humans (our tribes), we risk real psychological devastation at scale.

    If AI is centralized, it will be operated by an elite (that’s at least the current trend). Not only will this elite gain god-like power, but it will form another elite class: humans who are augmented by superintelligence through direct neural interfaces or exclusive AI enhancements. What about the rest? An underclass kept alive by a universal-basic-whatever, but without purpose or power?

    The problem really is: once we cross that threshold, it won’t be fixable. We had better act collectively now, or the world will be run by a handful of super-enhanced humans and their AI overlords.

    In 2025, these thoughts read like speculation. But based on my observations of how the majority of humans have started using and adopting AI, the trajectory seems obvious (to me). AI is optimizing for efficiency. Companies are adopting it as well. Individuals must too – or they are no longer competitive. What is the antidote? I am divided. I don’t believe AI must lead to such a dystopia. I am much more convinced that it is our best shot at achieving utopia. But there is a very thin line between the two: us humans, and how we collectively act. And acting is much (!) less about technological adaptation (from becoming AI “experts” to Neuralink cyborgization) and infinitely more about re-connecting to what makes us uniquely human: our consciousness, our connection to God, our one Creator, and our unity. Meaning will come from non-competitive pursuits, AI alignment from balancing logic with consciousness, and happiness from real, deep human social connections. Intelligent machines – no matter how superintelligent they turn out to be – can never be conscious. Perhaps it is a wake-up call: we lost our spiritual connection to consciousness – and we must re-connect.

  • If you exclude AI-driven investments, the US economy mirrors Germany’s near-stagnation, with near-zero GDP growth in the first half of 2025. More than 90% of US GDP growth stems from AI and related sectors. Large parts of the >$375B in AI investments scream “bubble”; only a small percentage of companies and labs have a unique moat. Should it burst, for whatever reason, the US will face a severe recession.

    The AI bubble could burst from two opposite extremes: exponential technological progress or the lack thereof. In the case of exponential progress, imagine post-LLM architectures (e.g., Mamba) slashing compute needs by 100x. That would strand GPU-heavy datacenters and make >$2.9T of mostly debt-financed datacenter investments obsolete. On the other side: if LLMs plateau without ROI, the hype will fade like the dot-com bubble, tanking valuations despite the tangible capex.

    Whatever the case, if it pops, the US could spiral into a vicious reinforcing cycle: recession → layoffs/unemployment → consumer pullback → deflationary spiral (or stagflation if supply shocks hit) → political extremism. This reminds me of pre-WW2 Europe. The US must diversify growth beyond AI now.

    What can Germany learn from this? The obvious lesson is to accelerate AI adoption and sovereignty. Just as the US stagnates without AI, Germany with AI could grow again. At the same time, imitating the US is a fragile lifeline. Perhaps the smartest idea is to reject the hype cycle altogether: let Berlin-based AI startups do their thing, rent US-based AI software, and focus all energy on high-tech breakthroughs in the decentralized Mittelstand.

  • One of the many things we experience is the simultaneous reaching for more alongside the subconscious knowing that the little we have is all we truly need. We seek noise, though silence holds the answer. We look to the future, we look to the past, yet we forget the now.

    We live here. We live now.

    The illusion of the future and the weight of the past hold us captive. It is like a pendulum, swinging from what was to what ought to be. From what made us happy to what might make us unhappy. We cling instead of letting go. We try to force the future into submission, forgetting all the while that the future emerges with effortless grace in the here, in the now.

    Let us flow. Not blindly. With visionary intention. Instead of waiting for tomorrow, let us be today who we wish to be tomorrow. It is what is born today that shapes the morrow.