#AI horizons 25-08 – The Bubble, the Breach, and the Billions – AI next stage

[ad_1]

AI has never been more debated, more invested in, or more misunderstood. On one side, we hear constant warnings of an AI bubble: research studies show limited impact on corporate P&L, with pilots stalling and organizational maturity lagging far behind. Even Sam Altman has compared the current market frenzy to the dot-com era, warning that investors are likely to lose a “phenomenal amount of money” before the dust settles. On the other hand, trillions in capital are pouring into infrastructure, and more than a billion people now use generative AI every week. ChatGPT alone has over 700 million weekly active users; add Gemini and other competitors, and the numbers soar even higher. And that doesn’t even count the billions touched by AI passively—every product recommendation, every streaming suggestion, every algorithmic decision in the background of our lives.

The paradox is clear: AI is already everywhere yet still struggling to prove itself in balance sheets. We call this an AI bubble, but it doesn’t look like the bubbles of the past.

History shows that every new technology has its turning point, and usually, that moment is a disaster. The railways had the Versailles crash barely twenty years after the first regular passenger service. The internet had the Morris Worm in 1988, spreading chaos through thousands of UNIX systems. Cybersecurity saw its own milestone in 2024, when a flawed update from CrowdStrike triggered one of the largest outages in history. Disasters mark the transition from experimentation to reality. They show us when technology stops being just a toy and becomes something critical.

By that standard, AI has not yet crossed the threshold. Until now, the worst outcome of generative AI has been a wrong answer to a complex question. Yes, we’ve seen serious side effects of AI —the UN fact-finding mission that accused Facebook’s recommender system of fueling violence against the Rohingya should count as an AI-enabled disaster—but GenAI itself hasn’t yet triggered the kind of systemic breakdown that makes the evening news. That may be about to change.

Because chatbots are one thing, agents are another. Chatbots answer; agents act. And when agents are empowered to design their own workflows, chain tasks, generate code, and interact with other software in real time, the game changes. The Brave “Comet” case offered a glimpse of what is coming. Researchers showed they could hide malicious instructions in normal web content—an invisible line of text on Reddit, for example. When the AI browser’s “summarize this page” function was triggered, the system didn’t just summarize—it executed the hidden command. Suddenly, the AI was resetting accounts, reading one-time passwords in Gmail, and sending them back to attackers via a Reddit comment. No malware, no exploit—just text. Because for an LLM, everything is text.

This is the lethal trifecta: AI with access to untrusted external data, private user accounts, and the ability to act on behalf of the user. At that point, a large-scale AI-induced disaster is not only possible, it’s inevitable. The question is when, not if.

And timing matters because AI has reached the stage of mass intelligence. We are no longer in the experimental, early-adopter phase. AI is as accessible as a Google search, but without anything close to the governance, security, or user maturity needed. This is the most dangerous combination imaginable: vast adoption, immature controls, and technology sophisticated enough to operate autonomously.

Meanwhile, the geopolitical layer cannot be ignored. China has unleashed new open-weight models—GLM 4.5 and DeepSeek V3.1—that show competitive results in coding benchmarks (with GLM 4.5 surpassing GPT-4.1 on SWE-Bench), are designed to be “agent-native,” and are priced 9 to 35 times lower than GPT-5. On paper, it looks like a revolution. In reality, it’s a trap. Low prices and MIT-style licenses are seductive, but they obscure the deeper issue: trust. Do you really want to hand your strategic workflows to opaque models from an authoritarian state that has already perfected industrial dumping in critical sectors from EVs to solar panels? Imagine one of these ultra-cheap, barely governed agents running inside your systems, interacting not only with your data but with your partners’ and customers’ ecosystems. Benchmarks and cost advantages mean nothing if security collapses.

This is why governance is now non-negotiable. The U.S. NIST has already proposed new overlays for AI security, covering assistants, predictive systems, single agents, and multi-agent workflows. Europe has the AI Act, with its own risks of overregulation but at least recognition that AI cannot remain a regulatory wild west. The challenge for boards is not whether to adopt AI—that’s a given. The challenge is how to govern it, where to draw the red lines, and which models can be trusted in the first place.

Because AI is not just transforming industries, it’s reshaping labor markets. Stanford researchers, analyzing millions of payroll records, have already measured a 13% decline in employment for younger workers in AI-exposed fields like software development and customer service since late 2022. Older workers with tacit knowledge have been spared, for now. AI is not replacing experience, but it is making formal education less valuable at entry level. At the same time, enterprises are drowning in what I call the “shadow AI economy.” Almost 90% of employees admit to using AI tools at work, while only 40% of companies have official subscriptions. AI is everywhere inside organizations, but often invisible, unmanaged, and unbudgeted.

This disconnect is fueling the bubble dynamic. MIT’s State of AI in Business 2025 study found that 95% of enterprise pilots fail to deliver revenue acceleration. Still, the study itself is based on a relatively limited sample and mixes surveys, interviews, and public deployments, useful as a snapshot, but not conclusive scientific evidence. Its findings mirror what many of us see in practice, yet they should be read as indicative rather than definitive. Purchased tools succeed two-thirds of the time; internal builds only a third. That isn’t a failure of models, it’s a failure of organizations. Companies are throwing money at AI without understanding how to integrate it. They overspend on sales and marketing pilots because results are easy to show, while underfunding back-office and finance processes where ROI is often higher. The result is that trillions are invested, value is generated off the books, but very little hits the P&L.

Does that mean AI is a bubble? Yes and no. There is hype, speculation, and easy money chasing startups with nothing more than an “AI-powered” slide deck. Many investors will lose fortunes. But unlike the dot-com crash, the infrastructure being built today will not vanish. Data centers, GPUs, and cloud platforms are durable assets. Microsoft, Google, and even Oracle are already monetizing through record cloud revenues. They are not just selling the dream, they are billing the consumption. The bubble may burst at the startup level, but the giants will consolidate and endure.

The real risk is twofold: first, that the first AI disaster comes sooner than we think, and second, that Western firms sleepwalk into strategic dependency on untrusted Chinese models. The outcome will not just be about money, it will be about security, sovereignty, and control of the most powerful general-purpose technology since electricity.

AI is not going away. It is the defining technology of our era. But the way we adopt it will determine whether the age of mass intelligence brings prosperity or catastrophe. Governance, trust, and geopolitical awareness must now move to the center of the boardroom agenda. The first AI disaster is coming. The only question is whether we are prepared.


This entry was posted on September 2, 2025, 9:20 am and is filed under AI. You can follow any responses to this entry through RSS 2.0.

You can leave a response, or trackback from your own site.

[ad_2]

Share this content:

I am a passionate blogger with extensive experience in web design. As a seasoned YouTube SEO expert, I have helped numerous creators optimize their content for maximum visibility.

Leave a Comment