OpenAI Needs Adult Supervision


At first glance, OpenAI is on top of the world. The company recently announced that its revenues have surged past $10B a year, on track with expectations to reach $125B by 2029. That's more than Stellantis, Nestlé or Bank of America makes today. Though we've become accustomed to "crazy" numbers when it comes to AI companies, we do need to acknowledge that this type of growth is… extraordinary.

There are, however, clouds on the horizon. Whether from ambition, arrogance or lack of discipline, OpenAI is in trouble today. The company must overcome significant internal and market issues if it is to survive the coming years.


Top-line: OpenAI’s monetization issue

[Embedded tweet from Sam Altman]

Even as the company struggles to convert free users amid growing competition in the large language model space, its paying customers still aren't being monetized properly. OpenAI either misunderstands A) its own cost base or B) the value delivered by its products. Very unserious for a company that wishes to become a market leader.

TL;DR: OpenAI rules free chatbots (500M weekly users, 70% market share) but converts just 3% to paid, while its Pro tier hemorrhages cash; "great free product" + "vibe-pricing" will not bankroll AGI.

Bottom-line: OpenAI’s cost pressure

As frontier-model training costs continue to increase towards $1B, OpenAI's bottom line is being squeezed. The company's Operating Expenses (OPEX if you're dirty) rose to $8.7B in 2024, leading to a total net loss of $5B in the same year. And sure, the company is growing revenues fast at the same time, following the tried-and-true Silicon Valley model of breaking fast and moving things, or whatever. Amazon and Uber famously took years to turn a profit.

However, three large obstacles stand on the path to long-term profitability.

Firstly, OpenAI's 2024 losses would have more than doubled without Microsoft's discounts on compute: roughly $12B by some estimates. Because it lacks its own GPU infrastructure, OpenAI remains firmly tied to its Redmond benefactor. But as the relationship sours in real time, the Microsoft contracts appear increasingly under threat. Pursuing AGI independently and without shackles is a worthy endeavour, but without a backup plan, it will only add pressure on OpenAI's CFO (who joined only last year, likely to remedy an unsustainable situation).

Which brings us to the second issue. The two most successful companies of the past century (Apple, Nvidia) have focused on building railways (smartphones, GPUs) for people building and running trains (mobile apps, AI). Sam Altman knows this: his algorithms' success is entirely dependent on data centers and the chips that power them. Lack of direct access is an existential threat. And so, OpenAI, SoftBank and Oracle have announced the Stargate project, a $500B initiative to build a network of massive AI data centers across the US. That's a lot of money for a company still some way from profitability. Especially so given reports that the project is struggling to get off the ground.

And sure, at a $300B valuation, OpenAI benefits from an infinite money glitch, enabled by Wall Street goodwill… but the above checks will have to be written as OpenAI’s competitive advantage versus other major players shrinks. This should worry investors, regardless of the company’s shiny vision for the future.

TL;DR: With a $5B loss that would have been $12B without Microsoft GPU discounts, a struggling $500B data center moonshot, looming custom‑chip costs and $12B in CoreWeave obligations, the supposedly infinite money pile suddenly looks awfully finite as rivals narrow the gap.

External factors: increasing competition & shifting markets

The above wouldn’t be much of a problem if OpenAI had a particularly unique competitive advantage. But 2022 was a while ago, and competitors are closing the gap on algorithm quality while cutting costs. And they’re doing so fast.

Every rival that matters has already pulled one or more structural levers to drag its cost per million input tokens low enough to price well below OpenAI's $2.50/M tokens. Unless OpenAI matches or outflanks those levers, the company will bleed margin faster than it scales revenue.

First, here’s what the incumbent heavyweights are doing:

  • Google is favouring an integrated play, throwing cash from profitable business units at bespoke TPUs, trimming its compute bill by roughly 5X. The goal isn't to create a chatbot, but to improve existing offers. Result: Gemini 1.5 Pro retails at $1.25/M tokens, half of OpenAI's price today.

  • Meta is going full open source. By gifting Llama 3 weights and rolling out MTIA v2 accelerators, it shifted hosting costs onto customers while shrinking its own GPU tab. Public API rate: $0.30/M tokens. Meanwhile, the company is using its infinite ad money fountain to aggressively hire OpenAI researchers.

  • AWS bundles its Trainium chips and Nova models straight into Bedrock, then hides the bill in the monthly cloud invoice. List price sits at $2.50/M tokens, but “call your Account Exec” discounts routinely undercut that.

Then, the upstarts / the specialists:

  • Anthropic is following a partner-led approach, having convinced both AWS and Google to swap equity for GPU credits. Claude 3 Sonnet shows up at $3.00/M tokens. The company doesn't even have to sell its product: AWS and Google leverage their own sales channels to sell Anthropic's highly specialised models.

  • Mistral runs sparse, bite‑sized models and lets the internet do distribution (torrent at launch, press release optional). Medium 3 lands at $0.40/M tokens with near‑zero go‑to‑market spend.

  • DeepSeek wins the title of discount king: cache‑hit pricing bottoms out at $0.07/M tokens, while even headline rates for DeepSeek Reasoner are only $0.27/M tokens. Throughput drops, but cost‑obsessed devs won't care.
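To put those list prices in perspective, here's a rough back-of-envelope bill for a heavy workload, using the rates quoted above. This is a sketch only: real invoices depend on the input/output split, caching, and negotiated discounts, and the 1B-token volume is an assumption for illustration.

```python
# Rough monthly bill for a workload of 1B input tokens, using the
# list prices quoted in the text (USD per million tokens).
PRICE_PER_M = {
    "OpenAI": 2.50,
    "Google (Gemini 1.5 Pro)": 1.25,
    "Anthropic (Claude 3 Sonnet)": 3.00,
    "Meta (Llama 3 via API)": 0.30,
    "Mistral (Medium 3)": 0.40,
    "DeepSeek (Reasoner)": 0.27,
}

TOKENS_PER_MONTH = 1_000_000_000  # 1B input tokens (illustrative volume)

def monthly_cost(price_per_million: float, tokens: int = TOKENS_PER_MONTH) -> float:
    """Convert a $/M-token rate into a monthly bill for the given volume."""
    return price_per_million * tokens / 1_000_000

for name, price in sorted(PRICE_PER_M.items(), key=lambda kv: kv[1]):
    print(f"{name:30s} ${monthly_cost(price):>8,.0f}/month")
```

At this volume the spread is stark: the same workload costs $2,500/month on OpenAI's list price and $270/month on DeepSeek Reasoner, which is the race to the bottom the rest of this section describes.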

[Chart: AI price per million tokens by provider]

Simultaneously, a spiraling talent war is bidding up the very engineers who can close that gap. As their know‑how disperses and salaries soar, today's frontier models are sliding into commodity territory, leaving profits only for whoever can squeeze the lowest dollars‑per‑compute‑cycle.

None of this is good news for OpenAI, a company without a particularly strong structural advantage.

TL;DR: Rivals have found cheaper ways to run large models by using their own chips, open‑source weights, or burying costs in bundles, so token prices have dropped to pennies. Without a unique moat, OpenAI could bleed cash while competitors keep racing to the bottom.

Some simple solutions (from one dummy to some of the smartest people on the market)

There are a few ways OpenAI can move forward, ranging from the quick and simple to the long-term and very difficult. But if the company cannot do hard things, then it does not deserve its $300B valuation.

A first step would be to offer "off-peak" discounts and pay-as-you-go token packs for price-sensitive pro users who still want to use some of the best models out there.
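A minimal sketch of what off-peak pricing could look like. The discount rate and the overnight window here are invented for illustration; nothing below reflects actual OpenAI pricing.

```python
from datetime import datetime, timezone

# Hypothetical off-peak discount: the 40% rate and the 22:00-07:00 UTC
# window are invented for illustration, not OpenAI's actual pricing.
BASE_PRICE_PER_M = 2.50   # USD per million tokens (list price cited above)
OFF_PEAK_DISCOUNT = 0.40  # 40% off during the quiet overnight window
OFF_PEAK_HOURS = set(range(0, 7)) | {22, 23}  # 22:00-07:00 UTC

def token_price(when: datetime) -> float:
    """Return the $/M-token rate, discounted during off-peak hours."""
    if when.hour in OFF_PEAK_HOURS:
        return BASE_PRICE_PER_M * (1 - OFF_PEAK_DISCOUNT)
    return BASE_PRICE_PER_M

# Batch jobs scheduled overnight pay $1.50/M instead of $2.50/M.
print(token_price(datetime(2025, 1, 1, 3, tzinfo=timezone.utc)))
```

The point of the scheme is load-shifting: batch and latency-tolerant traffic moves to hours when the GPUs would otherwise sit idle, so the discount costs OpenAI close to nothing in marginal compute.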

OpenAI should also build a smart router that auto-picks the best or cheapest model for each query, limiting wasted energy and compute (yes, this includes geographic arbitrage). In the same vein, serving free users' queries on older, less efficient chips seems like an obvious money-saving lever.
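Such a router can be as simple as a difficulty estimate in front of a cheapest-first model list. The sketch below is illustrative only: the model names, prices, thresholds, and the keyword heuristic are all assumptions (a production router would use a learned classifier, not string matching).

```python
# Hypothetical cost-aware router: estimate query difficulty, then send
# easy queries to a cheap model and hard ones to a frontier model.
MODELS = [
    # (name, $ per million tokens, max difficulty it should handle)
    ("small-cheap-model", 0.30, 0.3),
    ("mid-tier-model",    1.25, 0.7),
    ("frontier-model",    2.50, 1.0),
]

def estimate_difficulty(query: str) -> float:
    """Crude stand-in heuristic: long or reasoning-heavy queries score higher."""
    score = min(len(query) / 2000, 0.5)
    if any(k in query.lower() for k in ("prove", "debug", "derive", "optimize")):
        score += 0.4
    return min(score, 1.0)

def route(query: str) -> str:
    """Pick the cheapest model rated for the query's estimated difficulty."""
    difficulty = estimate_difficulty(query)
    for name, _price, ceiling in MODELS:  # list is sorted cheapest-first
        if difficulty <= ceiling:
            return name
    return MODELS[-1][0]
```

The economics are the whole point: if most traffic is trivia and boilerplate, routing it to a $0.30/M model instead of a $2.50/M one cuts serving cost by ~88% on those queries without the user noticing.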

Finally, OpenAI needs to get serious on specialised enterprise software (for health, finance…). That’s where the money is, and Altman still has the halo necessary to single-handedly sign lucrative, “CEO to CEO” deals with major firms.

And then there is cost management for the Big Bets. If OpenAI does build and operate its own data centers, it needs to ensure it can do so at the lowest possible cost for A) chips (especially if they're its own), B) buildings ($9M per MW max, lower than most hyperscalers), and C) energy (aim for renewables-to-grid-revenue schemes).

Some of the above is likely being implemented already; who knows. Regardless, implementing the playbook above is not optional. Google, Meta, AWS, Anthropic, and DeepSeek are already doing at least half of it. OpenAI has the brand; what it lacks is the boring, repeatable operator muscle. If Sam Altman & Co. would like to keep that $300B sticker, it's time to price like an adult, sell where the enterprise money lives, and run hardware cheaper than the next guy. Miss on any one of those and the next fundraise turns into a down‑round funeral.

TL;DR: OpenAI keeps its $300B badge only if it prices better, with off‑peak/token packs, auto‑routes queries to the cheapest models and chips, lands vertical deals in health and finance, and runs rock‑bottom‑cost data centers. Miss any one of those and the next raise is a down‑round funeral.

Conclusion

OpenAI will probably be fine. When you’ve got a trillion‑watt brand halo, capital still flows like it’s coming out of a busted hydrant. But halos flicker, and the day Wall Street decides the magic has worn off, that “infinite‑money” glitch snaps to zero in a single earnings call. The fix isn’t another demo‑day spectacle; it’s dull, grown‑up discipline: price with rigor, sell where the margins live, and run the hardware shop like your life depends on every kilowatt. In short, become the world‑class operators your valuation already assumes you are.

