
OpenAI's Amazon-level ambitions have one big problem [Business Insider]

When Sam Altman took the stage at a recent tech conference, the message was unmistakable: OpenAI isn't just a research lab anymore. It's a platform. It's an infrastructure play. It wants to be the Amazon Web Services of artificial intelligence—the layer upon which the entire economy runs. The vision is grand, the funding is astronomical, and the ambition is borderline imperial. But for all the talk of superintelligence and trillion-dollar valuations, OpenAI has one glaring problem that nobody in the boardroom seems willing to address directly: the fundamental economics of inference don't work the way the company thinks they do.

The Amazon playbook doesn't fit

Let's give credit where it's due. Amazon Web Services succeeded because it commoditized a previously expensive, specialized resource—compute capacity. Jeff Bezos bet that if you could spin up a server for pennies an hour, developers would flood in. And they did. The unit economics were simple: Amazon bought servers at scale, ran them efficiently, and sold the spare cycles at a margin. That model works because a server's marginal cost approaches zero once it's built and running. A single AWS instance can host a thousand customers' WordPress sites. The cost per transaction is effectively nothing.

OpenAI, on the other hand, has to run a new, expensive computation for every single query. Every time you ask ChatGPT to summarize a 50-page PDF or generate an image of a cat in a spacesuit, it burns through GPU cycles. Those cycles cost real money—electricity, cooling, hardware depreciation, and the constant need to upgrade to the next generation of chips. Worse, the demand is unpredictable. A viral tweet can send inference costs through the roof. There is no "idle capacity" to sell at a discount. Every user is a direct, variable cost.
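The contrast between the two cost structures can be sketched in a few lines. All numbers below are hypothetical, chosen only to show the shape of the curves, not to reflect actual AWS or OpenAI pricing:

```python
# Illustrative unit-economics sketch -- every figure here is made up.

def shared_server_cost(users: int, fixed_monthly_cost: float = 50.0) -> float:
    """Hosting model: one fixed server bill, no matter how many users pile on."""
    return fixed_monthly_cost

def inference_cost(users: int,
                   queries_per_user: int = 100,
                   gpu_seconds_per_query: float = 2.0,
                   gpu_cost_per_second: float = 0.001) -> float:
    """Inference model: every query burns dedicated GPU time, so cost
    scales linearly with usage."""
    return users * queries_per_user * gpu_seconds_per_query * gpu_cost_per_second

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} users: hosting ${shared_server_cost(n):,.0f}/mo, "
          f"inference ${inference_cost(n):,.0f}/mo")
```

Under these toy assumptions, the hosting bill stays flat at $50 a month while the inference bill grows from roughly $200 to roughly $20,000 as users scale 100x. That divergence is the article's point: there is no fixed asset to amortize away.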

The scaling paradox

OpenAI's solution to this problem has been to push for ever-larger models. The theory is that a bigger, smarter model will be more efficient per task—it will need fewer tokens to answer a question, require fewer retries, and hallucinate less. That's true, to a point. GPT-4 is more efficient than GPT-3 in terms of quality per token. But the underlying physics of transformer models means that the relationship between model size and cost is brutally nonlinear. The jump from GPT-3 to GPT-4 reportedly required on the order of 100x more training compute, and the inference cost per query remains significantly higher.
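The "fewer tokens doesn't save you" dynamic can be made concrete with a standard rough approximation: a dense transformer's forward pass costs about 2 FLOPs per parameter per generated token. The model sizes below are illustrative (a GPT-3-scale baseline and a hypothetical 10x-larger model), not OpenAI figures:

```python
# Rough forward-pass cost model: ~2 FLOPs per parameter per generated token.
# This is a common textbook approximation, not OpenAI data.

def inference_flops(params: float, tokens: int) -> float:
    """Approximate FLOPs to generate `tokens` tokens with a dense model."""
    return 2 * params * tokens

baseline = inference_flops(175e9, 500)        # GPT-3-scale model, 500-token answer
larger = inference_flops(10 * 175e9, 300)     # hypothetical 10x model, terser answer

# Even answering in 40% fewer tokens, the 10x model costs 6x more per query:
print(larger / baseline)   # → 6.0
```

Token efficiency helps, but it only dilutes the size multiplier; it doesn't cancel it.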

And here's the kicker: the market is already pricing in these savings. Users and businesses expect better answers at the same price. If OpenAI charges $20 a month for ChatGPT Plus today, they can't suddenly raise it to $200 because the model got smarter. The competition—from Google's Gemini, Meta's Llama, and a dozen open-source alternatives—keeps the pricing pressure on. So the more OpenAI invests in bigger models, the more they have to sell them at razor-thin margins, hoping volume will make up for it.

The enterprise trap

OpenAI's real target is enterprise contracts. They want to be the backbone of customer service chatbots, internal knowledge bases, and automated coding assistants. This is the AWS dream—recurring revenue from Fortune 500 companies. But enterprise buyers are notoriously stingy. They want custom SLAs, data privacy guarantees, and integration support. They also demand predictable pricing. A company building a customer service bot cannot afford a 10x cost spike because a new model upgrade hit the market. They want fixed costs per query, and they will negotiate hard.

This creates a fundamental tension. OpenAI needs to invest billions in R&D and compute infrastructure to stay ahead of the competition. But they also need to sell their product at a price point that enterprise buyers will stomach. The only way to square that circle is to either massively scale up volume—which requires more compute, more cost—or to find a breakthrough in efficiency that nobody else has discovered. The latter is a moonshot. The former is a treadmill.

Where the money actually goes

Let's look at the numbers. OpenAI is reportedly burning through $5–7 billion a year on compute and salaries. Their revenue is growing fast—maybe $2–3 billion this year—but it's still a long way from covering costs. Investors are betting that the revenue curve will eventually outpace the cost curve. That's the classic Silicon Valley growth story. But in this case, the cost curve isn't just fixed hardware; it's variable and scaling with every new user. Amazon's AWS had a similar cost structure, but the marginal cost of serving a new customer on a shared server was near zero. For OpenAI, every new customer brings a new, real marginal cost.
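The treadmill can be shown with a back-of-envelope calculation using the midpoints of the ranges reported above. The split between fixed and variable costs is an assumption (60% variable is hypothetical), and the key premise is the article's: compute cost scales with usage:

```python
# Back-of-envelope sketch using the article's reported ranges.
# The 60% variable-cost share is an assumption for illustration only.

revenue = 2.5e9          # midpoint of reported $2-3B annual revenue
costs = 6.0e9            # midpoint of reported $5-7B annual burn
variable_share = 0.6     # hypothetical: fraction of costs that scale with usage

fixed = costs * (1 - variable_share)
variable = costs * variable_share

# If usage triples and revenue grows in lockstep, variable costs triple too:
growth = 3
new_revenue = revenue * growth
new_costs = fixed + variable * growth

print(f"revenue ${new_revenue/1e9:.1f}B, costs ${new_costs/1e9:.1f}B, "
      f"gap ${(new_revenue - new_costs)/1e9:.1f}B")
```

Under these assumptions, tripling the business grows revenue to $7.5B but costs to $13.2B: the annual gap widens from $3.5B to $5.7B. Whenever variable cost per unit exceeds revenue per unit, volume makes the hole deeper, not shallower.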

The most telling sign of this problem is OpenAI's own behavior. They have started limiting free-tier usage, throttling high-volume users, and pushing for "tiered" access that charges more for premium features. These are the moves of a company that knows its unit economics are broken, not a company that is confidently scaling into dominance. It's the same pattern we saw with Uber—pricing below cost to capture market share, then scrambling to find a profitable equilibrium. Uber eventually found it by raising prices and cutting driver pay, but that's a much harder game to play when your "drivers" are million-dollar GPU clusters.

The open-source elephant

And then there's the open-source movement. Meta's Llama 2 and 3, Mistral, and a dozen other models are freely available. Any startup or large enterprise can run them on their own hardware or lease GPU time from a cloud provider. The cost of running an open-source model is often a fraction of what OpenAI charges, and the quality gap is narrowing fast. OpenAI's moat was always the quality of their model. But that moat is eroding. Once a model is good enough, businesses will choose the cheaper, self-hosted option. OpenAI is betting that their next-generation models will stay far enough ahead to justify the premium. That's a dangerous bet when the entire open-source ecosystem is collectively spending billions to catch up.

None of this means OpenAI is doomed. They have a strong brand, a talented team, and a massive head start. But the Amazon-level ambition requires solving a problem that Amazon never had: the cost of the core product itself increases with every customer you add. Until OpenAI finds a way to decouple revenue growth from compute cost growth—either through a radical efficiency breakthrough, a subscription model that heavily subsidizes heavy users, or a pivot to something entirely different—the financial math simply doesn't add up. And in the end, the market always does the math.

Ahmed Abed – News journalist
