The real question behind AI revenue

A recent All-In episode had Chamath and Brad Gerstner on opposite sides of whether OpenAI and Anthropic's revenue—$6 billion in a single month—is sustainable. Chamath's argument was essentially: I'm spending more and more on AI, but my revenue isn't going up. A lot of enterprises are experimenting heavily, deploying more capital than the returns justify, and the spend is inflated. There's a bubble. Brad and Jason's counter was simpler: this is a good investment. Look at startups—they're using more AI, not less, and for more and more things. Maybe enterprises are deploying capital inefficiently, but they're still deploying too little relative to what a well-structured competitor could deploy. The direction is clear even if the current execution is messy.

Both of these can be true at the same time. One thing they did seem to agree on is that a large share of the increase in AI spend is coming from coding agents—companies using AI to produce more software, faster. Which immediately raises a question the conversation never quite got to: how much value can software actually capture? The answer shapes whether the current revenue is a blip or the beginning of something much larger. And I think that answer is dramatically bigger than most people's working estimate.

What software actually does

Strip away the buzzwords and software does three things: it stores data, it transmits data, and it computes on data.

The theoretical ceiling of that is wild. You could store the state of everything—how the world looks at any point in time. You could transmit that state anywhere, instantly. You could query any part of the world and get an answer.

But storage and transmission aren't where the real action is. The real action is computation—specifically, simulation. Once you have enough of the world's state encoded, you can start asking "what happens if?" That's prediction. And for a surprisingly large number of problems, simulating possible futures is dramatically cheaper than just trying things by brute force.
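The "simulation is cheaper than brute force" claim is easy to make concrete. Here's a toy stocking decision, with every number invented for illustration: instead of trying each order quantity in the real world, one season at a time, you simulate thousands of possible demand outcomes per candidate and pick the best. The demand model and prices are assumptions, not data.

```python
import random

random.seed(0)

def simulate_profit(order_qty, unit_cost=4.0, price=10.0, trials=10_000):
    """Estimate expected profit for one stocking decision by
    simulating many possible demand outcomes (toy demand model)."""
    total = 0.0
    for _ in range(trials):
        demand = random.randint(50, 150)  # assumed demand distribution
        sold = min(order_qty, demand)
        total += sold * price - order_qty * unit_cost
    return total / trials

# Search candidate decisions in simulation instead of trying each
# one for real, which would cost a full sales cycle per experiment.
best_qty = max(range(50, 151, 10), key=simulate_profit)
print(best_qty)
```

The point isn't the toy math; it's that once the relevant state (demand behavior, costs) is encoded, "what happens if?" becomes a loop you can run ten thousand times for pennies.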

Marketing as a preview

If you want to see what this looks like in practice, look at marketing over the last two decades.

Marketing went from "put an ad in the newspaper and hope" to encoding consumer behavior in data, simulating which messages would land with which audiences, and optimizing in near-real-time. The sector absorbed more and more data, spent more and more on computation, and the people who did this well displaced the ones who didn't.

YouTube is a good concrete example. It's now a larger business than Disney. And the way marketing works inside YouTube—thumbnails, titles, A/B testing, continuous optimization—is no longer spray and pray. It's an entirely data-driven process, optimized to a degree that would have been unfathomable just a few years ago. That's what it looks like when software fully absorbs a function that used to be intuition and brute force.
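That continuous-optimization loop can be sketched as a simple bandit. Everything below is hypothetical: the three thumbnails, their click-through rates, and the epsilon-greedy policy are stand-ins, and real systems are far more sophisticated. But the shape is the same: serve an impression, observe the click, shift traffic toward what works.

```python
import random

random.seed(1)

# Invented click-through rates for three hypothetical thumbnails,
# unknown to the optimizer; used only to simulate viewer clicks.
TRUE_CTR = {"thumb_a": 0.04, "thumb_b": 0.06, "thumb_c": 0.09}

def choose(stats, epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-known variant,
    occasionally explore the others."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda a: stats[a]["clicks"] / max(stats[a]["shows"], 1))

stats = {a: {"shows": 0, "clicks": 0} for a in TRUE_CTR}
for _ in range(20_000):  # each iteration = one impression served
    arm = choose(stats)
    stats[arm]["shows"] += 1
    if random.random() < TRUE_CTR[arm]:
        stats[arm]["clicks"] += 1

best = max(stats, key=lambda a: stats[a]["shows"])
print(best, stats[best])
```

No intuition, no spray and pray: the system discovers the winner from click data alone, which is the "data-driven process" doing in code what a marketing team used to do by feel.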

What's happening now is that same pattern expanding to encompass sectors that haven't been touched yet. There are entire economies that don't exist today but will, and they'll probably absorb a large share of total cash flow. That doesn't mean OpenAI and Anthropic will capture 80% of global revenue—that's not how this works. And some of the current spend is genuinely inefficient—enterprises running experiments that won't pay off, discretionary budgets that will get pulled back once the novelty fades. That part will unwind. But the part that won't unwind is the spend from companies that figure out how to deploy AI productively, and the scale of that will likely dwarf the experimental spend we're seeing today.

The practical version

That's the theoretical ceiling. The practical version is more constrained and more interesting.

Just because software that predicts the future for some domain is possible in principle doesn't mean it's feasible to build or profitable to sell. Those are the two real gates.

On feasibility, the picture is actually getting better. What used to require hiring people of a specific skill level—and hoping you could find and afford them—you can increasingly get on tap, metered like electricity. That expands the space of what's practically buildable, which should mean more revenue flowing to the companies providing that capability, not less.

On profitability, I'm less optimistic in the near term. A few things are visibly in the way.

First, to power these new prediction economies, we're going to need more sensor data than we currently have, which means hardware needs to get cheaper and more widely deployed. That's a supply-chain problem, not a software problem, and those take time.

Second, there's a distribution problem. The current channels we use to connect products to people are about to be completely saturated with AI-generated noise. That makes it harder—not easier—for someone with a real product to cut through. At some point, we'll likely need some kind of AI-powered filtering layer that sits between the firehose and the individual, similar to how a good assistant filters your email and only surfaces what matters. But we don't have that yet, and until we do, the gap between "products that can be built" and "products that can be built and sold profitably" stays wide.
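As a thought experiment, the filtering layer might look something like the toy below. Everything in it is hypothetical: the interest profile, the word-overlap score, and the threshold are placeholders, and a real layer would use a learned model. The structure is the point: score each incoming item, surface only what clears the bar.

```python
from dataclasses import dataclass

@dataclass
class Item:
    sender: str
    text: str

# Hypothetical interest profile; a real filter would learn this per user.
INTERESTS = {"invoice", "contract", "deadline", "meeting"}

def relevance(item: Item) -> float:
    """Toy relevance score: fraction of the item's words that
    match the user's interest profile."""
    words = set(item.text.lower().split())
    return len(words & INTERESTS) / max(len(words), 1)

def surface(inbox: list[Item], threshold: float = 0.2) -> list[Item]:
    """Keep only items above a relevance threshold, best first."""
    kept = [i for i in inbox if relevance(i) >= threshold]
    return sorted(kept, key=relevance, reverse=True)

inbox = [
    Item("noise@example.com", "limited time offer buy now"),
    Item("legal@example.com", "contract deadline moved to friday"),
]
for item in surface(inbox):
    print(item.sender)
```

The hard part isn't this scoring loop; it's that the filter has to sit on the user's side of the firehose and be trusted, which is exactly the layer that doesn't exist yet.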

This matters because OpenAI's revenue can only keep growing if their customers are making money. If the downstream economics don't work, the upstream spending stops.

Long-term optimism, short-term honesty

I'm tremendously optimistic about where this goes over time. We've barely scratched the surface of the value software can capture. It would be hard to argue otherwise with a straight face.

But unbridled optimism that doesn't name the visible constraints isn't a useful frame either. The constraints are real—sensor infrastructure, distribution saturation, downstream profitability—and they'll shape the timeline even if they don't change the destination.

Which brings it back to the All-In debate. Chamath is probably right that a lot of enterprise AI spend is inflated and won't all convert to sustained revenue. Brad is probably right that the structural direction is toward more AI, not less, and that the companies who figure out how to deploy it well will outcompete the ones who don't. Both of those conclusions fall out from the picture above: the ceiling is very high, the near-term path is messy, and the gap between what's buildable and what's profitably sellable is where the real action is.

Whether AI revenue is "sustainable" depends entirely on whether the downstream customers can turn their AI spend into returns. That's a more specific and more useful question than "is this a bubble"—and the answer is different depending on whether you're looking at the next two years or the next twenty.