The Future Has Already Happened
I keep watching the same interview. Different host, different set, same script. Demis Hassabis, Sam Altman, pick your luminary — someone asks what the future looks like, and they paint a picture of a world that's radically different from the one we live in. And they're not wrong. Things will be radically different. But they will also be radically the same — and nobody seems interested in drawing that distinction carefully.
My hypothesis is simple: the future has already happened. The massive changes AI will bring have happened before, just wearing different clothes. The world has transformed beyond recognition since the 1750s, and it has also stayed stubbornly, structurally familiar. I think if you don't hold both of those facts in your head at the same time, your predictions will be useless. I could be wrong about specifics — but the burden of proof should be on anyone claiming this time it's different.
Hypothesis 1: Credit will keep flowing to humans.
One claim that keeps coming up is that AI will "replace" humans — that there's no point in us doing anything anymore. I don't think this makes any sort of sense, and my argument is pretty simple.
When I make a presentation that includes graphs I computed from a dataset of 4,000 data points, nobody says "wow, great job, computer." They say "that's a really good presentation," and everything good about it gets projected onto me. I could never compute those averages by hand in any reasonable amount of time. The computer is vastly more capable than I am at that specific task. But I wield it, so I get the credit.
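To make that concrete, here's the kind of thing I mean. A few lines of Python, with a made-up file and made-up column names purely for illustration, do in milliseconds what would take me days by hand:

```python
import pandas as pd

# Hypothetical dataset: 4,000 rows of survey responses.
# The file name and column names are illustrative, not from a real project.
df = pd.read_csv("responses.csv")

# The part I could never do by hand in any reasonable time:
# group 4,000 rows by region and average them, instantly.
summary = df.groupby("region")["score"].mean()
print(summary)
```

Nobody looks at the resulting chart and congratulates pandas.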
My bet is that AI will work the same way. We'll have radically more capable computers doing radically more advanced work for us. And the credit will likely still flow to the person who pointed the machine at the right problem, asked the right question, made the right judgment call. If so, this isn't a new dynamic — it's the one we've been living in for decades. Maybe something breaks this pattern, but I haven't seen a convincing argument for what that would be.
Hypothesis 2: Arbitrage will go to zero. It usually does.
Then there's the fantasy where you just tell an AI to "go make a company" and sit back collecting revenue. Sure, that might work briefly. But historically, anything anyone can produce quickly tends to go to zero value, because everyone can produce it. I don't see why this time would be different.
Consider children's books. Setting up a business that produced quality children's books used to be a genuinely hard, genuinely lucrative undertaking. Today, I can nearly outsource the entire process to an LLM. But I'm not going to get rich doing it, because everyone and their dog can do the same thing. Unless my books are meaningfully better than what the default model produces, the margin is gone.
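To show how low the barrier actually is, here's a sketch using the OpenAI Python SDK. The model name and the prompt are placeholders, and any capable LLM API would work just as well, but this is more or less the entire "hard part" of the business now:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The prompt is illustrative; swap in whatever story brief you like.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Write a 500-word children's story about a shy lighthouse "
            "keeper who befriends a whale. Warm tone, ages 4 to 7."
        ),
    }],
)
print(response.choices[0].message.content)
```

When the core of a product is a dozen lines anyone can run, the margin lives somewhere else: taste, distribution, brand. The code itself is worth nothing.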
If the models get smarter, the arbitrage in that domain likely just compresses further toward zero. So what will we do instead? I don't know. And I think that's fine, because if you went back to the 1750s and asked people what they'd do once machines could handle most of their physical labor, they wouldn't have had an answer either. It happened anyway, and people figured it out.
Hypothesis 3: Hierarchy will survive. It has so far.
Here's what has been remarkably stable across every technological upheaval: the way we organize ourselves. We still live in hierarchical societies where some people make decisions and most people don't. The reason appears to be structural, not accidental — almost every meaningful decision creates unequal outcomes. Some people benefit more, some benefit less. Politics, at its core, is a question of resource distribution.
My read is that since resource distribution is the thing people care most about, we are unlikely to outsource it to an algorithm. The people who get less will say "this is a dumb algorithm, change it." And who changes it? Another algorithm? The only mechanism we've found so far for making these decisions stick is democratic process, which means people, which means government, which means incentive structures, which means work. I could imagine this changing, but the historical track record is pretty strong.
A counterargument: most policy decisions are already backed by computer-generated data. Researchers don't read 10,000 books; they search the literature using algorithms. We are already massively dependent on computers for what we still call "human work," and tomorrow we'll have more capable computers doing more of it. My expectation is that the structure doesn't change; the tools get better.
Hypothesis 4: We won't sit on the beach. We never have.
Sometimes these luminaries claim that in the AI future, we'll all just go sit on the beach and do nothing. This sounds profound, but I think it falls apart if you think about it for thirty seconds.
If we wanted to sit on the beach more, we could already do that. We have enough productive capacity today to cover most of the essential functions in society. But we don't, because work isn't purely about survival anymore — it's about status, resource access, and the social games that flow from those. You go to work because if you don't, you lose out in the systems that determine how resources get distributed. That's been true for as long as we have records. My guess is it'll still be true in a hundred years.
The nature of the work changes. Its social function, I suspect, doesn't.
Hypothesis 5: AI is electricity — and that tells you more than you think.
The comparison of AI to electricity is apt, but not in the way most people use it. In the 1750s, people could probably imagine lighting on demand. What they couldn't imagine was a washing machine, or a refrigerator, or any of the second- and third-order applications we now consider mundane. In the same way, we can picture AI doing the tasks we currently do, but we can't picture the entirely new activities it will create — activities we'll then automate, creating newer activities, and so on.
But here's the part that gets ignored: I don't think we're going to automate everything. Not because we can't technically, but because we likely can't profitably. There are jobs today that could be automated with existing technology but aren't, because it's simpler or cheaper to have a human do them. Whether something gets automated has historically not been dictated solely by technical feasibility — it's been dictated by market forces. That was true for steam engines, true for electricity, true for software. My working assumption is it'll be true for AI.
The fact that so many technical luminaries seem to have a poor grasp of this — of how market economics interact with technological capability — is genuinely puzzling. These are people who should understand history. They should understand that "technically possible" and "actually deployed at scale" are separated by a canyon of economic and social constraints.
Hypothesis 6: New technology always feels pure. Then it doesn't.
There's one more pattern worth naming. Sam Altman has talked about how ChatGPT feels like technology that's actually serving us, unlike our ad-infested phones and search engines. And he's right — it does feel that way. Right now.
But the internet felt exactly the same way when it was new. Early platforms were built by smart, resourceful people who didn't need to play zero-sum games. The ecosystems they created were genuinely good. Then the rest of society came in, and the economics shifted. Advertising appeared — not because anyone is evil, but because advertising is genuinely useful. If you can't advertise, you can't launch new products. If you can't launch new products, innovation stalls. Ask anyone over 30 when they last recommended an app to someone — the answer is almost never. That's why ads exist: to push people past their default inertia.
Every technology I can think of has followed this arc. It starts as something that feels like it's serving you. Gradually, the economics of scale bring in advertising, monetization, and all the friction that comes with onboarding the full breadth of society. I see no reason AI would be exempt from this. It's not a new kind of technology — it's in a different stage of its life cycle. My prediction: give it five or ten years and it too will feel like it's serving someone else's interests as much as yours. I'd love to be wrong about this one.
Saying AI is "profoundly different" from prior technologies in this regard seems to misunderstand the forces that shape technology adoption. The technology is new. The forces are old.
So what do you actually do with this?
If you want to understand what the future looks like, stop thinking exclusively about technology. Learn the technical basics, sure; they're fascinating and important. But if technology is all you know, you'll be blind to the patterns that have shaped every previous transition.
Go read some history. Sapiens is a good starting point. Why the West Rules — for Now is another if you want the long structural view. The title sounds political, but mostly it's about how civilizations respond to technological change over millennia. And if you can track down James Burke's Connections (BBC, 1978), it's well worth it — each episode traces how one piece of modern technology descends from a chain of seemingly unrelated earlier inventions. It can be hard to find, but it's the best thing I've seen on how technological change actually propagates. That's the lens you're missing if all you're reading is AI Twitter.
My overall bet: the future will be radically more advanced, but not radically different. Most of the structural dynamics (hierarchy, status, resource competition, the arc of technology from purity to commercialization) have been running for centuries. I think AI accelerates what's already in motion. I don't think it rewrites the script. But these are hypotheses, not certainties.