Intelligence Is Not Prediction
In The Future Has Already Happened I argued that the social structures surrounding AI (credit, accountability, legitimacy, hierarchy) are much stickier than the technology itself, and that the strongest counterargument rests on a specific claim: that intelligence, if you get enough of it, simply solves the structural constraints. I pushed back on that claim there: intelligence without legibility is indistinguishable from confidence, and we don't hand power to the smartest person in the room, for good reason.
But that post left a question open: what if AI reasoning became reliably verifiable? What if a system could show its work in a way that's genuinely checkable? If that happened, the trust and legitimacy constraints might actually erode.
Here I want to spell out why I think that condition is much harder to meet than it sounds, and why the difficulty points to something concrete about where the real work lies.
Verification requires prediction, not intelligence
For narrow computational problems, verifying AI output is already feasible: a proof either checks or it doesn't, code either passes its tests or it doesn't. But most decisions that matter aren't narrow computational problems. They're bets on the future, made inside systems that respond to the bet itself.
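To make the narrow case concrete, here's the shape of a check that needs no prediction at all: a verifier that accepts or rejects a claimed answer on its own terms, no matter how the answer was produced. (A minimal sketch; the sorting task and verify_sort are my own illustration, not any particular system's API.)

```python
from collections import Counter

def verify_sort(original: list[int], claimed: list[int]) -> bool:
    """Accept a claimed sort of `original` without trusting how it was produced."""
    ordered = all(a <= b for a, b in zip(claimed, claimed[1:]))
    same_elements = Counter(original) == Counter(claimed)  # same multiset
    return ordered and same_elements

# The claimed answer could come from a model, a heuristic, or a guess;
# the verifier settles the question in one pass, with no forecast involved.
print(verify_sort([3, 1, 2], [1, 2, 3]))  # True
print(verify_sort([3, 1, 2], [1, 2, 4]))  # False: not the same elements
```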
Take a concrete policy question: should we allocate more capital to education or to the welfare state? To verify that decision, you need to predict what happens next. And what happens next depends partly on the decision you make, which changes the system you're trying to predict. The reasoning becomes self-referential.
You can try to frame the decision as preference ("more people would choose this than not"), but then you're predicting future preferences, which shift. You can frame it as welfare ("this allocation makes people better off"), but then you're making a causal claim about an enormously complex system. We've increased education budgets many times without improving outcomes. Sometimes things got worse.

I'm not making an argument against education spending; I'm probably in favor of it. The point is that the decision to increase spending was backed by smart arguments, solid evidence, and reasonable logic, and it still turned out wrong for reasons that weren't even part of the debate at the time. If the reasons had been obvious, we would have fixed them. They weren't. That's the problem. You can be intelligent about a decision and still not be able to verify it, because the system you're reasoning about is too complex and too responsive to your actions.
This is the gap between intelligence and prediction. Intelligence helps you reason about a system. But in any domain where your actions change the system — which is most domains that matter — verifying the reasoning requires knowing the future state of a thing you're actively influencing. That's not a problem you solve by just being smarter. It's a categorically different kind of problem.
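To see that shape in miniature, here's a toy simulation, with every number invented for illustration, of a forecast that fails not because the model was dumb but because acting on the forecast changed the system it was fit to. Economists call one version of this the Lucas critique.

```python
import random
random.seed(0)

def world(spending: float, responsiveness: float) -> float:
    """Observed outcome at a given spending level, plus noise."""
    return responsiveness * spending + random.gauss(0, 0.5)

# Step 1: fit a naive linear model on history, gathered while spending
# was low and the system responded strongly (responsiveness = 1.0).
history = [(s, world(s, responsiveness=1.0)) for s in [1.0, 1.5, 2.0]]
slope = sum(outcome for _, outcome in history) / sum(s for s, _ in history)

# Step 2: the model forecasts what doubling spending should deliver.
predicted = slope * 4.0

# Step 3: but the intervention itself changes behavior (schools, agencies,
# and families adapt), so the system's responsiveness drops.
actual = world(4.0, responsiveness=0.4)

print(f"predicted outcome: {predicted:.2f}")
print(f"actual outcome:    {actual:.2f}")
# The forecast was reasonable for the old system; the decision created a
# new one. No extra cleverness at step 1 repairs step 3.
```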
The world isn't mostly shaped by prediction anyway
There's another angle on this that I think gets missed. Policy decisions — capital allocation at the state level — are the domain where you'd most expect careful prediction to matter. But policy is not actually the dominant force shaping the world right now. To a much larger degree, the world is being shaped by startups and companies that are doing things, not predicting things.
The people and organizations with the most impact on how the future unfolds have largely acknowledged that chaos is an enormous part of how things play out. They don't try to predict the future in any rigorous sense. They pick a direction, move fast, and see what happens. The method is iterative, not predictive. Run an experiment. Get feedback. Adjust. Run another experiment.
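In code, that mode looks less like a forecast and more like a stochastic hill climb: no model of the landscape, just proposals and measured feedback. A minimal sketch, where the invented feedback() function stands in for "ship it and see what happens":

```python
import random
random.seed(1)

def feedback(direction: float) -> float:
    """Noisy signal from the world; its true shape is unknown to the loop."""
    return -(direction - 3.0) ** 2 + random.gauss(0, 0.2)

current = 0.0
best = feedback(current)
for _ in range(50):
    candidate = current + random.gauss(0, 0.5)  # run a cheap experiment
    score = feedback(candidate)                 # get feedback
    if score > best:                            # adjust only on evidence
        current, best = candidate, score

print(f"settled near direction {current:.2f} without ever modeling feedback()")
```

The loop never predicts where the optimum is; it only remembers what worked.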
That's a fundamentally different mode from "reason about the system and verify your conclusion." And I think it actually clarifies where AI can and can't help. AI can almost certainly make the iterative loop faster — better experiments, faster feedback, richer data to adjust against. That's valuable and probably automatable to a significant degree. What it can't easily do is replace the part where someone decides which direction to run in the first place, or makes the judgment call about when to pivot, or absorbs the consequences when the bet doesn't pay off.
The distinction isn't between tasks humans can do and tasks machines can do. It's between the iterative, chaotic, trial-and-error process through which the future actually gets made — and the clean, predictive, verify-then-act model that most AI discourse seems to assume.
Where the work actually goes
If that's right, it points to something concrete.
The real value of AI in complex systems probably isn't replacing judgment. It's building better infrastructure for iteration — better simulations to test against, faster feedback loops, richer models of consequence. Not so you can predict the future, but so you can run more experiments, more cheaply, with better information about what happened last time.
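As a sketch of what that infrastructure might look like, here's a minimal harness whose whole job is memory and cheap trials rather than prediction. Every name here (ExperimentLog, simulate, the pilot labels) is invented for illustration, and simulate stands in for whatever fidelity a real domain model can offer:

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentLog:
    """The loop's memory: what we tried and what happened, nothing more."""
    runs: list[tuple[str, float]] = field(default_factory=list)

    def record(self, intervention: str, outcome: float) -> None:
        self.runs.append((intervention, outcome))

    def last_time(self, intervention: str) -> list[float]:
        """Richer information about the past, not a claim about the future."""
        return [o for i, o in self.runs if i == intervention]

def simulate(intervention: str) -> float:
    """Placeholder for a domain simulator; fidelity is the hard part."""
    return {"pilot_a": 0.6, "pilot_b": 0.3}.get(intervention, 0.0)

log = ExperimentLog()
for trial in ["pilot_a", "pilot_b", "pilot_a"]:
    log.record(trial, simulate(trial))

print(log.last_time("pilot_a"))  # [0.6, 0.6]: what happened last time
```

None of this predicts anything. It just makes the next experiment cheaper and better informed, which is the point.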
That's an enormous amount of work. It requires intelligence, data, simulation fidelity, the ability to model your own influence on the system, and interpretability good enough to let humans evaluate the output. All of those need to work, together, at a level we're nowhere near today.
That's what would actually change the structural picture I described in The Future Has Already Happened: not a breakthrough in any single link, but the whole chain coming together. And because every link has to work at once, I think the structural constraints hold for a long time, even as individual capabilities improve dramatically.
In the meantime, the work is building better simulations, better feedback loops, better interpretability. Not replacing the human in the loop, but making the loop faster, more informed, and more honest about what it can and can't predict.