Power and the future

I was watching Nikhil Kamath interview Yuval Noah Harari when, at some point, they landed on a provocative claim: maybe the most powerful entity in the world right now isn't a person. It's AI.

That stuck with me, not because it's obviously true, but because it puts a finger on something I've been trying to articulate for a while. So let me try.

What power actually is

People intuitively understand power, but I think most definitions are either too narrow or too dramatic.

Here's mine: today the world is in a certain state. Tomorrow it will be slightly different. Power is whatever determines which version of tomorrow we get.

That definition is deliberately broad, because power is broad. Planetary motions shape tomorrow. Weather shapes tomorrow. Geological forces shape tomorrow. Humans shape tomorrow too—at least for now—by applying force to the things near them: building, writing, deciding, creating.

And power is localized. Not just geographically, but in time. One of the points from the interview: in hindsight, the most powerful person alive in the year 26 AD was probably Jesus. Yet he didn't meaningfully shape the immediate future around him. What shaped civilization was the story written down later by other people. His power was almost entirely displaced in time.

The long arc of human power

If you zoom out far enough, you can see a trajectory.

Before humans existed, humans had no power. Obviously. Then we gradually acquired some—domesticated animals, built tools, organized labor. At first, the primary mechanism was simple: humans transferring ideas to other humans, face to face.

Then we created writing, and something shifted. Humans could now encode their influence into objects—books, laws, records—that spread further and lasted longer than any individual. Some humans became extraordinarily powerful not because they could yell louder, but because their ideas could travel without them.

And as we climbed the technological ladder, technology itself started contributing more and more to what tomorrow looks like.

The canvas metaphor

I think of it as a painting.

Imagine the canvas of "what determines tomorrow." Initially, almost all of it is black—the undirected forces of physics, weather, geology. The part that any individual human could paint over was tiny.

Over time, humans painted over more and more of the black. Agriculture. Infrastructure. Institutions. Science. Each technological step gave us a wider brush.

But here's the thing people don't quite register: technology didn't just extend human power. It started accumulating its own weight on the canvas. The printing press didn't just amplify human voices—it created dynamics (mass literacy, ideological movements, revolutions) that no individual controlled. The system started shaping outcomes that no single person was steering.

That pattern has been accelerating for centuries. AI is the latest step, but it's not categorically new. It's the same trajectory, moving faster.

The car analogy (and where it breaks)

When I sit in a car and drive slowly, I feel in control. I have a certain amount of power—I can move things, go places, apply force to the world around me.

If I speed up, I become more powerful in one sense. I can move more, faster. But I also start losing fine control. At a certain speed, I can no longer steer the way I could when driving slowly.

It's possible for a car to go so fast that I lose all meaningful control—and at that point, "power" stops being mine. The car is powerful. I'm just inside it.

AI has a similar dynamic, but with an extra property that cars don't have: as we shape AI, it starts to shape us back. A car doesn't change how you think about driving while you're driving it. AI does. It changes what questions you ask, what information you see, what you consider possible. And most people aren't tracking those shifts as they happen.

So who actually has the power?

Five years ago, what happened in my work tomorrow was mostly constrained by my ability to type code into a keyboard and make good deductions. The bottleneck was me.

Today, that's less true. A meaningful fraction of what determines my output is happening inside matrix multiplications on GPUs. I still frame the questions. I still make decisions. But the agency is increasingly delegated.

The question is whether that makes me more powerful or less powerful. And I think the honest answer is: it depends entirely on whether I can perceive the effects of my actions and steer accordingly.

If I can see clearly what the technology is doing and adjust, I'm more powerful than before—the car is faster and I can still steer.

If I can't, I'm just along for the ride.

Why AI might be the most powerful entity right now

I don't think this is because AI has will or intention. It's simpler than that.

If a large number of people are using a technology they don't fully understand or control—driving fast cars they can't quite steer—then the aggregate outcome is shaped more by the technology's dynamics than by anyone's deliberate choices.

You might say the people building AI are the powerful ones. But there are a lot of them now, and they're competing. If users don't like one AI, they switch. The builders are powerful, but they're also beholden to the whims of the market, which makes their power diffuse and reactive.

So who's actually steering? In a lot of cases, nobody. The system produces outcomes that are a function of the technology's properties, the users' half-understood interactions with it, and the competitive dynamics between providers. That aggregate force—undirected, emergent, not controlled by any single actor—is what's actually painting the canvas right now.

That's what it means to say AI is the most powerful entity. Not that it has agency. But that it's the largest determinant of which tomorrow we get, and no one is really driving.

The precedent that makes me cautiously optimistic

Early cars were terrible. People died constantly—drivers who ran others over, drivers who drove into ditches because the machines were uncontrollable. Cars were a technology that gave humans power while simultaneously taking it away through lack of safety and control.

Today, no one would say cars dominate humans. We figured it out. Seatbelts, traffic laws, better engineering, driver training. The technology is now firmly servile. Humans are in control of that interaction.

Social media followed a similar arc—or is still following it. For a decade, most people didn't understand how it was shaping them. Now, awareness is catching up. The dynamics are better understood, even if the solutions are still lagging.

AI, I think, is entering its chaotic phase. The phase where the technology is powerful enough to shape outcomes at scale, but most people aren't yet aware of how it's influencing them, and the safety mechanisms haven't caught up.

Where this lands

After thinking this through, I find myself cautiously optimistic—but not about the immediate future.

In the short term, I think we're in a period where humans are losing effective control faster than we're gaining awareness. The car is speeding up, and the steering hasn't caught up, at least not for the masses.

In the longer term, the historical pattern suggests we do eventually figure it out. Every powerful technology has gone through a chaotic phase before humans developed the understanding and infrastructure to harness it.

What decides whether that optimism is warranted is not the technology itself. It's our ability to understand what it's doing to us and to build the controls—institutional, educational, personal—that let us steer it rather than ride it.

To be clear: I love AI and I think we should keep it. We don't want to go back to horses, do we?