The limits of AI

Since the ChatGPT moment, many conversations have been about AI's capabilities. Can it reason? Can it code? Can it do science? These are interesting questions, but I think the answer is converging toward "yes, eventually" for most of them. Neural networks are universal approximators. Given enough data and compute, there doesn't seem to be a hard ceiling on what they can learn to do.

So let's take that as a given for the sake of argument: AI can, or will be able to, do anything a human can do cognitively. The question that follows is obvious: does that mean it replaces us? I don't think so, and the reason isn't about capability. It's about accountability.

You can't sue an AI

If an AI makes a decision that causes harm—misallocates capital, gives dangerous medical advice, crashes a system—who is responsible?

You can't sue the AI. And it's worth sitting with why that doesn't make sense, rather than just asserting it.

Suing works because there's a persistent, unique entity on the other side. When you sue a person, the ultimate threat is taking away their freedom. When you sue a company, the ultimate threat is dissolution. Both of these land because every person and every company is one-of-a-kind and continuous over time — there's something durable you can point at and say "that's the responsible party."

AI doesn't have this property. Every instance of a model is an exact clone of every other instance. If you sue over an AI interaction, you're not suing anything continuous. The weights can be updated, retrained, or replaced entirely. The only thing that persists is the model itself, and then you'd have to sue all of it. At that point you're suing a list of numbers and some inference code, which is not a concept that holds up well. For suing an AI to be coherent, you'd need some consistent, continuous being that is somehow distinct from all the other copies of it. Maybe that exists in some far-flung future, but it's not on the horizon, and it's not obvious there's any need for it.

Until then, you're not suing the AI. You're suing whoever is accountable for it.

Accountability creates a ceiling on delegation

This is where it gets interesting. If an AI can't be accountable, then a human always has to be. Wherever an AI operates, there's a person somewhere who is on the hook for what it does.

And if you're on the hook, you need to understand what's happening. Not every detail—you don't micromanage a team of humans either. But you need to understand the system well enough to know when something is going wrong, to catch problems before they become liabilities, and to make the judgment calls that the AI can't be held responsible for.

That's a real constraint. Not on what AI can do, but on how fast and how far you can scale its deployment. The span of what one person can meaningfully oversee is finite. You can widen that span with better tools, better monitoring, better abstractions—but you can't eliminate it. At some point, the person who's accountable needs to actually comprehend what's happening under them.

This is a speed limit, not a stop sign

None of this means AI won't transform how work gets done. It obviously will. But the transformation has a governor on it, and the governor is human comprehension.

You can delegate more and more work to AI systems. But the rate at which you can do that is bounded by your ability to maintain enough understanding to remain accountable. Push past that boundary and you're running a system you can't answer for—which, in any domain with real stakes, is how you get sued, regulated, or shut down.

This dynamic is familiar. It's the same constraint that limits how large organizations can get before they become dysfunctional. A company doesn't work well when the humans running it don't understand what's happening inside it, and I don't think it will work much better when the employees are AIs.

What this actually means

The future probably isn't "AI replaces humans." It's "AI does the work, humans carry the accountability." And the interesting question becomes: what does it take to be a competent accountable person in a world where most of the execution is automated?

That's a question about judgment, about systems literacy, about knowing what to check and when. It's a different skill set than doing the work yourself, and we don't really train for it yet. But it's the skill set that accountability demands, and accountability isn't going away.