What our first cheating case actually taught us

We recently had our first confirmed case of a student cheating in Stillwell. A teacher flagged it, we investigated, and the verdict was clear: roughly 90% of the final submission was pasted in. Scroll back through the version history and closer to 99% was pasted at some point, then lightly edited down. Call it 95%. The exact number doesn't matter. It's a lot.

You could stop there. Student cheated, case closed, next item on the agenda.

But I didn't stop there, and I'm glad, because what showed up underneath the cheating was more interesting than the cheating itself.

The timeline

Here's what actually happened, reconstructed from the writing process data.

The assignment was a literary analysis. The student opened the document, typed about a line and a half of something that was a reasonable starting point for an analysis, and then… stopped. Paused. And then pasted in a paragraph from what appears to be an entirely different book.

I still don't fully understand that move. It's like watching someone start cooking dinner and then, mid-chop, pull a shoe out of the fridge. But okay—let's keep going.

The student then rewrote parts of that pasted paragraph, deleted it, and proceeded to write out an actual analysis of the assigned text. And here's the thing: the analysis wasn't great. But it wasn't nothing. It said something. It had structure. It was the kind of work that, if you squinted, you could coach into something decent.

Then the student took sections of that analysis—whole paragraphs at a time—out of the document, ran them through AI, and pasted the rewritten versions back in. We're talking 180 words appearing inside a single minute. Finally, a bit of formatting and massaging to make it look right for submission.
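
For a sense of how plainly that shows up in process data: 180 words in one minute is roughly triple a brisk typing speed, so even a crude rate check separates pasting from typing. Below is a minimal sketch in Python; the event shape, field names, and thresholds are my assumptions for illustration, not Stillwell's actual schema.

```python
from dataclasses import dataclass

# Hypothetical event shape; Stillwell's actual schema isn't public.
@dataclass
class InsertEvent:
    t: float      # seconds since the session started
    words: int    # words added (1 at a time when typing, many at once when pasting)

def flag_bursts(events, window_s=60.0, max_wpm=60):
    """Flag any one-minute window whose insertion rate beats a plausible
    typing ceiling. 180 words in a minute is ~3x a brisk 60 wpm, which is
    how a paste stands out even after light massaging."""
    events = sorted(events, key=lambda e: e.t)
    flagged = []
    for i, start in enumerate(events):
        words = sum(e.words for e in events[i:] if e.t - start.t <= window_s)
        if words > max_wpm * (window_s / 60.0):
            flagged.append((start.t, words))
    return flagged

# Toy session: steady 30 wpm typing, then a single 180-word paste.
session = [InsertEvent(t=float(s), words=1) for s in range(0, 240, 2)]
session.append(InsertEvent(t=300.0, words=180))
print(flag_bursts(session))  # [(300.0, 180)] -- only the paste is flagged
```

A real system would need to handle edge cases this sketch ignores (overlapping windows, IME input, autocomplete), but the underlying signal is just arithmetic on timestamps.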

Where the student reached for help

If you map out the moments where this student left the document to get AI assistance, a pattern emerges. It wasn't random. It happened at exactly two points:

  1. The blank page. The very beginning, before anything meaningful was written.
  2. The finish line. After the analysis was drafted, when the student could read their own work and see it wasn't good enough—but didn't know how to make it better.

Those aren't arbitrary moments. Research on cognitive load in writing suggests these are exactly the points where working memory gets overwhelmed.

The blank page is a well-studied problem. Starting from nothing forces the brain to retrieve knowledge, generate structure, and produce language all at once, with everything competing for the same limited working memory. Studies grounded in Cognitive Load Theory find that starting cold measurably increases error rates and slows task completion compared to starting from any kind of scaffold or structure. Most students have never received specific training in how to handle that load, in how to get from "I have nothing" to "I have a plan." We just assume they'll figure it out, which is a bit like assuming someone will figure out how to swim by being placed in a pool. Some will. Some will thrash.

The end state is arguably worse. Research on student revision (notably work out of Aalto University on written feedback and cognitive effort) shows that as cognitive load accumulates during a writing task, students revise less, and less successfully. The cruel part: they can often tell their work isn't good enough. But the act of diagnosing what's wrong and figuring out how to fix it is itself a high-load task, and by that point in the process, the cognitive budget is spent. You can see the gap between what you wrote and what good looks like. You just can't cross it. So you outsource the crossing.

The workflow from the student's point of view

If you remove the context of "this is an assignment where AI use isn't allowed," the student's workflow is actually pretty reasonable. Start with your own thinking. Draft something. Use a tool to refine it. Polish for submission.

That's more or less how a lot of professional writing works now. The student didn't stumble into a bad workflow. They stumbled into a workflow that's banned in this context but normal in most others.

I'm not arguing the student didn't cheat. They did. The assignment had rules and the student broke them. But the way they cheated tells you something specific about what they're struggling with—and it's not "being lazy" or "not caring."

What you see when you actually look

This is the part that gets me excited. If all you have is the final submission, this is a cheating case and nothing more. But because Stillwell records the writing process—keystrokes, pauses, pastes, deletions—the teacher got something far more useful than a guilty verdict. They got a map of where this student breaks down.
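
To make "a map of where this student breaks down" concrete, here's one way such a map could be derived: reduce the log to "long silence, then what?" moments. A long gap followed by a big paste reads very differently from a long gap followed by the student's own typing. Everything below (the event shape, the five-minute threshold, the toy timeline loosely echoing this session) is invented for illustration, not Stillwell's real pipeline.

```python
# Hypothetical process log: (seconds_since_start, kind, chars)
EVENTS = [
    (5,    "type",   80),    # a line and a half, then...
    (610,  "paste",  900),   # ...a long silence and a big paste
    (900,  "delete", 900),
    (1500, "type",   2400),  # the student's own draft
    (2100, "paste",  1100),  # AI-rewritten paragraphs coming back in
]

def stall_points(events, min_gap_s=300):
    """Report long silences and what the student did when they came back.
    A long gap followed by a paste suggests reaching outside the document;
    a long gap followed by typing is ordinary thinking time."""
    out = []
    for (t0, _, _), (t1, kind, chars) in zip(events, events[1:]):
        gap = t1 - t0
        if gap >= min_gap_s:
            out.append(f"{gap:.0f}s silent, then {kind} of {chars} chars at t={t1}s")
    return out

for line in stall_points(EVENTS):
    print(line)
# 605s silent, then paste of 900 chars at t=610s    <- the blank page
# 600s silent, then type of 2400 chars at t=1500s   <- thinking, then drafting
# 600s silent, then paste of 1100 chars at t=2100s  <- the finish line
```

Even a summary this crude lands on the same two pressure points the teacher found by hand: the blank page and the finish line.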

The blank page problem and the "how do I make this better" problem are both specific and teachable. A teacher who knows this student freezes at the start can give them a scaffold, a prompt, a five-minute freewrite—something to get the first sentence on the page. A teacher who knows the student can draft but can't self-revise can teach revision strategies, or simply sit with them during that phase.

None of that is possible if all you're looking at is the output. The output just says "cheating." The process says "here's exactly where to help."

The cheating is what made us look closely this time. But the process data is there for every student, every assignment, every session. The student who cheats, the student who doesn't, the student who stares at the blank page for ten minutes before quietly writing something decent—every one of them has a process, and that process is full of moments where a teacher could step in and make a difference. We should have this level of insight everywhere.

This is what Stillwell is built to do—give teachers a window into how students work, not just what they hand in. It just so happens that our first real proof of concept was a cheating case.