Teaching 120 Students at Once: How AI Marking Changed My Friday Nights
A head of physics on what working with an AI marker is actually like — and the one thing it cannot replace.
When the school first asked us to trial AI marking, my reaction was the same as most of my colleagues': tired. We had been through MyMaths, Kerboodle, and a brief and unhappy affair with a homework app that crashed every Sunday evening. Another platform was the last thing I wanted.
I agreed because I had 120 GCSE students and a six-month-old at home. If you have ever tried to mark 120 sets of mock papers between bedtime and Newsnight, you know why.
Here is what changed for me, and what didn't.
What changed
The Saturday and Sunday marking sessions disappeared almost entirely. When my classes do practice papers on the platform, the multiple-choice and short-answer questions mark themselves accurately. The longer questions get a first-pass score and a comment. I still need to read those, but I read them on Tuesday morning with a coffee, not on Sunday evening while my partner is putting the baby to bed.
The platform is also better than I am at flagging patterns across a class. Twenty-three of my students got the same wave-speed question wrong in the same way. The platform showed me that. I rewrote the next starter activity around it. The class average on that topic in the next mock went up by eleven percent.
The conversations with parents at parents' evening are different too. Instead of trying to remember whether their child was struggling with circuits or with momentum, I have a screen open with their data, and the conversation moves immediately to "here is what we will do about it".
What didn't
The AI does not give the messy, generous, encouraging feedback that good teaching depends on. It will not write "I can see this took you longer than usual today, and I'm proud of you for staying with it." It will not pull a student aside and ask why their work has dropped off. It will not notice that the boy in row three has stopped making eye contact.
That is the part of the job that I now have time for, because the marking pile has shrunk. I had assumed AI would make teaching feel less human. It has done the opposite.
One honest reservation
I do not let students see the AI feedback on long-answer questions until I have read it. The model is mostly right. But on questions where the answer is open or where the student has gone in a slightly unusual direction, I want to check first. A wrong "could be improved" comment on a piece of original thinking is the kind of small harm that compounds quickly with teenagers.
If you are a head of department thinking about a trial, my honest advice is this: pick the most overworked teacher in your team and let them be the first to try it for two weeks. If they tell you it has given them time back, you have an answer. If they tell you it hasn't, you also have an answer.
Friday nights are mine again. That, on its own, would have been enough.