Every term, a new AI-detection vendor promises to solve cheating. Every term, schools that rely on those promises get burned. This is an honest look at what AI detectors actually do, how accurate they really are, and the alternative strategy that schools handling AI well have settled on.
What AI detectors claim — and what they actually deliver
AI-detection tools (Turnitin AI, GPTZero, Copyleaks AI, and similar) analyse text for statistical patterns associated with AI generation. They produce a "this text is X% likely to be AI" score.
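For intuition, here is a toy sketch of one signal often cited in this space: "burstiness," the variation in sentence length, on the theory that AI-generated prose tends to be more uniform than human writing. This is an illustration only, not any vendor's actual method; real detectors run trained language models over many features at once.

```python
# Toy illustration of one "statistical pattern" a detector might weigh.
# Not any vendor's actual method; real detectors use trained models.
import re
import statistics

def burstiness_score(text: str) -> float:
    """Coefficient of variation of sentence length (higher = more varied,
    which is loosely associated with human writing)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

Whatever the feature set, the output is a statistical tendency, not proof, and that distinction drives everything that follows.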
In demos, they look impressive. In production, three problems get in the way:
1. Accuracy is not what the marketing says
Independent academic studies through 2024–2025 have repeatedly shown AI-detection accuracy on real student work to be substantially lower than vendor-published numbers. Realistic false-positive and false-negative rates sit well above anything a school should treat as actionable on its own.
The benchmark question isn't "what's the detector's accuracy on clean test data?" — it's "what's the detector's accuracy on a 13-year-old's slightly polished essay, written under time pressure?". On that question, every detector underperforms its marketing.
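To see why, run the base-rate arithmetic. The numbers below are assumptions for illustration, not measurements; substitute your own estimates:

```python
# All figures are hypothetical assumptions, chosen to be generous
# to the detector; substitute your own estimates.
essays_per_term = 1000
human_share = 0.85           # assume 85% of submissions are genuinely human-written
false_positive_rate = 0.02   # assume the detector flags 2% of human essays
true_positive_rate = 0.80    # assume it catches 80% of AI-generated essays

human_essays = essays_per_term * human_share          # 850
ai_essays = essays_per_term * (1 - human_share)       # 150

innocent_flagged = human_essays * false_positive_rate  # 17 students wrongly flagged
ai_caught = ai_essays * true_positive_rate             # 120 caught

precision = ai_caught / (ai_caught + innocent_flagged)
print(f"{innocent_flagged:.0f} innocent students flagged per term")
print(f"{precision:.0%} of flags are correct")  # ~88%: roughly 1 flag in 8 is wrong
```

Even with assumptions this generous, roughly one flag in eight lands on an innocent student, and a detection-led policy turns every one of those flags into an accusation.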
2. False positives disproportionately hit certain students
This is the hardest part to discuss and the most important. Multiple studies have found AI detectors flag the writing of non-native English speakers at substantially higher rates than native speakers. The likely cause: non-native writers often use simpler sentence structure and more limited vocabulary — the same patterns AI generators tend toward.
In a school context, this means an AI-detection-led academic-honesty policy systematically punishes EAL/ELL students for writing in their second language.
Other groups of students also trigger false positives at elevated rates:
- Students with autism, who may write in patterns the detector reads as "too consistent."
- Students who write very formally (often the most diligent students).
- Students who genuinely use AI tools as accessibility aids — voice-to-text, spell-checking, paraphrasing for clarity.
If your school's AI policy is "we run everything through a detector and act on the score," you have built a system that disproportionately punishes vulnerable students. That should stop today.
3. The arms race is unwinnable
Every detector improvement is followed by an AI generator improvement, followed by a "humaniser" tool that paraphrases AI output to evade detection, followed by another detector. Schools are not the right actor to win this race, and even if they could, they'd be spending teacher time on adversarial cat-and-mouse instead of on teaching.
When AI detectors are useful
To be fair: detectors are not useless. They are useful in a narrow band:
- As one signal among several in a holistic academic-honesty review, never as the sole basis for an accusation.
- In conversations with students, not as evidence but as a starting prompt: "The detector flagged this; let's talk through how you wrote it."
- For trend monitoring at department level — "how much AI-generated work is appearing in Year 9 history?" — to inform curriculum and assessment decisions, not individual cases.
That's roughly the entire useful range. Anything beyond it produces injustice.
What schools that handle AI well do instead
Schools that have moved past the detection-only approach have generally adopted some combination of the five strategies below.
Strategy 1: Redesign assessment
The single highest-impact move. If your assessment is "write a five-paragraph essay at home and submit it," AI has made that assessment obsolete. You can either fight the tide or design assessment that survives it.
What works:
- Process-visible writing. Drafts conferenced with the teacher. The journey is the assessment, not just the destination.
- Oral defences. A short conversation about the work confirms understanding in ways no detector can.
- In-class drafting. First drafts written in supervised time, then refined.
- Multi-modal work. A written piece plus a presentation, video, or artefact.
- Personalised topics. "Write about a moment from your own life" is harder to AI-generate than a topic from the curriculum.
Each of these costs teacher time. Each is more reliable than any detector.
Strategy 2: Provide a school-managed AI with full visibility
If students use a school-managed AI like Askie for Schools, the teacher dashboard shows every interaction. There is no detection problem because there is no opacity. AI-assisted work is logged; the question becomes "did the student use AI well?", not "did they use AI at all?"
This is by far the most underrated strategy. Visibility solves what detection only chases.
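For concreteness, a logged interaction in such a system might look something like the sketch below. This is a hypothetical schema for illustration, not Askie's actual data model:

```python
# Hypothetical record shape for illustration; not Askie's actual data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIInteraction:
    student_id: str
    assignment_id: str
    prompt: str      # what the student asked the AI
    response: str    # what the AI returned
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# "Point out weak arguments in my draft" and "write my essay" produce
# very different logs. With the full prompt history visible, a teacher
# judges the quality of use directly; no detector guesswork required.
example = AIInteraction(
    student_id="s-1042",
    assignment_id="hist-y9-essay-03",
    prompt="Can you point out weak arguments in my draft below?",
    response="Your second paragraph asserts the cause without evidence...",
)
```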
Strategy 3: Set an explicit disclosure norm
The new academic-honesty line is disclosure, not abstinence. "If you used AI, say so and explain how. Honest disclosure is never penalised. Hidden AI use is."
Build this into the school AI policy, into every assignment brief, into how you talk to students from day one.
Strategy 4: Teach AI literacy explicitly
Students who understand AI's failure modes use it more transparently. They cite. They verify. They edit. Students who don't understand AI tend toward the worst use cases — copy-paste, no edit, no understanding.
A weekly fifteen-minute AI literacy moment, woven into existing lessons, pays back disproportionately.
Strategy 5: Have a conversation, not a tribunal
When you suspect AI-assisted work outside disclosure norms:
- Have a conversation with the student. Not as an accusation — as a curiosity. "Walk me through how you wrote this."
- Listen for understanding. Can they explain what they wrote? Can they reproduce a paragraph in their own words?
- If understanding is clearly absent, treat it as an academic-honesty conversation, not a tribunal. The first instance is usually a teaching moment.
- Document, don't punish, on first offence. Build a pattern before escalating.
This approach is slower than running everything through a detector. It produces better outcomes — for students, for the teacher-student relationship, and for institutional fairness.
What to tell parents about AI detection
Parents often ask whether AI detectors are being used and how. A clear answer in plain language:
"We don't rely on AI-detection software to determine academic honesty. The technology has unacceptable false-positive rates, especially for students writing in English as an additional language and for students with certain learning differences. We use a combination of redesigned assessments, transparent disclosure norms, and a school-managed AI tool with full teacher visibility to make AI a productive part of learning rather than a hidden one."
Two minutes of clarity here saves hours of complaints later.
The pragmatic 2026 stance
If you're a head teacher or department lead, the defensible 2026 stance on AI detection looks like this:
- Do not make AI-detector scores actionable on their own. Ever.
- Do adopt a school-managed AI for legitimate use, with teacher visibility.
- Do redesign at least one assessment per subject this year to be AI-resilient.
- Do establish disclosure norms in writing and reinforce them every term.
- Do teach AI literacy explicitly, not just implicitly.
- Do treat suspected hidden AI use as a conversation, not a test result.
That stance survives scrutiny from parents, from students, from regulators, and from your own conscience.
Frequently asked questions
Do AI detectors work in 2026?
Imperfectly. Realistic false-positive and false-negative rates remain too high for any detector score to be the sole basis for an academic-honesty decision. They can serve as one signal in a broader review, never as the verdict.
Are AI detectors biased?
Yes. Multiple independent studies have shown AI detectors flag the writing of non-native English speakers and some neurodivergent students at substantially higher rates. Using detection-led policy in a diverse school is functionally discriminatory.
What should schools do instead of using AI detectors?
Redesign assessment to be AI-resilient, adopt a school-managed AI with teacher visibility, set explicit disclosure norms, teach AI literacy, and treat suspected hidden AI use as a conversation with the student rather than a tribunal.
Can teachers tell if a student used ChatGPT?
Sometimes — especially on shorter pieces from students whose voice the teacher knows well. Often, no. The pragmatic response is to redesign assessments so the question becomes irrelevant. See ChatGPT in schools.
Is it fair to use Turnitin AI detection?
Used as one signal in a broader review, with explicit awareness of its false-positive bias, it can be defensibly fair. Used as a verdict on its own, no: it has well-documented bias against EAL/ELL students and produces unfair outcomes.
Want to make the AI-detection problem mostly go away? Askie for Schools gives teachers full visibility into student AI use, so the question shifts from "did they use AI?" to "did they use it well?" Start a pilot →