Most school AI initiatives fail in the same way: announced district-wide in September, gathering dust by December. The schools that succeed start small, measure honestly, and scale only what's working. This is a six-week pilot playbook used by schools that have gotten AI in classrooms and kept it there.
Why pilots fail (and how this playbook avoids it)
The five failure modes that account for almost every dead AI pilot:
- No clear success criteria. "We'll see how it goes" produces no decision at the end.
- Wrong teacher. The enthusiast everyone expects is sometimes not the right pilot lead. Pick deliberately, not by default.
- Too many students, too fast. A pilot is not a small rollout. Keep it surgical.
- No parent communication until something goes wrong. Avoidable in one email.
- No decision moment. The pilot drifts into a permanent state without anyone deciding whether to scale.
The playbook below is designed around those five.
Before week 1: the prep
A handful of decisions to make before kickoff. Total time: ~4 hours over two weeks.
Pick the pilot classroom
The right pilot teacher has three properties, in order:
- Curious, not a zealot. A teacher who is interested in AI but hasn't sold their soul to it has the right blend of openness and scepticism.
- Credible with colleagues. When the pilot ends, this teacher will tell other teachers what they found. Pick someone the staff room listens to.
- Year group that fits the tool. For Askie for Schools, elementary or lower secondary is ideal. For maths-only tools like Khanmigo, lean toward year groups doing maths intensively.
The wrong teacher: the loudest AI sceptic ("to convert them") or the loudest AI enthusiast ("they'll make it work"). Both produce uninformative pilots.
Define success criteria up front
Write them down. Get the head teacher to sign off on them. Three to five metrics, each with a target.
A workable starter set:
- Teacher time saved per week (target: 60+ minutes, measured by teacher's own log).
- Student engagement (qualitative + a simple end-of-pilot survey).
- Learning outcomes on at least one specific objective (pre/post measure, even informal).
- Safety incidents (target: zero unmoderated incidents; some flagged-and-resolved is expected and fine).
- Teacher confidence (would the teacher continue? Would they recommend?).
Don't over-engineer the measurement. A teacher's weekly note in a Google Doc is more useful than a perfect dashboard nobody fills in.
Pick the platform
This is its own decision and there's no point piloting the wrong tool. We've covered this in the top 10 AI tools for schools comparison. For K–8 specifically, Askie for Schools is purpose-built for this pilot path and free for the first classroom.
Brief parents
A short letter home, one week before kickoff. Cover:
- What is being piloted, in which class, for how long.
- The safety and privacy posture in one paragraph.
- How parents can see what their child is doing.
- A named contact for questions.
Two pages, one signature. Done.
Get the IT setup done
Devices, accounts, network access, SSO if relevant. Most reputable school AI platforms can be set up for one classroom in a day. Have it ready before week 1.
Week 1: AI literacy first, AI use second
The most overlooked week of any pilot. Don't let students use the AI yet.
Lesson 1: "What is AI? What is it not?" A 30-minute conversation. Concrete examples. Watch the AI fail on something in front of the class.
Lesson 2: "AI got it wrong" hunt. Detailed in our classroom AI guide. Pair work. Find the mistakes.
Lesson 3: Set the norms. The four classroom rules (also in the classroom guide): visibility, disclosure, verification, devices-flat-when-not-in-use.
End of week 1 check: Can students articulate "AI makes mistakes, we check"? If yes, proceed. If not, stay on AI literacy another day.
Week 2: Whole-class, teacher-driven
Students still don't touch the AI directly. Everything is whole-class, on the projector, teacher-prompted.
Three structured activities — pick from our lesson ideas list:
- The two-prompt comparison.
- The "explain it like I'm five" game.
- AI as language coach or AI as research summariser (subject-dependent).
The teacher logs:
- Did students engage? Where? Where did engagement drop?
- Did the AI surface anything pedagogically useful?
- Time saved on what would otherwise have been a manually built lesson.
End of week 2 check: Are students comfortable with the AI as a class tool? Are the norms holding? If yes, move to small-group.
Week 3: Small groups, station model
Two or three "AI stations" in the classroom. Pairs of students at each. Other groups doing parallel work. Rotation.
Suggested activities:
- The differentiated reading task. AI produces three versions of a text; pairs read; discussion follows.
- The AI rubric workshop (Year 5+).
- The historical interview (humanities subjects).
The teacher logs:
- What conversations are happening at the stations? Are pairs using the AI well or poorly?
- Any safety flags or off-task use?
- What's the teacher learning from observing students' AI interactions?
End of week 3 check: Are the small groups productive? Is the teacher able to see what's happening? If yes, move to one-to-one.
Week 4: One-to-one (where appropriate)
For older students (Year 5+) with strong infrastructure: every student has access on their own device, working through teacher-defined activities. For younger students: continue with stations.
The teacher's role shifts substantially. With every student potentially in a different conversation, the teacher dashboard becomes critical — for visibility, for flagging, for catching what was previously invisible.
Three activities for week 4:
- AI as study partner. Students prepare for a small test by having the AI quiz them.
- The reframe-and-rephrase workflow. Students stuck on a problem ask the AI to "say it differently" — they don't ask for the answer.
- The creative writing partner. AI suggests alternatives; student decides what survives.
The teacher logs the same metrics, but now layered with: what does the dashboard show? Is the visibility actually useful?
End of week 4 check: Does the one-to-one mode produce learning gains? Is teacher visibility usable in practice, not just in theory?
Week 5: Differentiation week
The single most impactful AI use for most classrooms. Spend a week stretching it.
The teacher uses AI to:
- Differentiate three texts to three reading levels for every text used.
- Generate practice questions at three levels for at least two topics.
- Run targeted AI-tutor sessions with two or three students who need extra support.
- For SEND students in the class, run scoped AI sessions per the AI for special education workflows.
The pilot is now testing the most valuable use cases AI offers schools. The teacher logs the time saved and the impact on the students who needed it most.
End of week 5 check: Is the time saved real? Are the students who needed extra support actually getting it via the AI?
Week 6: Decision week
The pilot is over. Time to decide.
Step 1: Teacher writes a one-page report
Not a 20-page report. One page. Five sections:
- What we did.
- What worked (with examples).
- What didn't (with examples).
- What I'd change if scaling.
- My recommendation.
Step 2: Student survey
Five questions, anonymous, 2 minutes:
- Did the AI help you learn? (yes / sort of / no)
- What was your favourite way to use it?
- What didn't work?
- Did you ever feel unsafe or uncomfortable?
- Should other classes get to use it?
Step 3: Parent check-in
Three or four parents, ten-minute conversations. What did they hear from their child? Any concerns? Any unexpected wins?
Step 4: Decision meeting
A short meeting: pilot teacher, head, IT lead, maybe one parent rep. Walk through the criteria you set in prep. Decide:
- Scale to a year group (next safe step).
- Continue at one classroom for another half-term and revisit.
- Pause and try a different tool.
- Stop and explain why.
All four are legitimate outcomes. The mistake is no decision.
Common pilot questions
Three questions almost every school asks before week 1:
"What if a parent complains?"
Have a named contact. Have the AI's safety and privacy posture ready in writing. Most parent complaints are concern, not opposition — clear communication resolves them. The pre-pilot letter prevents the worst of them.
"What if a student misuses the AI?"
Misuse will happen in some form during any pilot. The right response: the teacher dashboard flags it, the teacher talks to the student, the policy is consulted, the situation is resolved. Document, communicate, and move on. This is normal.
"How much time will the teacher need?"
Realistic estimate: 4–6 hours of additional teacher time over the six weeks, beyond what they'd already be doing, mostly in weeks 1–2 and week 6. Schools should plan for this — a small stipend or release time, depending on context.
What to scale to in week 7+
If the pilot succeeds, the next scale is not the whole school. It's another classroom — ideally with a different teacher profile, in a different year group. Two pilots' worth of data gives you confidence to expand to a year group; a year group's data gives you confidence to expand to a key stage.
This sounds slow. It's the speed that produces sustainable adoption rather than enthusiastic launches followed by quiet abandonment.
Frequently asked questions
How long should an AI pilot in a school last?
Six weeks is the sweet spot — long enough to get past novelty, short enough to maintain energy and produce a decision. Shorter pilots don't yield real signal; longer pilots drift.
What's the best class to pilot AI in?
The right pilot teacher matters more than the right class. Choose a curious-but-sceptical teacher whom colleagues respect, in a year group that fits the tool.
How much does an AI pilot cost?
With reputable vendors, the first classroom is usually free. Askie for Schools offers a free pilot for one classroom. Plan for the teacher's time — a small stipend or release time — as the meaningful cost.
What success metrics should we track in an AI pilot?
Teacher time saved (target: 60+ min/week), student engagement, learning outcomes on at least one specific objective, safety incidents, teacher confidence at end. Keep the measurement light enough that the teacher actually does it.
Should we tell parents about the pilot before or after it starts?
Before. Always before. A short letter home one week before kickoff pre-empts almost all complaints. Schools that announce after the fact spend the pilot defending it instead of running it.
What if the pilot fails?
A "failed" pilot is a successful pilot. You learned that the tool, the teacher, or the use case wasn't right — at the cost of one classroom for six weeks. That's the cheapest information you'll ever buy about AI in your school.
Ready to run a six-week AI pilot in your school? Askie for Schools gives you a free classroom pilot, full setup support, and a measurable path to decision day. Book your pilot →