April 8, 2026
5 min (est.)
ASCD Blog

What I Learned from Training 50 Teachers in AI Literacy

Tutorials can only go so far—what educators need to use AI effectively is confidence and time to experiment.
Artificial Intelligence | Professional Development & Well-Being | Teaching with Technology
Collage-style illustration of a figure with a laptop displaying "Loading.." in place of a head, with a purple starburst, on a sage green background.
Credit: Mininyx Doodle / iStock
Last year, as I headed into a faculty AI training meeting I was about to lead, I expected the usual: a few curious early adopters, many skeptics, and a handful of teachers convinced that AI would either steal their jobs or destroy education as we know it.
I was right about one thing: The fear was real. But what happened over the next six months surprised me. 
As the AI solutions developer at St. Paul’s School in São Paulo, Brazil, I was tasked with something ambitious: train our entire faculty in AI literacy. This wasn’t just a training where I said, “Here’s how to use ChatGPT,” but one where I was to give teachers a genuine, deep understanding of what AI can and cannot do in education. Fifty teachers later, I learned that the biggest barrier to AI adoption isn’t technology. It’s trust.

The Problem No One Talks About

At the beginning of this training, I made a classic mistake. I assumed teachers needed technical training. I created tutorials on prompt engineering, showed them fancy AI tools, and demonstrated impressive outputs. 
The response? Polite nods followed by zero implementation. 
It took me three weeks to realize my error. Teachers weren’t avoiding AI because they didn’t understand it. They were avoiding it because they didn’t trust it, or didn’t trust themselves to use it responsibly. One veteran teacher put it perfectly: “I’ve spent 20 years learning how to teach. Now you’re telling me a machine can do it better?”

Teachers were not resisting the technology—they were protecting their professional identity.


The Framework That Actually Worked

After that teacher’s comment, I started thinking about my entire approach differently. Her words revealed something I had missed: Teachers were not resisting the technology—they were protecting their professional identity. I realized that effective AI training had to begin with trust, not tutorials. I scrapped my technical curriculum and built something different: a six-module program focused on developing teachers’ confidence. These six modules approached AI literacy in a way that honored teachers’ professional identities and left room for discovery and exploration.

Module 1: Start with Ethics, Not Tools 

Counterintuitive, right? Most AI training begins with, “Look what this tool can do!” I started with, “Here are the downsides.” We discussed risks and tradeoffs around data privacy, algorithmic bias, academic integrity, and environmental impact. Teachers didn’t leave excited, but thoughtfully concerned, which is exactly where I wanted them. Why? Because teachers who understand the risks around AI use make better decisions.

Module 2: Let Them Fail Safely

I gave teachers a low-stakes challenge: Use AI to plan a lesson on a topic they had never taught. I offered no guidance on how to write their prompts. Their first attempts were vague and generic, and the AI produced lesson plans that were superficial, pitched at the wrong level, or confidently inaccurate. One math teacher simply typed, “Make a lesson on fractions” and received a plan full of algebraic notation suited to 16-year-olds, not the 11-year-olds she had in mind. She laughed and said, “This is exactly why it needs me.” That was the turning point. Teachers learned how to hone their prompts and make them more specific to get the outcomes they wanted. They also began to see that AI output is only as good as the professional judgement behind the prompt, and that their expertise in knowing their students, their curriculum, and their classroom was irreplaceable.

Module 3: Co-Create, Don’t Consume

Instead of showing teachers pre-made AI tools, I asked them to build their own. Using platforms like MagicSchool AI, teachers created custom chatbots tailored to their specific needs. A history teacher built a Socratic dialogue bot that never gives direct answers, only asks probing questions. A science teacher created a misconception detector. The shift was remarkable. Teachers stopped seeing AI as something that happens to them and started seeing it as something they control.

Module 4: Address the Integrity Elephant

Every teacher training on AI eventually hits the same wall: “But students will just cheat.” I didn’t dismiss this concern. Instead, I reframed it. Yes, students can use AI to cheat. They could also use calculators, the internet, or older siblings. The question is how to design assessments that make AI assistance either irrelevant or pedagogically valuable.
For instance, one English teacher redesigned a literary analysis task: Instead of asking students to write an essay about a novel, she asked them to critique an AI-generated essay about that same novel, identifying where the AI misunderstood tone, context, and character motivation. The AI became part of the assessment rather than a shortcut around it.

Module 5: Show Real Data

Halfway through the program, I shared something personal with the group: preliminary findings from a separate study I was conducting as part of my doctoral research at the University of São Paulo. I had been investigating how 45 of our IB Diploma students (ages 16–18) experienced AI-assisted tools for mathematical document preparation, measuring usability and cognitive load with validated instruments. The results were encouraging overall, but they also revealed that AI sometimes hindered learning, particularly when students over-relied on it. I shared these findings with the 50 teachers to show that while AI is genuinely useful, it is not a miracle solution. Teachers responded to that honesty. Several told me it was the first time anyone had shown them real evidence of AI’s limitations alongside its strengths, and that changed the way they thought about using it in their own classrooms.

Module 6: Build a Community

The final module wasn’t about AI at all. It was about sustainability. I helped teachers form small learning communities, groups of three to four colleagues who would continue experimenting together. Six months later, those communities are still active. Teachers who once avoided AI now share prompt libraries in WhatsApp groups. 

What I’d Do Differently 

We saw great progress, but not everything worked. Here are three things I would do differently next time:
Start smaller. I tried to train everyone simultaneously. In retrospect, I should have started with 10 volunteer AI ambassadors who could then support their colleagues. Peer learning beats top-down training every time.
Use authentic examples. Teachers wanted to see examples of effective AI use from their own school, not generic case studies. I should have recorded and shared early successes immediately. For example, when one science teacher used AI to generate differentiated revision worksheets in half the usual time, her colleagues only heard about it weeks later by chance in the staff room. Had I captured and circulated that story straightaway, it could have inspired others far sooner.
Involve students sooner. Some of our best insights came from student focus groups that we organized after the final module. We gathered small groups of students and simply asked them how they were already using AI and what they wished their teachers understood about it. Two insights stood out: Students wanted teachers to acknowledge that AI tools were already part of their daily reality, and they felt more engaged when teachers used AI transparently in lessons rather than pretending it did not exist. When we shared these perspectives with the faculty, several teachers told us it fundamentally changed how they approached AI in their classrooms. In hindsight, we should have run these conversations before the training even began.

Teaching Confidence, Not Expertise

AI literacy isn’t about mastering tools. It’s about developing the judgement to know when those tools help, when they harm, and when they’re simply irrelevant. 
Our teachers didn’t need to become AI experts. They needed to become confident professionals who could integrate new technology without losing their pedagogical identity. That confidence comes from experience, community, and the freedom to fail safely.
If you’re planning AI training for your school, start with teachers’ concerns and an open dialogue, and address their fears honestly. Then build trust, one small experiment at a time.

Roney Nascimento is an AI solutions developer and IB Diploma mathematics teacher at St. Paul’s School in São Paulo, Brazil. He is the author of Generative AI for Teachers: A Practical Guide to Educational Technology (Amazon Kindle Direct Publishing, 2025).
