Middle and high school educators around the world are grappling with the same challenge: AI tools are here, students are using them, and educators must decide how to respond. As a consultant, researcher, and leadership coach working with schools nationwide, I’ve frequently seen one of three system-level responses to AI:
- The Status Quo Response: Block all AI tools on the network and school devices and add a prohibition against AI to academic integrity policies.
- The Transactional Response: Pilot and purchase AI tools for teachers’ and students’ use. Block non-approved tools. Modify acceptable use and academic integrity policies to allow staff and students to use the acquired tools at the discretion of each teacher.
- The Laissez-Faire Response: Allow open access to AI tools at school. Delegate rules for student use to individual teachers or departments. Keep acceptable use and academic integrity policies largely unchanged.
Each of these approaches has some merit. The first two prioritize the non-negotiables of safety and privacy; the third values teacher autonomy. However, none of these approaches provides teachers and learners with the guidance and support necessary to navigate a world where AI tools are not only available but unavoidable.
The Usage-Guidance Gap
Recent data highlights the gap between how frequently teachers and students are using AI tools and the guidance and support they’ve received to use them effectively. Consider the following:
- 81 percent of Advanced Placement coordinators view AI primarily as a cheating risk, yet only 4 percent of middle school teachers and 8 percent of high school teachers believe their school has a clear and comprehensive AI policy (College Board, 2025; RAND, 2025).
- 71 percent of teachers worry AI weakens academic skills, but only 16 percent of students report receiving guidance on how to use AI tools effectively (CDT, 2025).
- 86 percent of high school students report using AI for schoolwork and believe it can benefit their learning, but 66 percent also believe that over-reliance on AI will harm their learning (College Board, 2025).
- While 69 percent of teachers report using AI for lesson planning, nearly 50 percent of parents and students feel a teacher using AI is "not really doing their job" (CDT, 2025). Meanwhile, only 17 percent of AP coordinators believe their teachers have received sufficient professional development about AI (College Board, 2025).
Acknowledging and closing these gaps requires a fourth, more intentional approach: one that provides the guidance and boundaries teachers and students need to make informed decisions about when and how to use AI tools in ways that support, rather than undermine, effective teaching and learning.
The Intentional Response to AI
The Intentional Response has two core elements. First, articulate clear, consistent guidelines and boundaries that support safe, effective, and appropriate uses of AI tools by teachers and students and that minimize inappropriate and potentially harmful uses. Second, align policies, professional learning, and curricula to those guidelines and boundaries.
This intentional response to AI can be developed and implemented across four phases.
Phase 1: Learn
- Establish a committee that includes teachers, administrators, technology specialists, parents, and students to build a shared understanding of the capabilities of AI tools and their potential risks and opportunities for teaching and learning.
- Gather and analyze local data to better understand teachers’ perceptions of AI tools’ potential impact on teaching, learning, and academic integrity, as well as how teachers and students are actually using these tools.
Phase 2: Build Capacity for Staff to Use AI Tools Effectively
- Establish guidelines, boundaries, and effective use cases for teachers to use AI tools in ways that support effective teaching and the school’s core purpose/mission.
- Guidelines articulate criteria for effective use of AI tools. For example, in my book AI with Intention, I outline three guidelines for teachers. If teachers choose to use AI tools, they must be able to 1) explain how the output has fidelity to established priorities for teaching and learning, 2) transparently document how AI tools were prompted and used, and 3) explain the meaning of any AI-generated content, including how the output was reviewed, revised, and edited to ensure accuracy and clarity.
- Boundaries clarify when and how AI tools should not be used. For example, the committee could establish a boundary that prohibits the use of AI tools to grade student work, or provide examples of poor prompts or uses that undermine effective teaching.
- Effective use cases are descriptions of exemplary uses of AI. These could include models for high-quality prompts for teachers, classroom strategies that maximize academic integrity, or particularly effective examples of classroom uses of AI.
- Provide opportunities for teachers to engage in professional learning to build their capacity to use AI tools in ways that are aligned with the guidelines and boundaries for effective use. A comprehensive approach includes modeling, guided practice, trial and error, classroom implementation, reflection, dialogue with colleagues, and modification.
Phase 3: Build Capacity for Students to Use AI Tools Effectively
- Establish guidelines, boundaries, and effective use cases for students to use AI tools in ways that support effective learning, minimize risks, and are aligned to the school’s core purpose/mission.
- As with teachers, guidelines for students should focus on integrity, transparency, and explainability. This means students know they 1) are accountable for the integrity of the evidence of their learning, 2) are required to transparently document the sources or resources they used to support their learning or to complete their work, and 3) will be expected to explain the meaning of their schoolwork as it is being completed and after it has been turned in.
- Boundaries clarify when and how AI tools should not be used. For example, Wisconsin’s Hamilton High School has established a shared set of guidelines and boundaries for AI use across departments, organized into four levels: No AI, Supported with AI, Co-Created with AI, and AI-Driven. At each level, students are required to meet specific guidelines for integrity, transparency, and explainability.
- Effective use cases are exemplars of students’ uses of AI. These could include exemplary prompts to use AI as a learning tool, ways to protect one’s privacy and document one’s use of AI, and how to verify AI output for accuracy.
- Use established guidelines, boundaries, and effective use cases as the basis for revisions to Academic Integrity and Acceptable Use policies.
- Provide opportunities for teachers to engage in professional learning to support students’ uses of AI tools. Telling students what not to do is not the same thing as teaching them what to do. Conversely, simply giving students permission to use AI tools is not the same thing as teaching them how to use them effectively. If we want students to use AI tools effectively, teachers need professional development opportunities to learn how to develop students’ capacity to use AI tools with agency and integrity.
- Provide opportunities for students to learn how and when to use AI tools in each discipline in ways that intentionally support their learning and align with the guidelines and boundaries for effective use. This includes modeling, guided practice, trial and error, independent practice, feedback, revision, and reflection on what has been learned.
Phase 4: Ongoing Maintenance and Review
- Each of the first three phases typically requires about three to six months of focused work. Once these components have been established, the work shifts to continuous monitoring of the effectiveness of policies and classroom practices and responding to teachers’ and students’ learning needs. Additionally, as new, more capable AI tools are released and become increasingly unavoidable, revisions to policies, guidelines, boundaries, and effective use cases will be required.
Regardless of our individual beliefs about AI tools (or whether they can be accessed on a school’s network), they are becoming unavoidable. We have an obligation to ensure that when students or educators use them, they do so in ways that intentionally support effective teaching and learning and minimize risks to safety, privacy, and academic integrity.