A 7th grade student finishes a science project with pride. They’ve spent weeks researching, collaborating, revising, and presenting their ideas. Their thinking is sharp, their teamwork strong, their questions thoughtful. But the next week, they sit for a standardized test—multiple-choice, time-limited, text-heavy. They freeze. The format feels unfamiliar. The stakes feel high. The score? It doesn’t reflect their effort, insight, or growth.
This story isn’t unusual. It’s the system working exactly as it was designed: to be efficient and comparable, and as a result, blind to the full complexity of how students learn. In some ways, AI is supercharging the system—helping us move faster, but not necessarily forward.
AI can now write test questions faster than most people can read them. It can score essays, translate feedback, predict student performance, and even personalize study plans. But if all we do is supercharge the same old assessment models with newer tools, we’ll miss the real opportunity.
What if instead of asking how AI can improve our tests, we ask how it might change what we value enough to measure?
As this issue of EL suggests, we’re standing at a crossroads. One path leads to more efficiency, faster grading, and cheaper content. The other leads to something far more powerful: assessment that is embedded, continuous, multimodal, and deeply human—capable of understanding not just what students know, but how they think, communicate, create, and grow.
I’ve spent the past decade at the intersection of AI, assessment, and equity, building large-scale platforms and leading innovation teams. Through work with initiatives like TeachAI, I’ve come to believe that the most transformative uses of AI in education won’t automate old systems. They’ll help us build new systems—systems that are fairer, more holistic, and far more attuned to the real skills students need to be future-ready.
Where Are We Now?
AI is already reshaping assessment, but mostly behind the scenes. Across districts, education companies, and state agencies, AI is automating scoring, generating test items, and estimating proficiency with fewer questions. The global AI-in-education market is expected to surge from around $5.9 billion in 2024 to over $32 billion by 2030, largely driven by AI-powered assessment, adaptive learning, and analytics tools (Grand View Research, n.d.; Mordor Intelligence, n.d.).
These are real advances. I’ve helped teams build platforms that personalize learning, generate formative content on demand, and deliver near-instant feedback. Generative AI replaced six-month content development cycles with workflows that produced quality items in days, across subjects, grade levels, and languages. These systems didn’t just improve turnaround time; they expanded access, enabled differentiation, and opened the door to more responsive instruction.
And yet, despite these gains, the fundamental architecture of assessment hasn’t changed. In most classrooms, assessments remain periodic snapshots detached from daily learning and constrained by what’s easy to score.
Today’s dominant use cases—item generation, auto-scoring, adaptive testing—reflect a pragmatic starting point. But if we stop here, we risk falling into what I call the optimization trap: using powerful technology to incrementally improve outdated models, rather than reimagining what assessment can and should be.
For the future we envision, AI-enhanced multiple-choice tests won’t cut it. What’s needed now is a shift in ambition, not just from analog to digital, but from “measurement as accountability” to “measurement as growth.”
Where Are We Going?
Imagine a classroom seven years from now. A student is working through a group project on climate change. As they collaborate with peers, an AI tool quietly captures aspects of their communication, critical thinking, and creativity—not by surveilling them, but by integrating with the platforms they already use to create, reflect, and present. Their growth is tracked over time through a mix of self-reflection, teacher observation, peer input, and AI-assisted analysis. The teacher remains at the center, interpreting insights and guiding students with empathy and experience. AI becomes a quiet partner, observing, supporting, but never replacing. The system doesn’t produce a score—it produces a story.
In this future, assessment is not an event. It’s ambient, embedded, and continuous, woven into the learning experience. AI helps illuminate each student’s progress, not just in content knowledge, but in the skills that matter most: collaboration, resilience, curiosity, ethical reasoning.
Here’s what this might look like:
A multilingual student engages in a science discussion with a peer or an AI-powered platform in their home language. The AI captures and translates their ideas for the teacher, preserving nuance and honoring identity.
A student with executive functioning challenges chooses to express their understanding through audio reflections, scaffolded by prompts from their AI agent.
A teacher receives a weekly insight brief, compiled from dozens of real-time learning moments, helping them spot hidden growth, emerging misconceptions, and students ready for more challenge.
A state agency tracks aggregated, anonymized growth data to understand how empathy, reasoning, or collaboration are developing across contexts—not for ranking, but for redesigning learning systems.
This vision is not just more humane. It’s more accurate because it captures a fuller, truer picture of a learner’s abilities over time, across contexts, and through multiple forms of expression. By broadening what we observe and how we measure it, we reduce blind spots and give educators richer evidence to guide growth.
In this ideal future:
Assessment becomes formative by default. Feedback is immediate, personalized, and actionable for students and educators alike.
Measurement is multimodal. Students demonstrate understanding through speech, writing, visuals, and simulations, expanding access for diverse learners.
Growth replaces sorting. Rather than ranking students, AI-powered assessment helps them see their progress, set goals, and reflect on their learning journey.
Educators are insight-rich, not data-burdened. AI doesn’t overwhelm teachers with dashboards. It distills meaning and suggests timely interventions with humans in the loop.
Equity is foundational. Bias detection, explainability, and cultural responsiveness are built in from day one.
This future doesn’t replace teachers; it amplifies them. AI handles the repetitive and invisible so humans can focus on the work that truly matters.
How Will We Get There?
Reaching this future isn’t about deploying more tools. It’s about reimagining the systems, policies, and mindsets that shape how we define learning. While this vision of ambient, AI-powered, human-centered assessment may feel futuristic, many of its building blocks are already taking shape. Across classrooms, districts, and research collaboratives, educators and technologists are prototyping the early architecture of what’s to come.
These examples aren’t just innovations—they’re signals of where we’re heading.
From Compliance to Curiosity
Most current assessments serve institutions, not individuals. We need assessments that generate insight, not just evidence. What is this particular student learning? Where are they stuck? What helps them thrive?
Across the country, educators are beginning to reframe assessment not as a static endpoint, but as a continuous opportunity to support learning in motion. With tools like Khanmigo, Formative, and other AI-embedded platforms, teachers are using real-time insights to guide discussion, adaptation, and reflection.
This shift reflects a broader national trend. According to the 2025 Carnegie Learning survey, two of the top three uses of AI among educators are generating teaching materials and sparking new ideas. Such resources increasingly emphasize formative feedback loops, not just final scores.
From Discrete Events to Continuous Signals
Learning is constant; assessment should be, too. AI can help us observe change over time through language, writing, expression, and behavior—that is, if systems are designed to surface those signals ethically and securely.
Emerging AI tools are beginning to capture student learning as it unfolds. In Clayton County, Georgia, a district using Amira Learning—an AI-powered reading tutor that listens to young learners read aloud, flags errors, and tracks fluency over time—has found that early reading assessments allow teachers to intervene sooner.
In Sweden, Sana Labs is using AI to analyze learning patterns in professional development for educators, offering tailored follow-ups based on micro-signals in user interactions. This “always-on” form of assessment mirrors the continuous, feedback-rich experience we envision for K–12 environments.
From One-Size-Fits-All to Multimodal Expression
AI unlocks new modes of demonstration: audio responses, visual storytelling, simulations. Research consistently shows that flexible assessment methods are essential for access and equity.
At MIT’s Media Lab, the Multimodal AI for Education project is reimagining how students express what they know, moving beyond essays and tests to include drawings, diagrams, gestures, speech, and even physical movement. In collaboration with teachers, researchers are exploring how AI can interpret these rich modes of expression, making thinking visible in new ways.
Early classroom pilots show that when students are invited to communicate ideas through multiple channels, such as sketching while speaking or using body language to explain phenomena, educators gain deeper insight into student understanding. This approach holds particular promise for students often marginalized by traditional assessments, providing a glimpse into a future where every student’s way of knowing can be seen, heard, and valued.
Organizations like CAST, rooted in Universal Design for Learning, are partnering with AI developers to ensure students can demonstrate knowledge in varied, accessible ways. Tools that capture audio reflections, simulations, and collaborative dialogue are paving the way for real multimodal assessment.
These multimodal approaches work best when the tools behind them can share and build on each other’s insights. Without that connection, rich evidence of learning remains trapped in silos, limiting its impact.
From Tools to Ecosystems
Interoperability and open standards are essential investments because the future is not one app; it’s a connected ecosystem of learning evidence that supports continuous insight across platforms.
Rather than relying on a single platform, education stakeholders are exploring ways to stitch together an ecosystem of insight. The EdSAFE AI Alliance is working to define safety, fairness, and interoperability standards for AI use in education. Their work signals a future where different tools speak to each other and where ethical guardrails travel with the data.
Similarly, districts are aligning instructional tools and data platforms through partnerships with Project Unicorn, an initiative to improve data interoperability within K–12 education so insight moves seamlessly across classrooms and systems.
From Tech-First to Teacher-First
Despite a wave of AI tools entering classrooms, the call for professional learning remains strong. According to a 2025 survey of more than 10,000 teachers, just 31 percent of U.S. educators say they received adequate guidance from their school, underscoring the urgent need for policies and training (Twinkl, 2025).
That’s why initiatives like TeachAI are focused on building teacher-centered frameworks. These initiatives offer workshops, guidance, and policy templates designed to ensure educators are co-designers, not just end users of AI in schools. If we don’t design with teachers, we’ll end up building tools that work against them.
The Opportunity Ahead
The most important lesson I’ve learned from building AI systems in education is this: technology follows vision. If we aim to optimize what we’ve always done, that’s what AI will deliver. But if we aim higher toward equity, creativity, and deeper human understanding, AI can help us get there—faster and at scale.
This isn’t about chasing hype; it’s about choosing courage. The next era of assessment won’t be defined by faster grading or cheaper content. It will be defined by how bravely we reimagine what it means to understand a learner and how urgently we build systems that reflect that.
We’re not there yet, but we could be soon. If we center students, support educators, build trust, and design with care, we can create an assessment system that’s more than a mirror of the past. It can be a lens into the future. And that future is still ours to shape.
Reflect & Discuss
Makhani envisions AI-powered assessment that is “embedded, continuous, multimodal, and deeply human.” What excites or concerns you most about this vision, and what steps would need to happen for your school or district to move in this direction?
AI can expand the ways students show what they know. How might this change outcomes for learners with different strengths, languages, or learning needs?
As AI becomes more prevalent in education, what aspects of teaching and assessment should always remain fundamentally human?