November 1, 2025 | Vol. 83, No. 3

Creating Actionable Classroom Assessments

Bryan R. Drost

With clear performance-level descriptors and a little help from AI, teachers can generate the kind of insights needed to improve instruction.

Effective classroom assessment practices are about more than just giving a test or assigning a grade. They are about helping teachers form a foundation of responsive instruction; ideally, they provide actionable insights to help differentiate teaching, offer targeted feedback to students, and support each student’s growth (Drost & Levine, 2023; McMillan, 2024). Whether through an informal game, a more comprehensive project, a student self-assessment, or even classroom dialogue, effective classroom assessment practices can generate a continuous stream of evidence of student understanding.
Despite this, many classroom assessments fall short in guiding instructional decisions because they are not designed to be truly actionable (Walsh, 2023). Actionable assessments allow teachers to remediate or extend learning targets and even set new goals that are responsive to students’ needs (Drost, 2021). One way teachers can create actionable assessments is by combining performance level descriptors (PLDs) with performance tasks, which results in more relevant, growth-oriented assessments that inform instruction. Incorporating generative AI tools can further support this work by saving teachers time and energy. When teachers assess with this level of intentionality, classroom assessments can have a powerful impact on academic outcomes (Hattie, 2023).

Putting PLDs to Practice

Best practice indicates that teachers should align all assessments to standards. However, standards vary in their specificity; in some contexts, they are broad, general statements about what students should know and be able to do, rather than specific, detailed progressions of learning. Therefore, assessments based on standards alone may not be instructionally useful (Drost et al., 2025).

A better foundation for assessments is PLDs. Developed by educators and content experts and approved by state education departments, PLDs describe the knowledge, skills, and practices associated with each level of performance within a grade and content area’s standards. They provide clarity about where students fall in the learning process (Hattie, 2023), allowing educators to get an accurate picture of student learning and to determine next steps.
Once a teacher understands this progression, they can create actionable classroom assessment tasks that determine where students are in the learning process for a particular standard.
To see how PLDs work in practice, consider Figure 1, a detailed example of a mathematics PLD from the New York State Education Department (2023). Using this table, teachers can design assessments that collect observable evidence of student learning at various levels of student thinking.
Figure 1. The four performance level descriptors for New York State’s math standard cluster NY-5.NBT.6-7.
Prior to using PLDs, a teacher might have students race to solve the problem 37.23 × 5 in Kahoot! or Quizizz. While this math problem is aligned to the standard, it addresses only one level within the PLDs and doesn’t capture all the variations within the standard (multiplying decimals by decimals, for example). The New York PLD suggests that teachers would need to create at least four additional tasks to fully understand their students’ learning. Creating tasks for each performance level of the standard, however, can take a lot of time. This is where generative AI can become an excellent thought partner (Drost & Shryock, 2024, 2025).
For the math standard in Figure 1, for example, I asked ChatGPT to generate four different questions aligned to the PLDs, and then I carefully reviewed them. This is a key step in using generative AI: Teachers should apply their pedagogical knowledge to review any AI-generated output and make sure it is appropriate for students. My favorite responses to my prompt are shown in Figure 2.
Figure 2. Sample AI-generated questions aligned to the four performance level descriptors for New York State’s math standard cluster NY-5.NBT.6-7.
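For teachers comfortable with a little scripting, the same request can also be made programmatically. The sketch below is an illustrative assumption rather than the workflow described above (which used the ChatGPT interface directly); it presumes the openai Python package, an API key in the environment, and a placeholder model name.

```python
# A minimal sketch of scripting the PLD-aligned question request through
# the OpenAI Python SDK. The model name and the pld_text placeholder are
# assumptions, not part of the article's workflow.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

pld_text = """<paste the four performance level descriptors from Figure 1 here>"""

prompt = (
    "Here are the performance level descriptors for math standard cluster "
    "NY-5.NBT.6-7:\n"
    f"{pld_text}\n\n"
    "Generate one assessment question aligned to each of the four "
    "performance levels. Label each question with its level."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# As stressed above, review the output with your pedagogical knowledge
# before putting any question in front of students.
print(response.choices[0].message.content)
```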

Ready, Set . . . Action!

While these questions now get at the various performance levels, they still are not fully actionable. The problem with traditional tasks is that they don’t reveal student misconceptions or their deeper understandings. These AI-generated questions are no different, which limits their usefulness for guiding instruction. We could, of course, seed the answer choices with the results of common mistakes, making the questions somewhat more helpful for diagnosing student understanding based on the answer a student chooses. But this is still limiting: a multiple-choice selection doesn’t show the student’s work or thought processes (Wren & Gareis, 2019).
What if there were a task that could provide a more complete picture of student learning by encouraging the student to think deeply and demonstrate creativity and application over time? What if those same assessments could also foster student engagement and ownership, giving learners more meaningful ways to demonstrate their understanding?
As Brookhart (2024) indicates, performance tasks ask students to apply what they’ve learned in real-world contexts—solving problems, analyzing scenarios, creating products, and presenting findings—so that everyone understands where the student is in the learning progression. Performance tasks show what the student is learning, how the student is learning it, and the quality of their understanding over time. A well-designed performance assessment allows a student to demonstrate their learning through complex, multistep tasks that mirror the exact types of thinking and problem solving emphasized in PLDs, while also allowing the teacher to fully know how to adjust instruction.

One structure for designing performance tasks is the GRASPS framework (Wiggins & McTighe, 2005), which stands for goal, role, audience, situation, product/performance, and standards. This structure is helpful as it makes tasks more authentic, meaningful, and instructionally useful. Here’s how a teacher can use GRASPS to create performance tasks:
  • Goal and Standards: Identify the level at which you want to check students’ understanding via the PLDs. The goal should involve a real-world challenge that requires students to apply their knowledge meaningfully.
  • Role, Audience, and Situation: Assign students a role (e.g., engineer, journalist), define their audience (e.g., city council, peers), and set up a realistic situation that creates relevance and engagement.
  • Product/Performance: Ask students to create something open-ended—such as a presentation, budget plan, or written report—that demonstrates their thinking. Use PLD-aligned rubrics to provide transparent, level-specific feedback.
For example, using the Grade 5 PLD focused on decimal operations in Figure 1, a task might be:
Your class has $50 to spend on food for a school movie event. Choose three items from the attached catering menu, calculate the total cost, and determine how much money will remain. Use a model and a written method to explain your reasoning and present your plan to the class.
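As a quick check on the arithmetic such a task involves, here is one possible solution path, with hypothetical menu prices chosen purely for illustration:

```latex
% Hypothetical menu prices: $12.75, $18.50, and $9.99
\begin{align*}
\text{Total cost:}\quad & \$12.75 + \$18.50 + \$9.99 = \$41.24 \\
\text{Money remaining:}\quad & \$50.00 - \$41.24 = \$8.76
\end{align*}
```

Students at different performance levels might reach the same totals through different strategies, such as a place-value model versus a standard written method, which is exactly the evidence the PLDs ask for.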
This task is authentic, grounded in math content, and provides opportunities for students to demonstrate their understanding at various performance levels. However, designing separate, differentiated tasks aligned to each PLD level can be time intensive. That’s where generative AI can again become a useful partner. I prompted ChatGPT: “Create four authentic performance assessments for each of the performance levels using the following PLDs (fig. 1). Place these in the GRASPS format.” ChatGPT returned four options for each PLD; the one I chose to illustrate performance level 3 is shown in Figure 3.
Figure 3. A sample performance task in GRASPS format aligned to New York State’s math standard cluster NY-5.NBT.6-7. The GRASPS acronym stands for goal, role, audience, situation, product/performance, and standards.
This task is actionable because it offers diagnostic evidence of students’ decimal understanding: it reveals their computational accuracy, how well they align strategy to context (conceptual understanding), and their ability to explain their reasoning. It also helps identify misconceptions in decimal placement, such as aligning digits incorrectly or confusing tenths with hundredths, because the items from the party supply store present all of these mathematical situations.
Actionable performance tasks can be created in any discipline and tailored to students’ needs. A science teacher might create simple lab experiments demonstrating changes in quantities of matter when heating, cooling, or mixing substances, tailoring the experiments to different levels of the state’s PLDs. In English class, students could play games that reinforce various PLD levels by identifying main ideas and themes in a work of literature. In each case, prompting AI for help differentiating these tasks can save time and generate new and creative ideas.
To deter students from using generative AI to cheat, teachers can create multiple versions of an assessment, have students complete assessments in class, ask students to describe orally what they learned from the process, or use process-documentation tools. Reflection shifts the focus from what students produced to how they produced it. Questions might include:
  • “Which part of this task was most challenging for you and why?”
  • “What strategies did you use to check your work?”
These types of questions require students to articulate their reasoning, which AI struggles to invent authentically.

Enrich and Reinforce

Based on student responses, the teacher can reinforce the student’s current level, work with the student at the previous performance level, or extend the student’s learning. Using the Grade 5 math PLD and the task in Figure 3, I asked ChatGPT the following: “Create an enrichment task for a student who understands this task that moves them to the next level and a teacher-led reinforcement task for a student who does not understand.” I received two tasks that can support both students and the teacher, as shown in Figure 4.
Figure 4. Examples of AI-generated enrichment and reinforcement tasks that build on the skills developed in the previous example.
The enrichment task would start to move students from level 3 to level 4, as it requires students not only to add and subtract decimals accurately using place value reasoning, but also to explain why their choices make sense in the context. Students must connect the math to their decision making, showing how their calculations support the choices they made and why those choices are reasonable. The reinforcement task, while teacher-led in a small group, breaks down abstract ideas like decimal place value and provides scaffolding, repetition, and modeling. This helps the student connect real-life context with mathematical processes.
Because there are so many possible instructional pathways for the student based on the task, it is also possible to have ChatGPT provide timely, personalized feedback to students by analyzing their individual responses. (A note of caution: When uploading student responses, be careful to avoid personal identifying information such as a student’s name.) When setting this up, a teacher can give the AI tool specific parameters to encourage critical thinking with a prompt such as:
You are providing feedback to a 5th grader on their solution to this task: [Insert student’s written response here]. Point out one thing the student did well in their explanation or strategy. Ask one clarifying question that gets the student to think more deeply about their reasoning. Suggest one next step they might take to strengthen their solution. Do not provide the correct answer. Keep your tone encouraging and student-friendly.
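A teacher handling a whole class’s worth of responses could wrap this prompt in a short script. The sketch below is, again, an illustrative assumption (it uses the OpenAI Python SDK and a placeholder model name, neither of which the article prescribes); note that it redacts the student’s name before upload, following the caution above.

```python
# A sketch of reusing the feedback prompt above across many student
# responses. The SDK usage and model name are assumptions for
# illustration; the redaction step follows the PII caution in the text.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEEDBACK_PROMPT = (
    "You are providing feedback to a 5th grader on their solution to this "
    "task: {work}. Point out one thing the student did well in their "
    "explanation or strategy. Ask one clarifying question that gets the "
    "student to think more deeply about their reasoning. Suggest one next "
    "step they might take to strengthen their solution. Do not provide the "
    "correct answer. Keep your tone encouraging and student-friendly."
)

def feedback_for(student_work: str, student_name: str | None = None) -> str:
    # Remove personal identifying information before sending work to the model.
    if student_name:
        student_work = student_work.replace(student_name, "[student]")
    completion = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user",
                   "content": FEEDBACK_PROMPT.format(work=student_work)}],
    )
    return completion.choices[0].message.content
```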
Generative AI can also provide feedback aligned to the PLD to help the student understand next steps, the hallmark of effective feedback (Hattie & Timperley, 2007). An example is shown in Figure 5.
Figure 5. An AI-generated example of feedback to a student after completing the previous sample exercises.
This feedback helps clarify strengths, pinpoint misconceptions, and suggest next steps, all while maintaining alignment with grade-level expectations. It is important to note that throughout all of these processes, AI can draft the tasks and feedback, but the teacher decides whether they truly align to learning objectives and fit students’ readiness levels.

Assessment in Action

As these examples show, using generative AI with PLDs and performance tasks can create actionable assessments. Through this process, educators can determine the depth of student understanding, identify gaps in learning, make targeted instructional decisions, and provide enrichment or intervention activities. These tools together provide a promising pathway for building high-quality, instructionally useful, and authentic assessments that clarify learning expectations, enhance feedback, and support both teaching and learning.
References

Brookhart, S. (2024). Classroom assessment essentials. ASCD.

Drost, B. (2021). Designing online instruction: What makes for effective learning. AMLE.

Drost, B., & Shryock, C. (2024). Using generative AI for instructional design. In M. Stevkovska, M. Klemenchich, & N. Ulutas (Eds.), Reimagining intelligent computer-assisted language education. IGI Global.

Drost, B., & Shryock, C. (2025, April 17). Collaborate with generative AI to improve classroom assessments. Phi Delta Kappan.

Drost, B., Fincher, M., Forte, E., Warner, Z., & Shryock, C. (2025, April). Connecting performance level descriptors and released items to improve understanding and communication. Paper presented at the annual meeting of the National Council on Measurement in Education (NCME), Denver, CO.

Drost, B. R., & Levine, A. C. (2023). An analysis of strategies for teaching and assessing standards-based assessment design to preservice teachers. Journal of Education, 203(3), 127–136.

Hattie, J. (2023). Visible learning: The sequel. Routledge.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

McMillan, J. H. (2024). Classroom assessment: Principles and practice that enhance student learning and motivation (8th ed.). Pearson.

New York State Education Department. (2023). New York state testing program next generation learning standards: Performance level descriptors, Grade 5.

Walsh, J. A. (2023). Questioning for formative feedback: Meaningful dialogue to improve learning. ASCD.

Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). ASCD.

Wren, D. G., & Gareis, C. R. (2019). Assessing deeper learning: Developing, implementing, and scoring performance tasks. Rowman & Littlefield.

Bryan R. Drost is the executive director for instructional innovation for North Central Ohio. He is currently the co-chair of the NCME classroom assessment committee and the faculty lead for the School Improvement Through Data Analysis and Assessment graduate certificate at Ursuline College in Ohio.
