As 2023 winds down, however, teachers have had time to consider more than concerns about plagiarism. Many of us have begun to realize how we might use AI tools to make our teaching practice more efficient and effective. This realization is aided by the many companies, like Canva and Microsoft, that now heavily advertise how AI makes their popular education tools even more useful. While the benefits of these AI-supported hacks vary, the most consistent one is the promise of substantial time savings. These tools can draft lesson plans and project sheets. They can draw illustrative graphics for slide decks and write the accompanying copy. It used to take an entire weekend to collate all the information needed to introduce a unit. With AI's help, we can now make a gorgeous and informative slideshow in the minutes between Sunday Night Football and bedtime.
The session began with participants asking ChatGPT to: "Generate 10 classroom discussion prompts for a [grade level] classroom discussion about [a contemporary controversial issue]." Participants could fill in the blanks, but I did offer a few examples of what I meant by controversial (topics where there is great public disagreement—like the January 6 insurrection). As the AI-suggested prompts came in, I asked participants to look for advantages and disadvantages in each prompt. How would their students react to them? After some notetaking, they discussed their answers with their neighbors. Then we repeated the process by giving ChatGPT slightly different directions: "Generate 10 classroom discussion prompts for a [grade level] classroom discussion about [a historical event or a book with controversial themes]." While the first exercise asked ChatGPT to help us discuss a hot-button current event, this one was meant to see how ChatGPT might suggest we tackle a historical event or thorny text.
The participants' responses to these exercises were fascinating in their complexity. ChatGPT suggested some prompts that blew teachers' minds and some that made them roll their eyes.
I asked for questions about Chapter 1 of Trevor Noah's Born a Crime. ChatGPT suggested, "Discuss the significance of the story about the DJ named Hitler in Chapter 1. What does it reveal about the power of names and the influence of pop culture in shaping perceptions and attitudes?" The problem is that the DJ named Hitler appears toward the end of the book, not in Chapter 1. And even if he did appear there, that very complex and very sensitive passage has nothing to do with pop culture. Similarly, I asked about Chapter 1 of William Golding's Lord of the Flies. ChatGPT offered, "Analyze the interactions between the boys and their gradual descent into savagery in Chapter 1. What factors contribute to the breakdown of social norms?" This is a silly question, as the "descent into savagery" doesn't happen until later in the book. In fact, early on, the boys famously cling to societal norms, like voting.
• ChatGPT will create prompts that are academic-sounding but nonsensical.
I asked for questions about Richard Wright's Native Son. It suggested, "Explore the role of Bigger's family in Part 1. How do his relationships with his mother, brother, and sister contribute to the dynamics within the Thomas household?" This sounds nice—but it essentially asks students a circular question: "How do his family relationships contribute to his family relationships?" I am not sure how students are supposed to respond.
• ChatGPT creates many leading and/or obvious prompts.
ChatGPT also suggested this prompt about Native Son: "Bigger Thomas is often seen as the product of his environment. How does the setting of the South Side of Chicago contribute to his mindset and actions? Analyze the influence of poverty, lack of opportunities, and social inequality on Bigger's character development." This prompt starts strong, but then hands students the three "acceptable" answers in the last sentence. What if students wanted to say something else? This kind of over-guidance is a common, understandable mistake among student-teachers. But ChatGPT makes it constantly.
• ChatGPT seems hesitant to generate prompts about specific quotations in a text.
I asked ChatGPT to generate discussion prompts to help students analyze President Donald Trump's speech on the Ellipse on January 6, 2021. It suggested, "Conduct a close textual analysis of Trump's speech on January 6. Identify key rhetorical devices such as repetition, appeals to emotion, or other persuasive techniques." I had to reprompt the AI language model multiple times before its questions began to engage direct quotes from the speech. Even then, the questions were bland. ("President Trump said, 'We love you. You're very special.' What effect does this expression of love and specialness have on the audience?")