AI is not always the best support for a student, but sometimes it’s the best option available to someone seeking help. Understanding how to use AI tools effectively can unlock an inhumanly patient tutor that levels the playing field for students who don’t have great resources in the off hours.
ChatGPT prompt for image: Create a lighthearted illustration of a nondescript student facing a clock and looking nervous as they wait to get into their teacher's classroom to ask them a question. The student should be looking away from the viewer, and the clock could be pointing at 7:55 am.
By Dani Kachorsky, Ph.D.
Imagine it’s late at night—around 11 p.m.—and Jordan, a high school student juggling multiple subjects, is burning the midnight oil. One minute they’re wrestling with a complex math problem set, and the next they’re reviewing a draft for a history presentation. The deadline for both is looming, and while Jordan has some general notes from a previous class discussion, the specifics are fuzzy. No teacher is available, and the usual backup of a parent or friend isn’t an option either. In that moment, instead of scrolling through endless online articles, Jordan turns to AI-generated feedback—a modern twist on getting that crucial push in the right direction.
When Traditional Help Isn’t on the Clock
Students today face a variety of challenges, whether it’s troubleshooting the steps of a complicated math proof, double-checking the logic in a science lab report, or tightening up the arguments in a literary analysis essay. When traditional support isn’t available—especially during those late-night study sessions—resources like online tutorials and generic tips might not hit the mark. That’s where AI can come in handy. By offering feedback on everything from problem-solving methods and experimental design to presentation style and conceptual clarity, AI becomes an accessible, anytime tutor that fills in the gaps when teachers or peers aren’t available.
What Does “Good” Feedback Really Mean?
Now, let’s address the elephant in the room. There’s a lot of debate about whether AI can deliver “good” feedback, but that really depends on what you need. For instance, if you’re working on a computer science project, good feedback might be about the efficiency of your code and the clarity of your comments. For a science lab report, it might focus on the accuracy of your experimental data and the logic of your conclusions. Even in a creative project like a multimedia presentation, solid feedback can help ensure your visuals and narrative come together seamlessly. The point is, good feedback is context-dependent. It should match the assignment’s specific goals and criteria—and as many of us have found, AI is only as helpful as the prompt we give it.
A Journey of Experimentation and Refinement
When I first started playing around with AI for feedback, I kept things really generic—just dropped an English essay assignment into the system, asked it for feedback, and saw what came back. The early responses were decent for highlighting basic issues like organization or simple errors, but they weren’t hitting the deeper, subject-specific needs. So, I began tweaking my prompts to ask for feedback on specific aspects: the strength of a thesis statement, the coherence of an argument in a literary analysis, or the overall flow and voice of a narrative essay. This produced more targeted and nuanced feedback. Of course, it was not as targeted and nuanced as the feedback I would provide as an English teacher, but it was certainly better than nothing, especially for a student needing support at 11 p.m. From there, I experimented with feeding in the assignment description and rubric to see if that would fine-tune the AI’s response. The results were far superior to the previous two approaches: the AI-generated feedback aligned with the rubric categories and offered concrete recommendations for addressing the weaknesses it identified.
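To make that third stage concrete, here is a rough sketch of what such a prompt might look like; the assignment details are invented for illustration, and the exact wording matters far less than including the task, the criteria, and a clear request: “Here is my draft of a literary analysis essay for 10th-grade English, followed by the assignment description and the rubric my teacher will use. Please give me feedback organized by rubric category, point out where the draft falls short of each criterion, and suggest specific revisions I could make. [paste draft, assignment description, and rubric]”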
Through trial and error, I came to see value in this sort of feedback—especially as a first-pass review or a way for students to get a different perspective. That said, I don’t believe feedback that is wholly generated by AI is something that teachers need to spend their time perfecting. There are other ways teachers can utilize AI to enhance their feedback process, as I discussed in my previous article. Instead, I think this is a skill worth teaching students: how to craft effective prompts and critically assess the feedback they receive from AI.
An Alternative Form of Feedback, Not a Replacement
AI-generated feedback isn’t here to replace teachers. It’s more like that extra resource you can turn to when you need a quick second opinion. It works best as a tool for formative assessment—a way for students to spot potential improvements before they hand in their final work. Whether it’s breaking down a tough math problem into manageable parts, clarifying the methodology in a science experiment, or simply polishing up the structure of an essay, AI can be that just-in-time helper. When used alongside traditional feedback or AI-enhanced feedback from teachers, it offers a blended approach that can meet diverse student needs and even help students become more self-directed learners.
By combining the strengths of human insight with the analytical power of AI, we can build a more responsive feedback loop—one that supports students like Jordan when they need help the most and encourages them to learn how to seek out and refine feedback for themselves.
Try It Now: Empower Your Students with AI Feedback
Here are some practical steps for teachers to help students harness the power of AI for feedback—or even for students to try it on their own:
Start Simple: Have students input a draft or assignment into an AI tool. Ask them to request general feedback on areas of concern to them. This initial step helps them see what AI can pick up on without overcomplicating the process.
Be Specific with Prompts: Encourage students to narrow the focus. For example, if they’re working on a science lab report, they could prompt the AI to evaluate the clarity of the experiment’s methodology or the logic behind their conclusions. The more targeted the prompt, the more useful the feedback.
Include Context: Remind students to provide essential background information. This might include the assignment description, any grading rubrics, or examples of previous work. These details guide the AI to deliver feedback that aligns with the specific goals of the task (see the sample prompt after this list).
Experiment and Iterate: Suggest that students try different types of prompts and compare the results. They might start with a broad request for feedback and then refine it by asking about specific components—like the effectiveness of a thesis in a debate or the accuracy of data interpretation in a math problem set.
Use AI Feedback as a Launch Pad: Explain that AI feedback isn’t a substitute for teacher insights but a first-pass review. Have students compare the AI’s suggestions with your own feedback or their peer reviews. This dual approach helps them critically evaluate both sources and learn to refine their work.
Encourage Reflection: After using AI, ask students to reflect on what feedback resonated with them and what they would adjust. This reflection not only improves their work but also builds their ability to self-assess and ask the right questions when prompting the AI.
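Pulled together, a student’s prompt might look something like this sketch; the assignment and focus areas here are hypothetical, and students should swap in their own details: “I’m attaching my draft lab report on photosynthesis along with the assignment sheet and rubric. First, give me general feedback on anything that stands out. Then evaluate whether my methodology section is clear enough for someone to repeat the experiment and whether my conclusion actually follows from my data. Organize your feedback by rubric category so I can see where I’d lose points.” After comparing the AI’s response with teacher or peer feedback, students can follow up with narrower prompts, such as asking the AI to focus only on the section they have chosen to revise.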
By guiding students to experiment with AI in these ways, teachers are not just saving them time during those late-night study sessions—they’re also teaching students a valuable skill: how to seek out, evaluate, and apply feedback effectively.
Author’s Note: This post was created using the AI-assisted workflow I describe in a previous essay. I began by audio recording my thoughts and experiences, then used AI to transcribe and synthesize my reflections while maintaining my voice. I added and revised material through a few additional prompts in the LLM interface before copying the content into a document, where I made further revisions.



