
Covering AI in Education

How does the use of AI in education affect student outcomes? While detailing the ways AI is increasingly used in education, The Hechinger Report’s Jill Barshay explains why it’s important to keep the focus on student outcomes when covering any type of ed tech.


Right now artificial intelligence is a bit like the cryptocurrency of education. At the April 2024 education investor confab known as ASU+GSV, dozens of entrepreneurs flocked to San Diego to pitch their AI tutors, lesson planners and graders to investors, all eager to make a buck off the billions that U.S. public schools spend on educational materials annually. Some of these AI applications might be good financial bets, just as Bitcoin was for folks who bought it at the right time and struck it rich. 

But based on previous ed-tech hype cycles, I predict most of these so-called “AI solutions” won’t add up to much. My personal approach as a journalist is to be open to new ideas, and explain them, but remain skeptical and cautious so that schools don’t jump on the bandwagon and invest time and money in unproven technologies. There’s a lot at stake for students: hours spent on unproven AI applications could leave children worse off, with even lower reading, writing and math scores than they have today. 

Even for AI apps aimed at teachers, as many of them are, reporters should keep the focus on students. If you’re planning a story about AI in education, the question to ask in every interview is this: “That’s cool, but how do we know that this is awesome for students?” Or, more mundanely, “How do we know that students are learning more this way?” 

Every day I hear about another new idea for using AI in education, from adjusting the reading difficulty of textbooks to filling out dreary paperwork, but here are some of the ways I’m seeing AI applications in education increase the most.

1. AI Tutors

Many AI tutors are powered by ChatGPT, but instead of telling the student the answer, they steer the conversation, much as a human tutor might ask, “How did you solve that problem?” When a student is stumped, the tutor breaks the problem into chunks and guides the student through it, step by step. Well-designed AI tutors give each student personalized practice targeted to their needs, along with instant feedback and hints. Some go beyond chatbots that text back and forth, displaying human-like avatars that talk, listen and respond.

Old-fashioned human tutors are the most effective way to help students catch up academically, but they’re very expensive. AI tutors could potentially help millions of students at a fraction of the cost. 

What makes in-person tutoring so effective is the relationship that develops between the tutor and the student. It remains unclear if students will be motivated to continue logging into AI tutors after the initial novelty wears off.

“AI tutors work when students use them,” said Ken Koedinger, who invented one of the earliest versions of an AI tutor, called Cognitive Tutor, years before ChatGPT. “But if students aren’t using them, they obviously don’t work.” Human tutors are better at motivating students to keep practicing, Koedinger said.

Former EWA Board President Greg Toppo’s April 2024 story on why IBM mothballed its Watson supercomputer in education described the same downfall for its older-generation AI tutor: Students didn’t find it engaging. 

Reporting tip: AI vendors often tout achievement gains for students who use their wares for the recommended amount of time. Ask what percentage of students actually use it that much. (It’s also important to disclose to readers if claims about student achievement were calculated by the AI company and haven’t been reviewed by outsiders.)

Quality of tutoring

The large language models that power the new generation of AI tutors can help with almost any topic. The older generation of AI or computerized tutors, by contrast, addressed a limited set of questions that had been pre-programmed and vetted by educational experts. Without that human control and oversight, the teaching practices of the new AI tutors are unpredictable. The machine can err in pinpointing why a student is getting a problem wrong and which building blocks the student needs to learn first. When I kicked the tires on Khan Academy’s AI math tutor, Khanmigo, it sometimes said I had answered an algebra problem wrong when I had solved it correctly. That’s frustrating. 

Reporting tip: Try the AI tutor yourself, imagining you’re a student learning the material. Or better yet, sit with a student to test it.

2. AI Chatbots

AI chatbots are also under development to advise students the way a guidance counselor would. One organization is converting its body of expert college counseling advice from blogs and webinars into a chatbot named Ava to help high schoolers navigate the college application process: Should I take the SAT or the ACT? How many colleges should I apply to? Colleges are also experimenting with AI counselors that can advise students on course selection, help them pick a major and guide decisions that help them graduate faster. 

The big question is whether these AI advisers will deliver answers that actually address a student’s individual circumstances. Students might grow frustrated with generic answers, prefer a real counselor and stop consulting the chatbot. Just like with tutors, students who need advice aren’t just looking for reliable information, but someone who cares about them and their future.

3. AI Lesson Planning

One of the more popular AI applications is aimed at teachers: instant lesson plans. That can save teachers oodles of preparation time, but it’s unclear how sound or effective the teaching recommendations are. 

Benjamin Riley, the founder of Cognitive Resonance, a new consulting firm that is focusing on AI in education, is concerned that AI-generated lesson plans may include debunked ideas, such as teaching to students’ learning styles, or emphasize group activities that are engaging for students, but not necessarily ones where students learn the most. 

“It’s nice to have an on-call brainstorming partner,” Riley said. “ … if you have the judgment and expertise to vet it.”

4. AI Assessments

One promising AI application uses the technology to generate quizzes and tests. Questions still need to be reviewed and vetted, but the new large language models can generate dozens of questions in an instant. 

Testing companies are even experimenting with using AI to help vet the quality of the questions that AI generates. That’s right, AI can flag when AI comes up with a confusing or poorly phrased test question. 

5. AI Grading

Marking homework, tests and papers is one of the more tedious tasks for a teacher, and having AI handle that work would certainly be a welcome time saver. Initial research shows that the newest robo-graders are doing a pretty good job, approaching the accuracy of human graders following a scoring guide for an assignment. 

The sticky issue here is that AI graders are trained to mimic the way that other teachers have graded similar answers and essays. And human teachers have subjective, sometimes racist, biases that an AI grader can absorb and replicate.

6. AI Feedback

Feedback is critical to the learning process, no question. And it’s laborious for teachers to give individualized feedback for every student on every assignment. Whether AI can do this well remains unclear. 

Computer-generated suggestions for improvement can feel like a generic restatement of the elements of a scoring guide. Feedback like “lacks clarity” may not help a student understand how to express something more clearly. And just as students often ignore the painstaking handwritten feedback of a teacher on an essay, they may also ignore AI feedback. The holy grail is to find a way to motivate students to absorb the feedback, make corrections and try again. That kind of student motivation isn’t something AI has yet figured out.

I also wonder what is lost when teachers aren’t directly seeing student work, but rather delegating grading and feedback to computers. The first-hand experience of seeing a cluster of mistakes in your own classroom often prompts a teacher to rethink how to teach something. 

Of course, AI companies say they have data reports and dashboards that can summarize all this individual student work to inform teachers. But it’s not clear that studying data reports is an effective way to improve student achievement. Often, according to research, it’s not.