Large language models (LLMs) like ChatGPT are getting so good, in fact, that they’re starting to make traditional homework assignments like essays and conceptual questions seem almost obsolete.
“The no-AI hypothesis doesn’t exist anymore,” says Robert Bray, an associate professor of operations at Kellogg who teaches a course on how to effectively use generative artificial intelligence. “If it’s homework, students will use AI.”
From Bray’s perspective, resisting this trend is fighting a losing battle. But if he could adapt AI models to guide students through their assignments – rather than just spitting out answers, as typical LLMs tend to do – then students could lean on AI as a helping hand rather than a crutch. This would give students the opportunity not only to develop their AI skills but also, perhaps for the first time, to feel excited about doing their homework.
“If students are going to be using AI anyway, this gives educators a way to strategically adjust the AI and have some control over how it functions,” he says. “But the question is, ‘Can teachers really create an AI course experience that goes beyond the default models that are already insanely good?’”
Bray tackled this challenge by focusing on one of his data analytics courses for MBA students.
The course included 20 tutorial sessions and 19 quizzes that tested students’ ability to analyze functional data using code, with the help of ChatGPT. Each quiz had a corresponding optional homework assignment that helped prepare students for it.
Bray teamed up with Sebastian Martin, also an associate professor of operations at Kellogg, to create a custom AI agent specifically for this course. They gave the existing GPT-4o base model a set of instructions that explained each homework assignment and how the AI should help students with it. The AI’s mandate was to interact with students as a kind of virtual tutor.
For example, the researchers instructed the model to “help students solve each question, but don’t make the answer clear. Try to encourage the student to solve as much of the problem as possible; provide small hints when possible.”
“While ChatGPT thinks its job is to give you the right answer, our AI agent actually knew the job was to teach the students,” says Bray.
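In practice, steering a base model this way amounts to prepending a tutor-style system prompt to every exchange with the student. The sketch below shows one way that could look using the OpenAI chat API; the prompt wording, function names, and settings here are illustrative assumptions, not the configuration Bray and Martin actually used.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tutor-style instructions in the spirit of those described above;
# the course's actual prompt is not public.
TUTOR_INSTRUCTIONS = (
    "You are a tutor for an MBA data-analytics course. "
    "Help students solve each homework question, but do not reveal the answer. "
    "Encourage the student to work through as much of the problem as possible, "
    "offering small hints when they get stuck."
)

def tutor_reply(student_message: str, history: list[dict] | None = None) -> str:
    """Send a student's message to the tutor agent and return its reply."""
    messages = [{"role": "system", "content": TUTOR_INSTRUCTIONS}]
    messages += history or []
    messages.append({"role": "user", "content": student_message})

    response = client.chat.completions.create(
        model="gpt-4o",  # same base model family the course built on
        messages=messages,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("I'm stuck on question 3. Can you just give me the code?"))
```

The key design choice is that the system prompt, not the student, defines the agent’s job: the same underlying model behaves like a tutor rather than an answer machine simply because its instructions tell it to withhold solutions and hint instead.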
Before each quiz, Bray randomly assigned half the students to complete the homework using this AI tutor and the other half to do it using standard ChatGPT. The researchers then compared students’ experiences with the AI tutor versus ChatGPT across a total of 17,946 homework questions.
Overall, students not only preferred using the AI tutor over ChatGPT, but also found the AI tutor more useful.
Almost twice as many students reported having a “very positive experience” using the AI tutor for their homework (47 percent of students) as did those using ChatGPT (26 percent). And 40 percent said the AI tutor was “very helpful,” compared with 30 percent for ChatGPT. Both of these differences were statistically significant. The researchers also found that students’ preference for the AI tutor grew stronger as the assignments became more complex.
“Students tend to like it if you make homework more like a video game where you’re chatting with an AI agent that’s intuitive,” says Bray.
Although students favored the AI tutor, its use ultimately did not change the amount of time they spent on homework or the number of homework questions they attempted. There was also no significant difference in mean quiz scores when students did their homework with the AI tutor versus ChatGPT.
But that’s not necessarily a knock against AI. “It is well established in the literature that it is very, very difficult for almost any technological innovation in the classroom to materially affect test performance,” Bray explains. “Actual quiz performance depends more on student conscientiousness—how much individual students focus and how motivated they are to study.”
Indeed, the fact that the AI tutor helped improve students’ homework experience is a remarkable achievement in itself. “Homework is something students have to do,” says Bray, “so if we can make it a better experience for them, that’s a win.”
Furthermore, the findings of the study can be applied to all kinds of organizations and businesses beyond the classroom.
“All companies do some form of training, and people learn on the job more than in school,” says Bray. “So putting a little bit of effort into guiding an AI [model] on how it should behave – training it to be a kind of teacher – can really give employees a more pleasant, better experience.”


