Last November, when ChatGPT was released, many schools felt like they had been hit by an asteroid.
In the middle of an academic year, without warning, teachers were forced to confront a new, seemingly alien technology that allows students to write college-level essays, solve challenging problem sets and ace standardized tests.
Some schools responded — unwisely, I argued at the time — by banning ChatGPT and tools like it. But those bans didn’t work, in part because students could still use the tools on their phones and computers at home. And in the months since, many of the schools that restricted the use of generative AI — as the category that includes ChatGPT, Bing, Bard and other tools is called — have quietly rolled back their prohibitions.
Before this school year, I spoke with many K-12 teachers, school administrators and university faculty members about their thoughts on AI today. There was a lot of confusion and panic, but also a fair bit of curiosity and excitement. Mainly, educators want to know: How do we actually use this stuff to help students learn, instead of just trying to catch them cheating?
I’m a tech columnist, not a teacher, and I don’t have all the answers, especially when it comes to the long-term impact of AI on education. But I can offer some basic, short-term advice for schools trying to figure out how to handle generative AI this fall.
First, I encourage educators — especially in high schools and colleges — to assume that 100 percent of their students are using ChatGPT and other generative AI tools on every assignment, in every subject, unless they are being physically supervised inside a school building.
In most schools, this is not entirely true. Some students will not use AI because they have moral qualms about it, because it is not helpful for their particular assignments, because they lack access to the tools or because they fear getting caught.
But the assumption that everyone uses AI outside of the classroom may be closer to the truth than many educators realize. (“You have no idea how much we use ChatGPT,” read the title of a recent essay by a Columbia undergraduate in The Chronicle of Higher Education.) And it’s a useful shortcut for teachers trying to figure out how to adapt their teaching methods. Why would you assign a take-home test, or an essay about “Jane Eyre,” if everyone in the class — except, perhaps, the strictest rule-followers — is going to use AI to complete it? Why not switch to proctored exams, blue-book essays and in-class group work, knowing that ChatGPT is as ubiquitous as Instagram and Snapchat among your students?
Second, schools should stop relying on AI detector programs to catch cheaters. There are dozens of these tools on the market today, all claiming to detect AI-generated writing, and none of them can be trusted. They produce lots of false positives, and they are easily fooled by techniques such as paraphrasing. Don’t believe me? Ask OpenAI, the maker of ChatGPT, which discontinued its own AI writing detector this year because of a “low accuracy rate.”
It’s possible that in the future, AI companies may label the outputs of their models to make them easier to spot — a practice known as “watermarking” — or that better AI detection tools may emerge. But for now, AI-generated text should be considered undetectable, and schools should spend their time (and their technology budgets) elsewhere.
My third piece of advice — and the one that will probably get me the angriest emails from teachers — is that teachers should focus less on warning students about the shortcomings of generative AI and more on figuring out what the technology does well.
Last year, many schools tried to scare students away from using AI by telling them that tools like ChatGPT were unreliable, prone to spitting out gibberish and generic-sounding prose. These criticisms, while true of early AI chatbots, are less true of today’s upgraded models, and savvy students are figuring out how to get better results by giving the models more sophisticated prompts.
As a result, students at many schools are ahead of their instructors when it comes to understanding what generative AI can do, if used correctly. And the warnings about flawed AI systems issued last year may ring hollow this year, now that GPT-4 is capable of earning passing grades at Harvard.
Alex Kotran, the chief executive of the AI Education Project, a nonprofit that helps schools use AI, told me that teachers need to spend time with generative AI themselves to appreciate how useful it is — and how quickly it improves.
“For most people, ChatGPT is still a party trick,” he said. “If you don’t really appreciate how deep a tool this is, you won’t take all the other steps that will be required.”
There are resources for educators who want to get up to speed on AI. Mr. Kotran’s organization has a number of AI-focused lesson plans available for teachers, as does the International Society for Technology in Education. Some teachers have even begun assembling recommendations for their peers, such as a website created by faculty at Gettysburg College that offers practical advice on generative AI for professors.
However, in my experience, there is no substitute for hands-on experience. So I advise teachers to start experimenting with ChatGPT and other generative AI tools themselves, with the goal of becoming as tech-savvy as many of their students.
My final advice for schools troubled by generative AI is this: Treat this year — the first full academic year of the post-ChatGPT era — as a learning experience, and don’t expect everything to go smoothly.
There are many ways AI could transform the classroom. Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, thinks the technology will lead more teachers to adopt a “flipped classroom” — having students learn material outside of class and practice it in class — which has the advantage of being more resistant to AI cheating. Other educators I spoke with said they were experimenting with making generative AI a classroom collaborator, or a way for students to practice their skills at home with the help of a personalized AI tutor.
Some of these experiments won’t work. Some will. That’s OK. We are all still adjusting to this strange new technology in our midst, and the occasional stumble is to be expected.
But students need guidance when it comes to generative AI, and schools that treat it as a passing fad — or an enemy to be defeated — will miss an opportunity to help them.
“A lot of things will break,” Mr. Mollick said. “And so we have to decide what we do, instead of fighting a retreat against AI.”