Across college campuses in the United States, the arrival of generative artificial intelligence has ignited what increasingly resembles an academic arms race.
The swift uptake of AI tools among students has triggered widespread concern that coursework could be completed through cheating rather than learning. In response, many professors have begun submitting student papers to AI-detection software designed to assess whether large language models were used in their creation. Some institutions report having identified hundreds of students through these methods.
Yet, since their introduction a few years ago, AI detectors have faced sustained criticism for their unreliability. Studies and student accounts suggest they disproportionately flag non-native English speakers and routinely misidentify original work as AI-generated. An increasing number of students claim they have been falsely accused, with several pursuing legal action against universities over the emotional distress and disciplinary consequences they allege followed.
In interviews with NBC News, ten students and faculty members described themselves as casualties of a rapidly escalating conflict between competing AI technologies.
Amid accusations of AI-assisted cheating, some students have turned to a new class of generative tools known as “humanizers.” These applications analyze essays and recommend stylistic changes intended to make text appear more human-authored. While some versions are free, others charge subscription fees of roughly $20 per month.
While some users admit to relying on humanizers to evade detection after using AI, others insist they do not use AI at all and merely seek protection against being wrongly accused by detector programs.
Meanwhile, as conversational AI continues to advance, companies such as Turnitin and GPTZero have upgraded their detection systems to identify writing that has been modified by humanizers. They have also introduced tools that allow students to document their writing process—such as tracking keystrokes or browser activity—to demonstrate authorship. However, some humanizers now replicate natural typing behavior, enabling users to paste text without triggering such safeguards.
“Students are now being forced to prove that they are human, even when they may never have used AI in the first place,” said Erin Ramirez, an associate professor of education at California State University, Monterey Bay. “We’ve entered a spiral that has no clear end.”
This escalating competition between detection software and writing-assistance tools reflects a broader anxiety about academic dishonesty. It also illustrates how deeply embedded AI has become in university life—even for students who prefer not to use it and faculty who resent being cast as enforcers.
“If we write well, we’re accused of using AI—it’s absurd,” said Aldan Creo, a graduate student from Spain studying AI detection at the University of California, San Diego. “In the long run, this is going to be a serious problem.”
Creo recounted being accused by a teaching assistant in November of using AI to write a data science report. The accusation, he said, stemmed from his habit of methodically outlining each step of his reasoning, a style commonly associated with ChatGPT. Although his grade was eventually restored, the experience altered his approach. To avoid further disputes, he now deliberately simplifies his writing, occasionally leaving misspellings or adopting Spanish sentence structures, and routinely runs his work through AI detectors before submission.
“I have to do everything possible just to show that I actually write my own assignments,” he said.
At its most severe, the strain of repeated accusations has driven some students to abandon their studies altogether.
Brittany Carr, a remote student at Liberty University in Virginia, received failing grades on three assignments after they were flagged by an AI detector. She provided extensive evidence of original authorship, including revision histories and notes handwritten in a notebook. In one email dated Dec. 5, she wrote to her professors, “How could AI make any of that up? I spoke about my cancer diagnosis, my depression, and my journey—and you believe that is AI?”
Her explanations proved insufficient. The school required her to complete a “writing with integrity” course and sign a statement apologizing for AI use, despite her denials.
“It’s an unsettling irony,” Carr said. “The school is using AI to accuse us of using AI.”
The experience caused intense stress. Carr feared that further allegations might jeopardize her veterans’ financial aid. To prevent future accusations, she began running all her assignments through Grammarly’s AI detector, revising any highlighted passages until the software confirmed the work was human-written.
“But at that point, my writing stops meaning anything,” she said. “I’m just writing to avoid triggering a machine.”
After the semester concluded, Carr withdrew from Liberty University. She has yet to decide where—or whether—she will continue her education.