
Guilty Until Proven Human

Nov 4

2 min read


Credit: Pexels, cottonbro studio

“Your work has been flagged for potential AI assistance.”


In 2024, this is the sentence every high schooler dreads. As tools like Turnitin's new AI-detection feature spread through schools, questions about ethics and the future of academic integrity have followed. Platforms such as Schoology and Google Classroom let schools go digital, while ChatGPT and other artificial intelligence applications let students submit assignments at 11:57 p.m. with only two minutes to spare, or skip doing their own work entirely. In response, AI 'detectors' were developed and marketed as an all-knowing solution, analyzing text patterns for writing that could indicate AI usage. While the detectors aim to uphold academic integrity, sole reliance on these inherently flawed algorithms raises concerns about the accuracy of accusations and the stifling of student creativity.


Are these programs 100% effective? Simply, no. Although these programs claim to analyze text patterns and identify potential cheating, studies highlight inconsistent performance. Programs like Turnitin often struggle with accuracy because they rely on statistical models to judge text originality, and those models can mistake naturally written content for AI-generated content. The problem is that humans are not statistics. These detectors may flag unusual language patterns, common in creative or complex writing, as AI-generated simply because they deviate from conventional norms. Why is this stifling of creativity allowed?
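To see how fragile a purely statistical signal can be, consider a toy sketch. This is not Turnitin's actual algorithm (real detectors use language-model scores such as perplexity); the `burstiness` measure and the threshold here are illustrative assumptions only, but they show how a single statistic can mislabel a perfectly human writer:

```python
import statistics

def burstiness(text: str) -> float:
    """Variance of sentence lengths in words: humans often vary sentence
    length a lot, while very uniform text scores near zero."""
    # Crude sentence split on ., !, ? -- good enough for a toy demo.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.variance(lengths)

def naive_flag(text: str, threshold: float = 5.0) -> bool:
    """Flag text as 'possibly AI' when sentence lengths are too uniform.
    The threshold is arbitrary -- which is exactly the problem: a human
    who naturally writes even, measured sentences gets flagged too."""
    return burstiness(text) < threshold
```

A student who happens to write in steady, evenly sized sentences falls below the cutoff and gets flagged, while nothing about the text proves AI involvement. That, in miniature, is why a statistical model alone cannot prove cheating.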

Mistrust. Mistrust. Mistrust. What does this do to the student-teacher relationship? The responsibility placed on teachers to investigate these cases can be damaging, as students may feel they are viewed not as individuals but as data points within an impersonal system. Educators can inadvertently become enforcers of strict, algorithm-driven standards, making students feel alienated rather than encouraged to learn and grow in a trusting environment free from judgment. With detectors in the mix, that trusting environment looks less and less likely; a system of suspicion runs counter to the values of education.


False positives can severely harm a student's reputation, especially amid rising pressure and competition to get into a top school. A single zero can change a transcript and reshape a future. For students aiming at selective schools, such incidents are especially concerning, as admissions committees often weigh conduct and academic integrity. A false accusation not only risks academic standing but may also affect scholarship eligibility, future internships, and professional opportunities. Ultimately, the possibility that one algorithmic error could derail years of hard work adds an immense layer of stress for students already under pressure to excel.


A more balanced approach involves incorporating manual reviews and engaging students in discussions about their writing processes to verify authenticity. Experts argue that AI detection should serve as a supplementary tool rather than a replacement for traditional evaluation methods, since human judgment plays a crucial role in interpreting the nuanced language patterns that algorithms may overlook. Schools can also hold seminars explaining what AI is and how it can be used in a beneficial manner.


The stakes are high. Like, really high. What are WE going to do about it?
