My Shingetsu News Agency Visible Minorities column 72: “Confronting AI in Higher Education”, with decent primary source data on the harm being done to universities — by enabling students in the Social Sciences to cheat.
Excerpt: I teach Political Science, and have an express zero-tolerance policy towards the use of AI in students’ submitted assignments. Two semesters ago, my policy was to give zeros on assignments in the first instance and Fs in the course for repeat offenders. But last semester, this became untenable as AI reached the event horizon. AI went from something students were still discovering to being a regular part of their toolbox. Colleges were suddenly even encouraging them to use it. What this has signaled to students nationwide is that they now have an alternative to the fundamental work of research—i.e., formulating a research question, gathering evidence to answer it, and presenting it for review in a coherent and convincing manner. Now they can ask a computer to do all that. And then copy-paste.
I have a strict rule against using AI in classes. Zero tolerance. Why? Because if I don’t enforce it, the floodgates open. Students who actually do their own work will grumble about being graded via demanding rubrics, while students who cheat their way through college will get away with high grades on something they didn’t create. This gives everyone an incentive to cheat, because why bother putting in the effort? And that in turn moots the development of fundamental college skill sets of researching and writing. It also cheapens students’ degrees. Like a degree from a ‘party school,’ if the college becomes known as an ‘AI school,’ employers and the academy will discount student credentials even if students put in the work to earn them. Guilt by association. Despite some colleges short-sightedly adopting AI as a tool, I see AI technology now undermining the very act of getting an education.
AI is not seen as cheating in all fields. For example, Math and Computer Science are all-in and don’t see it as a threat—more as a competitive advantage. AI saves them a lot of time and work, especially for computer programmers writing code or training to be cybersecurity analysts. But for those of us in the Humanities and Social Sciences, where we are trying to teach skills essential to basic critical thinking, AI is generally seen as a short-circuit. This division is why it has been difficult for universities to come up with an official policy regarding AI use. So professors are left to decide their own policies. And zero tolerance is mine. This column offers statistics from teaching hundreds of students a semester to assess how effective this policy is at stemming the tsunami of cheating.