I grew up watching Star Trek episodes (in syndication) and movies in the 1980s. Captain James T. Kirk was one of my boyhood heroes, who, along with his fellow space pioneers, made traveling the universe a spiritual reality for me and many others in my generation. And on one of my many trips to the local movie theatre to encounter the latest adventure aboard the U.S.S. Enterprise, I was introduced to an obviously fictional though philosophically intriguing Starfleet Academy test known as the Kobayashi Maru. Notwithstanding the perennially cunning Captain Kirk's defeat of this test during his tenure as a Starfleet cadet, the Kobayashi Maru is touted in the annals of Trekdom as an unsolvable moral conundrum involving the choice between saving a starship full of innocent civilians from certain death (yes, women and children included) and maintaining interstellar peace between Starfleet and its archenemy, the Klingons. Both options are good (i.e., preservation of life and peace are both laudable goals), but in the scenario, they are mutually exclusive. The moral question is, "Which do you choose, and more importantly, how do you rationalize achieving one whilst eschewing the other?" Each choice--as presented in the test--is morally right and morally wrong at the same time . . . hence, the dilemma.
In many ways, I believe modern teachers are in a similarly hopeless (though hopefully temporary) crucible when it comes to the proliferation of artificial intelligence, or AI. Especially for online teachers who require writing assignments, the difficulty is how to reconcile the higher incidence of AI-generated cheating/plagiarism in the contemporary classroom with the dearth of reliable AI-detection software. That is, we know "AI cheating" is happening among students today, but we don't have a reliable means of detecting and therefore "proving" it, so what are teachers to do? The answer: we don't yet know. This may sound a bit defeatist, but it is an honest assessment of the current status of things. Again, for online classes that involve student-produced writing assignments, there are few ways to ensure originality. AI-detection software too often reports false positives, rendering it highly unreliable, and human intuition and assessment aren't much better, which leaves us with something akin to the Kobayashi Maru. Teachers can either turn a blind eye to "AI cheating," thus protecting themselves, their institutions, and their students from erroneous charges of academic dishonesty, or they can confront this latest educational plague with abandon, thus safeguarding the academic integrity of their profession and the classes they teach at all costs, including, in some cases, the cost of their own emotional and occupational wellbeing. Most would say there is certainly a middle ground or compromise between these two extremes, and I would agree that in many cases this is true. But in scenarios where the teacher and student are uncompromising in their disparate positions concerning a case of suspected "AI cheating," it often comes down to these two unfortunate polarities, and thus teachers are left to protect either (self/student) reputation or academic rigor--what I have come to call the R.O.A.R. decision. And the "right" choice is almost never clear.
Nevertheless, as with the fictional Kobayashi Maru, the real test isn't about the particular choice one makes but rather the personal character demonstrated during the endeavor to choose. As long as the teacher weighs all available evidence, factors in empathy and grace, and leaves room for his/her own fallibility, the R.O.A.R. decision remains secondary to one's relational ethic (in this author's mind, leastways). Of course, it falls upon educational leaders to recognize the tremendous stressors levied upon their faculty in the current AI landscape and to empathize with their plight. The internet, with its buffet of easily accessible cheating modalities, has placed teachers at all levels in a bind. Yet, even in this age of magna difficultas, we expect our educators to continue demonstrating professional excellence, personal motivation, and institutional loyalty. Needless to say, they can only do so if leaders "have their backs" and keep searching for ways to respond to the thunderous R.O.A.R. of the AI revolution. I am hopeful that AI detection will catch up with AI proliferation in the coming years and thus mitigate the current realities. Until that time, however, we must realize that the line between right and wrong in responding to AI is often blurred or altogether absent.