AI's Moral Dilemma in Authorship: Duplication, Bias, and the Evolution of Scholarly Honesty in Academia
In the educational landscape of Kazakhstan, AI is stirring up a silent revolution. From writing centers to dorm rooms, students are increasingly leaning on AI resources such as ChatGPT, Grammarly, and QuillBot to aid their academic writing. Some use these tools for basic editing or novel ideas, while others depend on them to draft entire essays.
As AI tools become more accessible, the debate is no longer over whether they belong in educational settings. The focus is shifting towards how best to utilize AI within academic communities.
AI holds immense potential for enriching academic life in Kazakhstan. It can aid multilingual learners in dealing with writing requirements across languages like Kazakh, Russian, and English by offering personalized feedback instantly. However, the blind adoption of AI in writing raises ethical issues concerning plagiarism and bias that can't be ignored.
These issues are not mere theoretical discussions; they challenge fundamental values that education is expected to promote, such as originality, critical thinking, and equality.
Plagiarism in the AI age is not merely copying someone else's work without due credit. With AI, the boundaries are blurred. When a student instructs an AI model to write an essay on the causes of World War I and submits it unaltered, is that an instance of plagiarism? What if the output is only slightly revised? What if AI is used for structure and transition words?
These are not merely abstract debates; they are pedagogical concerns. Students who rely on AI to do their intellectual work miss out on the essential skills that writing develops: thinking, synthesizing, and analyzing. Universities in Kazakhstan, like their global counterparts, need to update their academic integrity policies to account for AI. These policies need to be nuanced, recognizing that not all AI use qualifies as cheating. What matters is transparency, clear disclosure of AI use, and intent.
Many students know that submitting AI-generated content without alteration amounts to cheating and is considered academic misconduct. However, most are unsure about their university's specific policies, especially since institutional policies concerning AI use are still emerging.
Uncertainty is further compounded by inconsistency, as one professor may encourage modest AI tool usage for idea generation or language assistance, while another may prohibit it altogether. Without a unified institutional policy, each student must navigate this grey area on their own. To address this issue, universities in Kazakhstan should draw inspiration from international institutions that are now creating clear, nuanced guidelines and even citation practices for AI-generated content.
However, penalties alone won't solve the problem. A change in academic culture is required. Students need to learn not just how to avoid plagiarism, but also why originality and authorship matter. Faculty need to foster an environment where writing is understood as a thought process rather than a final product. AI can assist in this process, but it should never replace it.
Another significant ethical concern, often overlooked, is bias. Many assume that AI is neutral due to its algorithmic nature. Yet in reality, AI models are trained on vast datasets that are mostly in English and largely sourced from Western cultures. Even ChatGPT's developer, OpenAI, acknowledges this on its website.
This means that AI reflects the Western cultural, linguistic, and ideological assumptions embedded in its training data. For students, this can result in two major challenges. First, there's a genuine risk that AI-based writing could reinforce Anglo-American scholarly practices at the expense of local knowledge systems. Writing generated by such models tends to prioritize linear, thesis-based argument structures, certain citation styles, and critical approaches that might not align with native or multilingual scholarly practices. If Kazakh students use AI tools to support their work, they might inadvertently adopt these practices, thereby losing the chance to develop a unique academic voice that reflects their local or regional context.
Secondly, AI can exacerbate existing inequalities. Students from rural areas, or those more comfortable writing in Kazakh or Russian, might find that AI tools favor English-language content and Western examples. The quality of AI assistance a student receives thus depends on linguistic ability and access to global academic discourse, creating an unequal playing field that risks deepening existing educational inequalities and favoring those already fluent in that discourse.
To tackle these issues, universities must prioritize discussions of these biases in their curricula, turning students into more critical users of AI technology. Assessments can focus on local interpretations of regional or global issues, countering any cultural biases that AI may introduce.
In conclusion, universities in Kazakhstan have the potential to lead the way in addressing these ethical AI challenges. With a diverse multilingual and multicultural environment and a strong commitment to education, the country can develop AI policies tailored to local realities. This will involve reforming academic integrity policies to account for AI-generated content, investing in widespread training of faculty, staff, and students, and hosting regular workshops on the ethical use of AI. Collaboration with other countries, such as South Korea, can help shape ethical and inclusive AI development, influencing public-sector guidelines that may extend to academia.
The author, Michael Jones, is a writing and communications instructor at the School of Social Science and Humanities, Nazarbayev University (Astana).
