Published: Apr 28, 2025

The Dual Edges of AI in Education: Challenges, Ethics, and Solutions

Note: We gratefully acknowledge the valuable input we received on an earlier draft of this article from Mr Darren Coxon, founder of Coxon AI (a consultancy specializing in helping schools make sense of AI in their context), and Dr David Santandreu Calonge, Head of Educational Program Development at Mohamed bin Zayed University of Artificial Intelligence.

Artificial intelligence, or AI, is changing many areas of our lives, and education is no exception. There are a variety of definitions available, but for present purposes we will use the term “AI” to refer to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (acquiring information and rules for using it), reasoning (using rules to reach approximate or definite conclusions), and self-correction. “Generative AI” is a form of AI capable of generating new content, such as text, images, or music, by learning patterns from existing data. Examples include language models that can write essays or create art.

The core of an AI system typically involves a model, which is a representation of the problem the AI system is trying to solve, and an algorithm, which is a set of instructions for how the model should be trained and used to make predictions. In simple terms, AI systems are fed with large amounts of data, whether text, video, or images. “Machine learning” is a subset of AI that enables computers to learn from data and improve their performance over time without being explicitly programmed. It involves algorithms that identify patterns in data and make predictions or decisions based on them. Using machine learning, AI combines large amounts of data with fast computing power and advanced algorithms to recognize patterns, make predictions, or perform actions. These AI models are trained on huge datasets so that they can “learn” from examples rather than follow fixed rules. There is a general understanding that the more data an AI model is trained on, the better it becomes at generating accurate predictions and decisions.
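The idea of learning from examples rather than following fixed rules can be illustrated with a minimal sketch (the code below is purely illustrative and not drawn from any particular AI product): the rule y = 2x + 1 is never written into the program as a rule; the model infers it from example data.

```python
# Illustrative sketch of "learning from examples rather than fixed rules".
# The target rule (y = 2x + 1) appears only in the training data,
# never in the model itself.

examples = [(x, 2 * x + 1) for x in range(10)]  # example input/output pairs

w, b = 0.0, 0.0   # model parameters, initially untrained
lr = 0.01         # learning rate: how strongly each error adjusts the model

for _ in range(5000):            # repeatedly pass over the examples
    for x, y in examples:
        pred = w * x + b         # model's current prediction
        error = pred - y         # how far the prediction is from the truth
        w -= lr * error * x      # nudge parameters to reduce the error
        b -= lr * error

print(round(w, 2), round(b, 2))  # prints: 2.0 1.0
```

The program never "knows" the rule; after enough examples, its parameters converge toward it, which is the essence of the pattern-learning described above, here scaled down to two parameters instead of billions.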

In recent years, AI has made its way into classrooms, online courses and educational establishments around the world. Tools like Deepseek, Magnus, Perplexity, Elicit, Scispace, Leonardo, Heygen and ChatGPT are widely known examples of generative AI, commonly used in education and beyond, with many more emerging in today’s rapidly changing digital environment. The education sector has witnessed significant adoption of AI, with many institutions starting to use AI systems in various ways and for multiple purposes. Students have become more aware of “AI agents” designed to help them with writing, research and other tasks, allowing them to explore ways that help strengthen their skills and succeed in their educational pathways. Recent large-scale research analyzing over one million anonymized student conversations on chatbots like Claude.ai revealed that students primarily use generative AI tools to create and improve educational content, such as designing practice questions, summarizing academic materials, and editing essays. They also rely on AI for technical explanations and solving academic assignments, particularly in STEM (Science, Technology, Engineering, and Mathematics) fields.[1]

One of the main advantages of generative AI in education can be seen in its ability to offer personalized learning experiences to students. For example, if a student is struggling with a particular subject, an AI tool can provide additional resources, explanations of complex topics, problem-solving techniques, and exercises specially designed to help the student understand the subject. Tutoring tools can be linked to approved texts, enabling personalized, automated assisted learning for students.

At the national level, some countries have begun embedding AI into education reform strategies. In 2025, China announced a sweeping initiative to integrate AI across all levels of its school system, including textbooks, teaching methods, and curricula. The Ministry of Education emphasized that AI should foster innovation and equip students with skills such as critical thinking, problem-solving, and collaboration.[2] AI systems are no longer just research tools. They are becoming everyday academic companions across a range of disciplines, especially in Computer Science and the Natural Sciences.[3]

Recent survey data shows that faculty use of AI is expected to grow significantly by 2025, especially in areas like course content development (from 34% in 2023 to 45% in 2025), lesson planning (28% to 42%), and lecture support (30% to 42%).[4] Additionally, at a local level, schools are also adapting to AI in meaningful ways. For example, The Ladies’ College in Guernsey revised its curriculum to include AI, using a custom-built AI agent to help Year 7 students set learning targets. Teachers at the school are using a secure, internal version of Microsoft Copilot to ensure students interact with AI in a safe and controlled environment.[5] Schools are proactively incorporating AI not just into learning outcomes but into classroom infrastructure itself. Despite these advantages for both learners and educators, the integration of AI into education is also accompanied by certain challenges.

Challenges of Student Use of AI in Education

A major concern about students’ use of AI in education is the question of academic integrity. With the availability of AI tools designed to generate essays, responses, and other assignments, students may be tempted to use these resources inappropriately. This raises concerns about plagiarism and the authenticity of students’ work and may result in acts of cheating. Additionally, depending on how AI tools are used, there may be questions about the use of copyrighted content, particularly if students input or reproduce material without proper attribution. While this is unlikely to result in direct legal liability for the student, it may raise ethical concerns or lead to disciplinary consequences within the institution. Furthermore, with the use of artificial intelligence that generates text, it can be difficult to detect whether students are producing their own ideas or largely relying on AI agents. The question of plagiarism arising from the use of AI in education is difficult to assess, as it is often unclear how much AI-generated content is incorporated into a student’s academic work. A recent study by Anthropic found that nearly 47% of student AI conversations were “Direct” conversations, meaning students were seeking ready-made answers or content with minimal interaction. While some of these exchanges may serve legitimate educational purposes, the study identified concerning examples, such as requests to rewrite content to avoid plagiarism detection or to solve test questions, which highlight the blurred lines between legitimate assistance and academic misconduct.[6]

Recent findings also suggest that students may be underreporting their use of chatbots due to fears of being accused of academic misconduct. According to EDUCAUSE’s 2025 Students and Technology Report, this caution stems from the fact that the use of generative AI in classrooms is not yet universally accepted. Many students (52%) reported that most or all of their instructors prohibit the use of generative AI in their courses.[7] These perceptions may discourage transparency and increase the stigma around AI use, even when it might be used responsibly.

Artificial intelligence can also contribute to information overload, leaving students bombarded with an overwhelming amount of data and resources available online. The concept of academic integrity must be respected in educational institutions: students must use these AI systems carefully when completing their academic work and ensure they avoid any form of plagiarism in their assignments. Alongside this overload, students should also be cautious of relying too heavily on AI systems, as these systems are not always technically reliable. AI chatbots are powered by complex algorithms and machine learning models that can occasionally produce errors, generate misleading information, or experience functional glitches. These issues can disrupt both the student’s learning experience and the educator’s ability to rely on such tools for consistent educational support.[8] Dr David Santandreu Calonge, in his article “Enough of the Chit-Chat: A Comparative Analysis of Four AI Chatbots for Calculus and Statistics”, highlights some of these limitations through the example of ChatGPT. While ChatGPT has gained immense popularity and can produce outputs tailored in tone, detail, and length, it is not without flaws. Dr Calonge notes that GPT-3.5 is prone to producing “hallucinations” (plausible but incorrect responses) and may reflect biases stemming from its training on unfiltered online data.[9] These risks serve as a reminder that students must take responsibility for verifying AI-generated content before integrating it into their academic tasks.

The growing dependence on technology is another challenge. While generative AI can provide answers and help with activities, it risks making students excessively dependent on these tools. When students begin to rely entirely on chatbots, they may fall into a comfort zone where instant answers are always available for their academic work. This overdependence can distance them from developing critical thinking and creative problem-solving skills. In fact, certain research highlights that even when students engage in more interactive or back-and-forth conversations with AI tools, the process can still result in limited cognitive effort on the student’s part. These interactions, although seemingly collaborative, may hinder the student’s ability to think independently or engage in deeper reflection, depending on how the chatbot is used.[10] While AI platforms offer a relaxed environment with immediate responses and solutions, they can also reduce students’ motivation to study, engage deeply with subjects, think outside the box, and develop their own unique voices. Teachers in Guernsey reported that some students were becoming over-reliant on AI to complete their homework, which hindered actual learning.[11] Additionally, if students strongly rely on these AI tools without verifying the accuracy of the information or engaging in critical thinking about the topic, they may find themselves accepting false or misleading ideas. AI tools cannot guarantee the accuracy of their outputs and can produce incorrect information, which can mislead learners in their academic work and negatively affect their academic performance and grades. Heavy reliance on AI tools should not create an environment where students passively consume information. Instead, students should use AI systems wisely and strive to actively engage with and learn from the information provided.

Another significant challenge with the use of AI for students is the protection of personal data and privacy. Although reputable generative AI tools do not explicitly request sensitive information, students may inadvertently share personal details such as names, academic records, or behavioral patterns when interacting with these systems. This raises valid concerns about how such data is processed, stored, and potentially used. If an institution’s privacy policies are unclear or security measures are lacking, this information could be vulnerable to misuse, unauthorized access, or data breaches. In some cases, student data may also be exploited for purposes like marketing or research without proper consent. Dr Calonge observes that ethical considerations and student data privacy are integral to the successful integration of AI tools such as chatbots in higher education. Institutions must ensure that data collected through these platforms is handled transparently and responsibly, with appropriate safeguards in place to protect students’ personal information. In parallel, a balance should be maintained between AI-driven support and opportunities for real human engagement, particularly in more complex educational contexts.[12] These concerns make privacy and data safety important challenges to consider in the responsible use of AI in education.

Some students encounter additional difficulties when it comes to accessing AI tools. Not all students have easy access to advanced technology: those who lack reliable internet, devices, and digital infrastructure struggle to engage with AI-enhanced learning. This creates uneven learning opportunities in educational institutions, with some groups of students unable to benefit from AI systems while observing their peers do so. Consequently, there is a risk of widening the existing digital divide, a long-standing issue where unequal access to technology deepens social and educational inequalities.[13] These disparities are not limited to low-income countries; even in high-income nations, systemic issues like rising tuition fees and gaps in digital infrastructure further widen the divide. Globally, fewer than 1% of adults from the poorest households attain a bachelor’s degree, compared to over 10% from affluent backgrounds, reflecting how access to quality education and tools like AI is often shaped by socio-economic status.[14] Privileged students are more likely to access and effectively use AI tools, gaining advantages in learning and future career prospects, leaving disadvantaged students further behind in both education and future opportunities. To bridge this gap, it is important for educational institutions to ensure all learners have equal access to AI technology.

In addition to inequality concerns, students also face the challenge of understanding how to effectively use AI tools. Without proper guidance or training, students may struggle to navigate these technologies, limiting their ability to fully and ethically benefit from AI in their learning journey. A recent study by Turnitin, based on a survey of 3,500 students, educators, and administrators across seven countries including the US, UK, and India, found that while a majority of students occasionally use AI tools for assignments, 67% believe this may be shortcutting their learning, and half admit they do not know how to get the most benefit from such tools.[15] This highlights the need for structured guidance in schools and universities to promote responsible, informed AI use in education.

Challenges of Educators Using AI in Education

As the academic landscape evolves, more educators are integrating AI into their teaching processes and programs. While the benefits of adopting AI systems are significant, several challenges arise for educators in this transition. Teachers must adapt their teaching methods as AI becomes more common in classrooms, requiring them to develop new ways to evaluate students and design assignments that AI cannot easily solve, such as highly personalized projects or oral assessments. This often results in additional effort and time spent redesigning teaching models during the academic year.

In addition to these structural challenges, many educators must also confront a steep learning curve when engaging with new technologies. The need to adapt and learn new technologies has become essential for educators. They must invest time and energy in training to effectively use AI tools in their classes, which is a process that can feel overwhelming, particularly for those less experienced with technology or resistant to shifts away from their traditional teaching methods. Research has shown that when instructors lack technical proficiency or fail to integrate digital tools effectively, it can disrupt learning, cause confusion around assignments, and even undermine the instructor’s credibility in the classroom.[16] For instance, poorly adapted course materials or overreliance on unnecessary technology have been linked to disorganized coursework, grading issues, and students feeling disconnected from their instructors.

As AI continues to evolve, there are growing concerns about job displacement. Some educators fear that AI could replace certain teaching roles, reducing the need for human instructors in various educational contexts. This fear can create job insecurity among educators, affecting their morale and commitment to the profession. While some educators embrace AI for administrative support, others express concern about its growing role. A media teacher in Guernsey admitted AI helped reduce his marking workload but also voiced discomfort, stating, “I’m terrified to say that I think it marks better than I do.”[17] This emotional response highlights a broader fear among educators that AI could devalue the human aspect of teaching, particularly when AI agents are seen as more efficient at repetitive academic tasks. In addition to concerns about job replacement, another challenge lies in AI’s inability to fully replicate the nuanced, empathetic experience of interacting with a human teacher. AI systems may struggle to provide the same level of encouragement, adaptive feedback, and emotional intelligence that educators naturally bring into the classroom.[18] This potential loss of personal connection can affect the quality of support that students receive and may lead to a less engaging learning experience.

Furthermore, teachers generally face significant challenges in AI-driven education, including the difficulty of detecting whether students’ work is genuinely their own or AI-generated. This blurring of academic integrity complicates assessments and requires educators to adopt new verification methods. Teachers may also face the added challenge of training students to use AI ethically and responsibly. This requires educators not only to teach technical skills but also to address critical issues like plagiarism prevention, bias detection in AI outputs, and the importance of original thought.

Addressing the Challenges of AI in Education

To overcome the challenges posed by AI integration, educational institutions must prioritize the development of comprehensive policies that safeguard the educational values AI systems could disrupt. These policies should clearly outline AI implementation protocols, ensuring transparency in data usage and privacy protections for students. For example, students must retain the right to understand how their data is collected, stored, and processed, while teachers require safeguards to protect their professional integrity and intellectual property when using AI tools in their teaching. By protecting these rights, educational institutions can provide a more equitable environment where students and teachers feel secure and supported.

Furthermore, educational institutions must update their guidelines on the use of AI to ensure data is managed safely and ethically. These guidelines should focus on compliance, promote responsible use of AI and should specify the consequences of violations of academic integrity, such as plagiarism facilitated by AI tools. These legal measures will help maintain the credibility of educational establishments and support ethical educational practices.

Additionally, universities have begun modifying their honor codes and misconduct definitions to explicitly address AI. For instance, Cambridge University revised its academic integrity policies in 2023 to classify any unacknowledged use of AI as “false authorship,” creating a new category of AI-related academic misconduct.

Educational establishments must implement comprehensive training programs and workshops that equip both staff and students with the knowledge and skills needed to use AI responsibly and effectively. These programs should cover best practices for AI applications, including ethical considerations, data privacy, and critical evaluation of AI-generated content, ensuring that each party involved can utilize AI’s benefits while minimizing potential risks.

There is a growing need to embed Continuing Professional Development (CPD) and structured upskilling and reskilling pathways into faculty development. As AI rapidly transforms the job market and educational systems alike, institutions must support their staff in adapting to new technologies through targeted training initiatives. A recent article emphasizes the importance of upskilling and reskilling in the UAE as a strategic response to AI-driven disruption, highlighting the role of educational institutions in future-proofing careers and equipping the workforce with AI-related competencies.[19]

Parents also have a critical role to play in addressing the challenges of AI in education, particularly in guiding responsible use at home. In Guernsey, one parent, Gazz Barbe, explained that he uses parental controls to restrict his daughter’s phone use and does not allow the use of AI technology.[20] He expressed concern that if students are given unrestricted access to AI, they may become over-reliant on it, particularly in subjects where they already struggle, using it to complete homework assignments rather than developing their own understanding. While acknowledging AI’s positive potential, he emphasized that it should be used as a tool to assist teachers, not to replace them. Parental involvement and the application of controls or restrictions over students’ use of AI can contribute meaningfully to addressing some of the challenges associated with AI in education, particularly in promoting healthier learning habits and preventing overreliance.

Conclusion

In conclusion, schools and universities are starting to use artificial intelligence tools to improve learning experiences, customize education, and manage administrative activities more efficiently. This growth indicates a change in traditional educational methods, in which technology plays a central role in both teaching and learning. The integration of AI in education offers a mixture of opportunities and challenges: students face risks to academic integrity, struggle with information overload, and may become overly reliant on AI tools, thereby diminishing their critical thinking abilities. Meanwhile, teachers confront the pressure to adapt to new technologies and the fear of being displaced by AI-driven systems. Addressing these concerns is essential to achieving a balanced approach to educational innovation and safeguarding the interests and rights of both students and educators.

_____________________

[1] Anthropic. How University Students Use Claude: Anthropic Education Report. 2024. https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude

[2] China to rely on artificial intelligence in education reform bid, Reuters, April 17, 2025. https://www.reuters.com/world/china/china-rely-artificial-intelligence-education-reform-bid-2025-04-17

[3] Anthropic. How University Students Use Claude: Anthropic Education Report. 2024. https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude

[4] Cengage Group, “How Faculty Uses AI?”, 2024

[5] Connor Belford, Guernsey Headteachers Adapt to AI Use in Education, BBC News, April 17, 2025. https://www.bbc.com/news/articles/cp668v6yep8o

[6] Anthropic. How University Students Use Claude: Anthropic Education Report. 2024. https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude

[7] EDUCAUSE, 2025 Students and Technology Report: Generative AI in the Classroom, https://www.educause.edu/content/2025/students-and-technology-report#GenerativeAIintheClassroom

[8] David Santandreu Calonge et al., Enough of the Chit-Chat: A Comparative Analysis of Four AI Chatbots for Calculus and Statistics, Journal of Applied Learning & Teaching, Vol. 6 No. 2 (2023). https://doi.org/10.37074/jalt.2023.6.2.22

[9] As above.

[10] Anthropic. How University Students Use Claude: Anthropic Education Report. 2024. https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude

[11] Connor Belford, Guernsey Headteachers Adapt to AI Use in Education, BBC News, April 17, 2025. https://www.bbc.com/news/articles/cp668v6yep8o

[12] Hultberg, P. T., Santandreu Calonge, D., Kamalov, F., & Smail, L. (2024). Comparing and assessing four AI chatbots’ competence in economics. PLOS ONE, 19(5), e0297804. https://doi.org/10.1371/journal.pone.0297804

[13] Cornelia Walther, The Future Of Education: Will AI Be The Great Equalizer?, Forbes, March 16, 2025. https://www.forbes.com/sites/corneliawalther/2025/03/16/the-future-of-education-will-ai-be-the-great-equalizer

[14] As above.

[15] Turnitin study finds students more concerned than educators about AI’s impact on learning, EdTech Innovation Hub, March 2024, https://www.edtechinnovationhub.com/news/turnitin-study-finds-students-more-concerned-than-educators-about-ais-impact-on-learning

[16] EDUCAUSE, 2025 Students and Technology Report: Generative AI in the Classroom, https://www.educause.edu/content/2025/students-and-technology-report#GenerativeAIintheClassroom

[17] Connor Belford, Guernsey Headteachers Adapt to AI Use in Education, BBC News, April 17, 2025. https://www.bbc.com/news/articles/cp668v6yep8o

[18] David Santandreu Calonge et al., “Enough of the Chit-Chat: A Comparative Analysis of Four AI Chatbots for Calculus and Statistics,” Journal of Applied Learning & Teaching, Vol. 6 No. 2 (2023). https://doi.org/10.37074/jalt.2023.6.2.22

[19] David Santandreu Calonge et al., Upskilling and Reskilling in the United Arab Emirates: Future-proofing Careers with AI Skills, Power and Education, 2025. https://doi.org/10.1177/14779714251315288

[20] Connor Belford, Guernsey Headteachers Adapt to AI Use in Education, BBC News, April 17, 2025. https://www.bbc.com/news/articles/cp668v6yep8o

Key Contacts

David Yates

Partner, Head of Digital & Data

d.yates@tamimi.com