School leaders grapple with the emergence of AI-generated explicit images involving students, questioning their response strategies.

By promoting early awareness of GenAI risks and putting countermeasures in place, educators can create safeguards designed to shield students from potential harm.

School leaders grapple with the issue of AI-generated explicit images of students: Steps to tackle the problem?

In the digital age, schools are faced with a new challenge: AI-generated deepfakes, including explicit images of minors. These images, often created without the knowledge or consent of the individuals involved, can have a devastating impact on students' lives.

According to UN research, AI systems can exhibit bias, which is a particular concern when it comes to the creation of explicit content. Notably, an estimated 88% of such images shared and misused online are youth-created, underscoring the need for schools to take action.

To address this issue, the Online Child Exploitation Prevention Initiative (OCEPI) was established in 2023. This collaborative group, consisting of law enforcement, researchers, educators, and child protection organisations, aims to keep children safe online. In June 2025, OCEPI published "Guidance for School-Based Professionals and School Leaders", offering simple and actionable steps for schools to manage this issue.

Generative AI can be misused to create explicit images involving students. It's important to understand, however, that these models do not "know" or intend to create such content. If such images are generated, it is because harmful or illicit material was present in the training data or because the AI was deliberately prompted to produce them.

Responsible AI development includes safeguards to detect and prevent generating or sharing illegal content featuring minors. Major AI researchers, companies, and regulations prohibit the use, creation, and dissemination of explicit content involving minors. Reputable AI models and platforms incorporate content filtering and detection mechanisms to prevent the generation of illegal or harmful content.

Schools can update their student and teacher handbooks with this guidance to help inform everyone and shape school protocols. School leaders should share these updates with school administrators, school counselors, student support specialists, teachers, and parents.

If deepfakes are passed around a school community or peer groups, the victim may be bullied, teased, and harassed. The victim may experience humiliation, shame, anger, violation, and self-blame. These incidents can have a lasting, traumatic impact on a student's life that goes well beyond their school years.

Encouraging the school community to read the OCEPI guide can help everyone understand the issue and organisations they could connect with. Addressing the risks of GenAI now, while the technology is still emerging, can help build proactive systems that protect students and prevent harm in an increasingly digital world.

It's unclear how many of those youth-created images are deepfakes or generated using GenAI. Regardless, schools should put protocols in place for potential deepfake explicit imagery to protect students and support them if they become victims.

Stephanie Jones, the Global Prevention and Education Specialist for A21.org, emphasises the importance of addressing this issue. "Guard your district against AI deepfakes, including porn," she says. "Protecting our students is our responsibility, and we must be proactive in understanding and addressing the risks posed by emerging technologies."

  1. The challenge of AI-generated deepfakes, including explicit images of minors, calls for immediate attention from teachers and school leaders.
  2. These deepfakes, largely created without students' knowledge or consent, can have a detrimental impact on students' lives, highlighting the need for digital safety education.
  3. To tackle this issue, the Online Child Exploitation Prevention Initiative (OCEPI) offers guidance urging schools to implement proactive measures that keep students safe online.
  4. Teachers, parents, and school administrators can draw on resources like the OCEPI guidance to stay informed about AI ethics, risks, and safeguards, equipping them to protect students and respond to AI-related incidents.
