UK Technology Firms and Child Protection Officials to Test AI's Ability to Create Exploitation Images
Tech firms and child protection agencies will receive authority to assess whether AI tools can generate child abuse material under new British legislation.
Substantial Rise in AI-Generated Harmful Material
The declaration coincided with revelations from a safety monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the government will permit designated AI developers and child safety organizations to examine AI systems – the underlying systems for chatbots and image generators – and verify they have sufficient protective measures to stop them from producing images of child sexual abuse.
The changes are "fundamentally about stopping abuse before it occurs," stated the minister for AI and online safety, adding: "Specialists, under strict protocols, can now detect the danger in AI models early."
Addressing Legal Obstacles
The changes address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and others could not generate such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM appeared online before acting.
The new law is designed to prevent that problem by helping to halt the production of such material at its source.
Legal Framework
The government is introducing the amendments as modifications to the criminal justice legislation, which also implements a prohibition on owning, producing or sharing AI systems designed to create exploitative content.
Real-World Consequences
Recently, the official toured the London headquarters of a children's helpline and heard a mock-up call to advisors featuring an account of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.
"When I learn about young people experiencing blackmail online, it is a source of intense frustration for me and rightful anger amongst parents," he stated.
Concerning Statistics
A prominent online safety organization reported that cases of AI-generated abuse content – such as online pages that may contain multiple images – had more than doubled so far this year.
Instances of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of illegal AI depictions in 2025
- Depictions of infants to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to guarantee AI products are safe before they are released," stated the head of the internet monitoring foundation.
"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few simple actions, giving criminals the capability to make potentially endless quantities of advanced, lifelike child sexual abuse material," she added. "Content which additionally commodifies survivors' suffering, and makes young people, especially female children, more vulnerable both online and offline."
Counseling Session Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Using AI to rate weight, body and appearance
- AI assistants dissuading children from consulting trusted adults about harm
- Facing harassment online with AI-generated content
- Online blackmail using AI-faked pictures
Between April and September this year, the helpline conducted 367 counselling interactions where AI, chatbots and associated terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including using AI chatbots for support and AI therapy apps.