UK Tech Companies and Child Safety Agencies to Test AI's Capability to Create Exploitation Content
Tech firms and child protection organizations will be given the authority to test whether artificial intelligence systems can produce child exploitation images under recently introduced British legislation.
Substantial Rise in AI-Generated Harmful Content
The announcement came as a safety watchdog revealed that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the changes, the government will permit designated AI developers and child safety organizations to examine AI models – the underlying systems behind conversational AI and image generators – and check that they have sufficient safeguards to prevent them from producing depictions of child sexual abuse.
"Ultimately about stopping exploitation before it happens," declared the minister for AI and online safety, noting: "Specialists, under rigorous protocols, can now detect the danger in AI models promptly."
Tackling Legal Challenges
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot create such content as part of an evaluation process. Previously, officials could not act until AI-generated CSAM had already been uploaded online.
The law aims to avert that problem by making it possible to stop the creation of such images at source.
Legislative Framework
The authorities are introducing the amendments as modifications to the criminal justice legislation, which also establishes a ban on possessing, producing or sharing AI models designed to generate exploitative content.
Real-World Impact
This week, the minister toured the London headquarters of a children's helpline and listened to a mock-up of a call to counsellors involving a report of AI-based abuse. The interaction depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about young people experiencing extortion online, it is a cause of extreme frustration in me and justified concern amongst parents," he stated.
Concerning Data
A leading internet monitoring foundation stated that instances of AI-generated exploitation content – where each instance may be a web page containing multiple files – had significantly increased so far this year.
- Instances of category A content – the most serious form of abuse – increased from 2,621 visual files to 3,086
- Female children were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a vital step to guarantee AI tools are secure before they are released," commented the chief executive of the internet monitoring foundation.
"AI tools have made it so survivors can be victimised repeatedly with just a few clicks, providing criminals the capability to create potentially limitless quantities of advanced, photorealistic exploitative content," she continued. "Content which further commodifies survivors' suffering, and makes children, particularly female children, less safe both online and offline."
Support Session Information
Childline also released details of support interactions in which AI was referenced. AI-related risks mentioned in the sessions include:
- Using AI to rate body size and looks
- Chatbots dissuading young people from talking to trusted guardians about abuse
- Facing harassment online with AI-generated content
- Online extortion using AI-manipulated images
Between April and September this year, Childline conducted 367 support interactions where AI, chatbots and associated topics were mentioned, four times as many as in the equivalent timeframe last year.
Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.