British Technology Companies and Child Safety Officials to Test AI's Capability to Generate Exploitation Content

Technology companies and child protection organizations will be granted permission to assess whether artificial intelligence tools can produce child abuse images under recently introduced British legislation.

Significant Rise in AI-Generated Illegal Material

The announcement coincided with findings from a safety monitoring body showing that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the authorities will allow designated AI developers and child protection organizations to inspect AI systems – the foundational models underpinning chatbots and image-generation tools – and verify they have sufficient safeguards to prevent them from producing depictions of child exploitation.

"Fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, noting: "Experts, under rigorous protocols, can now identify the danger in AI systems promptly."

Addressing Regulatory Obstacles

The amendments have been introduced because producing and possessing CSAM is illegal, meaning that AI developers and other parties could not create such images as part of an evaluation regime. Previously, authorities could act only after AI-generated CSAM had been published online.

The new law aims to close that gap by helping to stop the production of those images at source.

Legislative Structure

The changes are being added by the authorities as revisions to the crime and policing bill, which is also establishing a prohibition on owning, producing or sharing AI models designed to create child sexual abuse material.

Practical Impact

Recently, the official visited the London base of a children's helpline, where he listened to a mock-up of a call to counsellors involving a report of AI-based abuse. The roleplay portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of themselves, constructed using AI.

"When I hear about young people facing extortion online, it is a source of extreme frustration in me and justified concern amongst parents," he stated.

Alarming Data

A prominent internet monitoring organization reported that instances of AI-generated exploitation material – where a single instance can be a web page containing numerous images – had risen sharply so far this year.

Cases of category A content – the gravest form of abuse – increased from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The law change could "constitute a crucial step to guarantee AI products are safe before they are released," commented the chief executive of the online safety foundation.

"Artificial intelligence systems have enabled so survivors can be victimised all over again with just a simple actions, providing criminals the capability to make potentially endless amounts of advanced, photorealistic exploitative content," she added. "Material which further commodifies survivors' suffering, and renders children, particularly girls, more vulnerable both online and offline."

Counselling Session Details

Childline also published details of counselling sessions in which AI came up. AI-related harms raised in the conversations include:

  • Using AI to evaluate body size, physique and looks
  • AI assistants dissuading young people from consulting trusted guardians about abuse
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.
