UK Technology Firms and Child Protection Agencies to Test AI's Capability to Create Exploitation Images

Tech firms and child protection organizations will receive authority to assess whether AI tools can produce child exploitation images under new UK legislation.

Significant Increase in AI-Generated Illegal Material

The announcement came alongside figures from a safety monitoring body showing that reports of AI-generated CSAM have increased dramatically in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow designated AI developers and child protection groups to inspect AI models – the foundational systems behind conversational AI and visual AI tools – and verify that they have adequate safeguards to stop them from creating images of child sexual abuse.

"This is ultimately about stopping exploitation before it happens," stated Kanishka Narayan, noting: "Experts, under strict conditions, can now detect the danger in AI systems early."

Addressing Regulatory Obstacles

The amendments were introduced because creating and possessing CSAM is illegal, meaning AI developers and other parties cannot generate such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is designed to prevent that issue by helping to stop the creation of such material at source.

Legislative Vehicle

The authorities are adding the amendments as revisions to the crime and policing bill, which also establishes a prohibition on owning, producing or distributing AI systems developed to create exploitative content.

Practical Impact

Recently, the minister visited the London base of Childline and heard a simulated call to counsellors involving an account of AI-based abuse. The call depicted a teenager requesting help after facing extortion using a sexualised AI-generated image of himself.

"When I learn about children facing blackmail online, it is a source of intense anger in me, and of justified concern amongst families," he stated.

Concerning Statistics

A prominent internet monitoring foundation stated that cases of AI-generated exploitation content – such as webpages that may contain multiple files – had significantly increased so far this year.

  • Cases of category A material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086
  • Girls were predominantly targeted, accounting for 94% of prohibited AI images in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a vital step to guarantee AI tools are secure before they are launched," commented the chief executive of the online safety organization.

"AI tools have made it so victims can be targeted repeatedly with just a few clicks, giving criminals the capability to make potentially limitless quantities of sophisticated, photorealistic exploitative content," she added. "Content which additionally commodifies victims' suffering, and renders children, especially girls, less safe both online and offline."

Support Interaction Data

Childline also released details of support interactions where AI was mentioned. AI-related risks discussed in the conversations include:

  • Employing AI to rate body size and appearance
  • AI assistants dissuading young people from consulting trusted adults about harm
  • Being bullied online with AI-generated material
  • Online blackmail using AI-faked pictures

Between April and September this year, Childline delivered 367 counselling interactions in which AI, conversational AI and related topics were discussed – significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.

Michael Jones