UK Tech Firms and Child Safety Officials to Test AI's Capability to Create Abuse Images

Tech firms and child safety organizations will receive authority to assess whether AI tools can generate child exploitation images under recently introduced British legislation.

Substantial Increase in AI-Generated Illegal Content

The declaration coincided with revelations from a safety monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the amendments, authorities will allow approved AI companies and child safety organizations to examine AI models – the underlying systems behind conversational AI and image generators – and verify they have adequate safeguards to prevent them from producing depictions of child sexual abuse.

"This is fundamentally about preventing exploitation before it occurs," declared the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the danger in AI models early."

Addressing Legal Obstacles

The amendments have been introduced because it is illegal to create and own CSAM, meaning that AI developers and other parties cannot create such content as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This legislation is aimed at averting that problem by enabling authorities to stop the creation of those images at source.

Legislative Framework

The changes are being introduced as revisions to the criminal justice legislation, which also implements a prohibition on owning, producing or sharing AI models designed to generate exploitative content.

Practical Consequences

Recently, the official visited the London headquarters of Childline and listened to a simulated call to advisers featuring a report of AI-based exploitation. The interaction depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I learn about children facing blackmail online, it causes extreme anger in me and justified anger amongst families," he stated.

Concerning Statistics

A leading online safety foundation stated that instances of AI-generated exploitation material – such as online pages that may include multiple images – had significantly increased so far this year.

Cases of the most severe content – the gravest form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a vital step to guarantee AI products are secure before they are launched," stated the chief executive of the internet monitoring foundation.

"AI tools have made it possible for survivors to be victimised all over again with just a few clicks, giving offenders the ability to create potentially endless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which further exploits victims' trauma, and renders young people, especially girls, more vulnerable on and offline."

Support Session Information

Childline also released details of counselling interactions where AI has been referenced. AI-related harms discussed in the conversations comprise:

  • Employing AI to rate body size, physique and appearance
  • Chatbots discouraging young people from talking to trusted guardians about abuse
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked pictures

Between April and September this year, the helpline delivered 367 counselling interactions in which AI, chatbots and related topics were discussed, significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions were connected with mental health and wellness, including using chatbots for support and AI therapeutic apps.

Patricia Castillo

A tech enthusiast and writer passionate about exploring how technology shapes our daily lives and future innovations.