British Tech Companies and Child Safety Officials to Examine AI's Capability to Create Exploitation Images

Tech firms and child protection agencies will be granted permission to assess whether artificial intelligence tools can produce child exploitation images under recently introduced UK legislation.

Substantial Rise in AI-Generated Harmful Material

The announcement came alongside revelations from a safety watchdog that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the government will allow approved AI companies and child safety organizations to inspect AI models – the foundational technology for chatbots and image generators – and ensure they have sufficient protective measures to prevent them from creating images of child exploitation.

The changes are "ultimately about stopping exploitation before it happens," stated the minister for AI and online safety, who noted: "Specialists, under strict conditions, can now identify the risk in AI models early."

Tackling Regulatory Challenges

The changes address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and other parties could not create such content as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM had been uploaded online before they could act.

This legislation is designed to prevent that problem by enabling the creation of those images to be stopped at their origin.

Legal Framework

The government is introducing the changes as amendments to the Crime and Policing Bill, which also establishes a prohibition on owning, producing or sharing AI systems designed to generate child sexual abuse material.

Practical Impact

This week, the official visited the London base of Childline and listened to a simulated call to counsellors featuring a report of AI-based exploitation. The call portrayed a teenager requesting help after facing extortion using a sexualised deepfake of himself, created using AI.

"When I hear about young people facing blackmail online, it is a source of intense anger in me and rightful concern amongst families," he said.

Concerning Statistics

A leading internet monitoring organization reported that instances of AI-generated abuse content – such as online pages that may include numerous files – had significantly increased so far this year.

Cases of the most severe content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a crucial step to ensure AI products are secure before they are launched," commented the head of the online safety foundation.

"AI tools have made it so survivors can be targeted repeatedly with just a few clicks, providing offenders the capability to create potentially limitless quantities of advanced, photorealistic child sexual abuse material," she added. "Material which further commodifies victims' trauma, and renders children, especially girls, more vulnerable both online and offline."

Support Session Data

Childline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in those sessions include:

  • Using AI to rate body size and appearance
  • AI assistants discouraging children from consulting trusted adults about abuse
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked pictures

Between April and September this year, Childline delivered 367 counselling interactions where AI, chatbots and associated topics were discussed, four times as many as in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Karen Moreno