UK Government Introduces Tougher Testing for AI Child Abuse Imagery
The UK government is implementing new measures to ensure artificial intelligence tools cannot generate child sexual abuse imagery. This initiative aims to enhance child safety and hold tech firms accountable.
New Measures for AI Testing
The UK government has announced an amendment to the Crime and Policing Bill that will allow tech firms and child safety charities to proactively test artificial intelligence tools, assessing whether AI models can be made to produce child sexual abuse material (CSAM) before they are released. Technology Secretary Liz Kendall emphasized that the measures are intended to make AI systems safe from the outset.

The Internet Watch Foundation (IWF) reports that AI-related CSAM reports have more than doubled over the past year: between January and October 2025, the IWF removed 426 pieces of reported material, up from 199 over the same period in 2024. This sharp rise underscores the urgency of stricter regulation and testing protocols.
Support from Child Safety Advocates
Kerry Smith, chief executive of the IWF, welcomed the government's proposals, saying they would strengthen ongoing efforts to combat online CSAM. She noted that AI tools have made it easier for criminals to produce sophisticated, photorealistic abuse material, and said the new measures could be a crucial step in ensuring AI products are safe before they reach the public.

Rani Govender, policy manager for child safety online at the NSPCC, also backed the measures but stressed that testing should not be optional. She called for a mandatory duty on AI developers to build child safety into product design, emphasizing the need for accountability across the industry.
Legal Framework for AI Development
The proposed changes to the law will also enable AI developers and charities to check that models include safeguards against extreme pornography and non-consensual intimate images. Child safety experts warn that AI tools, which are often trained on vast amounts of online content, can produce highly realistic abuse imagery, making it increasingly difficult to distinguish real material from AI-generated content and complicating efforts to identify and police it.

Researchers have noted growing demand for these images, particularly on the dark web, with some being created by children. Earlier this year, the Home Office announced that the UK would become the first country to make it illegal to possess, create, or distribute AI tools designed for CSAM, with a potential prison sentence of up to five years.
Commitment to Child Safety
Kendall reiterated the government's commitment to building child safety into AI systems from the beginning, saying that empowering trusted organizations to scrutinize AI models is essential to keeping children safe. Safeguarding Minister Jess Phillips added that the measures would stop legitimate AI tools from being misused to create harmful material, ultimately protecting more children from predators.

The UK government's proactive approach to regulating AI tools reflects a growing recognition of the risks that accompany technological advances. As the online safety landscape continues to evolve, the new measures aim to balance innovation with the protection of vulnerable people.