The children’s commissioner for England is calling on the government to ban apps which use artificial intelligence (AI) to create sexually explicit images of children.
Dame Rachel de Souza said a total ban was needed on apps which allow “nudification” – where photos of real people are edited by AI to make them appear naked – or which can be used to create sexually explicit deepfake images of children.
She said the government was allowing such apps to “go unchecked with extreme real-world consequences”.
A government spokesperson said child sexual abuse material was illegal and that there were plans for further offences for creating, possessing or distributing AI tools designed to create such content.
Deepfakes are videos, pictures or audio clips made with AI to look or sound real.
In a report published on Monday, Dame Rachel said the technology was disproportionately targeting girls and young women, with many bespoke apps appearing to work only on female bodies.
Girls are actively avoiding posting images or engaging online to reduce the risk of being targeted, according to the report, “in the same way that girls follow other rules to keep themselves safe in the offline world – like not walking home alone at night”.
Children feared that “a stranger, a classmate, or even a friend” could target them using technologies found on popular search and social media platforms.
Dame Rachel said: “The evolution of these tools is happening at such scale and speed that it can be overwhelming to try and get a grip on the danger they present.
“We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children’s lives.”
Dame Rachel also called for the government to:
- impose legal obligations on developers of generative AI tools to identify the risks their products pose to children and take action to mitigate those risks
- set up a systemic process to remove sexually explicit deepfake images of children from the internet
- recognise deepfake sexual abuse as a form of violence against women and girls
Paul Whiteman, general secretary of school leaders’ union NAHT, said members shared the commissioner’s concerns.
He said: “This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it.”
It is illegal in England and Wales under the Online Safety Act to share or threaten to share explicit deepfake images.
The government announced in February laws to tackle the threat of child sexual abuse images being generated by AI, which include making it illegal to possess, create, or distribute AI tools designed to create such material.
It said at the time that the Internet Watch Foundation – a UK-based charity partly funded by tech firms – had confirmed 245 reports of AI-generated child sexual abuse in 2024 compared with 51 in 2023, a 380% increase.
Media regulator Ofcom published the final version of its Children’s Code on Friday, which places legal requirements on platforms hosting pornography and content encouraging self-harm, suicide or eating disorders to take more action to prevent children accessing them.
Websites must introduce beefed-up age checks or face big fines, the regulator said.
Dame Rachel has criticised the code, saying it prioritises the “business interests of technology companies over children’s safety”.
A government spokesperson said creating, possessing or distributing child sexual abuse material, including AI-generated images, is “abhorrent and unlawful”.
“Under the Online Safety Act, platforms of all sizes now have to remove this kind of content, or they could face significant fines,” they added.
“The UK is the first country in the world to introduce further AI child sexual abuse offences – making it illegal to possess, create or distribute AI tools designed to generate heinous child sexual abuse material.”