Illegal trade in AI child sex abuse images exposed



Artistic image showing the shadow of a small child in the background and an adult's hand on a computer keyboard in the foreground.

By Angus Crawford and Tony Smith
BBC News

Paedophiles are using artificial intelligence (AI) technology to create and sell life-like child sexual abuse material, the BBC has found.

Some are accessing the images by paying subscriptions to accounts on mainstream content-sharing sites such as Patreon.

Patreon said it had a "zero tolerance" policy about such imagery on its site.

The National Police Chiefs' Council said it was "outrageous" that some platforms were making "huge profits" but not taking "moral responsibility".

The makers of the abuse images are using AI software called Stable Diffusion, which was intended to generate images for use in art or graphic design.

AI enables computers to perform tasks that typically require human intelligence.

The Stable Diffusion software allows users to describe, using word prompts, any image they want - and the program then creates the image.

But the BBC has found it is being used to create life-like images of child sexual abuse, including of the rape of babies and toddlers.

UK police online child abuse investigation teams say they are already encountering such content.


Journalist Octavia Sheepshanks says there has been a "huge flood" of AI-generated images

Freelance researcher and journalist Octavia Sheepshanks has been investigating this issue for several months. She contacted the BBC via children's charity the NSPCC in order to highlight her findings.

"Since AI-generated images became possible, there has been this huge flood… it's not just very young girls, they're [paedophiles] talking about toddlers," she said.

A "pseudo image" generated by a computer which depicts child sexual abuse is treated the same as a real image and is illegal to possess, publish or transfer in the UK.

The National Police Chiefs' Council (NPCC) lead on child safeguarding, Ian Critchley, said it would be wrong to argue that no-one was harmed by such "synthetic" images simply because no real children were depicted in them.

He warned that a paedophile could, "move along that scale of offending from thought, to synthetic, to actually the abuse of a live child".


Abuse images are being shared via a three-stage process:

  • Paedophiles make images using AI software
  • They promote the pictures on platforms such as Pixiv, a Japanese picture-sharing website
  • These accounts have links to direct customers to their more explicit images, which people can pay to view on accounts on sites such as Patreon


Some of the image creators are posting on a popular Japanese social media platform called Pixiv, which is mainly used by artists sharing manga and anime.

But because the site is hosted in Japan, where sharing sexualised cartoons and drawings of children is not illegal, the creators use it to promote their work in groups and via hashtags - which index topics using keywords.

A spokesman for Pixiv said it placed immense emphasis on addressing this issue. It said on 31 May it had banned all photo-realistic depictions of sexual content involving minors.

The company said it had proactively strengthened its monitoring systems and was allocating substantial resources to counteract problems related to developments in AI.

Ms Sheepshanks told the BBC her research suggested users appeared to be making child abuse images on an industrial scale.

"The volume is just huge, so people [creators] will say 'we aim to do at least 1,000 images a month,'" she said.

Comments by users on individual images on Pixiv make it clear they have a sexual interest in children, with some users even offering to provide images and videos of abuse that were not AI-generated.

Ms Sheepshanks has been monitoring some of the groups on the platform.

"Within those groups, which will have 100 members, people will be sharing, 'Oh here's a link to real stuff,' she says.

"The most awful stuff, I didn't even know words [the descriptions] like that existed."

Different pricing levels

Many of the accounts on Pixiv include links in their biographies directing people to what they call their "uncensored content" on the US-based content sharing site Patreon.

Patreon is valued at approximately $4bn (£3.1bn) and claims to have more than 250,000 creators - most of them legitimate accounts belonging to well-known celebrities, journalists and writers.

Fans can support creators by taking out monthly subscriptions to access blogs, podcasts, videos and images - paying as little as $3.85 (£3) per month.

But our investigation found Patreon accounts offering AI-generated, photo-realistic obscene images of children for sale, with different levels of pricing depending on the type of material requested.

One wrote on his account: "I train my girls on my PC," adding that they show "submission". For $8.30 (£6.50) per month, another user offered "exclusive uncensored art".

The BBC sent Patreon one example, which the platform confirmed was "semi realistic and violates our policies". It said the account was immediately removed.

Patreon said it had a "zero-tolerance" policy, insisting: "Creators cannot fund content dedicated to sexual themes involving minors."

The company said the increase in AI-generated harmful content on the internet was "real and distressing", adding that it had "identified and removed increasing amounts" of this material.

"We already ban AI-generated synthetic child exploitation material," it said, describing itself as "very proactive", with dedicated teams, technology and partnerships to "keep teens safe".


The NPCC's Ian Critchley said it was a "pivotal moment" for society

AI image generator Stable Diffusion was created as a global collaboration between academics and a number of companies, led by UK company Stability AI.

Several versions have been released, with restrictions written into the code that control the kind of content that can be made.

But last year, an earlier "open source" version was released to the public which allowed users to remove any filters and train it to produce any image - including illegal ones.

Stability AI told the BBC it "prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM (child sexual abuse material).

"We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes".

As AI continues developing rapidly, questions have been raised about the future risks it could pose to people's privacy, their human rights or their safety.

The NPCC's Ian Critchley said he was also concerned that the flood of realistic AI or "synthetic" images could slow down the process of identifying real victims of abuse.

He explains: "It creates additional demand, in terms of policing and law enforcement to identify where an actual child, wherever it is in the world, is being abused as opposed to an artificial or synthetic child."

Mr Critchley said he believed it was a pivotal moment for society.

"We can ensure that the internet and tech allows the fantastic opportunities it creates for young people - or it can become a much more harmful place," he said.



Attorneys General from all 50 states urge Congress to help fight AI-generated CSAM

An open letter states we are in a 'race against time to protect the children of our country.'

Lawrence Bonk

Contributing Reporter

Tue, Sep 5, 2023, 2:49 PM EDT



The attorneys general from all 50 states have banded together and sent an open letter to Congress, asking for increased protective measures against AI-enhanced child sexual abuse images, as originally reported by AP. The letter calls on lawmakers to "establish an expert commission to study the means and methods of AI that can be used to exploit children specifically."

The letter sent to Republican and Democratic leaders of the House and Senate also urges politicians to expand existing restrictions on child sexual abuse materials to specifically cover AI-generated images and videos. This technology is extremely new and, as such, there’s nothing on the books yet that explicitly places AI-generated images in the same category as other types of child sexual abuse materials.

“We are engaged in a race against time to protect the children of our country from the dangers of AI,” the prosecutors wrote in the letter. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”

Using image generators like Dall-E and Midjourney to create child sexual abuse materials isn't a problem yet, as the software has guardrails in place that disallow that kind of thing. However, these prosecutors are looking to the future, when open-source versions of the software begin popping up everywhere, each with its own guardrails, or lack thereof. Even OpenAI CEO Sam Altman has stated that AI tools would benefit from government intervention to mitigate risk, though he didn't mention child abuse as a potential downside to the technology.

The government tends to move slowly when it comes to technology; it took Congress several years to take the threat of online child abusers seriously back in the days of AOL chat rooms and the like. To that end, there's no immediate sign that Congress is looking to craft AI legislation that absolutely prohibits generators from creating this kind of foul imagery. Even the European Union's sweeping Artificial Intelligence Act doesn't specifically mention any risk to children.

South Carolina Attorney General Alan Wilson organized the letter-writing campaign and has encouraged colleagues to scour state statutes to find out if “the laws kept up with the novelty of this new technology.”

Wilson warns of deepfake content that features an actual child sourced from a photograph or video. This wouldn’t be child abuse in the conventional sense, Wilson says, but would depict abuse and would “defame” and “exploit” the child from the original image. He goes on to say that “our laws may not address the virtual nature” of this kind of situation.

The technology could also be used to make up fictitious children, culling from a library of data, to produce sexual abuse materials. Wilson says this would create "demand for the industry that exploits children", countering the argument that such imagery wouldn't actually hurt anyone.

Though the idea of deepfake child sexual abuse is a rather new one, the tech industry has been keenly aware of deepfake pornographic content and has taken steps to prevent it. Back in February, Meta, OnlyFans and Pornhub began using an online tool called Take It Down that allows teens to report explicit images and videos of themselves and have them removed from the internet. The tool covers both regular images and AI-generated content.

Toe Jay Simpson

May 12, 2012
Carmel City
This stuff is opening up too many doors. I don’t think you can properly regulate it. There’s going to always be some unforeseen loophole that’s going to come up. It kinda makes you think of certain sci fi books like Dune and Foundation where computers/robots are just straight up banned cause it’s too much trouble to deal with

