AI-generated child sexual abuse material (CSAM) is flooding the internet at a rapidly growing rate, according to a new report from The New York Times.
Experts warn that this new form of illegal content is becoming increasingly difficult to distinguish from imagery of real abuse.
The Internet Watch Foundation (IWF), a UK-based nonprofit that monitors CSAM, has found 1,286 AI-generated videos so far in 2025, a huge jump from just two identified in the first half of 2024.
Similarly, the US National Center for Missing & Exploited Children has received 485,000 reports of AI-generated CSAM this year alone, up from 67,000 in all of 2024.
Derek Ray-Hill, interim CEO of the IWF, described the situation as a “tsunami” of disturbing content.
He and other experts say that AI technology is improving so fast that the images and videos now look nearly identical to real ones. (Via: Engadget)
In fact, some online forums are praising how realistic these fake abuse videos look, further complicating efforts to track and stop offenders. One major concern is how this content is created.
AI image generators are trained on large amounts of real data, and in many cases, this includes real CSAM or publicly available photos of children taken from school websites and social media.
This means even if the final product is fake, it may still be based on real, identifiable children.
Companies are also reporting this content more frequently. Amazon took down 380,000 pieces of AI-generated CSAM in the first half of 2025, while OpenAI reported 75,000 cases.
Still, legal systems are struggling to catch up. Only a few arrests have been made so far involving AI-generated CSAM, with one man in the UK sentenced to 18 months in jail.
Although AI-generated CSAM currently makes up a small percentage of all CSAM identified, experts fear this is just the beginning.
As AI becomes more advanced, the threat is expected to grow, and fast. The US Department of Justice has called it a serious and emerging threat.
Are current laws strong enough to handle AI-generated child abuse material? Or do we need entirely new approaches to combat this growing threat? Tell us below in the comments, or reach us via our Twitter or Facebook.

