
Hacker Uncovers Shocking Child Abuse Scandal on 'Uncensored' Adult AI Chatbot Site


When a hacker infiltrated the website Muah.ai, which allows users to create “uncensored” AI-powered sexual partners, they inadvertently exposed a troubling trend involving AI technology and child sexual abuse material (CSAM). The discovery highlights a growing problem: individuals using AI to create virtual child pornography, often based on images of real children.

The hacker gained access to the site and found a massive database of user interactions. Many of those interactions included prompts intended to create chatbots that would generate sexual content involving children. Upon discovering this, the hacker contacted 404 Media.

The site’s stated purpose is to let users explore sexual fantasies using artificial intelligence. Users can interact with pre-made chatbots or create their own as a form of adult entertainment.

The breach itself apparently wasn’t difficult. “I started poking around and found some vulnerabilities relatively quickly,” the hacker told 404 Media.

The administrator of Muah.ai, who used the name Harvard Han, told 404 Media in an email that “the data breach was financed by our competitors in the uncensored AI industry who are profit driven, whereas Muah AI becomes a target for being a community driven project.” The site’s operators detected the hack last week. Han didn’t provide 404 Media with any evidence for the claim, and the hacker said they work in the tech industry but not in AI.

“We have a team of moderation staff that suspend and delete ALL child related chatbots on our card gallery, discord, reddit, etc,” Han added, with “card gallery” referring to a list on the Muah.ai website of the community’s bot creations.

Troy Hunt, a cybersecurity expert and founder of HaveIBeenPwned.com, was also sent the Muah.ai data by an anonymous source. In reviewing it, he found many examples of users prompting the program for child sexual abuse material. When he searched the data for “13-year-old,” he received more than 30,000 results, “many alongside prompts describing sex acts.” When he tried “prepubescent,” he got 26,000 results. He estimates that there are tens of thousands, if not hundreds of thousands, of prompts to create CSAM within the data set.

Hunt was surprised to find that some Muah.ai users didn’t even try to conceal their identities. In one case, he matched an email address from the breach to a LinkedIn profile belonging to a C-suite executive at a “very normal” company. “I looked at his email address, and it’s literally, like, his first name dot last name at gmail.com,” Hunt told The Atlantic. “There are lots of cases where people make an attempt to obfuscate their identity, and if you can pull the right strings, you’ll figure out who they are. But this guy just didn’t even try.” Hunt noted that CSAM is traditionally associated with fringe corners of the internet. “The fact that this is sitting on a mainstream website is what probably surprised me a little bit more.”

The breach has drawn attention to the growing issue of AI-generated content being misused by pedophiles. Federal prosecutors have been grappling with how to address this disturbing trend, according to Reuters. “AI makes it easier to generate these kinds of images, and the more that are out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of,” said James Silver, head of the Justice Department’s Computer Crime and Intellectual Property Section.

This normalization is a key concern for law enforcement, as the increasing availability of generative AI makes it easier to create explicit material without the need for real images of minors. The development has pushed prosecutors into new legal territory as they apply existing laws to crimes involving AI-generated material: statutes covering CSAM protect real children, so cases involving purely AI-generated imagery tend to fall under obscenity laws instead.

Han admitted that his team does not have the resources to fully monitor the site to prevent the creation of CSAM. He told The Atlantic that his team doesn’t check for this type of content, but claimed requests for CSAM are “probably denied, denied, denied.”

However, he acknowledged that tech-savvy users can find ways around the filters.

Han’s protestations aren’t much comfort in light of how easily sick people can create child sexual abuse material and how difficult it is to prosecute them.

As AI technology continues to evolve, the line between free expression and harm prevention is becoming even harder for authorities to draw. The Muah.ai breach is one of several examples of the challenges law enforcement, developers, regulators, and advocates can expect to face going forward.
