U.S. prosecutors are increasingly recognizing the danger posed by AI-generated child sexual abuse imagery.
U.S. federal prosecutors are stepping up efforts to pursue individuals who use artificial intelligence tools to manipulate or generate child sex abuse images, amid concerns that the technology could fuel a surge in illegal content.
This year, the U.S. Justice Department has initiated two criminal cases against individuals accused of employing generative AI systems—capable of producing text or images based on user inputs—to create explicit images involving children.
James Silver, head of the Justice Department’s Computer Crime and Intellectual Property Section, said more such cases are on the way. “There’s more to come,” he stated, underscoring the department’s intent to get ahead of the problem.
Silver expressed concern over the potential normalization of such activities, noting, “AI makes it easier to generate these kinds of images, and the more that are out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”
The emergence of generative AI has raised alarms within the Justice Department regarding its potential use in cyberattacks, the enhancement of cryptocurrency scams, and threats to election integrity.
Child sex abuse cases are among the first in which prosecutors are trying to apply existing U.S. laws to crimes involving AI, and even successful prosecutions could face legal challenges as courts weigh how the new technology may reshape the legal framework surrounding child exploitation.
Prosecutors and child safety advocates warn that generative AI systems enable offenders to alter and sexualize ordinary images of children, complicating law enforcement's ability to identify and assist actual victims of abuse.
According to Yiota Souras, chief legal officer of the National Center for Missing and Exploited Children, the nonprofit organization receives approximately 450 reports each month related to generative AI.
The figure represents a small portion of the average 3 million monthly reports concerning online child exploitation that the organization received last year.
UNTESTED GROUND
Cases involving AI-generated sexual abuse imagery are expected to navigate uncharted legal territory, especially when no identifiable child is depicted.
Silver indicated that in such cases, prosecutors may pursue obscenity charges when child pornography statutes do not apply.
In May, prosecutors charged Steven Anderegg, a software engineer from Wisconsin, with offenses including the transfer of obscene material. Anderegg is alleged to have utilized Stable Diffusion, a widely used text-to-image AI model, to create images of young children in sexually explicit situations and to have shared some of these images with a 15-year-old boy, as detailed in court documents.
Anderegg has pleaded not guilty and is attempting to have the charges dismissed, claiming they infringe upon his rights under the U.S. Constitution, according to court filings.
He has been released from custody while awaiting trial, and his attorney was unavailable for comment.
Stability AI, the developer of Stable Diffusion, said the case involves an earlier version of the AI model released before the company took over its development. The company added that it has invested in measures to prevent the misuse of AI to create harmful content.
Additionally, federal prosecutors charged a U.S. Army soldier with child pornography offenses, partly for allegedly using AI chatbots to alter innocent photographs of children he knew, resulting in the creation of violent sexual abuse imagery, as indicated in court documents.
The defendant, Seth Herrera, has pleaded not guilty and has been ordered to remain in jail while awaiting trial. Herrera’s attorney did not respond to a request for comment.
Legal experts say that while child pornography laws clearly cover sexually explicit depictions of real children, the law governing obscenity and purely AI-generated imagery remains unsettled.
In 2002, the U.S. Supreme Court deemed unconstitutional a federal statute that prohibited any depiction, including computer-generated images, that appeared to show minors in sexual situations.
Jane Bambauer, a law professor at the University of Florida specializing in AI's implications for privacy and law enforcement, noted, “These prosecutions will be challenging if the government relies solely on moral outrage to justify them.”
In recent years, federal prosecutors have successfully obtained convictions against individuals possessing sexually explicit images of children that also met the legal definition of obscenity.
Advocacy groups are also working to prevent AI technologies from producing harmful content.
In April, two nonprofit organizations, Thorn and All Tech Is Human, secured pledges from major AI companies, including Alphabet’s Google, Amazon, Meta Platforms (the parent company of Facebook and Instagram), OpenAI, and Stability AI, to refrain from training their models on child sexual abuse imagery and to actively monitor their platforms to curb its creation and dissemination.
Rebecca Portnoff, Thorn’s director of data science, emphasized, “I don’t want to paint this as a future problem, because it’s not. It’s happening now.”
“As far as whether it’s a future problem that will get completely out of control, I still have hope that we can act in this window of opportunity to prevent that.”
