US prosecutors vow to step up fight against fake AI child sex images

Reuters noted that these child exploitation cases are an early test of how hard it will be for federal prosecutors to apply existing laws to emerging AI harms. But because of anticipated legal challenges, prosecutors may hesitate to bring AI cases where no children have been identified, despite the DOJ declaring in May that “CSAM generated by AI is still CSAM.”

The earliest cases being prosecuted target bad actors at both ends of the spectrum, from those using the most accessible tools to those using the most sophisticated.

One involves a US Army soldier who allegedly used bots to generate child pornography. In the other, a 42-year-old “extremely technologically savvy” Wisconsin man was charged with allegedly using Stable Diffusion to create “thousands of realistic images of prepubescent minors,” which he then allegedly shared with a minor and distributed on Instagram and Telegram. (Stability AI, the maker of Stable Diffusion, has repeatedly denied involvement in developing the version of the model allegedly used, while promising to prevent its other models from generating harmful materials.)

Both men have pleaded not guilty, seemingly waiting to see how courts navigate the complex legal questions that generative AI has raised in child exploitation law.

Some child safety experts are pushing to hold app makers accountable, as California has, by advocating for standards that would block harmful outputs from AI image generators. But even if every popular app maker complied, the threat would likely still loom on the dark web and in less-moderated corners of the Internet.

In September, Public Citizen democracy advocate Ilana Beller urged lawmakers everywhere to clarify laws so that no victim has to wonder whether there is any defense against the barrage of harmful AI images spreading rapidly online. The thinking goes that only criminalizing AI CSAM, as lawmakers have done with actual CSAM, will ensure the content is promptly detected and removed, and Beller wants that same shield available to all victims of AI-generated nonconsensual intimate imagery.

“The rising tide of non-consensual intimate deepfakes is a threat to everyone from A-list celebrities to middle schoolers,” Beller said. “Creating and sharing deepfake porn must be treated like the devastating crime that it is. Legislators in numerous states are making progress, but we need legislation to pass in all 50 states and Washington, DC in order to ensure all people are protected from the serious harms of deepfake porn.”


