California Governor Signs Legislation to Shield Minors from AI-Generated Deepfakes

California's governor has enacted legislation aimed at safeguarding children from the dangers posed by AI deepfakes.

California Governor Gavin Newsom signed two proposals on Sunday aimed at protecting minors from the rising misuse of artificial intelligence tools that generate harmful sexual imagery of children.

These measures are part of California's broader initiative to enhance regulations within an industry that increasingly impacts the daily lives of Americans and has previously operated with minimal oversight in the U.S.

Earlier this month, Newsom also endorsed some of the most stringent laws to combat election deepfakes, although these laws are currently facing legal challenges. California is positioned as a potential leader in AI industry regulation in the United States.

The newly enacted laws, which garnered significant bipartisan support, close an existing legal loophole concerning AI-generated imagery of child sexual abuse, explicitly stating that child pornography is illegal even if created through AI.

Supporters noted that, under current law, district attorneys cannot prosecute individuals who possess or distribute AI-generated child sexual abuse images unless they can demonstrate that the materials depict a real person. The new legislation classifies such acts as felonies.

"Child sexual abuse material must be illegal to create, possess and distribute in California, whether the images are AI-generated or of actual children," Democratic Assembly member Marc Berman, who authored one of the bills, stated. "AI that is used to create these awful images is trained from thousands of images of real children being abused, revictimizing those children all over again."

Additionally, Newsom recently enacted two other bills focused on revenge porn, aimed at better protecting women, teenage girls, and others from sexual exploitation and harassment facilitated by AI tools. It will now be illegal for adults to create or share AI-generated sexually explicit deepfakes of any individual without their consent under state law. Social media platforms are also mandated to implement systems for users to report such materials for removal.

However, some critics argue that the new laws do not go far enough. Los Angeles County District Attorney George Gascon, whose office supported some of the proposals, said the new penalties for sharing AI-generated revenge porn should also apply to those under 18. State lawmakers narrowed the legislation last month so that it applies exclusively to adults.

"There has to be consequences, you don't get a free pass because you're under 18," Gascon remarked in a recent interview.

These laws follow San Francisco's groundbreaking lawsuit against more than a dozen websites offering AI tools that promise to "undress any photo" uploaded within seconds.

While issues surrounding deepfakes are not new, experts warn that the problem is escalating as the technology becomes more accessible and easier to use. Over the last two years, researchers have raised concerns about the surge of AI-generated child sexual abuse material, often depicting real victims or virtual characters.

In March, a Beverly Hills school district expelled five middle school students for creating and sharing fake nude images of their classmates.

This growing problem has led to swift bipartisan efforts in nearly 30 states aimed at curbing the spread of AI-generated sexually abusive materials. Some states are implementing protections for everyone, while others only criminalize materials involving minors.

Governor Newsom has highlighted California's role as an early adopter and regulator of AI technology, suggesting that the state could soon utilize generative AI tools to manage highway congestion and offer tax guidance, even as his administration explores new regulations against AI discrimination in hiring practices.

Frederick R Cook for TROIB News