South Korea Passes Legislation to Combat Deepfake Sexual Exploitation
South Korea has enacted legislation mandating prison sentences for those caught possessing or viewing deepfake sexual content. The move responds to growing public alarm over digital sex crimes, particularly following revelations of Telegram chat rooms disseminating AI-generated pornography.
The amended law introduces stringent penalties for those who possess, purchase, store, or view deepfake sexual content, with prison terms of up to three years and fines of up to 30 million won. Lawmakers from both the ruling and opposition parties included a clause exempting from punishment individuals who "unknowingly" encounter such content.
Additionally, the committee advanced revisions to the act on protecting children from sexual crimes, imposing harsher penalties on those who use such materials to blackmail or coerce minors. Under the new provisions, sentences for blackmail start at a minimum of three years, while coercion carries five years or more.
These revisions also amend the sexual violence prevention act, establishing the government's duty to remove illegally recorded content and support victims in their reintegration into society.
This push to tackle sexual deepfakes comes as Pavel Durov, the founder of Telegram, faces a formal investigation in France over alleged organized crime activities tied to the messaging platform.
Deepfake technology involves artificial intelligence that creates convincing fake images, videos, and audio recordings. It can manipulate existing content, swap one individual for another, or fabricate entirely new scenarios where people seem to say or do things they did not actually do. The most significant risk associated with deepfakes is their capacity to disseminate false information appearing to originate from credible sources.
"I've always been careful when it comes to personal information or photos," said Ines Kwon, a university student. "But with so many crimes these days involving AI and deepfake technology, I find myself being even more guarded."
Last week, South Korea's National Police Agency announced a plan to invest 2.7 billion won annually over the next three years to develop deep-learning technology aimed at detecting digitally fabricated content, including deepfakes and voice cloning. In addition, the agency will enhance its existing software for monitoring AI-generated videos.
The urgency of these measures is underscored by police data: cases of sexual cyberviolence this year have increased 11-fold compared with 2018, yet only 16 arrests have been made out of 793 deepfake crimes reported since 2021.
"Six years have passed since I was a victim of AI-generated deepfake pornography. As I transitioned from a student to a working professional, it would be a lie to say the incident hasn't left its mark on me," remarked an anonymous deepfake crime victim.
"However, when I think about other women who might suffer the same or even more severe harm, a fire ignites within me to become stronger and stand up against it."
President Yoon Suk-yeol emphasized that deepfakes constitute a serious crime that threatens social harmony, urging relevant ministries to take decisive action. In a joint statement in late August, 84 women's organizations underscored that the deepfake crisis is rooted in "structural gender discrimination," calling for gender equality as a crucial remedy.
Oh Kyung-Jin, general secretary of Korean Women's Associations United, pointed out that the underlying issue is the "culture of misogyny," where women are regarded merely as objects of sex, targets of crime, and forms of entertainment, rather than as equal citizens, particularly among teenagers.
Debra A Smith for TROIB News