‘Deepfake’ AI technology being used to cyberbully, sexually harass students

The advent of open-source artificial intelligence technology has given rise to a new frontier in cyberbullying: “deepfake” child sexual abuse material.

A deepfake is a form of video or audio manipulation that uses AI and machine learning to insert an individual’s image or voice into media that person did not create or cooperate in creating. According to one media monitor, 96% of deepfakes are sexually explicit and feature females who did not consent to the creation of the material – including underage girls.

Recent reports from across the country indicate adolescents are increasingly being victimized by sophisticated AI deepfakes of a sexual or defamatory nature.

In Beaver County, Utah, the sheriff’s office is investigating multiple instances of sexual deepfakes of adolescents being created and posted to pornographic websites. The deepfakes are manipulated versions of innocent images pulled from “yearbook photos, Instagram pages, Snapchat, VSCO, person-to-person texting, etc.” 

In March, the Beverly Hills Unified School District in California expelled five middle school students who had been using AI to create nude images depicting 16 of their classmates.  

“Any student found to be creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary actions, including, but not limited to, a recommendation for expulsion,” BHUSD officials affirmed in a statement shared with parents. 

Elsewhere, the Ottawa Area Superintendents Association in Michigan has launched an awareness initiative called #StopThinkVerify. The campaign seeks to educate adolescents and parents about the dangers of deepfakes and how to keep them from spreading.

The initiative provides recommendations for recognizing a deepfake, including, “Examine foreheads and other facial areas for signs of unnatural smoothness or excessive wrinkles.” 

Its website also suggests community members “act swiftly and utilize a combination of legal resources, support organizations and reporting mechanisms to mitigate the impact” if they believe they are a victim of a deepfake.

“We’re fighting for our children,” said Dorota Mani of Westfield, New Jersey, whose daughter was victimized by deepfake pornography spreading among a group of friends on Snapchat. “They are not Republicans, and they are not Democrats. They don’t care. They just want to be loved, and they want to be safe.” 

Though children and adolescents are easy and frequent targets of deepfake attacks, adults are not immune. In March, Rep. Alexandria Ocasio-Cortez, D-N.Y., introduced bipartisan legislation that would make it easier for victims of AI-generated pornography to sue publishers, distributors and consumers of the material. Ocasio-Cortez says the bill is based on her own experience of coming across deepfake pornography depicting herself.