A recent study reveals that a significant portion of the public does not view the creation or sharing of non-consensual sexual content generated through artificial intelligence as problematic.
According to research conducted on behalf of the UK police, one in four respondents either sees nothing wrong with producing and distributing sexual deepfakes (digitally altered images or videos created with AI) without the consent of the person depicted, or is unsure whether the practice is acceptable.
Of the 1,700 participants, 13% said they saw no ethical problem in making or sharing sexual deepfakes, while a further 12% were uncertain about the legality or morality of the practice.
Claire Hammond, Head of Research at the UK’s National Centre for Violence Against Women and Girls (VAWG), emphasized that “sharing personal images without consent, whether genuine or fabricated, constitutes a profound violation.” She warned that AI is accelerating gender-based violence and stressed that technology companies “have facilitated the creation and dissemination of such material and must take responsibility.”
The creation of non-consensual sexual deepfakes is now a criminal offense under the new Data Act, legislation governing the collection, processing, and sharing of personal data.
The report, by the consulting firm Crest Advisory, found that 7% of respondents had been targets of sexual deepfakes. Of these, only about half (51%) reported the incident to the authorities; those who did not cited feelings of shame and doubts that their case would be taken seriously.
Why Sexual Deepfake Images Are a Cause for Concern
Deepfakes are images, videos, or audio files manipulated or generated with AI to appear authentic. They typically feature real individuals and often involve sexual content. Such materials can cause significant harm to those depicted, being used for defamation, blackmail, deception, or harassment.
Who Is Most Likely to Create Sexual Deepfakes?
The data indicate that men under 45 are more inclined to view the creation and sharing of sexual deepfakes as acceptable. This demographic is also more likely to consume pornography, hold misogynistic attitudes, and express positive views about AI. However, researchers caution that further study is needed to understand how age and gender relate to these attitudes.
One in twenty respondents admitted to having created a deepfake in the past, while more than one in ten said they would be willing to do so in the future. Two-thirds of participants said they had seen, or believed they had seen, deepfakes.
Callyane Desroches, Head of Policy and Strategy at Crest Advisory — a UK-based consultancy specializing in justice, policing, and public safety — noted that deepfake production “is rapidly becoming normalized as the technology becomes cheaper and more accessible.” She stressed that most deepfakes involve sexual content, with women predominantly targeted.
Activist Cally Jane Beech, who campaigns for better victim protection in the UK, warned that “future generations are at risk if clear digital regulations are not established soon.” She added that ongoing education and open discussion starting at home are essential to curb the spread of such practices, according to The Guardian’s report.
Legal Measures in the UK
Since January, UK law has criminalized the creation of fake sexual images or videos using AI, with penalties including up to two years imprisonment.
Women and Children as the Primary Victims
The vast majority of nude deepfakes, nearly 99%, involve women and girls. According to a 2024 Internet Matters study, more than half a million children (13%) have encountered nude deepfakes. Fifty-five percent of teenagers said abuse via nude deepfakes is worse than sexual abuse involving real images, while only 12% disagreed. Boys and vulnerable children are significantly more likely to have been exposed to such content.