Grok AI, Misogyny and Canada's Delayed Regulatory Response to Sexual Deepfakes
- thedsvsc
- Jan 20
The rapid spread of Grok-produced sexual deepfakes is sparking international backlash and showing the Canadian federal government the immediate consequences that regulatory delay carries for women and girl survivors.
This article was written by Natasha Dixon, who is the Founder and Executive Director of the Digital Sexual Violence Support Centre (DSVSC). Natasha holds a master’s degree in public and international affairs and a bachelor’s degree in political science.
**Please note that some of the themes discussed in this article may cause emotional discomfort.**

Entering January 2026, there has been a surge of reported incidents in which X's generative AI tool, Grok, has been misused to create sexual deepfakes of young women, girls and femme folks. The widespread distribution of these deepfakes has reached the fore of public discourse, causing an uproar from women across social media platforms and mounting backlash from various governments. While sexual deepfakes are not a new phenomenon, their normalization on X is emblematic of a mass culture that enables and supports technology-facilitated sexual violence (TFSV). Moreover, the pattern of men targeting women and children through deepfakes reflects the continued salience of misogynistic beliefs, attitudes and behaviours on X, and foretells an escalation of TFSV from online harms to very plausible physical ones. Within a Canadian context, this platform-specific case presents another prime opportunity for the federal government to implement regulations for social media platforms, both to deter perpetration and to protect those who have been targeted.
This article will discuss the misogynistic core of these sexual deepfakes, social media platform failures, a brief history of online sexual violence and then offer policy and funding recommendations for the Government of Canada to tackle the deterrence of sexual deepfakes and related online sexual harms.
Background
Grok AI is marketed as a chatbot and AI assistant on X (formerly Twitter): a tool capable of generating images, drafting documents, debugging code and producing related outputs. The Grok outputs that have reached the forefront of public dialogue, however, are sexual deepfakes, particularly those produced by its “nudifying” function. The Digital Sexual Violence Support Centre (DSVSC) defines sexual deepfakes as AI-generated images, videos and audio that are created to manipulate a person’s (physical and/or vocal) likeness and depict them as partially or fully nude, or engaging in sexual activities.
The “nudifying” function of Grok works by copying an original image of someone and then using AI to produce images of that person nude or in their undergarments. More nefariously, Grok is capable of producing “child-like” images, which further encourages users to explore potential pedophilic imaginations and thereby normalizes the sexualization of children, particularly girls. A parallel function of Grok AI is its chatbot “girlfriends,” which are promoted as access to “robot-like” depictions of human women, but coded as docile, sexually flirtatious, socially compliant and emotionally submissive. Among the paramount concerns about these sexual deepfakes is that they violate the target’s consent, privacy, the integrity of their personhood and their right to remain clothed. Moreover, these particular outputs demonstrate X’s intentional promotion of a virtual ecosystem that welcomes and fosters sexual violence, one deeply rooted in rape culture and misogynistic attitudes and beliefs, further cemented by intersecting axes of racism, homophobia, ableism, ageism, classism, xenophobia and more.
Misogyny, the Manosphere and Its Impacts

Regarding the platform known as X, media outlets report that the perpetrators of these sexual deepfakes are predominantly men, while the targets include women, girls and children. While the geographic residences of the targeted women and children vary greatly, the unifying factor motivating the male perpetrators’ selection of women and girls is misogyny. In recent years, misogynistic attitudes and beliefs have been promoted, sensationalized and uplifted as a central pillar of the manosphere and red-pill ideologies. These schools of thought promote violence against women through varying forms of abuse (emotional, physical, psychological, financial, etc.). They are upheld by the deep-seated belief that men are entitled to, and “deserving” of, access to women’s emotional, household and sexual labour at all times, without having to demonstrate respect or care for their well-being. Moreover, the growing salience of these ideologies in virtual spaces is creating an echo chamber wherein the men who subscribe to them further cement their shared commitment to harming people, and more specifically women and children, through TFSV.
The misogynistic essence of sexual deepfakes is demonstrative of rape culture: the phenomenon whereby institutions, systems and attitudes create conditions conducive to normalizing and accepting sexual violence. Research indicates that the central tenets of rape culture, including gendered stereotypes, sexism, hostility towards women and the promotion of violence as a medium of control, are increasingly permeating incel, manosphere and red-pill-adjacent dialogues on X. On the intersecting axes of age and gender, the disregard for children’s rights evidenced by the intentional targeting of girls is deeply disturbing, as children’s developmental immaturity can place them at higher risk of exposure to online sexual exploitation. This risk is heightened further for children who struggle with their mental well-being, live in abusive households, or lack a reliable and trustworthy adult in their lives. Moreover, the targeting of girls is dismaying and signals the long-overdue need for expedient regulations on the use of AI, digital safeguards to protect minors, and direct fines for users and platforms that actively permit the sexual exploitation of minors.

Collectively, the male perpetrators’ selective targeting of women and girls through sexual deepfakes showcases the intentionality behind leveraging technology-facilitated sexual violence (TFSV) to inflict emotional, physiological and psychological harms, plus reputational and social damages, on their targets. Their choice of TFSV as an intimidation tactic stems not from ignorance but from a coordinated, visceral disdain for the safety and well-being of women and girls. While this particular case of mass TFSV on a social media platform is devastating, it is only one facet of the longer chronology of online sexual violence.
A Brief and Recent History of Online Sexual Violence
The proliferation of sexual deepfakes across X and other online platforms is an unsurprising and unfortunate criminal phenomenon. Countless survivors, community advocates and female trailblazers in the artificial intelligence space have forewarned governments, corporations and educational systems for decades about the perilous nature of online sexual harms. Given that widespread public access to the internet is a relatively recent development, the primary and earliest forms of online sexual violence in internet culture were online sexual harassment, virtual sexual predation against minors, and the spread of child sexual abuse material (CSAM). While these harms have increased in prevalence over time, the evolution and accessibility of artificial intelligence tools have presented harm-doers with a new medium for violating their targets through TFSV. Alongside sexual deepfakes, other forms of TFSV enabled through AI include sextortion via chatbots, doxxing via chat groups, coordinated sexual harassment through multiple burner accounts, and much more. Moreover, the absence of platform regulations, fines for violating platforms’ guidelines and related accountability measures in the virtual space leaves harm-doers free to explore the depths of their imaginations of sexual violence with relative anonymity.
The Continuum of Sexual Violence (From Online to In-person Harms)

The cumulative spread of sexual deepfakes across X and other social media platforms is not only deeply concerning but signals that physical manifestations of sexual violence are a tangible and plausible next step. Adaptations of the sexual violence pyramid indicate that rape culture commences with attitudes, beliefs and thoughts, which eventually escalate into physical harms such as in-person stalking, physical abuse, assault, sexual abuse and more. Within the international digital landscape, cases of the non-consensual distribution of intimate images, sextortion and AI chatbots manipulating humans alike showcase how online sexual harms have real-life consequences that are perilous for vulnerable and marginalized communities. Moreover, the intentional absence of online safety and privacy safeguards for vulnerable and at-risk communities highlights that X can, and most likely will, become an online space that fosters perpetrators’ initial imaginings and planning of in-person sexual harms. While related platforms have begrudgingly implemented content safeguards, such as YouTube adding parental controls to prevent minors’ accounts from accessing adults-only content, X has shown no desire to follow suit.
Current Status in Canada: Policy and Funding Recommendations

Now that we’ve considered the misogynistic foundations of sexual deepfakes and the high potential for physically violent incidents, it’s time to discuss some recent policy developments and emerging recommendations for the Government of Canada.
On December 9, 2025, the Protecting Victims Act was introduced in the House of Commons as a means to reform the Canadian Criminal Code to recognize various aspects of violence, including online sexual violence. Notably, one provision would prohibit the distribution of non-consensual deepfakes and increase the maximum penalty to 10 years for a conviction of distributing sexual deepfakes without the consent of the person originally depicted. While the proposed Protecting Victims Act indicates positive forward movement in addressing a small segment of TFSV, it is insufficient to deter relentless perpetrators and will require considerable provisions that deter and penalize the tools, platforms and strategies wielded by perpetrators.
Regarding the current pulse on sexual deepfakes among prominent federal public servants and politicians, their collective stance remains inadequate and minimal. In an article published by Global News Canada on January 9, 2026, Canada’s Artificial Intelligence Minister, Evan Solomon, shared that the Government of Canada was not “considering additional regulatory action against X over the AI-generated images [referring to the sexual deepfakes]”. More recently, on January 15, 2026, the Government of Canada’s Privacy Commissioner, Philippe Dufresne, indicated that there is an expanded investigation to closely examine whether “X and xAI [X’s AI company] properly obtained individual users’ consent to collect, use and share their personal information to create sexual deepfakes through Grok, and whether the platform complied with Canadian laws in doing so”. The announcement of this expanded investigation is hopeful, but it offers no concrete measures for platform regulation or digital safeguards that could give Canadian survivors satisfaction and reassurance.
Considering that the Canadian legislative landscape on instances of technology-facilitated sexual violence (TFSV) against adults remains limited, here is a non-exhaustive list of suggested considerations:
Borrowing from the United States’ federal Take It Down Act, which was spurred by Elliston Berry’s activism, the Government of Canada should create a similar act requiring online and social media platforms to fulfill requests to take down non-consensual intimate images, videos, audio and deepfakes within a 48-hour period
The Government of Canada should allocate at least $15,000,000 via the Department for Women and Gender Equality (WAGE) to fund sexual violence support organizations (including non-profits, charities, and grassroots collectives) that specifically help Canadian survivors experiencing technology-facilitated sexual violence (TFSV). This would help ease the client burden on the currently strained public health care system and give survivors direct access to lived-experience experts through peer support, plus professional guidance from service providers.
The Government of Canada should create a dedicated regulatory body to ensure that the development and implementation of artificial intelligence tools in Canada addresses systemic risks, establishes a code of ethics, and safeguards consumers, especially vulnerable and marginalized communities.
While a multi-pronged approach, including parental education, the courts, government funding, regulation and community action, is required to effectively address the ongoing and emerging harms presented by perpetrators, these proposed considerations would be a meaningful step towards holding perpetrators and platforms accountable for the creation of sexual deepfakes. Furthermore, they would help protect survivors in real time and create digital safeguards with a critical impact on the Canadian digital landscape.