
A 17-year-old Calgary high school student has been charged with multiple criminal offenses after allegedly using artificial intelligence tools to create and distribute sexualized deepfake images of teenage girls from several local schools. The case, which police are calling "the most extreme form of bullying," highlights growing concerns about AI-enabled sexual exploitation of minors as deepfake technology becomes increasingly accessible.
Alberta Law Enforcement Response Teams' (ALERT) Internet Child Exploitation unit executed a search warrant at the accused teen's home on November 13, seizing two cellphones, a tablet, and a laptop. The investigation began in October after authorities received a tip that child sexual abuse material was being uploaded to social media platforms.
The Charges and Legal Framework
The accused faces charges of making, possessing, and distributing child sexual abuse and exploitation materials, along with criminal harassment. Under Canada's Criminal Code, child sexual abuse and exploitation material includes any visual representation of someone under 18 years old, or depicted as being under 18, engaged in explicit sexual activity—regardless of whether the images are real or AI-generated.
The terminology itself reflects recent legal evolution. As of October 10, 2024, Parliament replaced the term "child pornography" in the Criminal Code with "child sexual abuse and exploitation material." Child advocates argued the previous term implied consent, while the new language more accurately conveys the abuse and exploitation victims experience.
Staff Sergeant Mark Auger of ALERT's Internet Child Exploitation unit explained how the alleged crime occurred. "If I was the offender, I would capture the picture I want, whether it's on TikTok, Instagram, any website, pull it off and then I can use software to nudify the image, which AI will then give a very accurate assessment of your body type, your skin color, and make it near impossible to distinguish the nude image with just my face attached."
This description reveals the disturbing sophistication of AI nudification tools now readily available online. These applications use machine learning to simulate the removal of clothing, synthesizing convincing fake nude images from ordinary social media photos without the subject's knowledge or consent.
A Growing Pattern of AI-Enabled Exploitation
The Calgary case is far from isolated. Law enforcement and child safety experts have warned for years that AI-generated child sexual abuse material represents an accelerating threat as the technology improves and becomes more accessible.
In 2023, a 61-year-old Quebec man was sentenced to jail for using AI to produce deepfake videos of child sexual abuse, in what a provincial court judge believed was Canada's first case involving deepfakes of child sexual exploitation. That same summer, a junior girls' football coach in Lethbridge faced similar accusations from ALERT for allegedly using AI to create child sexual abuse material.
The RCMP warned in 2023 that "a wave of AI-generated child sexual abuse material is coming" as criminals gain access to increasingly powerful generative AI tools. The UK-based Internet Watch Foundation echoed these concerns, warning that without government action, AI-generated deepfake images could overwhelm child exploitation investigators.
Beyond criminal exploitation, AI deepfakes have created what experts describe as a new category of technology-facilitated sexual violence affecting teens. In early 2024, a Winnipeg school discovered AI-generated fake nudes of female students circulating online, though no charges were laid in that case, exposing gaps in how Canadian law addresses fabricated intimate images when no original intimate content ever existed.
The Impact on Victims
Auger emphasized the severe psychological harm these crimes inflict. "Our biggest takeaway is that we need people to understand that this is not a joke, it's not a prank, this is the most extreme form of bullying and a criminal offense," he said. "We will take steps to stop this behavior."
The staff sergeant described such actions as having a "horrible impact" on victims—teenage girls who discover that fabricated sexual images of themselves are circulating among classmates and on social media platforms. The non-consensual nature of these images, combined with their realistic appearance and rapid digital spread, creates lasting trauma for young people still developing their identities and sense of safety.
All known victims in the Calgary investigation have been provided with support services, though police are not identifying the specific schools involved to protect the victims' privacy. The accused was released on court-ordered conditions, including a ban on contact with anyone under 16 years old except incidental contact through work or school, and restrictions on his use of electronic devices.
Technological and Policy Challenges
The case highlights fundamental challenges at the intersection of AI technology, child safety, and law enforcement. AI tools capable of creating convincing deepfakes are no longer limited to sophisticated criminals with technical expertise—they're available as consumer applications and websites, some requiring nothing more than uploading a photo.
Both the RCMP's National Cyber Crime Coordination Centre and the Canadian Centre for Cyber Security have reported sharp rises in AI-facilitated crimes causing harm or near harm since generative AI technology exploded into the mainstream in 2022. The accessibility of these tools has outpaced the legal frameworks, education systems, and parenting strategies designed to protect children in digital spaces.
While Canada's updated Criminal Code provisions explicitly cover AI-generated child sexual abuse material, enforcement faces practical obstacles. Deepfake creators can operate anonymously; images spread rapidly across platforms with minimal oversight; victims often don't know the images exist until they have circulated widely; and the sheer volume of content can overwhelm investigative resources.
Technology companies developing AI tools face pressure to implement safeguards preventing misuse. Some AI image generators include content filters attempting to block inappropriate outputs, while major platforms have policies against non-consensual intimate images. However, determined users find workarounds, and smaller platforms with fewer resources often lack robust content moderation.
The Path Forward
Addressing AI-enabled sexual exploitation of minors requires coordinated efforts across multiple sectors. Law enforcement continues developing expertise in investigating AI-facilitated crimes and tracking down perpetrators using evolving technologies. Schools are implementing education programs teaching students about digital citizenship, consent, and the serious consequences of creating or sharing exploitative content.
Parents and guardians need awareness that children's social media photos can be weaponized through AI tools, making privacy settings and digital literacy discussions critical components of modern parenting. Tech companies must invest in detection systems identifying AI-generated abuse material and preventing their platforms from hosting or distributing such content.
Legislative bodies may need to consider whether existing laws adequately address all aspects of AI-generated exploitation, including whether platforms hosting deepfake creation tools should face liability for facilitating abuse.
The Calgary case serves as a stark reminder that as AI technology advances, so do the methods available for exploitation and harm. The 17-year-old accused now faces serious criminal charges that could follow him into adulthood. More importantly, the teenage victims must cope with the knowledge that manipulated images of their bodies circulated among peers—a violation that no amount of legal proceedings can fully undo.
As ALERT's investigation continues and the case proceeds through the justice system, it stands as a warning: AI tools that seem like harmless apps or websites can facilitate real crimes with real victims. The technology's novelty doesn't diminish the severity of the harm it enables.