In early January, content creator Kylie Brewer's world was turned upside down. Strange messages began flooding her inbox, alerting her to the existence of an OnlyFans account operating under her name. "Someone has an OnlyFans page set up in your name with this same profile," one concerned TikTok user messaged. Another asked, "Do you have 2 accounts or is someone pretending to be you?" These troubling notes were just the beginning of Brewer's ordeal.
The use of AI to create explicit, non-consensual images has become alarmingly prevalent, and xAI’s Grok tool is among the most prominent offenders. According to the Center for Countering Digital Hate, Grok generated a staggering three million sexualized images between late 2025 and early 2026, including some depicting children. The backlash has been swift, with the UK’s Ofcom and various attorneys general demanding investigations into xAI and Grok.
“It was the most dejected that I've ever felt,” Brewer said in a phone interview. “I was like, let's say I tracked this person down. Someone else could just go into X and use Grok and do the exact same thing with different pictures, right?”
Brewer, known for her educational content on feminism and history, is no stranger to online harassment. But discovering an OnlyFans account impersonating her, complete with AI-generated nudes, was a new level of violation. The images were built from her Instagram photos, with AI used to manipulate her likeness into explicit content.
The deception was not only unsettling but deeply traumatizing. Brewer described one image, apparently based on a swimsuit photo from her Instagram, that had been altered to depict her nude. "My eyes look weird, and my hands are covering my face so it kind of looks like my face got distorted," she noted. "They very clearly tried to give me larger breasts, where it does not look like anything realistic at all."
The implications of Brewer's experience extend far beyond her personal distress. Many victims of AI-generated imagery describe the same sense of helplessness and violation, as image-generation tools built for benign purposes are turned into instruments of harassment and exploitation.
For Brewer, who relies on her online presence for her livelihood, the option to "log off" isn't feasible. Her story highlights a grim reality faced by many in the digital age – the fear that anyone could become a target of deepfake abuse. As Brewer pointed out, "It feels like any remote sense of privacy and protection that you could have as a woman is completely gone and that no one cares." Her hope is that increased visibility will lead to more support and a broader discussion on protecting individuals in the digital sphere.