Meta's Misogyny: A Case Study in Predictable Harm
How systemic bias inside Big Tech creates unsafe products outside
Laura Bates' investigation into harassment in Meta's metaverse reads like a case study in predictable harm. Within hours of entering Horizon Worlds, she witnessed virtual sexual assault. Women told her abuse happens "all the time." Children who sounded nine or ten years old performed karaoke while adult men made explicit comments from the audience.
But the details of what Bates experienced, reported this week in The Guardian’s “Misogyny in the metaverse: is Mark Zuckerberg’s dream world a no-go area for women?”, read like a psychological horror story written by someone who had studied every form of gendered violence and asked: "What if we could make this immersive?" She heard players shouting, "I'm dragging my balls all over your mother's face," and witnessed male players making claims about "beating off" and comments about "gang bangs." Her virtual avatar's breasts were commented on repeatedly. When she wrote "HAVE YOU BEEN ASSAULTED IN THE METAVERSE?" on a blackboard, the responses were immediate: "Yeah, many times," and "I think everybody's been assaulted in the damn metaverse."
Most tellingly, she witnessed no moderation: neither moderators nor other players took any action to stop the abuse.
Welcome to Zuck’s vision of “masculine energy.”
The findings are disturbing, but they're not surprising. They're the logical outcome of building products in environments that systematically dismiss the people most equipped to predict these problems. They’re the misogynistic product of a misogynistic culture.
A Pattern of Systemic Disregard for Women
As someone who spent nearly 15 years at Meta, including as Director of Product Marketing for Horizon Worlds, I lived this pattern firsthand. The misogyny that would define user experiences in the metaverse was already defining employee experiences in the company's meeting rooms.
The gendered dynamics were impossible to ignore. In my sworn statement to the FTC supporting Fairplay's complaint, I detailed how a room full of male executives knew children under 13 were accessing Horizon Worlds using adult accounts. During executive "playtests" starting in October 2022, leadership struggled to communicate over "very young, high-pitched children's voices screaming at us from behind adult accounts." Their solution wasn't to address the child safety crisis—it was to move future testing to private spaces so they wouldn't be "bothered by underage users."
When I flagged these issues, I wasn't met with urgency to fix the problems. I was first asked to silence another senior woman surfacing concerns. When I wouldn't, I was retaliated against.
This was an escalation of a pattern I observed as women challenged the dominant narrative at Mark Zuckerberg’s company. Early in my career, I was told to act “less smart” in meetings so I wouldn’t threaten male colleagues. I was sexually assaulted by a boss on a business trip and later told to have sex with him for a promotion.
As I moved further into leadership, the stakes got higher, but the pattern stayed the same. I was denied promotions because acknowledging my success would have required acknowledging a man’s failure. When I raised safety risks, product performance issues, or violations of public policy, raising them became the issue.
As I wrote in “Stonelake v. Meta”:
“When I raised the problem, I became the problem – a pattern that silences women everywhere, every day. I’ve been sexually assaulted by a boss on a business trip. I’ve been denied promotions because acknowledging my success meant acknowledging men’s failures. I’ve been told to act “less smart,” I’ve been retaliated against for doing my job.
Some of you are shocked because you had no idea these things happened – some of you are shocked because you had no idea we could be honest about it.”
These weren’t isolated incidents. They were the logic of a masculine, misogynistic system that rewards aggression over insight, growth over safety, and loyalty to power over accountability to the people we claimed to serve. What happened in Horizon Worlds was the inevitable outcome of a workplace where women’s fastest route out is telling the truth.
This isn’t just your typical workplace dysfunction—it’s Groundhog Day for a system built on misogyny, the same demo day playing out on repeat. The same leadership that dismissed women's concerns about harassment in the office was building a product that would dismiss women's reports of harassment online. The executives who excluded women from decision-making meetings were creating virtual spaces where women would be excluded from safe participation. The culture that treated women's safety as a distraction from business goals was coding that priority directly into the metaverse.
“I thought that as I got more and more senior … I would only be able to protect more people to change the culture,” said Stonelake. “My experience was that the more senior I got, so did my peers, and I noticed that the more senior men were, the less tolerance they had to be challenged.”
When Meta's community guides instructed children not to reveal their ages—attempting to protect the company from the consequences of COPPA violations rather than protecting the children from predators—they were following the same logic that governed how the company treated internal whistleblowers: silence the inconvenient truth, not the harmful behavior.
The Pattern Exists Across Platforms and Time
This isn't Meta's first safety crisis, and the warning signs have been visible for years. Bates' work builds on investigations by the BBC, which found children accessing virtual strip clubs in VR with minimal age barriers, and reporting by The Washington Post, which documented Meta's "hands-off" approach to metaverse moderation under the philosophy that users should "protect themselves."
As early as 2022, Kashmir Hill's reporting for The New York Times documented the overwhelming presence of children in Horizon Worlds, despite the platform's 18+ age requirement. During one visit, she met an 11-year-old named Dustin who had spent eight straight hours in the metaverse the day before. When Hill played a zombie-shooting game, she exclaimed about the tiny zombies: "They're little kids!" Dustin replied, "So am I."
Frances Haugen's whistleblower revelations showed how Instagram's algorithms amplified content that harmed teenage girls' mental health, and how internal research documenting this harm was ignored. Arturo Bejar testified to Congress about his unsuccessful attempts to get Meta to address harassment that made his own teenage daughter afraid to post photos online.
Most recently, Sarah Wynn-Williams, Meta's former Director of Public Policy, has detailed through Senate testimony and her memoir Careless People how safety concerns were routinely overridden by growth priorities, including how Meta’s behavioral targeting offerings sold teens’ most insecure moments to beauty advertisers.
Late last year, Wired detailed a metaverse overrun by children. And Fairplay's recent FTC Request for Investigation, including my 14-page testimony, documents how Meta knowingly allowed children under 13 into Horizon Worlds, collecting their data and exposing them to the exact harms Bates witnessed.
Each of these cases shares a common thread: people inside the company saw the problems coming. They raised concerns. They were dismissed, sidelined, or pushed out. And in each of these cases, the concerns were being raised by women or about women, demonstrating that misogyny isn't just encoded in Meta's products, but embedded in the company's decision-making process itself.
The Real-World Impact
What Bates discovered is causing documented psychological trauma. She notes that when British police investigated the virtual gang-rape of a girl under 16 in the metaverse, a senior officer said the child "experienced psychological trauma similar to that of someone who has been physically raped."
The immersive nature of VR makes this harm more intense. As Bates explains, technologies like 3D audio, hand tracking, and haptic feedback combine to make virtual experiences feel real. Your avatar moves and speaks when you do, meaning virtual assault can trigger the same physiological responses as physical assault.
Research Bates cites shows users were exposed to abusive behavior every seven minutes in the metaverse, with repeated instances of children being subjected to sexually explicit abuse. In one case documented by researchers, an adult asked a young user, "Do you have a cock in your mouth?" while another shouted "I don't want to cum on you" at a group of underage girls who had explicitly identified themselves as minors.
The NSPCC data that Bates references—showing 47% of online grooming offenses in the UK occur on Meta platforms—provides crucial context for understanding the stakes. I wrote earlier this week about the mounting list of harms that kids are up against on social platforms. These aren't abstract policy debates; they're about real children facing real harm in spaces that could be designed to protect them.
Importantly, misogyny thrives in these spaces much as it has on traditional online platforms. A 2018 study found that 49% of female VR users had already experienced at least one instance of sexual harassment in virtual environments. Activists note that gender-based harassment in VR is a direct extension of the abuse women face on social media and online games, now amplified by immersion. “Misogyny is alive and well in the metaverse,” writes Moira Donegan for Elle magazine, observing that virtual reality has simply become “a breeding ground for harassment” despite early hopes for a new world.
Academic research increasingly shows that misogyny in digital platforms is not incidental but systemic. A recent study analyzing over 10,000 user reviews across social VR platforms found that reports of sexual harassment were both widespread and psychologically impactful, with women describing embodied violations such as groping or proximity-based intimidation that triggered real-world trauma responses due to the immersive nature of VR. Researchers observed that even with available safety tools, platforms like Meta’s Horizon Worlds often placed the burden of safety on the user rather than on systemic protections (Liao et al., 2022; Sabri et al., 2023).
Meanwhile, foundational scholarship like Safiya Noble’s Algorithms of Oppression (2018) focuses on search engines but nevertheless explains how platforms reflect and perpetuate social biases through the choices made in engineering, moderation, and prioritization.
Together, journalism and research show that the harms faced by women in Meta’s metaverse—rampant harassment, lack of recourse, and psychological harm—are not just the product of bad actors, but of design systems that normalize and even incentivize the marginalization of women and other vulnerable users.
Asking for Help, Finding Silence
When the exodus of senior women from my organization reached a crisis point, I made one final attempt to get help from leadership. In early 2023, I reached out to Andrew "Boz" Bosworth, Meta's CTO, someone I had respected and who had shown me kindness in the past.
"I'm concerned that I continue to see senior women leaving in droves…” I wrote, before explaining. "I have never had women on my team observe the boy's club dynamic, be crushed by it, and ask if they should factor this dynamic into their career track here… From a business standpoint, I lose confidence that we will be able to be successful with this level of organizational toxicity."
Boz's response was telling: "My staff is one-third women as it has been for some time and while we have lost women from that group we have added them in equal measure." The message was clear: women are interchangeable. Our experiences don't matter as long as the numbers look good on paper.
He promised to "follow up on" my concerns about the women who had left. He never did.
This exchange perfectly captures how gender bias operates in tech: when confronted with systemic problems, leadership points to representation statistics while ignoring the actual issues, usually because they personally benefit from the systems enabling the dysfunction.
It's a masterclass in what I’ll call "statistical deflection"—using numbers to obscure rather than illuminate reality. When Susan Fowler exposed rampant sexual harassment at Uber, leadership's initial response wasn't to address the toxic culture she described, but to announce new diversity hiring initiatives. When Ellen Pao detailed systematic discrimination at Kleiner Perkins, the firm's defense centered on how many women they had hired rather than how those women were actually treated within the organization.
Beyond Individual Bad Actors
What looks like isolated product failures is actually evidence of organizational dysfunction. When diverse perspectives are marginalized internally, that marginalization gets encoded into the products themselves.
This pattern matters because it reveals how internal dysfunction becomes external harm: the same culture that dismissed women's safety concerns inside the company designed products that dismiss women's safety online.
Without meaningful processes to elevate these perspectives, organizations default to amplifying the most confident voices rather than the most informed ones. And they build products that reflect their internal blind spots.
This isn't about identifying villains. It's about understanding systems. When Bates asked Meta for comment on her metaverse investigation, they didn't respond. When harassment victims try to report abuse through Meta's complaint system, research shows the vast majority of reports go unacknowledged. The same pattern of non-engagement appears in how the company handles internal dissent. The message is consistent: the people experiencing harm—whether users or employees—are not credible sources of information about that harm.
As reported by Laura Bates in The Guardian:
What was even more revealing than the virtual assault itself was Meta’s response. Vivek Sharma, then vice-president of Horizon at Meta, responded to the incident by telling the Verge it was “absolutely unfortunate”. After Meta reviewed the incident, he claimed, it determined that the beta tester didn’t use the safety features built into Horizon Worlds, including the ability to block someone from interacting with you. “That’s good feedback still for us because I want to make [the blocking feature] trivially easy and findable,” he continued.
This response was revealing. First, the euphemistic description of the event as “unfortunate”, which made it sound on a par with poor sound quality. Second, the immediate shifting of the blame and responsibility on to the person who experienced the abuse – “she should have been using certain tools to prevent it” – rather than an acknowledgment that it should have been prevented from happening in the first place. And, finally, most importantly, the description of a woman being abused online as “good feedback”.
This creates a feedback loop where the organization becomes increasingly deaf to the signals that could help it course-correct. Warning systems break down. Safety becomes an afterthought. And products launch with predictable harms baked in.
Building Better Systems
The technology industry likes to talk about innovation and disruption. But the most important disruption might be the simplest: building companies that actually listen to the people most likely to understand the problems they're trying to solve.
Creating safer products requires creating safer internal cultures—environments where people feel safe to raise concerns, where diverse perspectives are actively sought rather than reluctantly tolerated, where disagreement isn't treated like disloyalty.
The Legislative and Cultural Crossroads
We're at a critical juncture where two essential questions will determine whether the next generation of digital spaces will encode a disregard for women and other vulnerable groups:
Will we finally legislate meaningful safety protections for vulnerable groups?
Will we stop the systematic erosion of diversity, equity, and inclusion programs just when we need them most?
The timing couldn't be more urgent. As Meta pours billions into building the metaverse, lawmakers are considering comprehensive digital safety legislation that could require platforms to design with child safety as a default rather than an afterthought. The Kids Online Safety Act represents a first step toward protecting kids against the harms Bates documented.
But these legislative efforts are happening alongside a broader corporate retreat from DEI initiatives—the very programs designed to ensure that diverse voices are heard in product development.
When companies dismantle the systems that elevate concerns from women, parents, and marginalized communities, they're removing the early warning systems that could prevent the next safety crisis.
The people most likely to predict how a product might harm children or enable harassment are often the same people whose perspectives get sidelined when DEI programs are defunded or dismantled. This isn't just about fairness—it's about effectiveness.
The women who raised alarms about Instagram's impact on teen girls, the women who flagged child safety gaps in Horizon Worlds, the advocates who documented harassment patterns across platforms—these weren't diversity hires making emotional appeals. They were canaries in the coal mine, providing essential input about product risks that homogeneous leadership teams consistently failed to see.
As Bates concluded her investigation, she noted we risk "sleepwalking into virtual spaces where men's entitlement to women's bodies is once again widespread and normalized with near total impunity." We don't have to sleepwalk. We have the data, the research, the whistleblower accounts, and the investigative reporting to see exactly what’s happening, where it’s rooted, and how it’s growing.
The question isn't whether these harms are predictable—they are. The question isn’t whether Meta can respond to them—it can. The real question is whether we'll have the legislative frameworks and internal diversity necessary to make those protections actually work.
Recommended Reading:
Laura Bates' The Guardian investigation: "Misogyny in the metaverse: is Mark Zuckerberg's dream world a no-go area for women?"
Kashmir Hill's New York Times investigation: "This Is Life in the Metaverse"
The Washington Post's reporting: “Meta doesn’t want to police the metaverse. Kids are paying the price.”
Fairplay's FTC Request for Investigation of Meta Platforms, Inc. for violations of the Children’s Online Privacy Protection Act in Horizon Worlds
Frances Haugen's Senate testimony and the WSJ coverage of her leaked documents
Sarah Wynn-Williams' Senate testimony and memoir Careless People
Arturo Bejar’s Senate testimony and written testimony
The Times’ coverage: “Why I fear Mark Zuckerberg — after 15 years of working for him”
Fortune’s coverage: “Kelly Stonelake landed her dream job at Meta. 15 years later she’s suing the company alleging sexual assaults, denied promotions, and a deeply embedded culture of discrimination”
Lyz’s coverage: “A woman is good at her job. And she suffers for it.”
"The technology industry likes to talk about innovation and disruption. But the most important disruption might be the simplest: building companies that listen to the people most likely to understand the problems they're trying to solve. Creating safer products requires creating safer internal cultures—environments where people feel safe to raise concerns, where unique perspectives are actively sought rather than reluctantly tolerated, where disagreement isn't treated like disloyalty."❤️🔥❤️🔥❤️🔥
I am so sorry you went through any of this. Thank you for speaking out.