Algorithmic Harm, Explained by Former Meta Vice President Brian Boland
How Facebook and Instagram were optimized for profit, not safety—and why lawmakers must step in
Big Tech spends millions of dollars to fight common-sense online safety regulation that could protect kids. A key tactic is to distract and deflect from its capacity, and its responsibility, to protect children, relying on lawmakers’ lack of deep understanding of how algorithms and the internet work.
So last week, former Meta Vice President Brian Boland and I (formerly a Director at Meta) joined survivor parent Taj Jensen and the Children’s Alliance to meet with legislators in Washington State to discuss proposed legislation that aims to protect young people in our state.
We set the record straight.
I’ve written about Brian before, in the update announcing that my lawsuit against Meta was moving forward to discovery:
Early last year when I met a friend and mentor for Vietnamese food, I laid out my dilemma: how could I make a real difference for the women still enduring Meta’s toxic culture, for kids using platforms that treat their safety as secondary? His response was immediate: “Discovery is how.”
In a legal context, discovery is the pre-trial process where parties in a case exchange relevant information. This process aims to ensure that both sides are aware of the evidence and potential witnesses that may be presented at trial, in the spirit of fairness and transparency.
We first met in 2009, sitting across a different table in Facebook’s Palo Alto cafeteria. He had just moved from Seattle to join the company, where I’d been for about a year. We had a mutual friend from back home who’d connected us.
It was Brian who, nearly a decade later, threw my hat into the ring for a role leading product marketing for Facebook’s third-party developer ecosystem, a role that eventually led to my opportunity to lead go-to-market for Meta Horizon Worlds.
In the fall of 2022, when I was experiencing the slow-motion car crash of seeing first-hand the lengths the other leaders of this product were willing to go to obscure Horizon’s harm to kids, and to silence anyone who dared speak up about it, Brian had left the company on principle and was testifying before the Senate Committee on Homeland Security.
I encourage you to watch his testimony. In his opening statement, he told the Senate:
“What finally convinced me that it was time to leave was that despite growing evidence that the News Feed may be causing harm globally, the focus on and investments in safety remained small and siloed.”
“Meta leadership chooses growing the company over keeping more people safe. While the company has made investments in safety, those investments are routinely abandoned if they will impact company growth. My experience at Facebook was that rather than seeking to find issues on the platform first they would rather reactively work to mitigate the PR risk for issues that came to light.”

He also discussed the nature of an algorithmic feed, how it is goaled and measured, and the company’s resistance to transparency.
This topic came up again in our legislative meetings this week, and I was so impressed with Brian’s framing that, with his permission, I’d like to share it with you here.
What is an algorithm? How could it have so much influence?
These are questions that Big Tech relies on to keep us stalled.
This week, Brian laid it out clearly:
I worked with Mark Zuckerberg on a monthly basis, with former COO Sheryl Sandberg multiple times a week, and with other executives, like the Chief Product Officer and Chief Technology Officer, on a daily basis. I can provide context around their approach to these issues, and what I experienced as their lack of care for the people who use these products.
My first job at the company, and the bulk of my time there, was to build the advertising system. This relates to the News Feed because the advertising system is designed to look and work like any other post that someone would see in their feed.
What we built in that News Feed was the ability to personalize an ad to reach someone—and we could measure it—so we could prove that the content that people saw in the News Feed could change their attitudes, opinions, and behavior. We could measure that we could make people feel differently about one brand of toothpaste over a different brand of toothpaste. We could measure how many of them actually changed their purchasing decisions based on what they saw in feed.
With that measurement and that ad system, advertisers spent $121 billion last year on Meta’s ads, because they have experienced the algorithm’s effectiveness.
When we think about the rest of the News Feed, the non-advertising content that everyone sees every day, it’s also determined by an algorithm. “Algorithm” is a long word for, effectively, a science experiment. There’s not a well-thought-out process where a company says, “show them more cat pictures” and “show them fewer political pieces of content.” Instead, tech companies create A/B tests, or science experiments, where every change they want to make, and they can make hundreds of changes a week, is based on the outcome of these science experiments. For instance, they test with two groups of people: one group sees a change in the type of content in their feed, the other doesn’t, and Meta can observe the differences between those two groups.
Then, they choose the one that delivers against the metric that they care about. And during my time there, the teams that were responsible for building these News Feed changes cared about revenue—how much money the company was making—and they cared about engagement—whether people were spending more time, whether they were sharing more, whether they were liking more. No one ever asked “is it creating harmful outcomes?” or “how is it hurting people?”
Toward the end of my time at Meta, I learned that the best way to understand the News Feed was to observe it. And what my team and I started to see in our observations was truly concerning. It suggested the News Feed was skewing people towards more hate of each other, more anti-immigrant viewpoints, more anti-other viewpoints, polarizing people, and very aggressively shifting opinion.
It was not being studied heavily, and I raised concerns internally that we were on the wrong side of history. I proposed we deeply invest in understanding the harms we were creating through this algorithmic product. Despite advocating aggressively, I was repeatedly told no. In fact, the Chief Technology Officer’s primary concern was that I sent him an email that a lawyer could discover and ask him questions about at some point in the future.
At no point do these leaders approach these decisions with adequate care for the people who are on the platform. My reason for quitting the company and leaving a very large amount of stock on the table was that I came to believe that they don’t care about the use of their product, and they will do nothing on their own outside of legislative or regulatory pressure to change their behaviors.
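To make the mechanics Brian describes a bit more concrete, here is a deliberately simplified, hypothetical sketch of how an experiment-driven feed change gets “chosen.” This is not Meta’s actual code; the variant names, metrics, and numbers are invented for illustration. What matters is the scorecard: which outcomes get measured, and which never do.

```python
# Hypothetical sketch of an A/B test "deciding" what a feed shows.
# The variants, metrics, and numbers below are invented for illustration.
from dataclasses import dataclass


@dataclass
class ExperimentResult:
    variant: str            # the feed-ranking change being tested
    time_spent_min: float   # avg. minutes per user per day (engagement)
    ad_revenue_usd: float   # avg. ad revenue per user per day
    # Note what is absent: there is no field for harm caused.


def pick_winner(results: list[ExperimentResult]) -> ExperimentResult:
    """Ship whichever variant maximizes engagement, then revenue."""
    return max(results, key=lambda r: (r.time_spent_min, r.ad_revenue_usd))


control = ExperimentResult("control", time_spent_min=31.0, ad_revenue_usd=0.42)
test = ExperimentResult("amplify_provocative_content", time_spent_min=34.5, ad_revenue_usd=0.47)

print(pick_winner([control, test]).variant)  # ships whichever variant keeps people scrolling
```

If safety never appears in that scorecard, no experiment is ever rejected for causing harm, which is exactly the gap Brian describes.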
During a panel following a screening of Can’t Look Away this summer, I also spoke about the News Feed’s power to drive action, and Meta’s inaction:
It’s no surprise that these products are so effective at driving such a profound impact on young people.
I spent 10 years of my 15 at Meta as part of their advertising business, and I flew around the world peddling promises that were true: that these products were the best in the world at driving action, driving influence, connecting people to customers.
But the evidence is clear that when we’re talking about children who have underdeveloped frontal lobes, who struggle with impulse control, who are extremely vulnerable, we are dealing with a devastating situation where it’s no longer about buying a new pair of shoes. We saw Mason Edens’ story: he was driven to kill himself by repeated content using the same exact mechanism that Meta is using to sell products. It’s not safe for kids. We see examples like Alexander Neville and Avery Ping, where we’re connecting drug dealers directly to kids. We’re connecting sextortionists to kids.
The thing that I really struggle with is companies saying that they are doing everything that they can. As we stand here, Meta is laying deep sea fiber around the world, they flew drones over Africa to increase their user base. They put a computer into a pair of sunglasses. But they’re saying they can’t stop an algorithm from delivering stories about the choking challenge? There are kids who have been dying for 15 years from the choking challenge. It’s completely unacceptable. We need to be sharper than their smoke screens.
Avery’s dad, Aaron, has released four episodes of his podcast “Superhuman” that I can’t recommend enough. He shares transcripts and episode notes on Substack as well:
When panel moderator Sarah Gardner, CEO of Heat Initiative, asked me what changes tech companies need to be making, I responded:
There are a lot of things they could do, but that’s not the conversation we need to be having anymore because they’re not doing it. If it was my decision, I’d be saying, “wow, kids are dying because of these products so tomorrow, we need to take kids off the platform until we know that we can keep them safe.”
The real answer is that it’s all about money. We desperately need legislation. KOSA requires a duty of care, which is just the bare minimum. It means that companies are required to consider the impact and harms of their products when they build them and as they maintain them. All of you, I hope, will go home and call your representatives and tell them that you support KOSA, and that we need to keep for-profit products that are proven addictive off our kids’ phones.
As Brian broke down what an algorithm actually is (a series of A/B experiments) and how those experiments are programmed to optimize for profit and engagement instead of safety, I watched lawmakers’ wheels turn.
The legislation that Brian, Taj, and I were advocating for this week would restrict social media companies from sending kids notifications in the middle of the night or during school hours. It would also limit young people’s access to predatory algorithmic feeds that have been proven to drive catastrophic outcomes for kids and teens due to factors like:
Addictive Design: The White House warns that platforms “use manipulative design… to promote addictive and compulsive use by young people to generate more revenue.”
Compulsive Use: Over 20% of adolescents met criteria for “pathological” or addictive social media use, with an additional ~70% at risk of mild compulsive use.
Sleep Deprivation and Attention Issues: Nearly 1 in 3 adolescents report staying up past midnight on screens (typically on social apps).
Always Online Culture: 95% of teens are on social platforms, and ~36% say they use them “almost constantly” – rarely unplugging. This “always online” culture, fueled by persuasive design, can crowd out offline development and amplify mental health strains.
Viral Challenges: Beyond self-harm, algorithms can amplify violent challenges or hateful content. Dangerous viral “challenges” with devastatingly harmful consequences, like choking and fainting challenges, have proliferated among kids primarily because algorithms boosted those videos’ visibility once they gained traction.
Self Harm and Pro Suicide Content: Mason Edens is one heartbreaking example of a teen who turned to social media for support during a breakup and was flooded with pro-suicide content until he took his own life.
In April, after Sarah Wynn-Williams’ Senate testimony and my sworn statement to the FTC, I wrote:
In the span of 24 hours, two former Directors with a combined tenure longer than Meta has existed have come forward publicly with information that Meta fought to keep buried. Information about how Meta endangers national security and children’s privacy. Information that cost both of us our careers.
Sarah and I were both Directors at the company. Brian, a former Vice President, is the senior-most former employee to come forward as a whistleblower and vocal critic of the company. I asked him about this.
Kelly: You’re the senior-most former employee to leave on principle and then speak publicly about it. Why aren’t there more like you?
Brian: That’s honestly a good question. I have had a surprising number of former senior employees tell me that I am right but that they could never go on the record like I have. I think it comes down to the personal cost—it’s stressful—and the potential business cost: speaking out might shut you out of some Silicon Valley jobs. Some also think it won’t change anything, recalling the various congressional hearings that yielded no results. So high cost, little reward.
I’ve written extensively about how exploitative capitalist and patriarchal systems underlie Meta’s actions and relative impunity. And about how the retaliation I experienced for speaking up at Meta was part of a toxic system of silencing women. Brian is an example of how men can, and must, become agents of change in harmful systems.
And we need legislators to take these efforts more seriously than they take the millions tech companies spend on lobbyists. So many of us have come forward, at great personal cost, to warn the government and the public that Meta is not to be trusted.
We need your help. 5 Calls is a helpful online directory to find your representatives and contact them. Please ask them to consider the data, the testimonies, and the safety of our children in their current legislative sessions. As I told Washington lawmakers this week, this issue is not theoretical. Children and teens are dying and they need protection now—kids can’t consent to a product designed to manipulate them, proven to cause harm.
In addition to his advocacy work in online safety, Brian and his wife, Katie, invest their time in building a more just and equitable economic system. They say:
For years, we have sought to understand why, in a world of unprecedented wealth, we witness explosive GDP growth while many people feel financially overwhelmed. Why do we have the global resources to solve our greatest challenges, yet the systems in place seem to make things worse?
The answer we’ve arrived at is both simple and radical: the prevailing economic system is not broken; it is working exactly as it was designed. It was built to prioritize profit over people, to concentrate wealth, and to treat human communities and the natural world as resources to be extracted.
I asked him more about this.
Kelly: You and Katie now invest your time in Delta Fund, focused on fixing a broken economic system, through your Unlock Ownership Fund and frequent writing about economic empowerment. Why is this your focus now, and how does it connect to what you experienced at Meta?
Brian: After leaving Meta, I knew that I wanted to work on something impactful. Mission has always motivated me. Studying the US made it clear that inequality drives massive civic unrest—and so we started by working on minimum wage legislation. That work and resulting research expanded to a deeper understanding of our economy and how it simply doesn’t work for most people. We believe that if our economic system worked for most people we wouldn’t have the deep levels of unrest that we see today.
This work connects to Meta because in many ways Meta is fulfilling the shareholder primacy mandate of our public markets—essentially that you should drive as much profit as you can until the market holds you accountable. Meta is extremely profitable, yet markets haven’t held it accountable for the many harms it creates. I think this is why you have better legislation in Europe than you do in the US, as we are so much more driven by capitalist forces that have captured government.
You can subscribe to their blog at delta-fund.org.
Big Tech’s resistance to safety regulation is the predictable outcome of a system that rewards profit above everything else. What Brian and Katie are building now—and what so many families and advocates are demanding—is a different kind of future. One where human well-being isn’t collateral damage in an economic model that benefits billionaires.
That future won’t arrive on its own. It needs legislation, transparency, and public pressure. It needs people willing to speak honestly about how these systems work and who they harm.
Last night on LinkedIn, I wrote:
There’s still room at this table. To other tech workers who were inadvertently complicit in building predatory tech: we need your voices now. We must stop putting money, whether a company’s profits or our own financial security, above the safety of children.
We need your voices to urge legislators to act on the evidence. We need everyone who cares about these issues to call their lawmakers today and demand action. Kids need protection from a system, from companies, and from billionaire CEOs that profit from their harm.