The Smoking Gun
In what may be the most damning revelation about social media's engagement-driven business model to date, whistleblowers from Meta and TikTok have provided the BBC with evidence that both platforms deliberately amplified harmful content after internal research showed it increased user engagement.
A Meta engineer revealed that senior management explicitly instructed teams to allow more "borderline" harmful content—including misogyny and conspiracy theories—to compete with TikTok, citing stock price concerns. This isn't a bug in the algorithm; it's a feature designed for profit.
Instagram Reels: Launched Without Safeguards
The whistleblowers exposed particularly troubling details about Instagram Reels, launched in 2020 as Meta's TikTok competitor. Internal research showed that, compared with other Instagram features, Reels had significantly higher rates of:
- Bullying and harassment
- Hate speech
- Other harmful content
Despite knowing these risks, Meta prioritized rapid deployment over user safety. Internal documents revealed what whistleblowers describe as a "path that maximizes profits at the expense of their audience's wellbeing."
TikTok's Priorities: Politicians Over Children
The allegations against TikTok are equally disturbing. A TikTok employee provided evidence showing the company systematically prioritized cases involving politicians over reports of harm to children, including sexual blackmail cases involving minors. This hierarchy of concern reveals where the platform's true priorities lie.
Real-World Consequences
The impact extends beyond individual users. Counter-terror police reported observing a "normalization" of antisemitic, racist, and violent content across these platforms. This isn't just about offensive posts—it's about how algorithmic amplification of outrage content is reshaping societal norms.
The Business Model Conflict
This exposé reveals the fundamental conflict at the heart of engagement-driven social media: user harm has become profitable. When internal research shows that outrage drives engagement, and engagement drives revenue, the incentive structure ensures harmful content will be amplified.
Both companies have denied the allegations, with Meta claiming suggestions they deliberately amplify harmful content for profit are "wrong," and TikTok calling the claims "fabricated." However, the breadth of whistleblower testimony and internal documentation suggests a systemic issue that goes beyond individual bad actors.
What's Different This Time
Unlike previous controversies based on external research or speculation, these revelations come from insiders with direct access to internal communications and research. This represents a smoking gun—proof that these companies understood the harm their algorithms caused and chose profits over user safety.
Our publication has long covered the rapid evolution of AI tools and their integration into daily life. This exposé is a stark reminder that the engagement algorithms driving social media platforms pose a different but equally important challenge: the problem is not technical capability, but ethical implementation.
