Twitter CEO Elon Musk has dropped part II of The Twitter Files, and it’s a big one. In this episode, journalist Bari Weiss details the secret blacklists maintained by the company’s employees in the pre-Musk era.
Conservatives have long suspected that Twitter throttled their tweets to make them less visible to other users on the platform. According to this latest release, their suspicions were spot on.
Weiss started her thread by noting that the investigation revealed that “teams of Twitter employees build blacklists, prevent disfavored tweets from trending, and actively limit the visibility of entire accounts or even trending topics – all in secret, without informing users.”
1. A new #TwitterFiles investigation reveals that teams of Twitter employees build blacklists, prevent disfavored tweets from trending, and actively limit the visibility of entire accounts or even trending topics—all in secret, without informing users.
— Bari Weiss (@bariweiss) December 9, 2022
The journalist noted that at one time, Twitter’s mission was “to give everyone the power to create and share ideas and information instantly, without barriers,” but eventually, these barriers “were erected.”
Stanford University’s Dr. Jay Bhattacharya was her first example. Amid the COVID-19 outbreak, he argued that onerous lockdown orders and school closures would have a negative impact on children. In response, the platform put him on a “Trends Blacklist,” which “prevented his tweets from trending.”
3. Take, for example, Stanford’s Dr. Jay Bhattacharya (@DrJBhattacharya) who argued that Covid lockdowns would harm children. Twitter secretly placed him on a “Trends Blacklist,” which prevented his tweets from trending. pic.twitter.com/qTW22Zh691
— Bari Weiss (@bariweiss) December 9, 2022
Weiss also brought up popular conservative commentator Dan Bongino, who was placed on a “Search Blacklist.” Turning Point USA founder Charlie Kirk’s account was set to “Do Not Amplify,” meaning the reach of his tweets was suppressed.
Previously, members of the company’s leadership denied that Twitter engaged in these forms of “shadowbanning” and insisted the company did not discriminate based on politics:
6. Twitter denied that it does such things. In 2018, Twitter's Vijaya Gadde (then Head of Legal Policy and Trust) and Kayvon Beykpour (Head of Product) said: “We do not shadow ban.” They added: “And we certainly don’t shadow ban based on political viewpoints or ideology.”
— Bari Weiss (@bariweiss) December 9, 2022
As it turns out, Twitter was being deceptive when it said it did not shadow ban. The investigation found that internally, employees referred to this practice as “visibility filtering,” according to “[m]ultiple high-level sources.”
A senior employee told Weiss’ team to look at visibility filtering (VF) “as being a way for us to suppress what people see to different levels.”
“It’s a very powerful tool,” the individual added.
The company used VF to “block searches of individual users; to limit the scope of a particular tweet’s discoverability; to block select users’ posts from ever appearing on the ‘trending’ page; and from inclusion in hashtag searches.”
9. “VF” refers to Twitter’s control over user visibility. It used VF to block searches of individual users; to limit the scope of a particular tweet’s discoverability; to block select users’ posts from ever appearing on the “trending” page; and from inclusion in hashtag searches.
— Bari Weiss (@bariweiss) December 9, 2022
An engineer at the company told Weiss’ team that they “control visibility quite a bit,” and that they “control the amplification of your content quite a bit.” The employee said “normal people do not know how much we do.”
11. “We control visibility quite a bit. And we control the amplification of your content quite a bit. And normal people do not know how much we do,” one Twitter engineer told us. Two additional Twitter employees confirmed.
— Bari Weiss (@bariweiss) December 9, 2022
Content moderation decisions about higher-profile, controversial accounts fell under the purview of the “Site Integrity Policy, Policy Escalation Support,” which is also known as “SIP-PES.”
Weiss explained:
14. This secret group included Head of Legal, Policy, and Trust (Vijaya Gadde), the Global Head of Trust & Safety (Yoel Roth), subsequent CEOs Jack Dorsey and Parag Agrawal, and others.
— Bari Weiss (@bariweiss) December 9, 2022
This team was responsible for “the biggest, most politically sensitive decisions.”
Not surprisingly, one of the accounts that rose to this level of scrutiny was Libs of TikTok, which posts videos made by far-left progressive types. Chaya Raichik, who runs the account, elicited significant outrage by exposing these individuals, many of whom have bragged about grooming children and helping to indoctrinate them into progressive ideology on matters pertaining to race, sexuality, and gender identity.
Raichik operated anonymously until Taylor Lorenz, an activist with the Washington Post, exposed her identity. Not only was Libs of TikTok placed on the “Trends Blacklist,” but it was also labeled as “Do Not Take Action on User Without Consulting With SIP-PES.”
16. One of the accounts that rose to this level of scrutiny was @libsoftiktok—an account that was on the “Trends Blacklist” and was designated as “Do Not Take Action on User Without Consulting With SIP-PES.” pic.twitter.com/Vjo6YxYbxT
— Bari Weiss (@bariweiss) December 9, 2022
The account “was subjected to six suspensions in 2022 alone,” and “was blocked from posting for as long as a week.”
The company told Raichik she was suspended for violating its policy against “hateful conduct.”
But after her seventh suspension, the committee acknowledged that she had not “directly engaged in behavior violative of the Hateful Conduct policy.”
19. But in an internal SIP-PES memo from October 2022, after her seventh suspension, the committee acknowledged that “LTT has not directly engaged in behavior violative of the Hateful Conduct policy." See here: pic.twitter.com/d9FGhrnQFE
— Bari Weiss (@bariweiss) December 9, 2022
Nevertheless, the committee justified its actions by arguing that her tweets encouraged harassment of “hospitals and medical providers” by implying “that gender-affirming healthcare is equivalent to child abuse or grooming.”
20. The committee justified her suspensions internally by claiming her posts encouraged online harassment of “hospitals and medical providers” by insinuating “that gender-affirming healthcare is equivalent to child abuse or grooming.”
— Bari Weiss (@bariweiss) December 9, 2022
The report also referred to Lorenz’s doxing of Raichik in November, when Lorenz posted a “photo of her home with her address.” When Raichik reported the doxing to Twitter, the company responded that the tweet was not “in violation of the Twitter rules.”
The tweet is still live on the site.
In the final section, Weiss details how Twitter employees discussed “using technicalities to restrict the visibility of tweets and subjects.”
Yoel Roth, the company’s former Global Head of Trust & Safety, sent a direct message on Slack acknowledging that the team “used technicality spam enforcements” because it was “under-enforcing” its policies.
23. In internal Slack messages, Twitter employees spoke of using technicalities to restrict the visibility of tweets and subjects. Here’s Yoel Roth, Twitter’s then Global Head of Trust & Safety, in a direct message to a colleague in early 2021: pic.twitter.com/Li7HDZJtIJ
— Bari Weiss (@bariweiss) December 9, 2022
He later sent a direct message to another employee asking for more research related to expanding “non-removal policy interventions” like visibility filtering and deamplification.
24. Six days later, in a direct message with an employee on the Health, Misinformation, Privacy, and Identity research team, Roth requested more research to support expanding “non-removal policy interventions like disabling engagements and deamplification/visibility filtering.” pic.twitter.com/lqiJapHjct
— Bari Weiss (@bariweiss) December 9, 2022
“The hypothesis underlying much of what we’ve implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that,” he wrote, also noting that they got former Twitter CEO Jack Dorsey “on board” with rolling out these methods.
This installment of the Twitter Files will likely hit just as hard as part I, in which Musk exposed the details behind the decision to suppress the Hunter Biden laptop story. High-profile progressives and members of the activist media have continually dismissed conservatives’ concerns about politically biased censorship practices. Many mocked those on the right for complaining about the suppression of their content.
But now, they will not be able to deny what many have known for years: Twitter was brazenly silencing viewpoints that did not align with progressive politics. That does not mean they won’t still try, of course.