One Politically Viable Solution for Murthy v. Missouri - Full Transparency


The challenge for the Supreme Court is to determine where to draw the line between appropriate and unconstitutional communications between the federal government and these online media platforms. There seems to be no argument about the need for government to communicate with these online platforms for specific and actual national security or law enforcement actions.   


The Biden administration was both directly requesting specific changes from Twitter, Facebook, and YouTube and indirectly making censorship requests through federally funded entities set up by CISA specifically to provide a work-around from constitutional restrictions and to hide the federal government's role in requesting censorship of specific users and topics. The content that drew the most censorship requests involved election integrity, the effectiveness of masking and school closures, the accuracy of statistics on deaths caused by COVID versus deaths of patients who merely had COVID at the time, and even content related to the Russia-Ukraine war. These government-funded entities have confirmed that they labeled some content as "mal-information" — content that was, in fact, true but was still flagged for censorship because it was inconvenient to the government's narrative.

At the same time the Biden administration was demanding censorship directly or indirectly, many government officials were publicly threatening these online companies with antitrust action, changes to Section 230 liability protections, and general PR harassment — such as public commentary from Biden himself that the platforms were literally "killing" people unless they censored content that was inconvenient to, or disagreed with, the current government narrative.

Rather than figuring out how to define — and, even more difficult, how to enforce — the red lines of what the federal government can and cannot communicate to the online platforms, a better solution is to mandate full, detailed, and timely transparency on all communications outside of national security and law enforcement.

Since the executive branch has shown it is willing and able to delay and avoid FOIA requests for transparency, the better target for this mandate is the online media companies, which would be required to publish reports on all relevant communications from government and government-funded entities.   

To satisfy the goals of both online safety and viewpoint neutrality, transparency can be a powerful tool to measure how each online platform performs and how it performs relative to its peers. A simple transparency mandate passed by Congress can avoid the politically difficult work of legally defining the terminology and specific guardrails for online safety and viewpoint neutrality.  


Companies would be required to report on their enforcement actions related to online safety. Companies that regularly take enforcement actions against content that is neither imminently harmful nor illegal, but is instead considered misinformation, disinformation, or malinformation, would be required to publish their enforcement actions across all content categories and affected users.

To show whether the platforms are viewpoint neutral, transparency should identify the specific content categories and the usernames of those affected as long as they are media/news sites, NGOs, corporate entities, or individuals who opt out of remaining private. This would allow the media and public to make their own judgments about the viewpoint biases and online safety performance of each online company.  

Online platforms such as Instagram, Snap, Discord, Facebook, Google, YouTube, and others have already demonstrated that they can publish generalized transparency reports on the quantities and general categories of harmful content they discover. These reports simply lack details on the enforcement actions taken, the content categories and users affected, and the specific government or government-funded entities making content moderation requests.

The glare of publicity provides a powerful incentive for online platforms to moderate harmful content without systematically discriminating based on viewpoint. Government attempts to go beyond promoting its own agenda and to violate the online free speech rights of entities and users would also be exposed in this same glare of publicity. Transparency will empower minority voices and enable truth to speak against government power, especially on topics that are inconvenient or undesirable for whichever political administration is in power.


Michael Matthys is a co-founder of the Institute for a Better Internet, based in Silicon Valley. He has worked in technology start-ups, large tech companies, and venture capital for more than 30 years.
