OPINION: Which Is the Best Mis/Disinformation Test, True/False or Harmful/Not Harmful?

The criterion used to determine whether to censor or label content is casually explained as whether such information is true or false. If content is deemed false, it may be censored as misinformation or disinformation. It has recently been revealed that government-funded censors also use a third category, mal-information: information that at least some evidence shows to be true, but that happens to be inconvenient to the narrative of a powerful person or entity.

Online platforms have censored, or labeled as false, stories attempting to answer a number of contested questions.

The key question is: who decides? Who decides whether the answers to these questions should be censored or labeled false, or whether there is simply a disagreement between the original author of the content and those who wish to censor it? Should the government decide for us? Fortunately, our Constitution enshrines freedom of speech in the First Amendment; aside from certain very narrow categories of speech (defamation, true threats), "most other content-based restrictions on speech are presumptively unconstitutional," according to constitutional law scholars. This means the government may not arbitrate what is considered true or false, and it is prohibited from influencing or controlling what can be published online. The founders clearly understood that a partisan administration has a political incentive to stay in power by promoting its narrative and protecting its leaders. Democracy performs best when all minorities, including those with unpopular viewpoints, are allowed to speak their truth against the power of the government of the day.

Should the unelected executives of the biggest online platforms decide for us? Platforms such as Google, YouTube, Meta (Facebook/Instagram/WhatsApp), and X hold shares of 80 to 90 percent or more in their respective online markets. Surveys consistently show that most U.S. citizens get their news from online sources controlled by these monopoly platforms. These platforms therefore provide a critical information service and serve as the only avenue through which users, news and opinion sites, and other information providers can reach most of the public. If the search monopoly Google decides a website should no longer appear in its search results or on its YouTube channels, that website will effectively disappear from the planet.

Some legal analysts regard these companies as private entities with their own First Amendment rights to control user-generated content and to pick and choose their users. However, regardless of their origins as competitors in a growing online market, these platforms have become monopoly providers of critical information services and now fit the description of a regulated common carrier, not unlike electric utilities, telecom providers, airlines, and cable TV. A common carrier's right to pick and choose its customers is superseded by citizens' rights to publish and to participate in the public sharing and debate of information and opinions from a range of viewpoints.

One exception to these existing content moderation models is the Community Notes feature on X (formerly Twitter). Community Notes relies on a super-majority of screened users, who can go through a review process to attach a note of disagreement to a specific post. Such a note may effectively label the post as likely false, based on whatever additional information the note provides for readers to assess. As designed, Community Notes is content moderation by a super-majority of users rather than by the platform itself.

These platforms recognize that they should not be arbiters of what is true or false, which is why they typically rely on external fact-checkers or government entities to justify censoring controversial or unpopular content. However, fact-checkers bring their own biases, shaped by their funding sources, academic backgrounds, and media relationships. Reliance on government entities reintroduces the constitutional problems and viewpoint biases of the partisan administration of the day, as discussed above.

Rather than true/false, a better criterion for censoring or labeling content may be whether the content is imminently harmful. This criterion offers two major improvements over the true/false test. First, it dramatically reduces the risk of treating as true/false determinations what are really disagreements among viewpoints, experts, and media or political commentators. Second, it allows content moderation disputes to be adjudicated at far greater scale, since it eliminates many disputes over content that is not imminently harmful and focuses review on assessing the potential for imminent harm to a person or group of persons. A large population of experts is already trained to assess imminent harm: the retired judges and attorneys who work as arbitrators.

FINRA (the Financial Industry Regulatory Authority) offers an example of how a non-government entity can adjudicate user disputes at a massive national scale, using online video platforms and collaboration tools that enable arbitrators to quickly assess harm in investor-advisor disputes. Once users and platforms have exhausted their initial process for resolving content moderation enforcement disputes, appeals could be outsourced to a similar non-government entity focused on online content moderation. An imminently-harmful-or-not criterion works far more effectively, and at far greater scale, than attempts to determine whether content is true or false, while maintaining fairness and viewpoint neutrality. It also directly serves the goal of online safety.

Fairness and progress can be achieved only when all voices are heard, regardless of how unpopular some viewpoints may be and regardless of how inconvenient such voices are to the government of the day.


Mike Matthys is a co-founder of the Institute for a Better Internet, based in Silicon Valley. He has worked in technology start-ups, large tech companies, and venture capital for more than 30 years.
