YouTube and Google Want to Police What You're Allowed to See

This Oct. 21, 2015 photo shows signage with a logo at the YouTube Space LA offices in Los Angeles. (AP Photo/Danny Moloshok)

I freely admit I’m a cynic, and I certainly look for the angle any time a company, organization, or individual comes along promising to do something for the good of everyone.

I apologize for that in advance, because while I think YouTube and Google policing their sites for terrorist and extremist content is a good thing, I also think they won’t be limiting their efforts to those who like to slice off heads. They’ll be going after “offensive” speech, too, unless I’ve misunderstood some of their past statements.

When Facebook was caught suppressing conservative content in its “trending stories” section at the beginning of last year’s primary season, it immediately went into clean-up mode and promised to crack down on “fake news.” That was odd, because the company had just been slammed for acting as the arbiter of what people were allowed to see. Very confusing.

In any event, this new effort to, as the Guardian piece puts it, “take a tougher stance on videos that contain inflammatory religious or supremacist content” (by, among other things, creating new scanning tools and funding those who moderate their own sites and flag offensive content) comes just after the Supreme Court ruled that “offensive” speech is protected by the First Amendment.

It just feels like Google, Facebook, YouTube, and every other social media platform with an interest in controlling what gets out there (and again, who could ever complain about people trying to stop the radicalization of wannabe extremists?) may be bumping up against those speech protections.

In short, these huge tech companies are asking us to trust them to decide what is allowed an audience. And I’m personally not sure yet how to think about that.

